A three-dimensional (3D) point cloud is the most direct and effective data form for studying plant structure and morphology. In point cloud studies, the segmentation of individual plants into organs directly determines the accuracy of organ-level phenotype estimation and the reliability of 3D plant reconstruction. However, highly accurate, automatic, and robust point cloud segmentation approaches for plants are unavailable, so the high-throughput segmentation of many shoots remains challenging. Although deep learning could feasibly solve this issue, software tools for annotating 3D point clouds to construct the training datasets are lacking.

In this paper, a top-down point cloud segmentation algorithm for maize shoots using optimal transportation distance is proposed. On this basis, a point cloud annotation toolkit for maize shoots, Label3DMaize, is developed. The toolkit was applied to achieve semi-automatic point cloud segmentation and annotation of maize shoots at different growth stages through a series of operations, including stem segmentation, coarse segmentation, fine segmentation, and sample-based segmentation. The toolkit takes about 4 to 10 minutes to segment a maize shoot; if only coarse segmentation is required, it consumes 10%-20% of that time. Fine segmentation is more detailed than coarse segmentation, especially at the organ connection regions, and the accuracy of coarse segmentation reaches 97.2% of that of fine segmentation.

Label3DMaize integrates point cloud segmentation algorithms with manual interactive operations, realizing semi-automatic point cloud segmentation of maize shoots at different growth stages. The toolkit provides a practical data annotation tool for further online segmentation research based on deep learning and is expected to promote automatic point cloud processing of various plants.
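To make the optimal-transportation idea concrete, here is a minimal sketch (not the Label3DMaize implementation) that scores how well two equal-size point sets match by solving the underlying assignment problem with SciPy's Hungarian solver; the data and function name are illustrative only.

```python
# Minimal sketch: an optimal-transport-style distance between two equal-size
# point sets, solved as a linear assignment problem (uniform weights).
import numpy as np
from scipy.spatial.distance import cdist
from scipy.optimize import linear_sum_assignment

def assignment_distance(points_a: np.ndarray, points_b: np.ndarray) -> float:
    """Mean cost of the optimal one-to-one matching between two (N, 3) sets."""
    cost = cdist(points_a, points_b)          # pairwise Euclidean costs
    rows, cols = linear_sum_assignment(cost)  # optimal plan for equal sizes
    return float(cost[rows, cols].mean())

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    organ = rng.normal(size=(100, 3))         # hypothetical organ points
    template = organ + rng.normal(scale=0.05, size=(100, 3))
    print(assignment_distance(organ, template))
```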
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Manual annotation for human action recognition with content semantics using 3D point clouds (3D-PC) in industrial environments consumes considerable time and resources. This work aims to recognize, analyze, and model human actions to develop a framework for automatically extracting content semantics. The main contributions of this work are: (1) the design of a multi-layer structure of DNN classifiers to precisely detect and extract humans and dynamic objects from 3D-PC; (2) empirical experiments with over 10 subjects to collect datasets of human actions and activities in an industrial setting; (3) the development of an intuitive GUI to verify human actions and their interaction activities with the environment; and (4) the design and implementation of a methodology for automatic sequence matching of human actions in 3D-PC. All these procedures are merged in the proposed framework and evaluated in one industrial use case with flexible patch sizes. Comparison of the new approach with standard methods shows that automation accelerates the annotation process by a factor of 5.2.
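The summary does not spell out the sequence-matching methodology, so the following is only a generic stand-in: a plain dynamic-time-warping distance between two per-frame action-descriptor sequences, of the kind often used for such matching.

```python
# Generic DTW sketch for matching two action-descriptor sequences.
# This is an illustration of sequence matching in general, not the
# paper's own method.
import numpy as np

def dtw_distance(a: np.ndarray, b: np.ndarray) -> float:
    """a: (T1, D), b: (T2, D) per-frame feature sequences."""
    t1, t2 = len(a), len(b)
    D = np.full((t1 + 1, t2 + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, t1 + 1):
        for j in range(1, t2 + 1):
            cost = np.linalg.norm(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return float(D[t1, t2])
```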
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Point-wise annotation was conducted on the input point clouds to prepare a labeled dataset for segmenting the different sorghum plant organs. Each sorghum plant's leaf, stem, and panicle were manually labeled as 0, 1, and 2, respectively, using the segment module of the CloudCompare software.
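A labeled file produced this way can be consumed with a few lines of Python. The sketch below assumes whitespace-separated rows of x, y, z, and label, with the 0/1/2 organ coding described above; the exact export format from CloudCompare may differ.

```python
# Hypothetical sketch for splitting a point-wise annotated sorghum cloud.
# Assumed layout: one "x y z label" row per point, labels 0/1/2 as above.
import numpy as np

ORGANS = {0: "leaf", 1: "stem", 2: "panicle"}

def split_by_organ(path: str) -> dict[str, np.ndarray]:
    data = np.loadtxt(path)                      # shape (N, 4)
    xyz, labels = data[:, :3], data[:, 3].astype(int)
    return {name: xyz[labels == code] for code, name in ORGANS.items()}

# Example: organs = split_by_organ("sorghum_plant_01.txt")
```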
The Data Labeling Tools market is experiencing robust growth, driven by the escalating demand for high-quality training data in artificial intelligence (AI) and machine learning (ML) applications. The market's expansion is fueled by the increasing adoption of AI across sectors such as automotive, healthcare, and finance, which require vast amounts of accurately labeled data for model training and improvement. Technological advancements in automation and semi-supervised learning are streamlining the labeling process, improving efficiency, and reducing costs, further contributing to market growth. A key trend is the shift towards more sophisticated labeling techniques, including 3D point cloud annotation and video annotation, reflecting the growing complexity of AI applications. Competition is fierce, with established players like Amazon Mechanical Turk and Google LLC coexisting with innovative startups offering specialized labeling solutions. The market is segmented by type of data labeling (image, text, video, audio), annotation method (manual, automated), and industry vertical, reflecting the diverse needs of different AI projects. Challenges include data privacy concerns, ensuring data quality and consistency, and the need for skilled annotators, all of which temper market growth and demand continuous innovation and strategic investment.

Despite these challenges, the Data Labeling Tools market shows strong potential for continued expansion. The forecast period (2025-2033) anticipates a significant increase in market value, fueled by ongoing technological advancements, wider adoption of AI across sectors, and rising demand for high-quality data. The market is expected to see increased consolidation as larger players acquire smaller companies to strengthen their market position and technological capabilities. Furthermore, the development of more sophisticated and automated labeling tools will continue to improve efficiency and reduce costs, making these tools accessible to a broader range of users and further fueling market growth. We anticipate that improving the accuracy and speed of data labeling will be paramount in shaping the future landscape of this dynamic market.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
This dataset contains:
- 5 annotated point clouds of real Chenopodium alba plants obtained from multi-view 2D camera imaging. Annotations consist of 5 classes: leaf blade, petiole, apex, main stem, and branch. The .txt files contain both 3D coordinates and annotations; .ply files are also provided for the raw 3D point data without annotations.
- 24 annotated point clouds of virtual Chenopodium alba plants generated by an L-system simulation program. Annotations consist of 3 classes: leaf blade, petiole, and stem. The 3D coordinates and annotations are in separate .txt files.
These files have been used in a companion paper.
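For the virtual plants, coordinates and annotations sit in separate .txt files, so a consumer has to pair them point-for-point. A minimal sketch, with assumed file names and an assumed class-to-integer coding:

```python
# Sketch for pairing the virtual-plant files; file names and the
# class-to-integer mapping below are assumptions for illustration.
import numpy as np

coords = np.loadtxt("virtual_plant_01_points.txt")             # (N, 3) x, y, z
labels = np.loadtxt("virtual_plant_01_labels.txt", dtype=int)  # (N,)
assert len(coords) == len(labels), "files must align point-for-point"

labeled = np.column_stack([coords, labels])  # (N, 4) merged view
stem_points = coords[labels == 2]            # assuming 0=leaf blade, 1=petiole, 2=stem
```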
The proposed dataset, termed PC-Urban (Urban Point Cloud), is captured with a 64-channel Ouster LiDAR sensor. The sensor is installed on an SUV that drives through the downtown of Perth, Western Australia (WA), Australia. The dataset comprises over 4.3 billion points captured over 66K sensor frames. The labelled data is organized as registered and raw point cloud frames, where the former aggregates varying numbers of registered consecutive frames. We provide 25 class labels in the dataset, covering 23 million points and 5K instances. Labelling is performed with PC-Annotate and can easily be extended by end-users employing the same tool.

The data is organized into unlabelled and labelled 3D point clouds. The unlabelled data is provided in .PCAP file format, the direct output format of the Ouster LiDAR sensor. Raw frames are extracted from the recorded .PCAP files in the form of Ply and Excel files using the Ouster Studio software. The labelled 3D point cloud data consists of registered or raw point clouds. A labelled point cloud is a combination of Ply, Excel, Labels, and Summary files. A point cloud in a Ply file contains X, Y, Z values along with color information. An Excel file contains the X, Y, Z values, Intensity, Reflectivity, Ring, Noise, and Range of each point; these attributes can be useful for semantic segmentation with deep learning algorithms. The Label and Label Summary files have been explained in the previous section. One GB of raw data contains nearly 1,300 raw frames, and 66,425 frames are provided in the dataset, each comprising 65,536 points; hence the 4.3 billion points captured with the Ouster LiDAR sensor. Annotations are provided for 25 general outdoor classes: car, building, bridge, tree, road, letterbox, traffic signal, light-pole, rubbish bin, cycles, motorcycle, truck, bus, bushes, road sign board, advertising board, road divider, road lane, pedestrians, side-path, wall, bus stop, water, zebra-crossing, and background. With the released data, a total of 143 scenes are annotated, including both raw and registered frames.
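As a hedged example of consuming the per-frame attributes, the sketch below reads one of the Excel files with pandas; the file name is hypothetical, and the column spellings (X, Y, Z, Intensity, Reflectivity, Range) mirror the listing above but may differ in the released files.

```python
# Sketch: reading one PC-Urban per-frame Excel file into numpy arrays.
# Column names are assumptions based on the attribute list above.
import pandas as pd

frame = pd.read_excel("frame_000001.xlsx")   # hypothetical file name
xyz = frame[["X", "Y", "Z"]].to_numpy()
features = frame[["Intensity", "Reflectivity", "Range"]].to_numpy()
print(xyz.shape, features.shape)             # e.g. (65536, 3) (65536, 3)
```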
Attribution-NonCommercial 3.0 (CC BY-NC 3.0): https://creativecommons.org/licenses/by-nc/3.0/
License information was derived automatically
UA_L-DoTT (University of Alabama's Large Dataset of Trains and Trucks) is a collection of camera images and 3D LiDAR point cloud scans from five different data sites. Four of the data sites targeted trains on railways and the last targeted trucks on a four-lane highway. Low-light conditions at one of the data sites showcase unique differences between the individual sensors' data. The final data site utilized a mobile platform, which created a large variety of viewpoints in the images and point clouds. The dataset consists of 93,397 raw images, 11,415 corresponding labeled text files, 354,334 raw point clouds, 77,860 corresponding labeled point clouds, and 33 timestamp files. These timestamps correlate images to point cloud scans via POSIX time. The data was collected with a sensor suite consisting of five different LiDAR sensors and a camera, providing various viewpoints and features of the same targets due to the sensors' differing operational characteristics. The inclusion of both raw and labeled data allows users to get started immediately with the labeled subset, or to label additional raw data as needed. This large dataset is beneficial to any researcher interested in machine learning using cameras, LiDARs, or both.
The full dataset is too large (~1 TB) to be uploaded to Mendeley Data. Please see the attached link for access to the full dataset.
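Since the timestamp files correlate images to scans via POSIX time, pairing typically reduces to a nearest-timestamp search. A minimal sketch, with the tolerance chosen arbitrarily for illustration:

```python
# Sketch: pair each image with the point cloud scan closest in POSIX time.
# Timestamp arrays and the tolerance are assumptions for illustration.
import numpy as np

def match_nearest(image_times: np.ndarray, scan_times: np.ndarray,
                  tol_s: float = 0.05) -> list[tuple[int, int]]:
    """Return (image_idx, scan_idx) pairs within tol_s seconds."""
    scan_times = np.sort(scan_times)
    idx = np.searchsorted(scan_times, image_times)
    pairs = []
    for i, t in enumerate(image_times):
        candidates = [j for j in (idx[i] - 1, idx[i]) if 0 <= j < len(scan_times)]
        j = min(candidates, key=lambda j: abs(scan_times[j] - t))
        if abs(scan_times[j] - t) <= tol_s:
            pairs.append((i, j))
    return pairs
```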
The AI Data Labeling Services market is experiencing rapid growth, driven by the increasing demand for high-quality training data to fuel advancements in artificial intelligence. The market, estimated at $10 billion in 2025, is projected to witness a robust Compound Annual Growth Rate (CAGR) of 25% from 2025 to 2033, reaching a substantial market size. This expansion is fueled by several key factors. The automotive industry leverages AI data labeling for autonomous driving systems, while healthcare utilizes it for medical image analysis and diagnostics. The retail and e-commerce sectors benefit from improved product recommendations and customer service through AI-powered chatbots and image recognition. Agriculture is employing AI data labeling for precision farming and crop monitoring. Furthermore, the increasing adoption of cloud-based solutions offers scalability and cost-effectiveness, bolstering market growth. While data security and privacy concerns present challenges, the ongoing development of innovative techniques and the rising availability of skilled professionals are mitigating these restraints. The market is segmented by application (automotive, healthcare, retail & e-commerce, agriculture, others) and type (cloud-based, on-premises), with cloud-based solutions gaining significant traction due to their flexibility and accessibility. Key players like Scale AI, Labelbox, and Appen are actively shaping market dynamics through technological innovations and strategic partnerships. The North American market currently holds a significant share, but regions like Asia Pacific are poised for substantial growth due to increasing AI adoption and technological advancements.

The competitive landscape is dynamic, characterized by both established players and emerging startups. While larger companies possess substantial resources and experience, smaller, agile companies are innovating with specialized solutions and niche applications. Future growth will likely be influenced by advancements in data annotation techniques (e.g., synthetic data generation), increasing demand for specialized labeling services (e.g., 3D point cloud labeling), and the expansion of AI applications across various industries. The continued development of robust data governance frameworks and ethical considerations surrounding data privacy will play a critical role in shaping the market's trajectory in the coming years. Regional growth will be influenced by factors such as government regulations, technological infrastructure, and the availability of skilled labor. Overall, the AI Data Labeling Services market presents a compelling opportunity for growth and investment in the foreseeable future.
The 3D Point Cloud Annotation Services market has emerged as a pivotal segment within the realms of computer vision, artificial intelligence, and geospatial technologies, addressing the increasing demand for accurate data interpretation across various industries. As enterprises strive to leverage 3D data for enhance
S3DIS comprises 6 colored 3D point clouds from 6 large-scale indoor areas, along with semantic instance annotations for 12 object categories (wall, floor, ceiling, beam, column, window, door, sofa, desk, chair, bookcase, and board).
The Stanford Large-Scale 3D Indoor Spaces (S3DIS) dataset is composed of the colored 3D point clouds of six large-scale indoor areas from three different buildings, covering approximately 935, 965, 450, 1700, 870, and 1100 square meters (6020 square meters in total). The areas show diverse architectural styles and appearances and mainly include office areas, educational and exhibition spaces; conference rooms, personal offices, restrooms, open spaces, lobbies, stairways, and hallways are commonly found therein. The point clouds are generated automatically, without any manual intervention, using the Matterport scanner. The dataset also includes semantic instance annotations on the point clouds for 12 semantic elements: structural elements (ceiling, floor, wall, beam, column, window, and door) and commonly found items and furniture (table, chair, sofa, bookcase, and board).
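S3DIS is commonly distributed with each room stored as a whitespace-separated text file of x y z r g b rows, with per-object annotation files in an Annotations/ subfolder. A short loading sketch (the paths are examples):

```python
# Sketch: load one S3DIS room and tally its annotated objects.
# Paths assume the common release layout; adjust to your copy.
import glob
import os
import numpy as np

room = np.loadtxt("Area_1/office_1/office_1.txt")   # (N, 6) xyz + rgb
print(room.shape)

for obj_file in glob.glob("Area_1/office_1/Annotations/*.txt"):
    name = os.path.basename(obj_file).rsplit("_", 1)[0]   # e.g. "chair"
    points = np.loadtxt(obj_file)
    print(name, len(points))
```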
2D-3D-S comprises mutually registered 2D, 2.5D, and 3D modalities with instance-level semantic and geometric annotations, collected from six large-scale indoor areas.
The 2D-3D-S dataset provides a variety of mutually registered modalities from the 2D, 2.5D, and 3D domains, with instance-level semantic and geometric annotations. It covers over 6,000 m² and contains over 70,000 RGB images, along with the corresponding depths, surface normals, semantic annotations, and global XYZ images (all in the form of both regular and 360° equirectangular images), as well as camera information. It also includes registered raw and semantically annotated 3D meshes and point clouds. In addition, the dataset contains the raw RGB and depth imagery along with the corresponding camera information per scan location. The dataset enables the development of joint and cross-modal learning models and potentially unsupervised approaches utilizing the regularities present in large-scale indoor spaces.
In more detail, the dataset is collected in 6 large-scale indoor areas that originate from 3 different buildings of mainly educational and office use. For each area, all modalities are registered in the same reference system, yielding pixel-to-pixel correspondences among them. In a nutshell, the dataset contains a total of 70,496 regular RGB and 1,413 equirectangular RGB images, along with their corresponding depths, surface normals, semantic annotations, global XYZ images in OpenEXR format, and camera metadata. It also contains the raw sensor data, which comprises 18 HDR RGB and depth images per scan location (6 looking forward, 6 towards the top, and 6 towards the bottom) along with the corresponding camera metadata for each of the 1,413 scan locations, yielding a total of 25,434 raw RGBD images. In addition, we provide whole-building 3D reconstructions as textured meshes, as well as the corresponding 3D semantic meshes. The dataset also includes the colored 3D point cloud data of these areas, totalling 695,878,620 points, previously presented in the Stanford Large-Scale 3D Indoor Spaces Dataset (S3DIS).
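Given the per-image depth maps and camera metadata, a standard use is back-projecting depth into a camera-frame point cloud. The sketch below uses the generic pinhole model with placeholder intrinsics; in practice the dataset's own camera metadata should be substituted.

```python
# Generic pinhole back-projection: depth map -> camera-frame point cloud.
# The intrinsics here are placeholders, not values from the dataset.
import numpy as np

def depth_to_points(depth: np.ndarray, fx: float, fy: float,
                    cx: float, cy: float) -> np.ndarray:
    """depth: (H, W) metric depth map -> (H*W, 3) camera-frame XYZ."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    return np.stack([x, y, z], axis=-1).reshape(-1, 3)

# Example with a dummy 4x4 depth map and made-up intrinsics:
pts = depth_to_points(np.ones((4, 4)), fx=500.0, fy=500.0, cx=2.0, cy=2.0)
```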
This dataset contains labeled point cloud data captured by a car equipped with a mobile laser scanner.
It accompanies the ICMLA publication "Large-Scale Curb Extraction Based on 3D Deep Learning and Iterative Refinement Post-Processing".
The dataset is intended to serve as a benchmark for automated large-scale point cloud curb detection approaches.
It is derived from the publicly available KITTI360 dataset, published in the paper "Semantic Instance Annotation of Street Scenes by 3D to 2D Label Transfer" by Jun Xie et al.
LICENSE
Creative Commons Attribution-NonCommercial-ShareAlike 3.0 Unported (CC BY-NC-SA 3.0) License
If you find this dataset useful, please cite the following publication:
@INPROCEEDINGS{9680221,
  author={Schmitz, Jan-Christoph and Bauer, Adrian and Kummert, Anton},
  booktitle={2021 20th IEEE International Conference on Machine Learning and Applications (ICMLA)},
  title={Large-Scale Curb Extraction Based on 3D Deep Learning and Iterative Refinement Post-Processing},
  year={2021},
  pages={558-563},
  doi={10.1109/ICMLA52953.2021.00093}}
Supported by the State of North Rhine-Westphalia, Germany
Attribution-NonCommercial-ShareAlike 4.0 (CC BY-NC-SA 4.0): https://creativecommons.org/licenses/by-nc-sa/4.0/
License information was derived automatically
This dataset was introduced in the 'Accurate 3D Automatic Annotation of Traffic Lights and Signs for Autonomous Driving' paper.
3D detection of traffic management objects, such as traffic lights and road signs, is vital for self-driving cars, particularly for address-to-address navigation where vehicles encounter numerous intersections with these static objects. We introduce a novel method for automatically generating accurate and temporally consistent 3D bounding box annotations for traffic lights and signs, effective up to a range of 200 meters. These annotations are suitable for training real-time models used in self-driving cars, which need a large amount of training data. The proposed method relies only on RGB images with 2D bounding boxes of traffic management objects, which can be automatically obtained using an off-the-shelf image-space detector neural network, along with GNSS/INS data, eliminating the need for LiDAR point cloud data.
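The paper's pipeline is more involved, but the geometric core of lifting 2D detections to 3D with known poses is multi-view triangulation. A minimal direct-linear-transform (DLT) sketch of that generic formulation, not the authors' exact method:

```python
# Generic DLT triangulation: recover one static object's 3D position from
# 2D detections in several frames with known 3x4 projection matrices.
import numpy as np

def triangulate(proj_mats: list[np.ndarray],
                uv: list[tuple[float, float]]) -> np.ndarray:
    """proj_mats: per-frame 3x4 projections; uv: matching pixel coords."""
    rows = []
    for P, (u, v) in zip(proj_mats, uv):
        rows.append(u * P[2] - P[0])   # standard DLT constraints
        rows.append(v * P[2] - P[1])
    A = np.stack(rows)
    _, _, vt = np.linalg.svd(A)        # null-space solution
    X = vt[-1]
    return X[:3] / X[3]                # dehomogenize to (x, y, z)
```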
The paper describing the dataset can be read here: https://arxiv.org/abs/2409.12620
If you use the aiMotive 3D Traffic Light and Sign Dataset in your research, please cite our work using the following BibTeX entry:
@inproceedings{kunsagi2024aimotive,
  title={Accurate Automatic 3D Annotation of Traffic Lights and Signs for Autonomous Driving},
  author={Kuns{\'a}gi-M{\'a}t{\'e}, S{\'a}ndor and Pet{\H{o}}, Levente and Seres, Lehel and Matuszka, Tam{\'a}s},
  booktitle={European Conference on Computer Vision 2024 Workshop on Vision-Centric Autonomous Driving},
  year={2024}}
While modern deep learning algorithms for the semantic segmentation of airborne laser scanning (ALS) point clouds have achieved considerable success, the training process often requires a large number of labelled 3D points. Point-wise annotation of 3D point clouds, especially for large-scale ALS datasets, is extremely time-consuming. Weak supervision, which needs far less annotation effort while letting networks achieve comparable performance, is an alternative solution. Assigning a weak label to a subcloud, a group of points, is an efficient annotation strategy. With the supervision of subcloud labels, we first train a classification network that produces pseudo labels for the training data. The pseudo labels are then taken as the input of a segmentation network, which gives the final predictions on the testing data. As the quality of the pseudo labels determines the performance of the segmentation network on the testing data, we propose an overlap region loss and an elevation attention unit for the classification network to obtain more accurate pseudo labels. The overlap region loss, which considers nearby subcloud semantic information, is introduced to enhance awareness of the semantic heterogeneity within a subcloud. The elevation attention helps the classification network encode more representative features for ALS point clouds. For the segmentation network, in order to effectively learn representative features from inaccurate pseudo labels, we adopt a supervised contrastive loss that uncovers the underlying correlations of class-specific features. Extensive experiments on three ALS datasets demonstrate the superior performance of our model compared to the baseline method (Wei et al., 2020).
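As a rough illustration of the final ingredient, the sketch below implements a supervised contrastive loss in the common Khosla-style formulation over (pseudo-)labelled embeddings; the authors' exact loss for ALS point features may differ.

```python
# Compact PyTorch sketch of a supervised contrastive loss; an illustration
# of the general technique, not the paper's exact formulation.
import torch
import torch.nn.functional as F

def sup_con_loss(features: torch.Tensor, labels: torch.Tensor,
                 temperature: float = 0.07) -> torch.Tensor:
    """features: (N, D) embeddings; labels: (N,) class ids (e.g. pseudo labels)."""
    z = F.normalize(features, dim=1)
    sim = z @ z.T / temperature                      # (N, N) similarities
    n = z.size(0)
    self_mask = torch.eye(n, dtype=torch.bool, device=z.device)
    pos_mask = (labels[:, None] == labels[None, :]) & ~self_mask
    sim = sim.masked_fill(self_mask, float("-inf"))  # exclude self-pairs
    log_prob = sim - torch.logsumexp(sim, dim=1, keepdim=True)
    pos_counts = pos_mask.sum(1).clamp(min=1)
    per_anchor = -log_prob.masked_fill(~pos_mask, 0.0).sum(1) / pos_counts
    return per_anchor[pos_mask.any(1)].mean()        # skip anchors w/o positives
```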
The Data Annotation and Labeling Market was valued at USD 1.32 Billion in 2024 and is expected to reach USD 2.50 Billion by 2030, with a CAGR of 11.23%.
| Report Attribute | Value |
|---|---|
| Pages | 185 |
| Market Size (2024) | USD 1.32 Billion |
| Forecast Market Size (2030) | USD 2.50 Billion |
| CAGR (2025-2030) | 11.23% |
| Fastest Growing Segment | Healthcare Providers |
| Largest Market | North America |
| Key Players | Scale AI, Inc.; Appen Limited; iMerit Technology Services; Labelbox, Inc.; Amazon.com, Inc.; CloudFactory Ltd.; Cogito Tech LLC; TELUS International AI; SuperAnnotate Inc.; Shaip Ltd. |
| Report Attribute | Value |
|---|---|
| Base Year | 2024 |
| Historical Data | 2019 - 2023 |
| Regions Covered | North America, Europe, APAC, South America, MEA |
| Report Coverage | Revenue Forecast, Competitive Landscape, Growth Factors, and Trends |
| Market Size 2024 | USD 3.75 Billion |
| Market Size 2025 | USD 4.25 Billion |
| Market Size 2035 | USD 15.0 Billion |
| Segments Covered | Application, Labeling Type, Deployment Type, End User, Regional |
| Countries Covered | US, Canada, Germany, UK, France, Russia, Italy, Spain, Rest of Europe, China, India, Japan, South Korea, Malaysia, Thailand, Indonesia, Rest of APAC, Brazil, Mexico, Argentina, Rest of South America, GCC, South Africa, Rest of MEA |
| Key Market Dynamics | Increasing AI adoption, demand for accurate datasets, growing automation in workflows, rise of cloud-based solutions, emphasis on data privacy regulations |
| Market Forecast Units | USD Billion |
| Key Companies Profiled | Lionbridge, Scale AI, Google Cloud, Amazon Web Services, DataSoring, CloudFactory, Mighty AI, Samasource, TrinityAI, Microsoft Azure, Clickworker, Pimlico, Hive, iMerit, Appen |
| Market Forecast Period | 2025 - 2035 |
| Key Market Opportunities | AI-driven automation integration, expansion in machine learning applications, increasing demand for annotated datasets, growth in autonomous vehicles sector, rising focus on data privacy compliance |
| Compound Annual Growth Rate (CAGR) | 13.4% (2025 - 2035) |
The dataset comprises the pretraining and testing data for our work, Terrain-Informed Self-Supervised Learning: Enhancing Building Footprint Extraction from LiDAR Data with Limited Annotations. The pretraining data consists of images corresponding to Digital Surface Models (DSM) and Digital Terrain Models (DTM) obtained from Norway, with a ground resolution of 1 meter, using the UTM 33N projection. The primary data source is the Norwegian Mapping Authority (Kartverket), which has made the data freely available on its website under the CC BY 4.0 license (source: https://hoydedata.no/, license terms: https://creativecommons.org/licenses/by/4.0/). The DSM and DTM models are generated from 3D LiDAR point clouds collected through periodic aerial campaigns, during which the LiDAR sensors capture data with a maximum offset of 20 degrees from nadir. Additionally, a subset of the data includes building footprints/labels created using the OpenStreetMap (OSM) database. Specifically, building footprints extracted from the OSM database were rasterized to match the grid of the DTM and DSM models. These rasterized labels are made available under the Open Database License (ODbL) in compliance with the OSM license requirements. We hope this dataset facilitates various applications in geographic analysis, remote sensing, and machine learning research.
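A standard derivation this DSM/DTM pairing enables is the normalised DSM (nDSM = DSM - DTM), which gives above-ground height and makes buildings stand out. A minimal sketch, assuming GeoTIFF tiles readable with rasterio and placeholder file names:

```python
# Sketch: compute the normalised DSM (above-ground height) from a DSM/DTM
# tile pair. File names are placeholders; the threshold is illustrative.
import numpy as np
import rasterio

with rasterio.open("dsm_tile.tif") as src:
    dsm = src.read(1).astype(np.float32)
with rasterio.open("dtm_tile.tif") as src:
    dtm = src.read(1).astype(np.float32)

ndsm = np.clip(dsm - dtm, 0, None)        # above-ground height in metres
building_mask_guess = ndsm > 2.5          # crude threshold, illustration only
```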
This project is a MATLAB implementation of fruit detection in 3D point clouds acquired with the Velodyne VLP-16 LiDAR sensor (Velodyne LiDAR Inc., San Jose, CA, USA). The implementation was used to evaluate the LFuji-air dataset, which contains 3D LiDAR data of 11 Fuji apple trees with the corresponding fruit position annotations. Requirements: MATLAB R2018, the Computer Vision System Toolbox, and the Statistics and Machine Learning Toolbox. The software is stored and maintained in the following GitHub repository: https://github.com/GRAP-UdL-AT/fruit_detection_in_LiDAR_pointClouds
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
The PFuji-Size dataset includes a total of 615 Fuji apples scanned under field conditions and an additional 25 apples scanned under laboratory conditions. Structure-from-motion and multi-view stereo techniques were used to generate the 3D point clouds of the captured scenes. Apple locations and ground-truth diameter annotations are provided for assessing fruit detection and size estimation algorithms.
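Size-estimation baselines of the kind compared in [1] often include fitting a sphere to each detected apple's points and reporting its diameter. A generic linear least-squares sphere fit, not necessarily the method used in the articles:

```python
# Generic least-squares sphere fit for a single detected apple's points.
import numpy as np

def fit_sphere(points: np.ndarray) -> tuple[np.ndarray, float]:
    """points: (N, 3) -> (center, radius) of the best-fit sphere."""
    # Rearranged sphere equation: |p|^2 = 2 p . c + (r^2 - |c|^2),
    # linear in the unknowns [c, r^2 - |c|^2], so solve A x = b.
    A = np.column_stack([2 * points, np.ones(len(points))])
    b = (points ** 2).sum(axis=1)
    x, *_ = np.linalg.lstsq(A, b, rcond=None)
    center = x[:3]
    radius = np.sqrt(x[3] + center @ center)
    return center, radius

# Estimated diameter in the cloud's units: 2 * radius.
```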
The reader is referred to articles [1] and [2] for a description of the methodology and further information about this dataset.
This database is available only for research and educational purposes, not for any commercial use. If you use the database in any publications or reports, please consider citing the following papers:
[1] Gené-Mola J, Sanz-Cortiella R, Rosell-Polo JR, Escolà A, Gregorio E. 2021. In-field apple size estimation using photogrammetry-derived 3D point clouds: comparison of 4 different methods considering fruit occlusions. (Submitted)
[2] Gené-Mola J, Sanz-Cortiella R, Rosell-Polo JR, Escolà A, Gregorio E. 2021. PFuji-Size dataset: a collection of photogrammetry-derived 3D point clouds with ground truth annotations for Fuji apple detection and size estimation in field conditions. (Submitted)
The global Generative AI in Data Labeling Solution and Services market is segmented by Application (Autonomous Driving, NLP, Medical Imaging, Retail AI, Robotics), Type (Text Annotation, Image/Video Tagging, Audio Labeling, 3D Point Cloud Labeling, Synthetic Data Generation), and Geography (North America, LATAM, West Europe, Central & Eastern Europe, Northern Europe, Southern Europe, East Asia, Southeast Asia, South Asia, Central Asia, Oceania, MEA).