3 datasets found
  1. Supporting data for "Label3DMaize: toolkit for 3D point cloud data annotation of maize shoots"

    • explore.openaire.eu
    Updated Jan 1, 2021
    Cite
    Teng Miao; Weiliang Wen; Yinglun Li; Sheng Wu; Chao Zhu; Xinyu Guo (2021). Supporting data for "Label3DMaize: toolkit for 3D point cloud data annotation of maize shoots" [Dataset]. http://doi.org/10.5524/100884
    Explore at:
    26 scholarly articles cite this dataset
    Dataset updated
    Jan 1, 2021
    Authors
    Teng Miao; Weiliang Wen; Yinglun Li; Sheng Wu; Chao Zhu; Xinyu Guo
    Description

    Three-dimensional (3D) point clouds are the most direct and effective data form for studying plant structure and morphology. In point cloud studies, the segmentation of individual plants into organs directly determines the accuracy of organ-level phenotype estimation and the reliability of 3D plant reconstruction. However, highly accurate, automatic, and robust point cloud segmentation approaches for plants are unavailable, so the high-throughput segmentation of many shoots is challenging. Although deep learning could feasibly solve this issue, software tools for 3D point cloud annotation to construct the training dataset are lacking.

    In this paper, a top-to-down point cloud segmentation algorithm using optimal transportation distance is proposed for maize shoots. On this basis, a point cloud annotation toolkit for maize shoots, Label3DMaize, is developed. The toolkit was applied to achieve semi-automatic point cloud segmentation and annotation of maize shoots at different growth stages through a series of operations, including stem segmentation, coarse segmentation, fine segmentation, and sample-based segmentation. The toolkit takes about 4 to 10 minutes to segment a maize shoot, and consumes only 10%-20% of that time if only coarse segmentation is required. Fine segmentation is more detailed than coarse segmentation, especially at the organ connection regions, and the accuracy of coarse segmentation reaches 97.2% of that of the fine segmentation.

    Label3DMaize integrates point cloud segmentation algorithms with manual interactive operations, realizing semi-automatic point cloud segmentation of maize shoots at different growth stages. The toolkit provides a practical data annotation tool for further segmentation research based on deep learning and is expected to promote automatic point cloud processing of various plants.
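    The workflow above evaluates the coarse segmentation against the fine segmentation as a reference (the reported 97.2%). Below is a minimal sketch of how such a per-point agreement score could be computed from two labellings of the same shoot; the file names and the one-integer-organ-ID-per-point label format are assumptions for illustration, not the toolkit's documented output format.

    ```python
    import numpy as np

    def coarse_vs_fine_agreement(coarse: np.ndarray, fine: np.ndarray) -> float:
        """Fraction of points whose coarse organ label matches the fine (reference) label."""
        if coarse.shape != fine.shape:
            raise ValueError("label arrays must cover the same points in the same order")
        return float(np.mean(coarse == fine))

    # Hypothetical usage: one integer organ ID per point, stored as plain text (assumed format).
    coarse_labels = np.loadtxt("maize_shoot_coarse_labels.txt", dtype=int)  # assumed file name
    fine_labels = np.loadtxt("maize_shoot_fine_labels.txt", dtype=int)      # assumed file name
    print(f"Coarse/fine agreement: {coarse_vs_fine_agreement(coarse_labels, fine_labels):.1%}")
    ```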

  2. Data Labeling Tools Report

    • datainsightsmarket.com
    doc, pdf, ppt
    Updated Jun 19, 2025
    Cite
    Data Insights Market (2025). Data Labeling Tools Report [Dataset]. https://www.datainsightsmarket.com/reports/data-labeling-tools-1368998
    Explore at:
    Available download formats: doc, pdf, ppt
    Dataset updated
    Jun 19, 2025
    Dataset authored and provided by
    Data Insights Market
    License

    https://www.datainsightsmarket.com/privacy-policy

    Time period covered
    2025 - 2033
    Area covered
    Global
    Variables measured
    Market Size
    Description

    The Data Labeling Tools market is experiencing robust growth, driven by the escalating demand for high-quality training data in artificial intelligence (AI) and machine learning (ML) applications. The market's expansion is fueled by the increasing adoption of AI across sectors such as automotive, healthcare, and finance, which requires vast amounts of accurately labeled data for model training and improvement. Technological advances in automation and semi-supervised learning are streamlining the labeling process, improving efficiency and reducing costs, further contributing to market growth. A key trend is the shift toward more sophisticated labeling techniques, including 3D point cloud annotation and video annotation, reflecting the growing complexity of AI applications. Competition is fierce, with established players such as Amazon Mechanical Turk and Google LLC coexisting with innovative startups offering specialized labeling solutions. The market is segmented by data type (image, text, video, audio), annotation method (manual, automated), and industry vertical, reflecting the diverse needs of different AI projects. Key challenges include data privacy concerns, ensuring data quality and consistency, and the shortage of skilled annotators, all of which weigh on market growth and require continuous innovation and strategic investment to address.

    Despite these challenges, the Data Labeling Tools market shows strong potential for continued expansion. The forecast period (2025-2033) anticipates a significant increase in market value, fueled by ongoing technological advancements, wider adoption of AI across sectors, and rising demand for high-quality data. The market is expected to see increased consolidation as larger players acquire smaller companies to strengthen their market position and technological capabilities. Furthermore, the development of more sophisticated and automated labeling tools will continue to drive efficiency and reduce costs, making these tools accessible to a broader range of users and further fueling market growth. We anticipate that the focus on improving the accuracy and speed of data labeling will be paramount in shaping the future landscape of this dynamic market.

  3. PC-Urban Outdoor dataset for 3D Point Cloud semantic segmentation

    • researchdata.edu.au
    Updated 2021
    Cite
    Ajmal Mian; Micheal Wise; Naveed Akhtar; Muhammad Ibrahim; Computer Science and Software Engineering (2021). PC-Urban Outdoor dataset for 3D Point Cloud semantic segmentation [Dataset]. http://doi.org/10.21227/FVQD-K603
    Explore at:
    Dataset updated
    2021
    Dataset provided by
    The University of Western Australia
    IEEE DataPort
    Authors
    Ajmal Mian; Micheal Wise; Naveed Akhtar; Muhammad Ibrahim; Computer Science and Software Engineering
    Description

    The proposed dataset, termed PC-Urban (Urban Point Cloud), is captured with a 64-channel Ouster LiDAR sensor installed on an SUV driven through downtown Perth, Western Australia (WA), Australia. The dataset comprises over 4.3 billion points captured across 66K sensor frames. The labelled data is organized as registered and raw point cloud frames, where the former combines varying numbers of registered consecutive frames. We provide 25 class labels in the dataset, covering 23 million points and 5K instances. Labelling is performed with PC-Annotate and can easily be extended by end users employing the same tool.

    The data is organized into unlabelled and labelled 3D point clouds. The unlabelled data is provided in .PCAP format, the direct output format of the Ouster LiDAR sensor. Raw frames are extracted from the recorded .PCAP files as Ply and Excel files using the Ouster Studio software. Labelled 3D point cloud data consists of registered or raw point clouds. A labelled point cloud is a combination of Ply, Excel, Labels, and Summary files. The Ply file contains the X, Y, Z values of each point along with its colour information, while the Excel file contains the X, Y, Z values, Intensity, Reflectivity, Ring, Noise, and Range of each point. These attributes can be useful for semantic segmentation using deep learning algorithms. The Labels and Label Summary files have been explained in the previous section.

    One GB of raw data contains nearly 1,300 raw frames, whereas 66,425 frames are provided in the dataset, each comprising 65,536 points; hence the 4.3 billion points captured with the Ouster LiDAR sensor. Annotation of 25 general outdoor classes is provided: car, building, bridge, tree, road, letterbox, traffic signal, light pole, rubbish bin, cycles, motorcycle, truck, bus, bushes, road sign board, advertising board, road divider, road lane, pedestrians, side-path, wall, bus stop, water, zebra crossing, and background. In the released data, a total of 143 scenes are annotated, including both raw and registered frames.
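    Since the description splits the per-point attributes across the Ply and Excel files, the sketch below shows one way a single labelled frame might be merged into a feature matrix for a semantic segmentation model. The file names, spreadsheet column headers, and label-file format are assumptions inferred from the attribute names above, not the dataset's documented schema.

    ```python
    import numpy as np
    import open3d as o3d   # reads the Ply geometry and colour
    import pandas as pd    # reads the per-point attribute spreadsheet

    # Hypothetical file names for one labelled frame.
    pcd = o3d.io.read_point_cloud("frame_000001.ply")            # X, Y, Z + colour
    xyz = np.asarray(pcd.points)                                  # shape (N, 3)
    rgb = np.asarray(pcd.colors)                                  # shape (N, 3)

    attrs = pd.read_excel("frame_000001.xlsx")                    # per-point LiDAR attributes
    extra = attrs[["Intensity", "Reflectivity", "Ring", "Noise", "Range"]].to_numpy()

    # Label file format is assumed: one integer class ID (0-24) per point.
    labels = np.loadtxt("frame_000001_labels.txt", dtype=int)

    # One feature row per point: geometry, colour, and LiDAR attributes.
    features = np.hstack([xyz, rgb, extra])
    assert features.shape[0] == labels.shape[0]   # 65,536 points per frame, per the description
    ```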

