14 datasets found
  1. CarDA - Car door Assembly Activities Dataset

    • zenodo.org
    bin, pdf
    Updated Jan 15, 2025
    Cite
    Konstantinos Papoutsakis; Nikolaos Bakalos; Athena Zacharia; Maria Pateraki (2025). CarDA - Car door Assembly Activities Dataset [Dataset]. http://doi.org/10.5281/zenodo.14644367
    Explore at:
    Available download formats: pdf, bin
    Dataset updated
    Jan 15, 2025
    Dataset provided by
    Zenodo (http://zenodo.org/)
    Authors
    Konstantinos Papoutsakis; Nikolaos Bakalos; Athena Zacharia; Maria Pateraki
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    The CarDA dataset [1] (Car Door Assembly dataset) has been designed and captured to provide a comprehensive, multi-modal resource for analyzing car door assembly activities performed by trained line workers in realistic assembly lines.

    It comprises a set of time-synchronized multi-camera RGB-D videos and human motion capture data acquired during car door assembly activities performed by real line workers in a real manufacturing environment.

    Deployment environment:

    The use-case scenario is a real-world assembly line workplace in the automotive manufacturing industry, serving as the deployment environment. In this context, line workers simulate the real car door assembly workflow using the same prompts, sequences, and tools, under ergonomic and environmental conditions very similar to those on existing factory shop floors.

    The assembly line involves a conveyor belt divided into three virtually separated work areas that correspond to three assembly workstations (WS10, WS20, and WS30). The belt moves at a low, constant speed, supporting cart-mounted car doors and material storage. One line worker is assigned to each workstation, and all workers assemble car doors as the belt moves. At each workstation, the worker completes a workstation-specific set of assembly actions, referred to as a task cycle, lasting approximately 4 minutes. Upon successful completion of a task cycle, the cart travels to the virtually defined area of the subsequent workstation, where another line worker continues the assembly process during a new task cycle. Task cycles are repeated continuously throughout the worker's shift.

    Data acquisition:

    Data acquisition involves low-cost, passive RGB-D camera sensors installed at stationary locations alongside the car door assembly line, together with a motion capture system, capturing time-synchronized sequences of images and motion capture data during car door assembly activities performed by real line workers.

    Two stationary StereoLabs ZED 2 stereo cameras were installed at each of the three workstations of the car door assembly line. The two workstation-specific cameras are positioned bilaterally, one on each side of the conveyor belt, at the center of the area covered by that workstation.

    The pair of RGB-D sensors was used to acquire stereo color and depth image sequences during car door task cycle executions. Each recording comprises time-synchronized RGB (color) and depth image sequences captured throughout a task cycle execution at 30 frames per second (fps).

    At the same time, the line worker wore a wearable Xsens MVN Link suit during work activities to acquire time-synchronized 3D motion capture data at 60 fps.

    Note: Time synchronization between pairs of RGB-D (.svo) recordings (pairs captured simultaneously during an assembly task cycle by the inXX and outXX cameras installed at workstation wsXX) is guaranteed and relies on the StereoLabs ZED SDK acquisition software. Time synchronization between samples of the RGB-D and mp4 videos (30 fps) and the acquired motion capture data (60 fps) was performed manually, with the starting frame/time of the video as the reference time. We have observed that some time discrepancies between data samples of the two modalities might occur after the first 40-50 seconds in some recordings.
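    Under the manual-alignment scheme described above, mapping between the two frame rates can be sketched as follows (the function name and offset handling are illustrative assumptions, not part of the dataset's tooling):

```python
def mocap_index(video_frame: int, video_fps: int = 30, mocap_fps: int = 60,
                offset_s: float = 0.0) -> int:
    """Map a 30 fps video frame index to the nearest 60 fps mocap sample.

    offset_s is a manually estimated start-time offset in seconds; the
    dataset note implies such manual alignment, using the video's first
    frame as the reference time.
    """
    t = video_frame / video_fps + offset_s   # timestamp of the video frame
    return round(t * mocap_fps)              # nearest mocap sample index

# Example: video frame 90 (3 s into a 30 fps clip) -> mocap sample 180
```

    Because of the drift noted after 40-50 seconds, a single constant offset may not hold for a whole recording; per-segment re-estimation of offset_s would be one way to compensate.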

    CarDA Dataset:

    The dataset has been split into two subsets, A and B.

    The two subsets comprise data acquired during different periods using the same multi-camera system in the same manufacturing environment.

    Subset A contains recordings of RGB-D videos, mp4 videos, and 3D human motion capture data (using the Xsens MVN Link suit) acquired during car door assembly activities in all three workstations.

    Subset B contains recordings of RGB-D videos and mp4 videos acquired during car door assembly activities in all three workstations.

    CarDA subset A

    It contains:

      • RGB-D data acquired using StereoLabs ZED 2 sensors, in .svo format
      • mp4 videos (30 fps) extracted from the .svo files, using the left view of each stereo camera
      • 3D human pose data (ground truth) captured using the Movella Xsens MVN Link motion capture system (60 fps), in .bvh format
      • Annotation data (xls file format):
        • Ground truth for the temporal segmentation and classification of car door assembly actions (subgoals) during task cycle executions, annotated by personnel working directly on the assembly line.
        • Ground truth data on the duration of basic ergonomic postures based on the EAWS ergonomic screening tool: Two experts in manufacturing and ergonomics performed manual annotations related to the EAWS screening tool.
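    As an illustration of how such posture-duration ground truth might be consumed, here is a minimal sketch; the segment tuples and posture labels below are invented for the example and are not the dataset's actual annotation schema:

```python
from collections import defaultdict

def posture_durations(segments):
    """Sum the duration of each labeled posture over a task cycle.

    segments: iterable of (start_s, end_s, label) tuples, e.g. parsed
    from the annotation xls (the schema here is hypothetical).
    """
    totals = defaultdict(float)
    for start, end, label in segments:
        totals[label] += end - start
    return dict(totals)

# Hypothetical EAWS-style posture segments within one task cycle
demo = [(0.0, 12.5, "standing"), (12.5, 20.0, "bent_forward"),
        (20.0, 35.0, "standing")]
# posture_durations(demo) -> {"standing": 27.5, "bent_forward": 7.5}
```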

    CarDA subset A files:

      • ws10 - svo - mp4 - bvh.rar
        Five assembly task cycle executions recorded in WS10, containing pairs of RGB-D videos (.svo) acquired by two different StereoLabs ZED 2 stereo cameras and .bvh motion capture data acquired using the Xsens MVN Link system. Annotation data are also available.
      • ws20 - svo - mp4 - bvh.rar
        Four assembly task cycle executions recorded in WS20, containing pairs of RGB-D videos (.svo) acquired by two different StereoLabs ZED 2 stereo cameras and .bvh motion capture data acquired using the Xsens MVN Link system. Annotation data are also available.
      • ws30 - svo - mp4 - bvh.rar
        Four assembly task cycle executions recorded in WS30, containing pairs of RGB-D videos (.svo) acquired by two different StereoLabs ZED 2 stereo cameras and .bvh motion capture data acquired using the Xsens MVN Link system. Annotation data are also available.

    CarDA subset B

    It contains:

      • RGB-D data acquired using StereoLabs ZED 2 sensors, in .svo format
      • mp4 videos (30 fps) extracted from the .svo files, using the left view of each stereo camera
      • Annotation data (xls file format):

        • Ground truth for the temporal segmentation and classification of car door assembly actions (subgoals) during task cycle executions, annotated by personnel working directly on the assembly line.
        • Ground truth data on the duration of basic ergonomic postures based on the EAWS ergonomic screening tool: Two experts in manufacturing and ergonomics performed manual annotations related to the EAWS screening tool.

    CarDA subset B files:

      • ws10 - svo - mp4.rar
        Three pairs of RGB-D videos (.svo) acquired by two different StereoLabs ZED 2 stereo cameras placed in the real workplace are provided.

      • ws20 - svo - mp4.rar
        Six pairs of RGB-D videos (.svo) acquired by two different StereoLabs ZED 2 stereo cameras placed in the real workplace are provided.

      • ws30 - svo - mp4.rar
        Three pairs of RGB-D videos (.svo) acquired by two different StereoLabs ZED 2 stereo cameras placed in the real workplace are provided.

    Contact:

    Konstantinos Papoutsakis, PhD: papoutsa@ics.forth.gr

    Maria Pateraki: mpateraki@mail.ntua.gr
    Assistant Professor | National Technical University of Athens
    Affiliated Researcher | Institute of Computer Science | FORTH

    References:

    [1] Konstantinos Papoutsakis, Nikolaos Bakalos, Konstantinos Fragkoulis, Athena Zacharia, Georgia Kapetadimitri, and Maria Pateraki. A vision-based framework for human behavior understanding in industrial assembly lines. In Proceedings of the European Conference on Computer Vision (ECCV) Workshops - T-CAP 2024 Towards a Complete Analysis of People: Fine-grained Understanding for Real-World Applications, 2024.

  2. CarDA - Car door Assembly Activities Dataset

    • explore.openaire.eu
    Updated Aug 25, 2024
    Cite
    Konstantinos Papoutsakis; Nikolaos Bakalos; Athena Zacharia; Maria Pateraki (2024). CarDA - Car door Assembly Activities Dataset [Dataset]. http://doi.org/10.5281/zenodo.13370888
    Explore at:
    Dataset updated
    Aug 25, 2024
    Authors
    Konstantinos Papoutsakis; Nikolaos Bakalos; Athena Zacharia; Maria Pateraki
    Description

    The proposed multi-modal dataset for car door assembly activities, noted as CarDA [1], comprises a set of time-synchronized multi-camera RGB-D videos and motion capture data acquired during car door assembly activities performed by real line workers in a real manufacturing environment.

    [1] Konstantinos Papoutsakis, Nikolaos Bakalos, Konstantinos Fragkoulis, Athena Zacharia, Georgia Kapetadimitri, and Maria Pateraki. A vision-based framework for human behavior understanding in industrial assembly lines. In Proceedings of the European Conference on Computer Vision (ECCV) Workshops - T-CAP 2024 Towards a Complete Analysis of People: Fine-grained Understanding for Real-World Applications, 2024.

    CarDA subset A contains visual data in the form of .svo files (RGB-D acquired using StereoLabs ZED 2 sensors), mp4 videos, .bvh files for 3D human pose data (ground truth), and annotation data (to be added in v2 of the dataset).

    CarDA subset B contains visual data in the form of .svo files (RGB-D acquired using StereoLabs ZED 2 sensors), mp4 videos, and annotation data.

      • ws10 - svo - mp4
        Three pairs of RGB-D videos (.svo) acquired by two different StereoLabs ZED 2 stereo cameras placed in the real workplace are provided. Each pair demonstrates a complete car door task cycle for workstation WS10 of the assembly line. MP4 videos, extracted using the left view of each stereo camera, are also available. Annotation data for the task cycles are provided in the xls file, covering the temporal segmentation and semantics of the assembly activities performed and the duration for which any of the supported EAWS-based postures occurred during an assembly activity.
      • ws20 - svo - mp4
        Six pairs of RGB-D videos (.svo) acquired by two different StereoLabs ZED 2 stereo cameras placed in the real workplace are provided. Each pair demonstrates a complete car door task cycle for workstation WS20 of the assembly line. MP4 videos, extracted using the left view of each stereo camera, are also available. Annotation data for the task cycles are provided in the xls file, covering the temporal segmentation and semantics of the assembly activities performed and the duration for which any of the supported EAWS-based postures occurred during an assembly activity.
      • ws30 - svo - mp4
        Three pairs of RGB-D videos (.svo) acquired by two different StereoLabs ZED 2 stereo cameras placed in the real workplace are provided. Each pair demonstrates a complete car door task cycle for workstation WS30 of the assembly line. MP4 videos, extracted using the left view of each stereo camera, are also available. Annotation data for the task cycles are provided in the xls file, covering the temporal segmentation and semantics of the assembly activities performed and the duration for which any of the supported EAWS-based postures occurred during an assembly activity.

  3. Yolov5_seg Dataset

    • universe.roboflow.com
    zip
    Updated Sep 2, 2023
    Cite
    Segmentation yolov5 (2023). Yolov5_seg Dataset [Dataset]. https://universe.roboflow.com/segmentation-yolov5/yolov5_seg-tm3yy/dataset/2
    Explore at:
    Available download formats: zip
    Dataset updated
    Sep 2, 2023
    Dataset authored and provided by
    Segmentation yolov5
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Variables measured
    Screw Polygons
    Description

    Here are a few use cases for this project:

    1. Hardware Manufacturing Quality Control: The model can be used to help hardware manufacturing companies efficiently sort and identify screws during production and post-production stages. Using "Yolov5_seg", they can ensure the right type of screws are packaged and shipped - eliminating human error.

    2. Automated Assembly Lines: The model can support automated assembly lines, especially those involving machines that need to pick up and use specific types of screws. It could distinguish between vertical and horizontal screws, assisting in more precise and efficient production processes.

    3. Inventory Management in Construction and Engineering Fields: This model can facilitate automatic counting and classification of screws, aiding in maintaining an accurate inventory. Stored images of screws can be used to identify type and calculate quantities, helping to prevent supply shortages or overstocking.

    4. Education and Training Tools: Computer vision models such as "Yolov5_seg" can be used in educational resources or training tools to help students or new workers learn to identify different classes of screws easily.

    5. Recycling Processes: The model could be used to sort screws during the disassembly of discarded appliances or machinery, supporting recycling processes by identifying types of screws and segregating them for reuse or proper disposal.

  4. Manufacturing Defects - Industry Dataset

    • kaggle.com
    Updated Oct 19, 2023
    Cite
    Gabriel Santello (2023). Manufacturing Defects - Industry Dataset [Dataset]. https://www.kaggle.com/datasets/gabrielsantello/manufacturing-defects-industry-dataset
    Explore at:
    Croissant. Croissant is a format for machine-learning datasets; learn more at mlcommons.org/croissant.
    Dataset updated
    Oct 19, 2023
    Dataset provided by
    Kaggle (http://kaggle.com/)
    Authors
    Gabriel Santello
    Description

    Defect sampling is used in industrial settings to determine the types and amounts of defects in manufactured items. Items at various stages of production are removed from the process and inspected for defects. Sustained testing allows operations managers to discover whether some part of the manufacturing process is failing to meet performance criteria and product standards. To minimize manufacturing defects, early detection and problem resolution are critical.

    In the current sampling plan, one component from the production line is randomly selected every 15 minutes. Each component is inspected and tested for major and minor defects. Major defects, which affect component performance, must be addressed immediately. Fortunately, major defects are rare and are generally contained and corrected early in the process. Minor defects, such as nicks and scratches, are those that affect the appearance of a component but not its functionality.

    The data set contains ten days of data on minor defects. Each day, one item is tested every fifteen minutes during an eight-hour shift. The variables in the data set are:

      • Day - Day of the test: 1–10
      • Sample - Time of day that the sample was taken, in military time (e.g., 13:00 is 1 pm)
      • Defects - Number of minor defects detected on the sampled item
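    For illustration, a minimal sketch of summarizing records with this Day/Sample/Defects layout (the rows below are invented, not taken from the dataset):

```python
from collections import defaultdict

def defects_per_day(rows):
    """Aggregate minor-defect counts by day.

    rows: iterable of dicts with keys 'Day', 'Sample' (military-time
    string), and 'Defects', mirroring the variables described above.
    """
    totals = defaultdict(int)
    for row in rows:
        totals[row["Day"]] += row["Defects"]
    return dict(totals)

# Hypothetical samples: two inspections on day 1, one on day 2
rows = [{"Day": 1, "Sample": "08:00", "Defects": 2},
        {"Day": 1, "Sample": "08:15", "Defects": 0},
        {"Day": 2, "Sample": "08:00", "Defects": 1}]
# defects_per_day(rows) -> {1: 2, 2: 1}
```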

  5. California State Assembly Districts Map 2020

    • gis.data.ca.gov
    • data.ca.gov
    Updated Feb 9, 2023
    Cite
    California Department of Technology (2023). California State Assembly Districts Map 2020 [Dataset]. https://gis.data.ca.gov/datasets/california-state-assembly-districts-map-2020/about
    Explore at:
    Dataset updated
    Feb 9, 2023
    Dataset authored and provided by
    California Department of Technology
    Description

    Final approved map by the 2020 California Citizens Redistricting Commission for the California State Assembly; the authoritative and official delineations of the California State Assembly drawn during the 2020 redistricting cycle. The Citizens Redistricting Commission for the State of California has created statewide district maps for the State Assembly, State Senate, State Board of Equalization, and United States Congress in accordance with the provisions of Article XXI of the California Constitution. The Commission has approved the final maps and certified them to the Secretary of State.

    Line-drawing criteria included population equality as required by the U.S. Constitution and the Federal Voting Rights Act, geographic contiguity, geographic integrity, geographic compactness, and nesting. Geography was defined by U.S. Census Block geometry. The 80 Assembly districts have an ideal population of around 500,000 people each, and in consideration of population equality, the Commission chose to limit the population deviation range to as close to zero percent as practicable. With districts of this size, the Commission was able to respect many local communities of interest and group similar communities; however, it was more difficult to keep densely populated counties, cities, neighborhoods, and larger communities of interest whole, given the district size and the correspondingly small allowable population deviation.

  6. Defect Detection Dataset

    • universe.roboflow.com
    zip
    Updated Sep 3, 2024
    Cite
    Dent (2024). Defect Detection Dataset [Dataset]. https://universe.roboflow.com/dent-ydn9e/defect-detection-rhju6
    Explore at:
    Available download formats: zip
    Dataset updated
    Sep 3, 2024
    Dataset authored and provided by
    Dent
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Variables measured
    Dents Bounding Boxes
    Description

    Here are a few use cases for this project:

    1. Manufacturing Quality Control: The "Defect Detection" model can be used in production lines to inspect products, such as bottles or metallic containers, in real time. The model can automatically flag or reject items with unacceptable dents and route those with marginal dents for further inspection, ensuring that only items in acceptable condition reach consumers.

    2. Automotive Industry: This model can be employed in the automotive sector for detecting dents and assessing the extent of damage in vehicles post-manufacturing or after collisions. It can help workshops and insurance companies estimate repair costs by classifying the severity of the dent.

    3. Warehousing and Storage: The "Defect Detection" model can be used to monitor the quality and integrity of products during storage and handling in warehouses. Items with severe or marginal dents can be separated, and the cause of the damage can be investigated to prevent similar issues in the future.

    4. Packaging Industry: The model can be applied to check the quality of packaging materials, such as cans or cardboard boxes, before they are used to package products. By identifying the dent class, businesses can decide whether to use or discard the packaging material, ensuring a better customer experience.

    5. Public Transportation Maintenance: The "Defect Detection" model can assist in identifying dents and damages on the exteriors of trains, buses, or trams. By classifying the dents, maintenance teams can prioritize repairs and replacements of the affected parts, ensuring the safety and appearance of public transport vehicles.

  7. Data from: Derniere Dataset

    • universe.roboflow.com
    zip
    Updated May 8, 2022
    Cite
    dkhili (2022). Derniere Dataset [Dataset]. https://universe.roboflow.com/dkhili-qi9vw/derniere-dataset
    Explore at:
    Available download formats: zip
    Dataset updated
    May 8, 2022
    Dataset authored and provided by
    dkhili
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Variables measured
    Vrai FALSE Bounding Boxes
    Description

    Here are a few use cases for this project:

    1. Device Diagnostic Tool: The model can be used in a diagnostic application to analyze images of devices and determine if a particular function or feature is working (TRUE) or not (FALSE), assisting in automation of quality control processes in electronic production lines.

    2. User-Interface Testing: Software developers or testers could use this model to automate the process of user-interface testing. By identifying whether certain elements in the interface meet the specified conditions, the model can help speed up the process of UI validation.

    3. Smart Home Automation: The model could be used on a home automation system to determine the status (ON/OFF or TRUE/FALSE) of various electronic devices or components in a home, such as heaters, refrigerators, lights etc., helping in efficient energy management.

    4. Accessibility Assistance Devices: The AI model can be integrated into assistive technologies for individuals with disabilities. For example, it can be used to detect the state of various household objects, helping to enhance the autonomy of visually impaired people.

    5. E-Learning Platforms: The model can be used in online learning platforms which require students to interact with virtual devices or simulations. It can check and validate students' performed operations as TRUE or FALSE, thereby giving instant feedback.

  8. Amogyfinder Dataset

    • universe.roboflow.com
    zip
    Updated Jan 10, 2023
    Cite
    thejmlgame (2023). Amogyfinder Dataset [Dataset]. https://universe.roboflow.com/thejmlgame/amogyfinder/dataset/2
    Explore at:
    Available download formats: zip
    Dataset updated
    Jan 10, 2023
    Dataset authored and provided by
    thejmlgame
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Variables measured
    Amongus Bounding Boxes
    Description

    Here are a few use cases for this project:

    1. Game Development & Enhancement: The AmogyFinder can be used by video game developers who are working on games related to AmongUs characters. By the identification and segmentation of game characters in real-time, it can help to build character-based interactive features or improve current character dynamics.

    2. Animation & Content Creation: Content creators, especially those who are making animations, cartoons or digital artworks related to AmongUs can use this model to quickly identify and segregate characters for creating dynamic storylines or for scene composition.

    3. Educational Tool for Children: Given the popularity of AmongUs among younger generations, this model can be used as an interactive tool within educational apps to help children learn basic concepts of grouping, identifying and distinguishing between different characters.

    4. Security Surveillance in Gaming Conventions: In events like gaming conventions where costumes of game characters like AmongUs are popular, this model can be used in surveillance systems to flag or track specific characters for security or crowd management purposes.

    5. Merchandising & Manufacturing: Toy manufacturers or merchandisers can use this model to identify specific AmongUs characters from their product assembly line for quality control or tracking their production ratio.

  9. Barbee Pharm Dataset

    • universe.roboflow.com
    zip
    Updated Apr 13, 2022
    Cite
    venkat (2022). Barbee Pharm Dataset [Dataset]. https://universe.roboflow.com/venkat-da2z9/barbee-pharm
    Explore at:
    Available download formats: zip
    Dataset updated
    Apr 13, 2022
    Dataset authored and provided by
    venkat
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Variables measured
    Boxes Bounding Boxes
    Description

    Here are a few use cases for this project:

    1. Retail Inventory Management: The barbee-pharm model can be used by retail stores, especially pharmacies, to track and manage their inventory levels. By identifying the different color-coded boxes, the system can determine which products are well-stocked or running low, making it easier for store employees to maintain accurate inventory records.

    2. Package Sorting in Warehouses: Warehouses dealing with color-coded packages can integrate the barbee-pharm model to automate their sorting process. This would help to increase the efficiency and speed of sorting, and reduce manual labor requirements.

    3. Pharmaceutical Production Quality Control: The model could be employed for quality control in pharmaceutical production lines. By detecting any inconsistencies in box color-coding, the system could prevent packaging errors and ensure that only correctly labeled products are shipped to retailers and customers.

    4. Visual Aid for Visually Impaired Individuals: The barbee-pharm model could be integrated into a mobile app or wearable device to help visually impaired individuals navigate through environments such as grocery stores and pharmacies. Using the model to identify color-coded boxes, the system could provide audio guidance to assist users in finding specific products.

    5. Disaster Relief Logistics: The model can be employed in disaster relief operations to identify and categorize medical supplies quickly. The color-coded boxes can be used to prioritize critical items such as medication, first aid kits, and other essential medical resources, enabling more efficient allocation and distribution of supplies during crisis situations.

  10. Induction Dataset

    • universe.roboflow.com
    zip
    Updated Feb 10, 2023
    Cite
    ea (2023). Induction Dataset [Dataset]. https://universe.roboflow.com/ea/induction/dataset/2
    Explore at:
    Available download formats: zip
    Dataset updated
    Feb 10, 2023
    Dataset authored and provided by
    ea
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Variables measured
    Stator Bounding Boxes
    Description

    Here are a few use cases for this project:

    1. Industrial Automation: INDUCTION can be used in factories and production lines where motors are produced. By automating the process of identifying different types of stators, it can streamline the assembly process and reduce the chance of manual errors.

    2. Maintenance and Repairs: The model can be used by field engineers and technicians to correctly identify stators in the electric motors they are repairing or maintaining, thus reducing the risk of using wrong parts or methods.

    3. Stator Quality Control: Manufacturers can use INDUCTION to quickly identify any faults or defects during the production process, ensuring the quality of their products and reducing waste.

    4. Training and Education: Educational institutions and companies could leverage the model in training sessions for teaching students and new employees about various types of stators, aiding the learning process through visual identification.

    5. Recycling and Disposal: INDUCTION could assist in sorting stators during recycling processes, improving efficiency and ensuring the components are properly categorized for disposal or reuse.

  11. Iiwa_krc4 Dataset

    • universe.roboflow.com
    zip
    Updated Apr 6, 2023
    Cite
    yolo training (2023). Iiwa_krc4 Dataset [Dataset]. https://universe.roboflow.com/yolo-training-dwcmd/iiwa_krc4/dataset/2
    Explore at:
    Available download formats: zip
    Dataset updated
    Apr 6, 2023
    Dataset authored and provided by
    yolo training
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Variables measured
    Iiwa Bounding Boxes
    Description

    Here are a few use cases for this project:

    1. Industrial Automation: This model can be used for identifying, monitoring, and controlling different parts of a KUKA iiwa robot. This includes tracking the status of the control pendant, the locking mechanism, and the overall robot, supporting real-time optimization of industrial automation processes.

    2. Robotics Research & Education: Using the 'iiwa_krc4' model, students and researchers can gain a better understanding of different classes of iiwa robots and their components. This application can be particularly helpful in robotics lab sessions, where learners need to visually identify and differentiate the components and operations, such as unlocking or activating the robot controller.

    3. Inspection & Maintenance: This model can be used to develop systems for remote or automatic inspection and maintenance of iiwa robots. For example, it could detect when locks are engaged or not, or if the teach pendant (used for programming and controlling the robots) is in place or not, thereby improving servicing efficiency.

    4. Assembly Line Optimization: The model can help optimize assembly line operations by visually tracking and identifying if the iiwa robot is active, if the robot controller is functioning well, or if the teach pendant is being utilized effectively.

    5. Advanced Manufacturing Training Simulations: This model can be used to create training simulations for new employees in advanced manufacturing settings. Trainees can learn how to identify iiwa robots and their components and understand their operations, such as how to lock/unlock the robot and use the teach pendant and robot controller.

  12. Dataset 1 Con Segmentation Dataset

    • universe.roboflow.com
    zip
    Updated Oct 4, 2023
    Cite
    dataset1 (2023). Dataset 1 Con Segmentation Dataset [Dataset]. https://universe.roboflow.com/dataset1-yxabc/dataset-1-con-segmentation/dataset/5
    Explore at:
    Available download formats: zip
    Dataset updated
    Oct 4, 2023
    Dataset authored and provided by
    dataset1
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Variables measured
    Crack Bounding Boxes
    Description

    Here are a few use cases for this project:

    1. Aviation Maintenance: The model can be used to automate the inspection routine of aircraft cockpits and other parts of the aircraft, detecting five common types of structural damage. Early detection and subsequent repair contribute to safer and more efficient aviation operations.

    2. Automobile Industry: The AI model can be applied to assess and inspect the condition of cars in production lines or used cars, identifying any imperfections such as dents, cracks, scratches or paint-offs before the car goes to market.

    3. Building Inspection: In civil engineering, the model could be used to monitor the structural health of buildings or bridges, using its crack and dent detection capabilities to identify potential structural issues in a timely manner.

    4. Insurance Claim Processing: Insurance companies could use this model to streamline claim processing by automatically identifying damage in submitted pictures of insured assets such as cars, homes, or commercial properties.

    5. Artwork Preservation: Art galleries and museums could use this model to identify early signs of damage on art pieces (paint-off or cracks) and take preventative measures to help save valuable pieces of art.

  13. Kp_2 Dataset

    • universe.roboflow.com
    zip
    Updated Apr 14, 2023
    Cite
    dimgo1979gmailcom (2023). Kp_2 Dataset [Dataset]. https://universe.roboflow.com/dimgo1979gmailcom/kp_2/model/19
    Explore at:
    zipAvailable download formats
    Dataset updated
    Apr 14, 2023
    Dataset authored and provided by
    dimgo1979gmailcom
    License

    MIT Licensehttps://opensource.org/licenses/MIT
    License information was derived automatically

    Variables measured
    KP Bounding Boxes
    Description

    Here are a few use cases for this project:

    1. Medical Imaging: KP_2 can be applied in medical imaging for the identification and classification of different diseases represented by 'NS', 'GB', 'PB', etc. For instance, these could represent different types of brain anomalies, and the 'FULL' designation could denote a severe stage of the disease. Its use can help doctors and medical personnel make more accurate diagnoses and treatment plans.

    2. Quality Control in Manufacturing: This model can be used to identify and classify different components or parts in a manufacturing assembly line represented by 'KP', 'PB', 'NTB', etc., detecting any faulty pieces. The 'FULL' subcategory could represent the completed product. KP_2 could greatly enhance production efficiency and quality control.

    3. Astronomy and Space Research: If these abbreviations refer to different celestial bodies or phenomena like 'NS' for neutron star, 'GB' for globular cluster, 'PB' for pulsar binary etc., this model can be used to classify images of space, helping in more efficient astronomical research and discovery.

    4. Document Classification: In cases where 'KP', 'NPB', 'NTB' etc. represent different categories of documents or textual data, KP_2 can assist in sorting and categorizing documents for better information management.

    5. Ecological Conservation: If these classes represent different species or groups of animals ('T' for tigers, 'GB' for grizzly bears, 'PB' for polar bears, etc.), KP_2 could be used in wildlife monitoring programs to identify and track various animal populations, contributing to their conservation.

  14. Del_xb Dataset

    • universe.roboflow.com
    zip
    Updated Jan 20, 2024
    Cite
    FOCR (2024). Del_xb Dataset [Dataset]. https://universe.roboflow.com/focr/del_xb
    Explore at:
    zipAvailable download formats
    Dataset updated
    Jan 20, 2024
    Dataset provided by
    Friends of Cancer Researchhttps://friendsofcancerresearch.org/
    Authors
    FOCR
    License

    Attribution 4.0 (CC BY 4.0)https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Variables measured
    Bhjkre Polygons
    Description

    Here are a few use cases for this project:

    1. Waste Sorting: This model could be employed in automated waste sorting systems for recognizing and classifying different waste types based on the labels detected on white plastic bags, enhancing the efficiency of recycling efforts.

    2. Retail Inventory Management: The "del_xb" model can be used for identifying and tracking items in a retail environment based on their labels. It can assist in maintaining accurate stock counts, preventing theft, and reordering inventory.

    3. Product Comprehension for Visually Impaired: The model can be integrated into applications designed to help visually impaired individuals identify products by reading and describing the labels on them.

    4. Quality Control in Manufacturing: The model can be used on production lines to ensure that products are correctly labeled before shipment, helping detect and correct mislabeling errors in real time.

    5. Shelf Stocking Assistance: The model can be utilized to assist in the efficient restocking of shelves in a supermarket or warehouse setting by identifying different products via their labels.

CarDA - Car door Assembly Activities Dataset



Data acquisition:

Data acquisition relies on low-cost, passive RGB-D camera sensors installed at stationary locations alongside the car door assembly line, together with a wearable motion capture system, yielding time-synchronized image sequences and motion capture data of real line workers performing car door assembly activities.

Two stationary StereoLabs ZED 2 stereo cameras were installed at each of the three workstations of the car door assembly line, positioned on opposite sides of the conveyor belt at the center of each workstation's area.

Each pair of RGB-D sensors was used to acquire stereo color and depth image sequences during car door task cycle executions. Each recording comprises time-synchronized RGB (color) and depth image sequences captured throughout a task cycle at 30 frames per second (fps).

At the same time, each line worker wore an XSens MVN Link suit during work activities, providing time-synchronized 3D motion capture data at 60 fps.

Note: Time synchronization between the pairs of RGB-D (.svo) recordings (captured simultaneously during an assembly task cycle by the inXX and outXX cameras installed at workstation wsXX) is guaranteed by the StereoLabs ZED SDK acquisition software. Time synchronization between the RGB-D/mp4 videos (30 fps) and the acquired motion capture data (60 fps) was performed manually, using the starting frame/time of the video as the reference time. Some time discrepancies between data samples of the two modalities may occur after the first 40-50 seconds in some recordings.
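
The manual alignment described above amounts to mapping each 30 fps video frame to the nearest 60 fps mocap sample, given an offset determined from the video's starting frame. A minimal sketch of this mapping (the function name and offset value are illustrative assumptions, not part of the CarDA tooling):

```python
# Minimal sketch: align 30 fps video frames with 60 fps mocap samples.
# The offset value and function name are illustrative assumptions,
# not part of the CarDA acquisition software.

VIDEO_FPS = 30.0   # RGB-D / mp4 recordings
MOCAP_FPS = 60.0   # XSens MVN Link data

def video_to_mocap_frame(video_frame: int, offset_s: float = 0.0) -> int:
    """Return the mocap frame index closest in time to a video frame.

    offset_s: manually determined shift (in seconds) between the two
    recordings, using the video's starting frame as the time reference.
    """
    t = video_frame / VIDEO_FPS + offset_s
    return round(t * MOCAP_FPS)

# With zero offset, every video frame maps to every second mocap sample,
# e.g. video frame 90 (3.0 s) -> mocap frame 180.
```

Such a fixed mapping only holds while the two clocks stay aligned, which is consistent with the drift noted above after the first 40-50 seconds of some recordings.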

CarDA Dataset:

The dataset has been split into two subsets, A and B.

Each comprises data acquired at different periods using the same multicamera system in the same manufacturing environment.

Subset A contains recordings of RGB-D videos, mp4 videos, and 3D human motion capture data (using the XSens MVN Link suit) acquired during car door assembly activities in all three workstations.

Subset B contains recordings of RGB-D videos and mp4 videos acquired during car door assembly activities in all three workstations.

CarDA subset A

It contains:

    • RGB-D videos acquired using StereoLabs ZED 2 sensors in .svo format
    • mp4 videos (30 fps) extracted from the .svo files (using the left camera of each stereo pair).
    • 3D human pose data (ground truth) captured using the Movella Xsens MVN Link motion capture system (60 fps) in .bvh format
    • Annotation data (xls file format):
      • Ground truth related to temporal segmentation and classification of car door assembly actions (subgoals) during task cycle executions, performed by personnel working directly on the assembly line for the CarDA dataset.
      • Ground truth data on the duration of basic ergonomic postures based on the EAWS ergonomic screening tool: Two experts in manufacturing and ergonomics performed manual annotations related to the EAWS screening tool.
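
For readers unfamiliar with the .bvh format, the MOTION section of each file declares a frame count and a sampling period, from which the recording's duration follows. A minimal, stdlib-only sketch of reading these fields (an illustrative assumption, not official dataset tooling):

```python
# Minimal sketch: read frame count and sampling period from the
# MOTION section of a .bvh file (assumes a well-formed file).

def bvh_motion_info(text: str) -> tuple[int, float, float]:
    """Return (n_frames, frame_time_s, duration_s) from BVH text."""
    n_frames, frame_time = 0, 0.0
    for line in text.splitlines():
        line = line.strip()
        if line.startswith("Frames:"):
            n_frames = int(line.split(":", 1)[1])
        elif line.startswith("Frame Time:"):
            frame_time = float(line.split(":", 1)[1])
    return n_frames, frame_time, n_frames * frame_time

# For the 60 fps XSens data, "Frame Time:" should be 1/60 ~ 0.0166667 s.
```

The skeleton hierarchy and per-frame joint channels follow the same MOTION header; dedicated BVH loaders handle those, but the two header fields above are enough to check a recording's length against its paired videos.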

CarDA subset A files:

    • ws10 - svo - mp4 - bvh.rar
      Five assembly task cycle executions recorded in WS10, each comprising a pair of RGB-D videos (.svo) acquired by the two StereoLabs ZED 2 stereo cameras, together with .bvh motion capture data acquired using the XSens MVN Link system. Annotation data are also available.
    • ws20 - svo - mp4 - bvh.rar
      Four assembly task cycle executions recorded in WS20, each comprising a pair of RGB-D videos (.svo) acquired by the two StereoLabs ZED 2 stereo cameras, together with .bvh motion capture data acquired using the XSens MVN Link system. Annotation data are also available.

    • ws30 - svo - mp4 - bvh.rar
      Four assembly task cycle executions recorded in WS30, each comprising a pair of RGB-D videos (.svo) acquired by the two StereoLabs ZED 2 stereo cameras, together with .bvh motion capture data acquired using the XSens MVN Link system. Annotation data are also available.

CarDA subset B

It contains:

    • RGB-D videos acquired using StereoLabs ZED 2 sensors in .svo format
    • mp4 videos (30 fps) extracted from the .svo files (using the left camera of each stereo pair).
    • Annotation data (xls file format):

      • Ground truth related to temporal segmentation and classification of car door assembly actions (subgoals) during task cycle executions, performed by personnel working directly on the assembly line for the CarDA dataset.
      • Ground truth data on the duration of basic ergonomic postures based on the EAWS ergonomic screening tool: Two experts in manufacturing and ergonomics performed manual annotations related to the EAWS screening tool.
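
Annotations of this kind pair each action or posture label with a start and end time, so per-label durations can be aggregated directly from the segments. A minimal sketch (the labels, times, and function name below are made-up illustrations; the actual xls layout may differ):

```python
# Minimal sketch: total duration per posture/action label from
# (start_s, end_s, label) segments, as found in segmentation-style
# annotation sheets. Labels and times below are illustrative only.

from collections import defaultdict

def durations_by_label(segments):
    """Sum the duration of each label over all its segments."""
    totals = defaultdict(float)
    for start_s, end_s, label in segments:
        totals[label] += end_s - start_s
    return dict(totals)

segments = [
    (0.0, 12.5, "standing"),
    (12.5, 20.0, "bent_forward"),
    (20.0, 31.0, "standing"),
]
print(durations_by_label(segments))
# -> {'standing': 23.5, 'bent_forward': 7.5}
```

Per-label totals of this form are the quantity that EAWS-style ergonomic screening scores are built on, which is why the duration ground truth is provided alongside the segment boundaries.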

CarDA subset B files:

    • ws10 - svo - mp4.rar
      Three pairs of RGB-D videos (.svo) acquired by the two StereoLabs ZED 2 stereo cameras installed in the real workplace.

    • ws20 - svo - mp4.rar
      Six pairs of RGB-D videos (.svo) acquired by the two StereoLabs ZED 2 stereo cameras installed in the real workplace.

    • ws30 - svo - mp4.rar
      Three pairs of RGB-D videos (.svo) acquired by the two StereoLabs ZED 2 stereo cameras installed in the real workplace.

Contact:

Konstantinos Papoutsakis, PhD: papoutsa@ics.forth.gr

Maria Pateraki: mpateraki@mail.ntua.gr
Assistant Professor | National Technical University of Athens
Affiliated Researcher | Institute of Computer Science | FORTH

References:

[1] Konstantinos Papoutsakis, Nikolaos Bakalos, Konstantinos Fragkoulis, Athena Zacharia, Georgia Kapetadimitri, and Maria Pateraki. A vision-based framework for human behavior understanding in industrial assembly lines. In Proceedings of the European Conference on Computer Vision (ECCV) Workshops - T-CAP 2024 Towards a Complete Analysis of People: Fine-grained Understanding for Real-World Applications, 2024.
