5 datasets found
  1. PedX Dataset

    • deepblue.lib.umich.edu
    Cite
    Kim, Wonhui; Ramanagopal, Manikandasriram Srinivasan; Barto, Charles; Yu, Ming-Yuan; Rosaen, Karl; Goumas, Nick; Vasudevan, Ram; Johnson-Roberson, Matthew, PedX Dataset [Dataset]. http://doi.org/10.7302/0fv2-nn47
    Dataset provided by
    Deep Blue Data
    Authors
    Kim, Wonhui; Ramanagopal, Manikandasriram Srinivasan; Barto, Charles; Yu, Ming-Yuan; Rosaen, Karl; Goumas, Nick; Vasudevan, Ram; Johnson-Roberson, Matthew
    License

    Attribution-NonCommercial 4.0 (CC BY-NC 4.0): https://creativecommons.org/licenses/by-nc/4.0/
    License information was derived automatically

    Time period covered
    Nov 30, 2017
    Description

    PedX is a large-scale multi-modal collection of pedestrians at complex urban intersections. The dataset provides high-resolution stereo images and LiDAR data with manual 2D and automatic 3D annotations. The data was captured using two pairs of stereo cameras and four Velodyne LiDAR sensors.

  2. Ped-X-Bench

    • huggingface.co
    Updated May 24, 2025
    Cite
    Apoorva Srinivasan (2025). Ped-X-Bench [Dataset]. https://huggingface.co/datasets/apoorvasrinivasan/Ped-X-Bench
    Dataset updated
    May 24, 2025
    Authors
    Apoorva Srinivasan
    License

    Apache License, v2.0: https://www.apache.org/licenses/LICENSE-2.0
    License information was derived automatically

    Description

    The apoorvasrinivasan/Ped-X-Bench dataset is hosted on Hugging Face and was contributed by the HF Datasets community.

  3. SAIVT-BuildingMonitoring

    • researchdatafinder.qut.edu.au
    Updated Jul 22, 2016
    Cite
    Dr Simon Denman (2016). SAIVT-BuildingMonitoring [Dataset]. https://researchdatafinder.qut.edu.au/individual/n47576
    Dataset updated
    Jul 22, 2016
    Dataset provided by
    Queensland University of Technology (QUT)
    Authors
    Dr Simon Denman
    License

    Attribution-ShareAlike 4.0 (CC BY-SA 4.0): https://creativecommons.org/licenses/by-sa/4.0/
    License information was derived automatically

    Description

    SAIVT-BuildingMonitoring

    Overview

    The SAIVT-BuildingMonitoring database contains footage from 12 cameras capturing a single work day at a busy university campus building. A portion of the database has been annotated for crowd counting and pedestrian throughput estimation, and is freely available for download. Contact Dr Simon Denman for more information.

    Licensing

    The SAIVT-BuildingMonitoring database is © 2015 QUT, and is licensed under the Creative Commons Attribution-ShareAlike 4.0 License.

    Attribution

    To attribute this database, use the citation of our paper, available on QUT ePrints:

    S. Denman, C. Fookes, D. Ryan, & S. Sridharan (2015) Large scale monitoring of crowds and building utilisation: A new database and distributed approach. In 12th IEEE International Conference on Advanced Video and Signal Based Surveillance, 25-28 August 2015, Karlsruhe, Germany.

    Acknowledgement in publications

    In addition to citing our paper, we kindly request that the following text be included in an acknowledgements section at the end of your publications:

    'We would like to thank the SAIVT Research Labs at Queensland University of Technology (QUT) for freely supplying us with the SAIVT-BuildingMonitoring database for our research'.

    Installing the SAIVT-BuildingMonitoring Database

    Download, join, and extract the following archives:

    Annotated Data

      Part 1 (2 GB, md5sum: 50e63a6ee394751fad75dc43017710e8)
      Part 2 (2 GB, md5sum: 49859f0046f0b15d4cf0cfafceb9e88f)
      Part 3 (2 GB, md5sum: b3c7386204930bc9d8545c1f4eb0c972)
      Part 4 (2 GB, md5sum: 4606fc090f6020b771f74d565fc73f6d)
      Part 5 (632 MB, md5sum: 116aade568ccfeaefcdd07b5110b815a)

    Full Sequences

      Part 1 (2 GB, md5sum: 068ed015e057afb98b404dd95dc8fbb3)
      Part 2 (2 GB, md5sum: 763f46fc1251a2301cb63b697c881db2)
      Part 3 (2 GB, md5sum: 75e7090c6035b0962e2b05a3a8e4c59e)
      Part 4 (2 GB, md5sum: 34481b1e81e06310238d9ed3a57b25af)
      Part 5 (2 GB, md5sum: 9ef895c2def141d712a557a6a72d3bcc)
      Part 6 (2 GB, md5sum: 2a76e6b199dccae0113a8fd509bf8a04)
      Part 7 (2 GB, md5sum: 77c659ab6002767cc13794aa1279f2dd)
      Part 8 (2 GB, md5sum: 703f54f297b4c93e53c662c83e42372c)
      Part 9 (2 GB, md5sum: 65ebdab38367cf22b057a8667b76068d)
      Part 10 (2 GB, md5sum: bb5f6527f65760717cd819b826674d83)
      Part 11 (2 GB, md5sum: 01a562f7bd659fb9b81362c44838bfb1)
      Part 12 (2 GB, md5sum: 5e4a0d4bb99cde17158c1f346bbbdad8)
      Part 13 (2 GB, md5sum: 9c454d9381a1c8a4e8dc68cfaeaf4622)
      Part 14 (2 GB, md5sum: 8ff2b03b22d0c9ca528544193599dc18)
      Part 15 (2 GB, md5sum: 86efac1962e2bef3afd3867f8dda1437)

    To rejoin the individual parts, use:

    cat SAIVT-BuildingMonitoring-AnnotatedData.tar.gz.* > SAIVT-BuildingMonitoring-AnnotatedData.tar.gz

    cat SAIVT-BuildingMonitoring-FullSequences.tar.gz.* > SAIVT-BuildingMonitoring-FullSequences.tar.gz
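    Before joining, the downloaded parts can be checked against the md5sums listed above, and the joined archives then need to be extracted. For example (standard commands, assuming the part files keep the names used above):

    # compare each printed checksum against the list above
    md5sum SAIVT-BuildingMonitoring-AnnotatedData.tar.gz.*
    md5sum SAIVT-BuildingMonitoring-FullSequences.tar.gz.*

    # extract the joined archives
    tar -xzf SAIVT-BuildingMonitoring-AnnotatedData.tar.gz
    tar -xzf SAIVT-BuildingMonitoring-FullSequences.tar.gz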

    At this point, you should have the following directory structure, and the SAIVT-BuildingMonitoring database is installed:

    SAIVT-BuildingMonitoring
    +-- AnnotatedData
        +-- P_Lev_4_Entry_Way_ip_107
            +-- Frames
                +-- Entry_ip107_00000.png
                +-- Entry_ip107_00001.png
                +-- ...
            +-- GroundTruth.xml
            +-- P_Lev_4_Entry_Way_ip_107-20140730-090000.avi
            +-- perspectivemap.xml
            +-- ROI.xml
        +-- P_Lev_4_external_419_ip_52
            +-- ...
        +-- P_Lev_4_External_Lift_foyer_ip_70
            +-- Frames
                +-- Entry_ip107_00000.png
                +-- Entry_ip107_00001.png
                +-- ...
            +-- GroundTruth.xml
            +-- P_Lev_4_External_Lift_foyer_ip_70-20140730-090000.avi
            +-- perspectivemap.xml
            +-- ROI.xml
            +-- VG-GroundTruth.xml
            +-- VG-ROI.xml
        +-- ...
    +-- Calibration
        +-- Lev4Entry_ip107.xml
        +-- Lev4Ext_ip51.xml
        +-- ...
    +-- FullSequences
        +-- P_Lev_4_Entry_Way_ip_107-20140730-090000.avi
        +-- P_Lev_4_external_419_ip_52-20140730-090000.avi
        +-- ...
    +-- MotionSegmentation
        +-- Lev4Entry_ip107.avi
        +-- Lev4Entry_ip107-Full.avi
        +-- Lev4Ext_ip51.avi
        +-- Lev4Ext_ip51-Full.avi
        +-- ...
    +-- Denman 2015 - Large scale monitoring of crowds and building utilisation.pdf
    +-- LICENSE.txt
    +-- README.txt

    Data is organised into two sections, AnnotatedData and FullSequences. Additional data that may be of use is provided in Calibration and MotionSegmentation.

    AnnotatedData contains the two-hour sections that have been annotated (from 11am to 1pm), alongside the ground truth and any other data generated during the annotation process. Each camera has a directory, the contents of which depend on what the camera has been annotated for.

    All cameras will have:

    a video file, such as P_Lev_4_Entry_Way_ip_107-20140730-090000.avi, which is the two-hour video from 11am to 1pm
    a Frames directory, which has 120 frames taken at one-minute intervals from the sequence. These are the frames that have been annotated for crowd counting. Even if the camera has not been annotated for crowd counting (e.g., P_Lev_4_Main_Entry_ip_54), this directory is included.
    

    The following files exist for crowd counting cameras:

    GroundTruth.xml, which contains the ground truth (the format is sketched below)
    

    The file contains a list of annotated frames, and the location of the approximate centre of mass of any people within the frame. The interval-scale attribute indicates the distance between the annotated frames in the original video.
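    As a purely illustrative sketch of this structure (tag and attribute names here are hypothetical, not the actual schema):

      <groundtruth interval-scale="1500">
        <!-- interval-scale: distance, in original video frames, between
             annotated frames (value here is illustrative) -->
        <!-- one entry per annotated frame; each person is marked at the
             approximate centre of mass, in image coordinates -->
        <frame number="0">
          <person x="312" y="148"/>
          <person x="455" y="201"/>
        </frame>
        <frame number="1">
          <person x="320" y="152"/>
        </frame>
      </groundtruth>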

    perspectivemap.xml, a file that defines the perspective map used to correct for perspective distortion. Parameters for a bilinear perspective map are included along with the original annotations that were used to generate the map.
    ROI.xml, which defines the region of interest (sketched below)
    

    This defines a polygon within the image that is used for crowd counting. Only people within this region are annotated.
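    Again, a purely illustrative sketch (element names hypothetical):

      <roi>
        <!-- vertices of the crowd-counting polygon, in image coordinates -->
        <point x="20" y="110"/>
        <point x="600" y="90"/>
        <point x="630" y="470"/>
        <point x="10" y="460"/>
      </roi>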

    For cameras that have been annotated with a virtual gate, the following additional files are present:

    VG-GroundTruth.xml, which contains the ground truth for the virtual gate (the format is sketched below)
    VG-ROI.xml, which contains the region of interest for the virtual gate

    The ROI is repeated within the ground truth, and a direction-of-interest tag is also included, which indicates the primary direction for the gate (i.e., the direction that denotes a positive count). Each pedestrian crossing is represented by a tag that contains the approximate frame in which the crossing occurred (when the centre of mass was at the centre of the gate region), the x and y location of the centre of mass of the person during the crossing, and the direction (0 being the primary direction, 1 being the secondary).
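    A hypothetical sketch of this structure (all names illustrative, not the actual schema):

      <groundtruth>
        <!-- gate region, repeated from VG-ROI.xml -->
        <roi>
          <point x="200" y="100"/>
          <point x="440" y="100"/>
          <point x="440" y="400"/>
          <point x="200" y="400"/>
        </roi>
        <!-- primary direction: crossings in this direction count as positive -->
        <direction x="0" y="1"/>
        <!-- one entry per crossing: frame at which the centre of mass was at
             the centre of the gate region, its x/y location, and the
             direction (0 = primary, 1 = secondary) -->
        <crossing frame="4210" x="320" y="250" direction="0"/>
        <crossing frame="4642" x="298" y="251" direction="1"/>
      </groundtruth>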

    The Calibration directory contains camera calibration data for the cameras (with the exception of ip107, which views an uneven ground plane and is thus difficult to calibrate). All calibration is done using Tsai's method.

    FullSequences contains the full sequences (9am - 5pm) for each of the cameras.

    MotionSegmentation contains motion segmentation videos for all clips. Segmentation videos are provided for both the full sequences and the two-hour annotated segments. Motion segmentation is done using the ViBe algorithm. Motion videos for the entire sequence have 'Full' in the file name before the extension (e.g., Lev4Entry_ip107-Full.avi).

    Further information on the SAIVT-BuildingMonitoring database can be found in our paper: S. Denman, C. Fookes, D. Ryan, & S. Sridharan (2015) Large scale monitoring of crowds and building utilisation: A new database and distributed approach. In 12th IEEE International Conference on Advanced Video and Signal Based Surveillance, 25-28 August 2015, Karlsruhe, Germany.

    This paper is also available alongside this document in the file: 'Denman 2015 - Large scale monitoring of crowds and building utilisation.pdf'.

  4. PEDS datasets and figure data

    • zenodo.org
    csv
    Updated Oct 17, 2023
    Cite
    Raphaël Pestourie; Raphaël Pestourie; Payel Das; Payel Das (2023). PEDS datasets and figure data [Dataset]. http://doi.org/10.5281/zenodo.10011958
    Available download formats: csv
    Dataset updated
    Oct 17, 2023
    Dataset provided by
    Zenodo (http://zenodo.org/)
    Authors
    Raphaël Pestourie; Raphaël Pestourie; Payel Das; Payel Das
    License

    MIT License: https://opensource.org/licenses/MIT
    License information was derived automatically

    Description

    Datasets:

    • y_fisher25.csv: reaction-diffusion equation; the thermal flux corresponding to structures with 25 holes
    • X_fisher25.csv: reaction-diffusion equation; the side lengths of the 25 holes in the structures
    • y_fisher16.csv: reaction-diffusion equation; the thermal flux corresponding to structures with 16 holes
    • X_fisher16.csv: reaction-diffusion equation; the side lengths of the 16 holes in the structures
    • y_fourier25.csv: diffusion equation; the thermal flux corresponding to structures with 25 holes
    • X_fourier25.csv: diffusion equation; the side lengths of the 25 holes in the structures
    • y_fourier16.csv: diffusion equation; the thermal flux corresponding to structures with 16 holes
    • X_fourier16.csv: diffusion equation; the side lengths of the 16 holes in the structures
    • y_maxwell10.csv: Helmholtz equation; the complex transmission through the 10-layered structure
    • X_maxwell10.csv: Helmholtz equation; the side lengths of the 10 holes in each layer of the structure followed by a one-hot encoding of the frequency [0.5, 0.75, 1]

    Figure data:

    • nb_trainingpoints_Fig1.csv: number of training points in the dataset; the x-coordinates for Figs 1, S1, and S2
    • baseline_alFig1.csv: error of the baseline ensemble using a dataset that was generated using active learning (Fig 1, S1, and S2)
    • baseline_noalFig1.csv: error of the baseline ensemble using a dataset that was sampled uniformly at random (Fig 1, S1, and S2)
    • baseline_single_noalFig1.csv: error of the baseline (single model) using a dataset that was sampled uniformly at random (Fig 1, S1, and S2)
    • PEDS_alFig1.csv: error of the PEDS ensemble using a dataset that was generated using active learning (Fig 1, S1, and S2)
    • PEDS_noalFig1.csv: error of the PEDS ensemble using a dataset that was sampled uniformly at random (Fig 1, S1, and S2)
    • PEDS_single_noalFig1.csv: error of the PEDS (single model) using a dataset that was sampled uniformly at random (Fig 1, S1, and S2)
    • SM10_ALFigS1.csv: error of the space mapping ensemble with a resolution of 10 using a dataset that was generated using active learning (Fig S1)
    • SM10_noALFigS1.csv: error of the space mapping ensemble with a resolution of 10 using a dataset that was sampled uniformly at random (Fig S1)
    • SM10_single_noALFigS1.csv: error of the space mapping (single model) with a resolution of 10 using a dataset that was sampled uniformly at random (Fig S1)
    • SM20_single_noalFigS2.csv: error of the space mapping (single model) with a resolution of 20 using a dataset that was sampled uniformly at random (Fig S2)
    • SM20_ALFigS2.csv: error of the space mapping ensemble with a resolution of 20 using a dataset that was generated using active learning (Fig S2)
    • SM20_noALFigS2.csv: error of the space mapping ensemble with a resolution of 20 using a dataset that was sampled uniformly at random (Fig S2)
    • resolutionFigS4.csv: resolution of the middle-fidelity model; the x-coordinates for Fig. S4
    • error_midfidFigS4.csv: error of the middle-fidelity model (Fig. S4)

  5. Data_Sheet_1_Gastric Point-of-Care Ultrasound in Acutely and Critically Ill Children (POCUS-ped): A Scoping Review.docx

    • figshare.com
    docx
    Updated Jun 17, 2023
    Cite
    Frederic V. Valla; Lyvonne N. Tume; Corinne Jotterand Chaparro; Philip Arnold; Walid Alrayashi; Claire Morice; Tomasz Nabialek; Aymeric Rouchaud; Eloise Cercueil; Lionel Bouvet (2023). Data_Sheet_1_Gastric Point-of-Care Ultrasound in Acutely and Critically Ill Children (POCUS-ped): A Scoping Review.docx [Dataset]. http://doi.org/10.3389/fped.2022.921863.s001
    Available download formats: docx
    Dataset updated
    Jun 17, 2023
    Dataset provided by
    Frontiers
    Authors
    Frederic V. Valla; Lyvonne N. Tume; Corinne Jotterand Chaparro; Philip Arnold; Walid Alrayashi; Claire Morice; Tomasz Nabialek; Aymeric Rouchaud; Eloise Cercueil; Lionel Bouvet
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Introduction

    Point-of-care ultrasound (POCUS) use is increasing in pediatric clinical settings. However, gastric POCUS is rarely used, despite its potential value in optimizing diagnosis and management in several clinical scenarios (i.e., assessing gastric emptying and gastric volume/content, gastric foreign bodies, confirming nasogastric tube placement, and hypertrophic pyloric stenosis). This review aimed to assess how gastric POCUS may be used in acutely and critically ill children.

    Materials and Methods

    An international expert group was established, composed of pediatricians, pediatric intensivists, anesthesiologists, radiologists, nurses, and a methodologist. A scoping review was conducted with the aim of describing the use of gastric POCUS in pediatric acute and critical care settings. A literature search was conducted in three databases to identify studies published between 1998 and 2022. Abstracts and relevant full texts were screened for eligibility, and data were extracted according to the JBI (Joanna Briggs Institute) methodology.

    Results

    A total of 70 studies were included. Most studies (n = 47; 67%) were conducted to assess gastric emptying and gastric volume/contents. The studies assessed gastric volume, the impact of different feed types (breast milk, fortifiers, and thickeners) and feed administration modes on gastric emptying, and gastric volume/content prior to sedation or anesthesia or during surgery. Other studies described the use of gastric POCUS in foreign body ingestion (n = 6), nasogastric tube placement (n = 5), hypertrophic pyloric stenosis (n = 8), and gastric insufflation during mechanical ventilatory support (n = 4). POCUS was performed by neonatologists, anesthesiologists, emergency department physicians, and surgeons. Their learning curve was rapid, and accuracy was high when compared to that of ultrasound performed by radiologists (RADUS) or other gold standards (e.g., endoscopy, radiography, and MRI). No studies conducted in critically ill children were found, apart from those in neonatal intensive care in preterms.

    Discussion

    Gastric POCUS appears useful and reliable in a variety of pediatric clinical settings. It may help optimize induction in emergency sedation/anesthesia, diagnose foreign bodies and hypertrophic pyloric stenosis, and assist in confirming nasogastric tube placement, avoiding delays in obtaining confirmatory examinations (RADUS, x-rays, etc.) and reducing radiation exposure. It may be useful in pediatric intensive care but requires further investigation.

