100+ datasets found
  1. Brainlife Paper - MEG [fif] CamCan - maxfilt

    • brainlife.io
    Updated Mar 9, 2023
    + more versions
    Cite
    Brad Caron; Franco Pestilli; Julia Guiomar Niso Galan (2023). Brainlife Paper - MEG [fif] CamCan - maxfilt [Dataset]. http://doi.org/10.25663/brainlife.pub.43
    Dataset updated
    Mar 9, 2023
    Authors
    Brad Caron; Franco Pestilli; Julia Guiomar Niso Galan
    Description

    This dataset contains all of the derivatives from the Cambridge Centre for Ageing and Neuroscience (CamCAN) dataset used to evaluate the validity of the brainlife.io platform's services for MEG data.

  2. Multi-Camera Action Dataset (MCAD)

    • zenodo.org
    application/gzip +2
    Updated Jan 24, 2020
    Cite
    Wenhui Li; Yongkang Wong; An-An Liu; Yang Li; Yu-Ting Su; Mohan Kankanhalli (2020). Multi-Camera Action Dataset (MCAD) [Dataset]. http://doi.org/10.5281/zenodo.884592
    Available download formats: application/gzip, json, txt
    Dataset updated
    Jan 24, 2020
    Dataset provided by
    Zenodo (http://zenodo.org/)
    Authors
    Wenhui Li; Yongkang Wong; An-An Liu; Yang Li; Yu-Ting Su; Mohan Kankanhalli
    License

    Attribution-NonCommercial 4.0 (CC BY-NC 4.0): https://creativecommons.org/licenses/by-nc/4.0/
    License information was derived automatically

    Description

    Action recognition has received increasing attention from the computer vision and machine learning communities over the last decades. Over that time, the recognition task has evolved from single-view recordings under controlled laboratory environments to unconstrained environments (i.e., surveillance settings or user-generated videos). Furthermore, recent work has focused on other aspects of the action recognition problem, such as cross-view classification, cross-domain learning, multi-modality learning, and action localization. Despite the large variety of studies, we observed limited work exploring the open-set and open-view classification problems, which are genuinely inherent properties of action recognition. In other words, a well-designed algorithm should robustly identify an unfamiliar action as "unknown" and achieve similar performance across sensors with similar fields of view. The Multi-Camera Action Dataset (MCAD) is designed to evaluate the open-view classification problem under a surveillance environment.

    In our multi-camera action dataset, unlike common action datasets, we use a total of five cameras, which can be divided into two types (Static and PTZ), to record actions. In particular, there are three Static cameras (Cam04, Cam05 and Cam06) with a fish-eye effect and two Pan-Tilt-Zoom (PTZ) cameras (PTZ04 and PTZ06). The Static cameras have a resolution of 1280×960 pixels, while the PTZ cameras have a resolution of 704×576 pixels and a smaller field of view than the Static cameras. Moreover, we do not control the illumination environment; we even recorded under two contrasting conditions (daytime and nighttime), which makes our dataset more challenging than many controlled datasets with strongly controlled illumination. The distribution of the cameras is shown in the picture on the right.

    We identified 18 single-person daily actions, with and without objects, inherited from the KTH, IXMAS, and TRECVID datasets, among others. The list and definitions of the actions are shown in the table. These actions can be divided into four types: micro actions without objects (action IDs 01, 02, 05) and with objects (action IDs 10, 11, 12, 13), and intense actions without objects (action IDs 03, 04, 06, 07, 08, 09) and with objects (action IDs 14, 15, 16, 17, 18). We recruited a total of 20 human subjects. Each candidate repeated each action 8 times (4 times during the day and 4 times in the evening) under one camera. In the recording process, we used the five cameras to record each action sample separately. During the recording stage we only told candidates the action name; they could then perform the action freely in their own manner, as long as they performed it within the field of view of the current camera. This makes our dataset much closer to reality. As a result, there is high intra-class variation among different action samples, as shown in the picture of action samples.

    URL: http://mmas.comp.nus.edu.sg/MCAD/MCAD.html

    Resources:

    • IDXXXX.mp4.tar.gz contains the video data for each individual
    • boundingbox.tar.gz contains the person bounding boxes for all videos
    • protocol.json contains the evaluation protocol
    • img_list.txt contains the download URLs for the image version of the video data (see the download sketch after this list)
    • idt_list.txt contains the download URLs for the improved Dense Trajectory features
    • stip_list.txt contains the download URLs for the STIP features
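
    The following is a minimal sketch, not an official MCAD tool, for fetching the files referenced by one of these URL-list files; it assumes the list holds one URL per line, which is not documented above.

    # Minimal sketch: download every URL listed in img_list.txt.
    # Assumes one URL per line; adjust if the list file is formatted differently.
    import os
    import urllib.request

    def download_from_list(list_file, out_dir="mcad_images"):
        os.makedirs(out_dir, exist_ok=True)
        with open(list_file) as f:
            urls = [line.strip() for line in f if line.strip()]
        for url in urls:
            target = os.path.join(out_dir, os.path.basename(url))
            if not os.path.exists(target):  # skip files that were already fetched
                urllib.request.urlretrieve(url, target)

    download_from_list("img_list.txt")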

    How to Cite:

    Please cite the following paper if you use the MCAD dataset in your work (papers, articles, reports, books, software, etc):

    • Wenhui Li, Yongkang Wong, An-An Liu, Yang Li, Yu-Ting Su, Mohan Kankanhalli
      Multi-Camera Action Dataset for Cross-Camera Action Recognition Benchmarking
      IEEE Winter Conference on Applications of Computer Vision (WACV), 2017.
      http://doi.org/10.1109/WACV.2017.28
  3. Cam Dataset

    • universe.roboflow.com
    zip
    Updated Mar 11, 2025
    + more versions
    Cite
    Cam (2025). Cam Dataset [Dataset]. https://universe.roboflow.com/cam-jmo72/cam-ovbcr/model/2
    Available download formats: zip
    Dataset updated
    Mar 11, 2025
    Dataset authored and provided by
    Cam
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Variables measured
    Objects Bounding Boxes
    Description

    Cam

    ## Overview

    Cam is a dataset for object detection tasks - it contains Objects annotations for 500 images.

    ## Getting Started

    You can download this dataset for use within your own projects, or fork it into a workspace on Roboflow to create your own model (see the sketch below).

    ## License

    This dataset is available under the [CC BY 4.0 license](https://creativecommons.org/licenses/by/4.0/).
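
    A minimal sketch of downloading this dataset with the Roboflow Python package; the workspace, project and version identifiers are read off the dataset URL above, and the API key and export format are placeholders/assumptions, so adjust them as needed.

    # Minimal sketch: pull the dataset via the roboflow package (pip install roboflow).
    # Workspace/project/version come from the dataset URL above; the API key and the
    # export format ("coco") are illustrative assumptions.
    from roboflow import Roboflow

    rf = Roboflow(api_key="YOUR_API_KEY")
    project = rf.workspace("cam-jmo72").project("cam-ovbcr")
    dataset = project.version(2).download("coco")
    print(dataset.location)  # local folder containing images and annotations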
    
  4. Language in the aging brain: The network dynamics of cognitive decline and...

    • neurovault.org
    nifti
    Updated Oct 13, 2018
    Cite
    (2018). Language in the aging brain: The network dynamics of cognitive decline and preservation: 620085_AV-freq_AudVid300 [Dataset]. http://identifiers.org/neurovault.image:92513
    Available download formats: nifti
    Dataset updated
    Oct 13, 2018
    License

    CC0 1.0 Universal Public Domain Dedication: https://creativecommons.org/publicdomain/zero/1.0/
    License information was derived automatically

    Description

    Collection description

    Contrasts from the sensori-motor task of the Camcan dataset

    Subject species

    homo sapiens

    Modality

    fMRI-BOLD

    Analysis level

    single-subject

    Map type

    Z

  5. (Top) Recording center demographics for the N = 887 subjects from the...

    • plos.figshare.com
    xls
    Updated May 30, 2024
    Cite
    Siamak K. Sorooshyari (2024). (Top) Recording center demographics for the N = 887 subjects from the 1000FCP that were used in the analysis. [Dataset]. http://doi.org/10.1371/journal.pone.0300720.t002
    Available download formats: xls
    Dataset updated
    May 30, 2024
    Dataset provided by
    PLOS ONE
    Authors
    Siamak K. Sorooshyari
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Additional details such as the number of slices, voxel size, and subject handedness can be found in Table 1 of [11]. (Middle) Recording center demographics for the N = 709 subjects from SRPBS, with additional information available in Table 5 of [17]. (Bottom) Information on the N = 652 subjects from the camCAN dataset that consisted of a single recording center. Further details about the subjects and study can be found in [18, 19].

  6. EmokineDataset

    • zenodo.org
    • data.niaid.nih.gov
    zip
    Updated Feb 12, 2024
    Cite
    Julia F. Christensen; Andrés Fernández; Rebecca A. Smith; Georgios Michalareas; Sina H. N. Yazdi; Fahima Farahi; Eva-Madeleine Schmidt; Nasimeh Bahmanian; Gemma Roig (2024). EmokineDataset [Dataset]. http://doi.org/10.5281/zenodo.7821844
    Available download formats: zip
    Dataset updated
    Feb 12, 2024
    Dataset provided by
    Zenodo (http://zenodo.org/)
    Authors
    Julia F. Christensen; Andrés Fernández; Rebecca A. Smith; Georgios Michalareas; Sina H. N. Yazdi; Fahima Farahi; Eva-Madeleine Schmidt; Nasimeh Bahmanian; Gemma Roig
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    EmokineDataset

    Companion resources
    Paper

    Christensen, Julia F. and Fernandez, Andres and Smith, Rebecca and Michalareas, Georgios and Yazdi, Sina H. N. and Farahi, Fahima and Schmidt, Eva-Madeleine and Bahmanian, Nasimeh and Roig, Gemma (2024): "EMOKINE: A Software Package and Computational Framework for Scaling Up the Creation of Highly Controlled Emotional Full-Body Movement Datasets".

    Code: https://github.com/andres-fr/emokine

    EmokineDataset is a pilot dataset showcasing the usefulness of the emokine software library. It features a single dancer performing 63 short sequences, which have been recorded and analyzed in different ways. This pilot dataset is organized in 3 folders:

    • Stimuli: The sequences are presented in 4 visual presentations that can be used as stimulus in observer experiments:
      1. Silhouette: Videos with a white silhouette of the dancer on black background.
      2. FLD (Full-Light Display): video recordings with the performer's face blurred out.
      3. PLD (Point-Light Display): videos featuring a black background with white circles corresponding to the selected body landmarks.
      4. Avatar: Videos produced by the proprietary XSENS motion capture software, featuring a robot-like avatar performing the captured movements on a light blue background.
    • Data: In order to facilitate computation and analysis of the stimuli, this pilot dataset also includes several data formats:
      1. MVNX: Raw motion capture data directly recorded from the XSENS motion capture system.
      2. CSV: Translation of a subset of the MVNX sequences into CSV, included for easier integration with mainstream analysis software tools. The subset includes the following features: acceleration, angularAcceleration, angularVelocity, centerOfMass, footContacts, orientation, position and velocity.
      3. CamPos: While the MVNX provides 3D positions with respect to a global frame of reference, the CamPos [JSON](https://www.json.org/json-en.html) files represent the position from the perspective of the camera used to render the PLD videos. Specifically, the 3D positions are given with respect to the camera as (x, y, z), where (x, y) go from (0, 0) (left, bottom) to (1, 1) (right, top), and z is the distance between the camera and the point in meters. It can be useful to get a 2-dimensional projection of the dancer position by simply ignoring z (see the sketch after this list).
      4. Kinematic: Analysis of a selection of relevant kinematic features, using information from MVNX, Silhouette and CamPos, provided in tabular form.
    • Validation: Data and experiments reported in our paper as part of the data validation, to support its meaningfulness and usefulness for downstream tasks.
      1. TechVal: A collection of plots presenting relevant statistics of the pilot dataset.
      2. ObserverExperiment: Results in tabular form of an online study conducted with human participants, tasked to recognize emotions of the stimuli and rate their beauty.
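
    As a rough illustration of the CamPos convention described above, here is a minimal sketch that drops z to obtain a 2D projection and scales it to pixel coordinates; the JSON layout assumed here (a mapping from landmark name to an [x, y, z] triple) is an illustrative assumption, not the documented schema.

    # Minimal sketch: project CamPos (x, y, z) points to 2D pixel coordinates.
    # The JSON layout assumed here (landmark name -> [x, y, z]) is hypothetical.
    import json

    def campos_to_pixels(campos_path, width=1920, height=1080):
        with open(campos_path) as f:
            frame = json.load(f)
        points_2d = {}
        for landmark, (x, y, z) in frame.items():
            # (x, y) run from (0, 0) at bottom-left to (1, 1) at top-right;
            # drop z (distance to camera) and flip y for image coordinates.
            points_2d[landmark] = (x * width, (1.0 - y) * height)
        return points_2d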

    More specifically, the 63 unique sequences are divided into 9 unique choreographies, each one being performed once as an explanation, and then 6 times with different intended emotions (angry, content, fearful, joy, neutral and sad). Once downloaded, the pilot dataset should have the following structure:


    EmokineDataset
    ├── Stimuli
    │ ├── Avatar
    │ ├── FLD
    │ ├── PLD
    │ └── Silhouette
    ├── Data
    │ ├── CamPos
    │ ├── CSV
    │ ├── Kinematic
    │ ├── MVNX
    │ └── TechVal
    └── Validation
      ├── TechVal
      └── ObserverExperiment

    Each of the Stimuli, MVNX, CamPos and Kinematic folders has this structure:


    The CSV directory is slightly different: instead of a single file for each sequence and emotion, it features a folder containing a .csv file for each one of the 8 features being extracted (acceleration, velocity, ...).

  7. Language in the aging brain: The network dynamics of cognitive decline and...

    • neurovault.org
    nifti
    Updated Oct 13, 2018
    Cite
    (2018). Language in the aging brain: The network dynamics of cognitive decline and preservation: 420197_audio-video_AudOnly [Dataset]. http://identifiers.org/neurovault.image:86777
    Available download formats: nifti
    Dataset updated
    Oct 13, 2018
    License

    CC0 1.0 Universal Public Domain Dedication: https://creativecommons.org/publicdomain/zero/1.0/
    License information was derived automatically

    Description

    Collection description

    Contrasts from the sensori-motor task of the Camcan dataset

    Subject species

    homo sapiens

    Modality

    fMRI-BOLD

    Analysis level

    single-subject

    Map type

    Z

  8. CAMS global reanalysis

    • ecmwf.int
    application/x-grib
    Cite
    European Centre for Medium-Range Weather Forecasts, CAMS global reanalysis [Dataset]. https://www.ecmwf.int/en/forecasts/dataset/cams-global-reanalysis
    Available download formats: application/x-grib (1 dataset)
    Dataset authored and provided by
    European Centre for Medium-Range Weather Forecasts (http://ecmwf.int/)
    License

    http://apps.ecmwf.int/datasets/licences/copernicus

    Description

    including aerosols

  9. Annex 4 dataset: OS-CAM Case Studies

    • data.niaid.nih.gov
    • zenodo.org
    Updated Apr 22, 2021
    Cite
    Brasse, Valerie (2021). Annex 4 dataset: OS-CAM Case Studies [Dataset]. https://data.niaid.nih.gov/resources?id=zenodo_4704424
    Dataset updated
    Apr 22, 2021
    Dataset provided by
    Ivanović, Dragan
    Brasse, Valerie
    Kesäniemi, Joonas
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    The purpose of this document is to present case studies of the usage of well-known European research eInfrastructures (models and platforms) for responsible academic career assessment based on Open Science outputs, activities and expertise. This dataset is an appendix to Annex 4 of the EOSC Co-creation report "Making FAIReR Assessments Possible", https://doi.org/10.5281/zenodo.4701375. The report is a deliverable of the EOSC Co-Creation projects (i) "European overview of career merit systems" and (ii) "Vision for research data in research careers", funded by the EOSC Co-Creation funding. Further information on these projects can be found here: https://avointiede.fi/en/networks/eosc-co-creation.

  10. CAMS global reanalysis (EAC4)

    • ads.atmosphere.copernicus.eu
    bin
    Updated Feb 6, 2020
    + more versions
    Cite
    ECMWF (2020). CAMS global reanalysis (EAC4) [Dataset]. https://ads.atmosphere.copernicus.eu/datasets/cams-global-reanalysis-eac4
    Available download formats: bin
    Dataset updated
    Feb 6, 2020
    Dataset provided by
    European Centre for Medium-Range Weather Forecasts (http://ecmwf.int/)
    Authors
    ECMWF
    License

    https://object-store.os-api.cci2.ecmwf.int:443/cci2-prod-catalogue/licences/licence-to-use-copernicus-products/licence-to-use-copernicus-products_b4b9451f54cffa16ecef5c912c9cebd6979925a956e3fa677976e0cf198c2c18.pdf

    Time period covered
    Jan 1, 2003 - Dec 31, 2023
    Description

    EAC4 (ECMWF Atmospheric Composition Reanalysis 4) is the fourth generation ECMWF global reanalysis of atmospheric composition. Reanalysis combines model data with observations from across the world into a globally complete and consistent dataset using a model of the atmosphere based on the laws of physics and chemistry. This principle, called data assimilation, is based on the method used by numerical weather prediction centres and air quality forecasting centres, where every so many hours (12 hours at ECMWF) a previous forecast is combined with newly available observations in an optimal way to produce a new best estimate of the state of the atmosphere, called analysis, from which an updated, improved forecast is issued. Reanalysis works in the same way to allow for the provision of a dataset spanning back more than a decade. Reanalysis does not have the constraint of issuing timely forecasts, so there is more time to collect observations, and when going further back in time, to allow for the ingestion of improved versions of the original observations, which all benefit the quality of the reanalysis product. The assimilation system is able to estimate biases between observations and to sift good-quality data from poor data. The atmosphere model allows for estimates at locations where data coverage is low or for atmospheric pollutants for which no direct observations are available. The provision of estimates at each grid point around the globe for each regular output time, over a long period, always using the same format, makes reanalysis a very convenient and popular dataset to work with. The observing system has changed drastically over time, and although the assimilation system can resolve data holes, the initially much sparser networks will lead to less accurate estimates. For this reason, EAC4 is only available from 2003 onwards. Although the analysis procedure considers chunks of data in a window of 12 hours in one go, EAC4 provides estimates every 3 hours, worldwide. This is made possible by the 4D-Var assimilation method, which takes account of the exact timing of the observations and model evolution within the assimilation window.
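
    For reference, here is a minimal sketch of requesting a small EAC4 subset through the Atmosphere Data Store API with the cdsapi Python client; the variable name, dates, times and format below are illustrative assumptions rather than values documented above, so check the ADS catalogue entry for the exact request keywords.

    # Minimal sketch: retrieve a small EAC4 subset via the ADS API (cdsapi client).
    # The request fields (variable, date, time, format) are illustrative assumptions.
    import cdsapi

    client = cdsapi.Client()  # reads the API URL and key from ~/.cdsapirc
    client.retrieve(
        "cams-global-reanalysis-eac4",
        {
            "variable": ["total_aerosol_optical_depth_550nm"],  # assumed variable name
            "date": "2003-01-01/2003-01-02",
            "time": ["00:00", "12:00"],
            "format": "grib",
        },
        "eac4_sample.grib",
    )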

  11. Diff Cam Dataset

    • universe.roboflow.com
    zip
    Updated Mar 17, 2025
    Cite
    bagalu (2025). Diff Cam Dataset [Dataset]. https://universe.roboflow.com/bagalu/diff-cam/model/1
    Available download formats: zip
    Dataset updated
    Mar 17, 2025
    Dataset authored and provided by
    bagalu
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Variables measured
    Cement_bag Polygons
    Description

    Diff Cam

    ## Overview

    Diff Cam is a dataset for instance segmentation tasks - it contains Cement_bag annotations for 499 images.

    ## Getting Started

    You can download this dataset for use within your own projects, or fork it into a workspace on Roboflow to create your own model.

    ## License

    This dataset is available under the [CC BY 4.0 license](https://creativecommons.org/licenses/by/4.0/).
    
  12. MICA - Muskrat and coypu camera trap observations in Belgium, the...

    • gbif.org
    • data.niaid.nih.gov
    • +2 more
    Updated Nov 23, 2023
    + more versions
    Cite
    Emma Cartuyvels; Tim Adriaens; Kristof Baert; Warre Baert; Gust Boiten; Dimitri Brosens; Jim Casaer; Bram D'hondt; Abel De Boer; Manon Debrabandere; Sander Devisscher; Dennis Donckers; Silke Dupont; Wouter Franceus; Heiko Fritz; Lilja Fromme; Friederike Gethöffer; Jan Gouwy; Casper Herbots; Frank Huysentruyt; Leo Kehl; Liam Letheren; Lydia Liebgott; Yorick Liefting; Jan Lodewijkx; Claudia Maistrelli; Björn Matthies; Kelly Meijvisch; Dolf Moerkens; Axel Neukermans; Brecht Neukermans; Jelle Ronsijn; Kurt Schamp; Dan Slootmaekers; Linda Tiggelman; Sanne Van Donink; Danny Van der beeck; Peter Desmet (2023). MICA - Muskrat and coypu camera trap observations in Belgium, the Netherlands and Germany [Dataset]. http://doi.org/10.15468/5tb6ze
    Dataset updated
    Nov 23, 2023
    Dataset provided by
    Global Biodiversity Information Facility (https://www.gbif.org/)
    Research Institute for Nature and Forest (INBO)
    Authors
    Emma Cartuyvels; Tim Adriaens; Kristof Baert; Warre Baert; Gust Boiten; Dimitri Brosens; Jim Casaer; Bram D'hondt; Abel De Boer; Manon Debrabandere; Sander Devisscher; Dennis Donckers; Silke Dupont; Wouter Franceus; Heiko Fritz; Lilja Fromme; Friederike Gethöffer; Jan Gouwy; Casper Herbots; Frank Huysentruyt; Leo Kehl; Liam Letheren; Lydia Liebgott; Yorick Liefting; Jan Lodewijkx; Claudia Maistrelli; Björn Matthies; Kelly Meijvisch; Dolf Moerkens; Axel Neukermans; Brecht Neukermans; Jelle Ronsijn; Kurt Schamp; Dan Slootmaekers; Linda Tiggelman; Sanne Van Donink; Danny Van der beeck; Peter Desmet
    License

    CC0 1.0 Universal Public Domain Dedication: https://creativecommons.org/publicdomain/zero/1.0/
    License information was derived automatically

    Time period covered
    Sep 18, 2019 - Sep 27, 2023
    Area covered
    Description

    This camera trap dataset is derived from the Agouti project MICA - Management of Invasive Coypu and muskrAt in Europe. Data have been standardized to Darwin Core using the camtraptor R package and only include observations (and associated media) of animals. Excluded are records that document blank or unclassified media, vehicles and observations of humans. Geospatial coordinates are rounded to 0.001 degrees. The original dataset description follows.

    MICA - Muskrat and coypu camera trap observations in Belgium, the Netherlands and Germany is an occurrence dataset published by the Research Institute for Nature and Forest (INBO). It is part of the LIFE project MICA, in which innovative techniques are tested for a more efficient control of muskrat and coypu populations, both invasive species. The dataset contains camera trap observations of muskrat and coypu, as well as many other observed species. Issues with the dataset can be reported at https://github.com/inbo/mica-occurrences/issues.

    We have released this dataset to the public domain under a Creative Commons Zero waiver. We would appreciate it if you follow the INBO norms for data use (https://www.inbo.be/en/norms-data-use) when using the data. If you have any questions regarding this dataset, don't hesitate to contact us via the contact information provided in the metadata or via opendata@inbo.be.

    This dataset was collected using infrastructure provided by INBO and funded by Research Foundation - Flanders (FWO) as part of the Belgian contribution to LifeWatch. The data were collected as part of the MICA project, which received funding from the European Union’s LIFE Environment sub-programme under the grant agreement LIFE18 NAT/NL/001047. The dataset was published with funding from Stichting NLBIF - Netherlands Biodiversity Information Facility.

  13. Table_1_Detecting the Information of Functional Connectivity Networks in...

    • frontiersin.figshare.com
    docx
    Updated May 31, 2023
    Cite
    Xin Wen; Li Dong; Junjie Chen; Jie Xiang; Jie Yang; Hechun Li; Xiaobo Liu; Cheng Luo; Dezhong Yao (2023). Table_1_Detecting the Information of Functional Connectivity Networks in Normal Aging Using Deep Learning From a Big Data Perspective.DOCX [Dataset]. http://doi.org/10.3389/fnins.2019.01435.s001
    Available download formats: docx
    Dataset updated
    May 31, 2023
    Dataset provided by
    Frontiers
    Authors
    Xin Wen; Li Dong; Junjie Chen; Jie Xiang; Jie Yang; Hechun Li; Xiaobo Liu; Cheng Luo; Dezhong Yao
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    A resting-state functional connectivity (rsFC)-constructed functional network (FN) derived from functional magnetic resonance imaging (fMRI) data can effectively mine alterations in brain function during aging, owing to the non-invasive and effective nature of fMRI. With global health research focusing on aging, several open fMRI datasets have been made available; combining deep learning with such big data is a new, promising trend and an open issue for brain information detection in fMRI studies of brain aging. In this study, we proposed a new method based on deep learning from the perspective of big data, named Deep neural network (DNN) with Autoencoder (AE) pretrained Functional connectivity Analysis (DAFA), to deeply mine the important functional connectivity changes in fMRI during brain aging. First, using resting-state fMRI data from 421 subjects from the CamCAN dataset, functional connectivities were calculated using a sliding-window method, and the complex functional patterns were mined by an AE. Then, to increase the statistical power and reliability of the results, we used an AE-pretrained DNN to relabel the functional connectivities of each subject, classifying them as belonging to young or old individuals. A method called search-back analysis was performed to find alterations in brain function during aging according to the relabeled functional connectivities. Finally, behavioral data regarding fluid intelligence and response time were used to verify the revealed functional changes. Compared to traditional methods, DAFA revealed additional, important age-related changes in FC patterns [e.g., FC connections within the default mode (DMN), sensorimotor and cingulo-opercular networks, as well as connections between the frontoparietal and cingulo-opercular networks, between the DMN and the frontoparietal/cingulo-opercular/sensorimotor/occipital/cerebellum networks, and between the sensorimotor and frontoparietal/cingulo-opercular networks], which were correlated with the behavioral data. These findings demonstrate that the proposed DAFA method is superior to traditional FC-determining methods in discovering changes in brain functional connectivity during aging. In addition, it may be a promising method for exploring important information in other fMRI studies.
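
    To make the sliding-window functional connectivity step concrete, here is a minimal numpy sketch (not the authors' code); the window length and stride are illustrative assumptions.

    # Minimal sketch (not the authors' code): sliding-window functional connectivity.
    # Window length and stride are illustrative assumptions.
    import numpy as np

    def sliding_window_fc(timeseries, window=50, stride=5):
        # timeseries: array of shape (n_regions, n_timepoints).
        # Returns (n_windows, n_regions, n_regions) Pearson correlation matrices.
        n_regions, n_timepoints = timeseries.shape
        fcs = []
        for start in range(0, n_timepoints - window + 1, stride):
            segment = timeseries[:, start:start + window]
            fcs.append(np.corrcoef(segment))  # region-by-region correlation
        return np.stack(fcs)

    # Example with random data standing in for one subject's regional time series.
    rng = np.random.default_rng(0)
    fc_windows = sliding_window_fc(rng.standard_normal((90, 300)))
    print(fc_windows.shape)  # (51, 90, 90) with the defaults above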

  14. Object Detection Usb Cam Project Dataset

    • universe.roboflow.com
    zip
    Updated Feb 22, 2022
    Cite
    Farouq BENCHALLAL (2022). Object Detection Usb Cam Project Dataset [Dataset]. https://universe.roboflow.com/farouq-benchallal/object-detection-usb-cam-project/dataset/2
    Available download formats: zip
    Dataset updated
    Feb 22, 2022
    Dataset authored and provided by
    Farouq BENCHALLAL
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Variables measured
    Object Bounding Boxes
    Description

    Object Detection Usb Cam Project

    ## Overview

    Object Detection Usb Cam Project is a dataset for object detection tasks - it contains Object annotations for 310 images.

    ## Getting Started

    You can download this dataset for use within your own projects, or fork it into a workspace on Roboflow to create your own model.

    ## License

    This dataset is available under the [CC BY 4.0 license](https://creativecommons.org/licenses/by/4.0/).
    
  15. Data from: EyeFi: Fast Human Identification Through Vision and WiFi-based...

    • zenodo.org
    • data.niaid.nih.gov
    zip
    Updated Dec 5, 2022
    Cite
    Shiwei Fang; Tamzeed Islam; Sirajum Munir; Shahriar Nirjon (2022). EyeFi: Fast Human Identification Through Vision and WiFi-based Trajectory Matching [Dataset]. http://doi.org/10.5281/zenodo.7396485
    Available download formats: zip
    Dataset updated
    Dec 5, 2022
    Dataset provided by
    Zenodo (http://zenodo.org/)
    Authors
    Shiwei Fang; Tamzeed Islam; Sirajum Munir; Shahriar Nirjon
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    EyeFi Dataset

    This dataset was collected as part of the EyeFi project at the Bosch Research and Technology Center, Pittsburgh, PA, USA. The dataset contains WiFi CSI values of human motion trajectories along with ground-truth location information captured through a camera. This dataset is used in the paper "EyeFi: Fast Human Identification Through Vision and WiFi-based Trajectory Matching", published in the IEEE International Conference on Distributed Computing in Sensor Systems 2020 (DCOSS '20). We also published a dataset paper titled "Dataset: Person Tracking and Identification using Cameras and Wi-Fi Channel State Information (CSI) from Smartphones" in the Data: Acquisition to Analysis 2020 (DATA '20) workshop describing the details of data collection. Please check it out for more information on the dataset.

    Data Collection Setup

    In our experiments, we used an Intel 5300 WiFi Network Interface Card (NIC) installed in an Intel NUC and the Linux CSI tools [1] to extract the WiFi CSI packets. The (x,y) coordinates of the subjects are collected from a Bosch Flexidome IP Panoramic 7000 panoramic camera mounted on the ceiling, and Angles of Arrival (AoAs) are derived from the (x,y) coordinates. Both the WiFi card and the camera are located at the same origin coordinates but at different heights: the camera is located around 2.85 m above the ground and the WiFi antennas are around 1.12 m above the ground.

    The data collection environment consists of two areas: the first is a rectangular space measuring 11.8 m × 8.74 m, and the second is an irregularly shaped kitchen area with maximum distances of 19.74 m and 14.24 m between walls. The kitchen also has numerous obstacles and different materials that pose different RF reflection characteristics, including strong reflectors such as metal refrigerators and dishwashers.

    To collect the WiFi data, we used a Google Pixel 2 XL smartphone as an access point and connected the Intel 5300 NIC to it for WiFi communication. The transmission rate is about 20-25 packets per second. The same WiFi card and phone were used in both the lab and kitchen areas.

    List of Files
    Here is a list of files included in the dataset:

    |- 1_person
      |- 1_person_1.h5
      |- 1_person_2.h5
    |- 2_people
      |- 2_people_1.h5
      |- 2_people_2.h5
      |- 2_people_3.h5
    |- 3_people
      |- 3_people_1.h5
      |- 3_people_2.h5
      |- 3_people_3.h5
    |- 5_people
      |- 5_people_1.h5
      |- 5_people_2.h5
      |- 5_people_3.h5
      |- 5_people_4.h5
    |- 10_people
      |- 10_people_1.h5
      |- 10_people_2.h5
      |- 10_people_3.h5
    |- Kitchen
      |- 1_person
        |- kitchen_1_person_1.h5
        |- kitchen_1_person_2.h5
        |- kitchen_1_person_3.h5
      |- 3_people
        |- kitchen_3_people_1.h5
    |- training
      |- shuffuled_train.h5
      |- shuffuled_valid.h5
      |- shuffuled_test.h5
    View-Dataset-Example.ipynb
    README.md
    
    

    In this dataset, the folders `1_person/`, `2_people/`, `3_people/`, `5_people/`, and `10_people/` contain data collected from the lab area, whereas the `Kitchen/` folder contains data collected from the kitchen area. To see how each file is structured, please see the section Access the data below.

    The `training` folder contains the training dataset we used to train the neural network discussed in our paper. It is generated by shuffling all the data from the `1_person/` folder collected in the lab area (`1_person_1.h5` and `1_person_2.h5`).

    Why multiple files in one folder?

    Each folder contains multiple files. For example, the `1_person` folder has two files: `1_person_1.h5` and `1_person_2.h5`. Files in the same folder always have the same number of human subjects present simultaneously in the scene. However, the person who is holding the phone can be different. Also, the data could be collected on different days, and/or the data collection system sometimes needed to be rebooted due to stability issues. As a result, we provide different files (like `1_person_1.h5`, `1_person_2.h5`) to distinguish different persons holding the phone and possible system reboots that introduce different phase offsets (see below).

    Special note:

    `1_person_1.h5` was generated with the same person holding the phone throughout, whereas `1_person_2.h5` contains different people holding the phone, with only one person present in the area at a time. Both files were also collected on different days.


    Access the data
    To access the data, the HDF5 library is needed to open the dataset. A free HDF5 viewer is available on the official website: https://www.hdfgroup.org/downloads/hdfview/. We also provide an example Python notebook, View-Dataset-Example.ipynb, to demonstrate how to access the data.

    Each file (except the files under the `training/` folder) is structured as:

    |- csi_imag
    |- csi_real
    |- nPaths_1
      |- offset_00
        |- spotfi_aoa
      |- offset_11
        |- spotfi_aoa
      |- offset_12
        |- spotfi_aoa
      |- offset_21
        |- spotfi_aoa
      |- offset_22
        |- spotfi_aoa
    |- nPaths_2
      |- offset_00
        |- spotfi_aoa
      |- offset_11
        |- spotfi_aoa
      |- offset_12
        |- spotfi_aoa
      |- offset_21
        |- spotfi_aoa
      |- offset_22
        |- spotfi_aoa
    |- nPaths_3
      |- offset_00
        |- spotfi_aoa
      |- offset_11
        |- spotfi_aoa
      |- offset_12
        |- spotfi_aoa
      |- offset_21
        |- spotfi_aoa
      |- offset_22
        |- spotfi_aoa
    |- nPaths_4
      |- offset_00
        |- spotfi_aoa
      |- offset_11
        |- spotfi_aoa
      |- offset_12
        |- spotfi_aoa
      |- offset_21
        |- spotfi_aoa
      |- offset_22
        |- spotfi_aoa
    |- num_obj
    |- obj_0
      |- cam_aoa
      |- coordinates
    |- obj_1
      |- cam_aoa
      |- coordinates
    ...
    |- timestamp
    

    The `csi_real` and `csi_imag` fields are the real and imaginary parts of the CSI measurements. The order of antennas and subcarriers for the 90 `csi_real` and `csi_imag` values is as follows: [subcarrier1-antenna1, subcarrier1-antenna2, subcarrier1-antenna3, subcarrier2-antenna1, subcarrier2-antenna2, subcarrier2-antenna3, … subcarrier30-antenna1, subcarrier30-antenna2, subcarrier30-antenna3]. The `nPaths_x` groups contain SpotFi [2]-calculated WiFi Angle of Arrival (AoA) values with `x` multipaths specified during the calculation. Under each `nPaths_x` group are `offset_xx` subgroups, where `xx` stands for the offset combination used to correct the phase offset during the SpotFi calculation. We measured the offsets as:

    | Antennas | Offset 1 (rad) | Offset 2 (rad) |
    |:--------:|:--------------:|:--------------:|
    |  1 & 2   |     1.1899     |    -2.0071     |
    |  1 & 3   |     1.3883     |    -1.8129     |
    
    

    The measurement is based on the work in [3], where the authors state that there are two possible offsets between two antennas, which we measured by booting the device multiple times. The combination of offsets is used for the `offset_xx` naming. For example, `offset_12` means that offset 1 between antennas 1 & 2 and offset 2 between antennas 1 & 3 were used in the SpotFi calculation.

    The `num_obj` field stores the number of human subjects present in the scene. `obj_0` is always the subject who is holding the phone. In each file, there are `num_obj` of the `obj_x` groups. For each `obj_x`, we have the `coordinates` reported from the camera and `cam_aoa`, the AoA estimated from the camera-reported coordinates. The (x,y) coordinates and AoA listed here are chronologically ordered (except in the files in the `training` folder). They reflect the way the person carrying the phone moved in the space (for `obj_0`) and how everyone else walked (for the other `obj_y`, where `y` > 0).

    The `timestamp` is provided as a time reference for each WiFi packet.

    To access the data (Python):

    import h5py
    
    # Open one of the lab-area recordings with three people in the scene.
    data = h5py.File('3_people_3.h5', 'r')
    
    # Raw CSI measurements (90 values per packet; see the ordering described above).
    csi_real = data['csi_real'][()]
    csi_imag = data['csi_imag'][()]
    
    # Camera-derived AoA and (x, y) coordinates for the phone holder (obj_0).
    cam_aoa = data['obj_0/cam_aoa'][()]
    cam_loc = data['obj_0/coordinates'][()]

    Files inside the `training/` folder have a different data structure:

    
    |- nPath-1
      |- aoa
      |- csi_imag
      |- csi_real
      |- spotfi
    |- nPath-2
      |- aoa
      |- csi_imag
      |- csi_real
      |- spotfi
    |- nPath-3
      |- aoa
      |- csi_imag
      |- csi_real
      |- spotfi
    |- nPath-4
      |- aoa
      |- csi_imag
      |- csi_real
      |- spotfi
    


    The `nPath-x` group corresponds to the number of multipaths specified during the SpotFi calculation. `aoa` is the camera-generated angle of arrival (AoA) (which can be considered ground truth), `csi_imag` and `csi_real` are the imaginary and real components of the CSI values, and `spotfi` contains the SpotFi-calculated AoA values. The SpotFi values were chosen based on the lowest median and mean error across `1_person_1.h5` and `1_person_2.h5`. All the rows under the same `nPath-x` group are aligned (i.e., the first row of `aoa` corresponds to the first row of `csi_imag`, `csi_real`, and `spotfi`). There is no timestamp recorded, and the sequence of the data is not chronological, as the rows are randomly shuffled from the `1_person_1.h5` and `1_person_2.h5` files.
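
    As a small illustration of this layout (not code shipped with the dataset), the camera AoA can be compared against the SpotFi estimate for one path setting; the array shapes are assumptions and may need adjusting:

    # Minimal sketch: compare camera AoA (treated as ground truth) with SpotFi AoA
    # for the nPath-1 setting of the shuffled training split. Array shapes are assumed.
    import h5py
    import numpy as np

    with h5py.File('training/shuffuled_train.h5', 'r') as data:
        aoa = np.asarray(data['nPath-1/aoa'][()]).squeeze()
        spotfi = np.asarray(data['nPath-1/spotfi'][()]).squeeze()

    error = np.abs(aoa - spotfi)
    print('median AoA error:', np.median(error), 'mean AoA error:', np.mean(error))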

    Citation
    If you use the dataset, please cite our paper:

    @inproceedings{eyefi2020,
     title={EyeFi: Fast Human Identification Through Vision and WiFi-based Trajectory Matching},
     author={Fang, Shiwei and Islam, Tamzeed and Munir, Sirajum and Nirjon, Shahriar},
     booktitle={2020 IEEE International Conference on Distributed Computing in Sensor Systems (DCOSS)},
     year={2020},
    }

  16. ChokePoint Dataset

    • data.niaid.nih.gov
    • zenodo.org
    Updated Jan 24, 2020
    Cite
    Mau, Sandra (2020). ChokePoint Dataset [Dataset]. https://data.niaid.nih.gov/resources?id=zenodo_815656
    Dataset updated
    Jan 24, 2020
    Dataset provided by
    Lovell, Brian
    Sanderson, Conrad
    Mau, Sandra
    Chen, Shaokang
    Wong, Yongkang
    License

    Attribution-NonCommercial 4.0 (CC BY-NC 4.0): https://creativecommons.org/licenses/by-nc/4.0/
    License information was derived automatically

    Description

    The ChokePoint dataset is designed for experiments in person identification/verification under real-world surveillance conditions using existing technologies. An array of three cameras was placed above several portals (natural choke points in terms of pedestrian traffic) to capture subjects walking through each portal in a natural way. While a person is walking through a portal, a sequence of face images (i.e., a face set) can be captured. Faces in such sets will have variations in terms of illumination conditions, pose, sharpness, as well as misalignment due to automatic face localisation/detection. Due to the three-camera configuration, one of the cameras is likely to capture a face set in which a subset of the faces is near-frontal.

    The dataset consists of 25 subjects (19 male and 6 female) in portal 1 and 29 subjects (23 male and 6 female) in portal 2. The recordings of portal 1 and portal 2 are one month apart. The dataset has a frame rate of 30 fps and an image resolution of 800×600 pixels. In total, the dataset consists of 48 video sequences and 64,204 face images. In all sequences, only one subject is present in the image at a time. The first 100 frames of each sequence are for background modelling, where no foreground objects are present.

    Each sequence was named according to the recording conditions (e.g., P2E_S1_C3), where P, S, and C stand for portal, sequence and camera, respectively. E and L indicate subjects either entering or leaving the portal. The numbers indicate the respective portal, sequence and camera label. For example, P2L_S1_C3 indicates that the recording was done in Portal 2, with people leaving the portal, captured by camera 3 in the first recorded sequence.
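
    As a quick illustration of this naming convention (a sketch, not an official utility of the dataset), the fields can be pulled apart with a small regular expression:

    # Minimal sketch: parse a ChokePoint sequence name such as "P2L_S1_C3".
    import re

    def parse_sequence_name(name):
        match = re.fullmatch(r"P(\d+)([EL])_S(\d+)_C(\d+)", name)
        if match is None:
            raise ValueError("unexpected sequence name: " + name)
        portal, direction, sequence, camera = match.groups()
        return {
            "portal": int(portal),
            "direction": "entering" if direction == "E" else "leaving",
            "sequence": int(sequence),
            "camera": int(camera),
        }

    print(parse_sequence_name("P2L_S1_C3"))
    # {'portal': 2, 'direction': 'leaving', 'sequence': 1, 'camera': 3}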

    To pose a more challenging real-world surveillance problem, two sequences (P2E_S5 and P2L_S5) were recorded with a crowded scenario. In addition to the aforementioned variations, these sequences exhibit continuous occlusion, which presents challenges for identity tracking and face verification.

    This dataset can be applied to, but is not limited to, the following research areas:

    • person re-identification
    • image set matching
    • face quality measurement
    • face clustering
    • 3D face reconstruction
    • pedestrian/face tracking
    • background estimation and subtraction

    Please cite the following paper if you use the ChokePoint dataset in your work (papers, articles, reports, books, software, etc):

    Y. Wong, S. Chen, S. Mau, C. Sanderson, B.C. Lovell Patch-based Probabilistic Image Quality Assessment for Face Selection and Improved Video-based Face Recognition IEEE Biometrics Workshop, Computer Vision and Pattern Recognition (CVPR) Workshops, pages 81-88, 2011. http://doi.org/10.1109/CVPRW.2011.5981881

  17. Uncal apex positions observed in public neuroimaging datasets

    • data.mendeley.com
    Updated Feb 9, 2020
    Cite
    Jordan Poppenk (2020). Uncal apex positions observed in public neuroimaging datasets [Dataset]. http://doi.org/10.17632/c4gyz845yx.1
    Dataset updated
    Feb 9, 2020
    Authors
    Jordan Poppenk
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    The current dataset consists of coordinate locations for the uncal apex, as observed within 1,092 different brain images (each containing two hippocampi, for 2,184 coordinate locations in total) in the ADNI1, ADNI2, and CamCAN datasets. The coordinates were manually recorded by three raters, who in each case achieved a DICE inter-rater agreement coefficient of 0.8 or greater with another rater before recording coordinates for the purposes of the current dataset. The coordinates refer to locations in native coordinate space. The brain images referred to by the coordinates may be obtained separately from the owners of the above open data repositories. These data were originally used in the research article "Uncal apex position varies with normal aging" (Poppenk, 2020).

  18. Speed Camera Violations

    • catalog.data.gov
    • data.cityofchicago.org
    • +2 more
    Updated Mar 14, 2025
    + more versions
    Cite
    data.cityofchicago.org (2025). Speed Camera Violations [Dataset]. https://catalog.data.gov/dataset/speed-camera-violations
    Dataset updated
    Mar 14, 2025
    Dataset provided by
    data.cityofchicago.org
    Description

    This dataset reflects the daily volume of violations that have occurred in Children's Safety Zones for each camera. The data reflects violations that occurred from July 1, 2014 until the present, minus the most recent 14 days. This data may change due to occasional time lags between the capturing of a potential violation and the processing and determination of a violation. The most recent 14 days are not shown due to revised data being submitted to the City of Chicago. The reported violations are those that have been collected by the camera and radar system and reviewed by two separate City contractors. In some instances, due to the inability to identify the registered owner of the offending vehicle, the violation may not be issued as a citation. However, this dataset contains all violations regardless of whether a citation was issued, which provides an accurate view into the Automated Speed Enforcement Program violations taking place in Children's Safety Zones. More information on the Safety Zone Program can be found here: http://www.cityofchicago.org/city/en/depts/cdot/supp_info/children_s_safetyzoneporgramautomaticspeedenforcement.html. The corresponding dataset for red light camera violations is https://data.cityofchicago.org/id/spqx-js37.

  19. Language in the aging brain: The network dynamics of cognitive decline and...

    • neurovault.org
    nifti
    Updated Oct 13, 2018
    Cite
    (2018). Language in the aging brain: The network dynamics of cognitive decline and preservation: 321899_AV-freq_AudVid300 [Dataset]. http://identifiers.org/neurovault.image:88192
    Available download formats: nifti
    Dataset updated
    Oct 13, 2018
    License

    CC0 1.0 Universal Public Domain Dedication: https://creativecommons.org/publicdomain/zero/1.0/
    License information was derived automatically

    Description

    Collection description

    Contrasts from the sensori-motor task of the Camcan dataset

    Subject species

    homo sapiens

    Modality

    fMRI-BOLD

    Analysis level

    single-subject

    Map type

    Z

  20. Language in the aging brain: The network dynamics of cognitive decline and...

    • neurovault.org
    nifti
    Updated Oct 13, 2018
    Cite
    (2018). Language in the aging brain: The network dynamics of cognitive decline and preservation: 620073_audio-video_AudOnly [Dataset]. http://identifiers.org/neurovault.image:92446
    Available download formats: nifti
    Dataset updated
    Oct 13, 2018
    License

    CC0 1.0 Universal Public Domain Dedication: https://creativecommons.org/publicdomain/zero/1.0/
    License information was derived automatically

    Description

    Collection description

    Contrasts from the sensori-motor task of the Camcan dataset

    Subject species

    homo sapiens

    Modality

    fMRI-BOLD

    Analysis level

    single-subject

    Map type

    Z
