100+ datasets found
  1. NINDS Common Data Elements

    • neuinfo.org
    • scicrunch.org
    • +2 more
    Updated Mar 15, 2018
    Cite
    (2018). NINDS Common Data Elements [Dataset]. http://identifiers.org/RRID:SCR_006577
    Explore at:
    Dataset updated
    Mar 15, 2018
    Description

    The purpose of the NINDS Common Data Elements (CDEs) Project is to standardize the collection of investigational data in order to facilitate comparison of results across studies and more effectively aggregate information into significant metadata results. The goal of the National Institute of Neurological Disorders and Stroke (NINDS) CDE Project specifically is to develop data standards for clinical research within the neurological community. Central to this Project is the creation of common definitions and data sets so that information (data) is consistently captured and recorded across studies. To harmonize data collected from clinical studies, the NINDS Office of Clinical Research is spearheading the effort to develop CDEs in neuroscience. This Web site outlines these data standards and provides accompanying tools to help investigators and research teams collect and record standardized clinical data. The Institute still encourages creativity and uniqueness by allowing investigators to independently identify and add their own critical variables. The CDEs have been identified through review of the documentation of numerous studies funded by NINDS, review of the literature and regulatory requirements, and review of other Institutes' common data efforts. Other data standards, such as those of the Clinical Data Interchange Standards Consortium (CDISC), the Clinical Data Acquisition Standards Harmonization (CDASH) Initiative, ClinicalTrials.gov, the NINDS Genetics Repository, and the NIH Roadmap efforts, have also been followed to ensure that the NINDS CDEs are comprehensive and as compatible as possible with those standards.

    CDEs now available:

    * General (CDEs that cross diseases; updated Feb. 2011)
    * Congenital Muscular Dystrophy
    * Epilepsy (updated Sept. 2011)
    * Friedreich's Ataxia
    * Parkinson's Disease
    * Spinal Cord Injury
    * Stroke
    * Traumatic Brain Injury

    CDEs in development:

    * Amyotrophic Lateral Sclerosis (public review Sept. 15 through Nov. 15)
    * Frontotemporal Dementia
    * Headache
    * Huntington's Disease
    * Multiple Sclerosis
    * Neuromuscular Diseases (adult and pediatric working groups are being finalized; these groups will focus on Duchenne Muscular Dystrophy, Facioscapulohumeral Muscular Dystrophy, Myasthenia Gravis, Myotonic Dystrophy, and Spinal Muscular Atrophy)

    The following tools are available through this portal:

    * CDE Catalog - includes the universe of all CDEs. Users are able to search the full universe to isolate a subset of the CDEs (e.g., all stroke-specific CDEs, all pediatric epilepsy CDEs) and download details about those CDEs.
    * CRF Library - (a.k.a. the Library of Case Report Form Modules and Guidelines) contains all the CRF Modules that have been created through the NINDS CDE Project as well as various guideline documents. Users are able to search the library to find CRF Modules and Guidelines of interest.
    * Form Builder - enables users to start the process of assembling a CRF or form by choosing the CDEs they would like to include on it. This tool is intended to assist data managers and database developers in creating data dictionaries for their study forms.

  2. CMS Synthetic Patient Data OMOP

    • redivis.com
    application/jsonl +7
    Updated Aug 19, 2020
    + more versions
    Cite
    Redivis Demo Organization (2020). CMS Synthetic Patient Data OMOP [Dataset]. https://redivis.com/datasets/ye2v-6skh7wdr7
    Explore at:
    Available download formats: sas, avro, parquet, stata, application/jsonl, arrow, csv, spss
    Dataset updated
    Aug 19, 2020
    Dataset provided by
    Redivis Inc.
    Authors
    Redivis Demo Organization
    Time period covered
    Jan 1, 2008 - Dec 31, 2010
    Description

    Abstract

    This is a synthetic patient dataset in the OMOP Common Data Model v5.2, originally released by the CMS and accessed via BigQuery. The dataset includes 24 tables and records for 2 million synthetic patients from 2008 to 2010.

    Methodology

    This dataset follows the format of the Observational Medical Outcomes Partnership Common Data Model (OMOP CDM). The purpose of the Common Data Model is to convert variously formatted source datasets into a single well-known format with a set of standardized vocabularies, as shown in the diagram below from the Observational Health Data Sciences and Informatics (OHDSI) webpage.

    [Figure: Why-CDM.png - OHDSI diagram illustrating the conversion of disparate source formats into the OMOP Common Data Model]

    Such universal data models ultimately enable researchers to streamline the analysis of observational medical data. For more information regarding the OMOP CDM, refer to the OHDSI OMOP site.

    Usage

    • For documentation regarding the source data format from the Center for Medicare and Medicaid Services (CMS), refer to the CMS Synthetic Public Use File (https://www.cms.gov/Research-Statistics-Data-and-Systems/Downloadable-Public-Use-Files/SynPUFs/DE_Syn_PUF).

    • For information regarding the conversion of the CMS data file to the OMOP CDM v5.2, refer to the OHDSI ETL-CMS GitHub page (https://github.com/OHDSI/ETL-CMS).

    • For information regarding each of the 24 tables in this dataset, including more detailed variable metadata, see the OHDSI CDM GitHub Wiki page (https://github.com/OHDSI/CommonDataModel/wiki). All variable labels and descriptions, as well as table descriptions, come from this Wiki page. Note that this GitHub page primarily covers version 6.0 of the CDM, while this dataset uses version 5.2.
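    As a quick orientation to working with these tables, here is a minimal pandas sketch, assuming the dataset's person table has been exported to a CSV file (the file name is hypothetical; person_id and year_of_birth are standard OMOP CDM v5.2 columns):

    import pandas as pd

    # Load an exported OMOP CDM "person" table (one row per synthetic patient).
    # "person.csv" is a hypothetical export file name.
    person = pd.read_csv("person.csv")

    # Count distinct patients by year of birth using standard CDM columns.
    by_year = person.groupby("year_of_birth")["person_id"].nunique()
    print(by_year.sort_index().head())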

  3. Data set.

    • plos.figshare.com
    bin
    Updated Jun 5, 2023
    Cite
    Diane M. Quinn; Amy Canevello; Jennifer K. Crocker (2023). Data set. [Dataset]. http://doi.org/10.1371/journal.pone.0286709.s006
    Explore at:
    Available download formats: bin
    Dataset updated
    Jun 5, 2023
    Dataset provided by
    PLOS (http://plos.org/)
    Authors
    Diane M. Quinn; Amy Canevello; Jennifer K. Crocker
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Rising rates of depression among adolescents raise many questions about the role of depressive symptoms in academic outcomes for college students and their roommates. In the current longitudinal study, we follow previously unacquainted roommate dyads over their first year in college (N = 245 dyads). We examine the role of depressive symptoms of incoming students and their roommates on their GPAs and class withdrawals (provided by university registrars) at the end of the Fall and Spring semesters. We test contagion between the roommates on both academic outcomes and depressive symptoms over time. Finally, we examine the moderating role of relationship closeness. Whereas students’ own initial levels of depressive symptoms predicted their own lower GPA and more course withdrawals, they did not directly predict the academic outcomes of their roommates. For roommates who form close relationships, there was evidence of contagion of both GPAs and depressive symptoms at the end of Fall and Spring semesters. Finally, a longitudinal path model showed that as depressive symptoms spread from the student to their roommate, the roommate’s GPA decreased. The current work sheds light on a common college experience with implications for the role of interventions to increase the academic and mental health of college students.

  4. CommonCrawl WET Sample

    • kaggle.com
    zip
    Updated May 1, 2023
    Cite
    Jye (2023). CommonCrawl WET Sample [Dataset]. https://www.kaggle.com/datasets/jyesawtellrickson/commoncrawl
    Explore at:
    Available download formats: zip (109213996 bytes)
    Dataset updated
    May 1, 2023
    Authors
    Jye
    Description

    A sample of the Common Crawl dataset. The archive has 38,079 rows, and is one of 80,000 samples.

    "The Common Crawl corpus contains petabytes of data collected since 2008. It contains raw web page data, extracted metadata and text extractions."

    https://commoncrawl.org/

    WET Response Format: "As many tasks only require textual information, the Common Crawl dataset provides WET files that only contain extracted plaintext. The way in which this textual data is stored in the WET format is quite simple. The WARC metadata contains various details, including the URL and the length of the plaintext data, with the plaintext data following immediately afterwards."
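    As a sketch of reading such an archive in Python with the warcio library (the file name is an assumption; in WET files the extracted plaintext is carried in "conversion" records):

    from warcio.archiveiterator import ArchiveIterator

    # Iterate a WET archive and print the URL and plaintext length of each record.
    # "sample.warc.wet.gz" is a hypothetical local file name.
    with open("sample.warc.wet.gz", "rb") as stream:
        for record in ArchiveIterator(stream):
            if record.rec_type == "conversion":  # plaintext extraction records
                url = record.rec_headers.get_header("WARC-Target-URI")
                text = record.content_stream().read().decode("utf-8", errors="replace")
                print(url, len(text))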

  5. Model output and data used for analysis

    • catalog.data.gov
    Updated Nov 12, 2020
    Cite
    U.S. EPA Office of Research and Development (ORD) (2020). Model output and data used for analysis [Dataset]. https://catalog.data.gov/dataset/model-output-and-data-used-for-analysis
    Explore at:
    Dataset updated
    Nov 12, 2020
    Dataset provided by
    United States Environmental Protection Agency (http://www.epa.gov/)
    Description

    The modeled data in these archives are in the NetCDF format (https://www.unidata.ucar.edu/software/netcdf/). NetCDF (Network Common Data Form) is a set of software libraries and machine-independent data formats that support the creation, access, and sharing of array-oriented scientific data. It is also a community standard for sharing scientific data. The Unidata Program Center supports and maintains netCDF programming interfaces for C, C++, Java, and Fortran; programming interfaces are also available for Python, IDL, MATLAB, R, Ruby, and Perl. Data in netCDF format is:

    • Self-describing. A netCDF file includes information about the data it contains.
    • Portable. A netCDF file can be accessed by computers with different ways of storing integers, characters, and floating-point numbers.
    • Scalable. Small subsets of large datasets in various formats may be accessed efficiently through netCDF interfaces, even from remote servers.
    • Appendable. Data may be appended to a properly structured netCDF file without copying the dataset or redefining its structure.
    • Sharable. One writer and multiple readers may simultaneously access the same netCDF file.
    • Archivable. Access to all earlier forms of netCDF data will be supported by current and future versions of the software.

    The archives contain the following:

    • Pub_figures.tar.zip contains the NCL scripts for figures 1-5 and the Chesapeake Bay Airshed shapefile. The directory structure of the archive is ./Pub_figures/Fig#_data, where # is the figure number from 1-5.
    • EMISS.data.tar.zip contains two NetCDF files with the emission totals for the 2011ec and 2040ei emission inventories. The file names contain the year of the inventory, and the file header contains a description of each variable and the variable units.
    • EPIC.data.tar.zip contains the monthly mean EPIC data in NetCDF format for ammonium fertilizer application (files with ANH3 in the name) and soil ammonium concentration (files with NH3 in the name) for the historical (Hist directory) and future (RCP-4.5 directory) simulations.
    • WRF.data.tar.zip contains mean monthly and seasonal data from the 36 km downscaled WRF simulations in NetCDF format for the historical (Hist directory) and future (RCP-4.5 directory) simulations.
    • CMAQ.data.tar.zip contains the mean monthly and seasonal data in NetCDF format from the 36 km CMAQ simulations for the historical (Hist directory), future (RCP-4.5 directory), and future-with-historical-emissions (RCP-4.5-hist-emiss directory) cases.

    This dataset is associated with the following publication: Campbell, P., J. Bash, C. Nolte, T. Spero, E. Cooter, K. Hinson, and L. Linker. Projections of Atmospheric Nitrogen Deposition to the Chesapeake Bay Watershed. Journal of Geophysical Research - Biogeosciences. American Geophysical Union, Washington, DC, USA, 12(11): 3307-3326, (2019).
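    A minimal sketch of inspecting one of these files with the netCDF4 Python interface mentioned above (the file name is a placeholder for any NetCDF file extracted from the archives):

    from netCDF4 import Dataset

    # Open a NetCDF file and print its self-describing metadata.
    # "example.nc" is a placeholder for a file extracted from the archives.
    with Dataset("example.nc") as ds:
        print(ds.ncattrs())  # global attributes (the self-describing header)
        for name, var in ds.variables.items():
            print(name, var.dimensions, getattr(var, "units", ""))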

  6. A common standard template for representing stable isotope results and...

    • data.csiro.au
    • researchdata.edu.au
    Updated May 12, 2025
    + more versions
    Cite
    Nina Welti; Lian Flick; Stephanie Hawkins; Geoff Fraser; Kathryn Waltenberg; Jagoda Crawford; Cath Hughes; Athina Puccini; Steve Szarvas; Christoph Gerber; Axel Suckow; Paul Abhijit; Fong Liu (2025). A common standard template for representing stable isotope results and associated metadata [Dataset]. http://doi.org/10.25919/aa8d-yw93
    Explore at:
    Dataset updated
    May 12, 2025
    Dataset provided by
    CSIRO (http://www.csiro.au/)
    Authors
    Nina Welti; Lian Flick; Stephanie Hawkins; Geoff Fraser; Kathryn Waltenberg; Jagoda Crawford; Cath Hughes; Athina Puccini; Steve Szarvas; Christoph Gerber; Axel Suckow; Paul Abhijit; Fong Liu
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Dataset funded by
    CSIRO (http://www.csiro.au/)
    ANSTO
    National Measurement Institute
    Geoscience Australia
    Description

    This dataset contains a common standard template for representing the metadata of stable isotope results from environmental samples (e.g., soils, rocks, water, gases) and a CSIRO-specific vocabulary for use across CSIRO research activities. The templates include core properties of stable isotope results, analytical methods, and uncertainty of analyses, as well as associated sample metadata such as name, identifier, type, and location. The templates enable users with disparate data to find common ground regardless of differences within the data itself (e.g., sample types and collections). The standardized templates can prevent duplicate sample metadata entry and lower metadata redundancy, thereby improving stable isotope data curation and discovery. They have been developed iteratively, revised, and improved based on feedback from researchers and lab technicians. Use of this template and vocabularies will facilitate interoperable, machine-readable, platform-ready data collections.

    Lineage: CSIRO, in partnership with the Australian Nuclear Science and Technology Organisation (ANSTO), Geoscience Australia, and the National Measurement Institute, has developed a common metadata template for reporting stable isotope results. The common template was designed to provide a shared language for stable isotope data so that the data can be unified for reuse. Using a simplified data structure, the common template allows for the supply of data from different organisations with different corporate goals, data infrastructure, operating models, and specialist skills. The common ontology describes the different concepts present in the data, giving meaning to the stable isotope observations or measurements of (isotopic) properties of physical samples of the environment. It coordinates this description of samples with standardised metadata and vocabularies, which facilitate machine-readability and semantic cross-linking of resources for interoperability between multiple domains and systems. This is to assist in reducing the need for human data manipulation, which can be prone to errors, to provide a machine-readable format for new and emerging technology use-cases, and to help align stable isotope data with the FAIR principles for Australian public data. In addition to the common template, the partners have developed a platform for making unified stable isotope data available for reuse, co-funded by the Australian Research Data Commons (ARDC). The aim of IsotopesAU is to repurpose existing publicly available environmental stable isotope data into a federated data platform, allowing single-point access to the data collections. The IsotopesAU platform currently harmonises and federates stable isotope data from the partner agencies' existing public collections, translating metadata templates to the common template.

    The templates have been developed iteratively, revised, and improved based on feedback from project participants, researchers, and lab technicians.

  7. International Comprehensive Ocean-Atmosphere Data Set (ICOADS) Release 3.0...

    • catalog.data.gov
    • datasets.ai
    • +1 more
    Updated Sep 19, 2023
    + more versions
    Cite
    DOC/NOAA/NESDIS/NCEI > National Centers for Environmental Information, NESDIS, NOAA, U.S. Department of Commerce (Point of Contact) (2023). International Comprehensive Ocean-Atmosphere Data Set (ICOADS) Release 3.0 Final, Individual Reports in the International Maritime Meteorological Archive Format version 1 (IMMA1) [Dataset]. https://catalog.data.gov/dataset/international-comprehensive-ocean-atmosphere-data-set-icoads-release-3-0-final-individual-repor1
    Explore at:
    Dataset updated
    Sep 19, 2023
    Dataset provided by
    United States Department of Commerce (http://commerce.gov/)
    National Oceanic and Atmospheric Administration (http://www.noaa.gov/)
    National Centers for Environmental Information (https://www.ncei.noaa.gov/)
    Description

    This dataset, the International Comprehensive Ocean-Atmosphere Data Set (ICOADS), is the most widely used freely available collection of surface marine observations, with over 455 million individual marine reports spanning 1662-2014. Each report contains the observations and metadata reported from a given ship, buoy, coastal platform, or oceanographic instrument, providing data for the construction of gridded analyses of sea surface temperature, estimates of air-sea interaction, and other meteorological variables. ICOADS observations are assimilated into all major atmospheric, oceanic, and coupled reanalyses, further widening its impact. Release 3.0 (R3) therefore includes changes designed to enable the effective exchange of information describing data quality between ICOADS, reanalysis centres, data set developers, scientists, and the public. These user-driven innovations include the assignment of a unique identifier (UID) to each marine report to enable tracing of observations, linking with reports, and improved data sharing. Other revisions and extensions of ICOADS' International Maritime Meteorological Archive (IMMA) common data format incorporate new near-surface oceanographic data elements and cloud parameters. Many new input data sources have been assembled, and updates and improvements to existing data sources, or removal of erroneous data, have been made. Additionally, these data are offered in NetCDF with useful metadata added in the global and variable attributes of each file to make the NetCDF self-contained. This dataset includes two versions of the official ICOADS Release 3 dataset: 1) the 'Total' product (denoted by 'T' in the filename), which contains all duplicates and is used for verification and research purposes; and 2) the 'Final' R3 product, with duplicates removed, where all reports have been compared for matching dates, IDs, and observed elements, and the best duplicate retained as the final report.

  8. U.S. Geological Survey Oceanographic Time Series Data Collection

    • catalog.data.gov
    • data.usgs.gov
    • +4 more
    Updated Oct 30, 2025
    + more versions
    Cite
    U.S. Geological Survey (2025). U.S. Geological Survey Oceanographic Time Series Data Collection [Dataset]. https://catalog.data.gov/dataset/u-s-geological-survey-oceanographic-time-series-data-collection
    Explore at:
    Dataset updated
    Oct 30, 2025
    Dataset provided by
    United States Geological Survey (http://www.usgs.gov/)
    Description

    The oceanographic time series data collected by U.S. Geological Survey scientists and collaborators are served in an online database at http://stellwagen.er.usgs.gov/index.html. These data were collected as part of research experiments investigating circulation and sediment transport in the coastal ocean. The experiments (projects, research programs) are typically one month to several years long and have been carried out since 1975. New experiments will be conducted, and the data from them will be added to the collection. As of 2016, all but one of the experiments were conducted in waters abutting the U.S. coast; the exception was conducted in the Adriatic Sea. Measurements acquired vary by site and experiment; they usually include current velocity, wave statistics, water temperature, salinity, pressure, turbidity, and light transmission from one or more depths over a time period. The measurements are concentrated near the sea floor but may also include data from the water column. The user interface provides an interactive map, a tabular summary of the experiments, and a separate page for each experiment. Each experiment page has documentation and maps that provide details of what data were collected at each site. Links to related publications with additional information about the research are also provided. The data are stored in Network Common Data Format (netCDF) files using the Equatorial Pacific Information Collection (EPIC) conventions defined by the National Oceanic and Atmospheric Administration (NOAA) Pacific Marine Environmental Laboratory. NetCDF is a general, self-documenting, machine-independent, open source data format created and supported by the University Corporation for Atmospheric Research (UCAR). EPIC is an early set of standards designed to allow researchers from different organizations to share oceanographic data. The files may be downloaded or accessed online using the Open-source Project for a Network Data Access Protocol (OPeNDAP). The OPeNDAP framework allows users to access data from anywhere on the Internet using a variety of Web services including Thematic Realtime Environmental Distributed Data Services (THREDDS). A subset of the data compliant with the Climate and Forecast convention (CF, currently version 1.6) is also available.
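    As an illustration of the OPeNDAP access path described above, a short xarray sketch (the URL is a hypothetical placeholder rather than a real endpoint from the collection, and a "time" coordinate is assumed):

    import xarray as xr

    # Open a remote dataset over OPeNDAP; data are fetched lazily.
    url = "http://stellwagen.er.usgs.gov/thredds/dodsC/EXAMPLE/example.nc"  # placeholder
    ds = xr.open_dataset(url)
    print(ds.data_vars)  # e.g., current velocity, temperature, pressure

    # Download only a small slice rather than the whole record
    # (assumes the dataset has a "time" coordinate).
    subset = ds.sel(time=slice("1996-01-01", "1996-02-01"))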

  9. Coral Reef Evaluation and Monitoring Project Dry Tortugas 2012

    • dataone.org
    • obis.org
    • +1 more
    Updated Sep 16, 2025
    + more versions
    Cite
    Texas A&M University, College Station – Department of Oceanography; Florida Fish and Wildlife Conservation Commission, Fish and Wildlife Research Institute; U.S. Geological Survey HQ; University of Georgia, Odum School of Ecology (2025). Coral Reef Evaluation and Monitoring Project Dry Tortugas 2012 [Dataset]. https://dataone.org/datasets/sha256%3A6834265a8a377adc9fe922fdc3d27dc4b2fc7a6ef7f2ea222b37a85b0b04df43
    Explore at:
    Dataset updated
    Sep 16, 2025
    Dataset provided by
    Ocean Biodiversity Information System (OBIS)
    Authors
    Texas A&M University, College Station – Department of Oceanography; Florida Fish and Wildlife Conservation Commission, Fish and Wildlife Research Institute; U.S. Geological Survey HQ; University of Georgia, Odum School of Ecology
    Time period covered
    Jan 1, 2012
    Area covered
    Description

    The purpose of the Coral Reef Evaluation and Monitoring Project (CREMP) is to monitor the status and trends of selected reefs in the Florida Keys National Marine Sanctuary (FKNMS). CREMP assessments have been conducted annually at fixed sites since 1996, and the data collected provide information on the temporal changes in benthic cover and diversity of stony corals and associated marine flora and fauna. The core field methods continue to be underwater videography and timed coral species inventories. Findings presented in this report include data from 109 stations at 37 sites sampled from 1996 through 2008 in the Florida Keys and 1999 through 2008 in the Dry Tortugas. The report describes the annual differences (between 2007 and 2008) in the percent cover of major benthic taxa (stony corals, octocorals, sponges, and macroalgae), mean coral species richness, and the incidence of stony coral conditions. Additionally, it examines the long-term trends of the major benthic taxa, five coral taxa (the Montastraea annularis complex, Montastraea cavernosa, Colpophyllia natans, Siderastrea siderea, and Porites astreoides), and the clionaid sponge Cliona delitrix. It is one of the longest running coral reef monitoring projects in south Florida and has been extremely important in documenting the temporal changes that have occurred in recent years.

  10. Sydney Harbour Environmental Data Facility Sydney Harbour Model Data 11046

    • researchdata.edu.au
    Updated Sep 6, 2013
    + more versions
    Cite
    The University of Sydney (2013). Sydney Harbour Environmental Data Facility Sydney Harbour Model Data 11046 [Dataset]. https://researchdata.edu.au/sydney-harbour-environmental-model-11046/189582
    Explore at:
    Dataset updated
    Sep 6, 2013
    Dataset provided by
    The University of Sydney
    License

    Attribution-NonCommercial-ShareAlike 3.0 (CC BY-NC-SA 3.0): https://creativecommons.org/licenses/by-nc-sa/3.0/
    License information was derived automatically

    Time period covered
    Sep 5, 2013 - May 13, 2014
    Area covered
    Description

    This data collection contains Hydrodynamic Model output data produced by the Sydney Harbour Hydrodynamic Model.

    The Sydney Harbour (real-time) model collates observations from the Bureau of Meteorology, Macquarie University, Sydney Ports Authority and the Manly Hydraulics Laboratory offshore buoy. The Sydney Harbour Model is contained within the Sydney Harbour Observatory (SHO) system.

    The Sydney Harbour Hydrodynamic Model divides the Harbour water into a number of boxes, or voxels. Each voxel is less than 60 m × 60 m × 1 m (depth); in narrow or shallower parts of the Harbour the voxels are smaller. Layers are numbered, so the sea floor is layer 1 and the surface is layer 24.

    The model is driven by the conditions on its boundaries. It uses rainfall rates at 13 sites in the Sydney catchment, wind speed, tide height, solar radiation, and astronomical tides. The display is refreshed every hour.

    The model utilizes the following environmental data inputs:

    • Dr Serena Lee provided the following: the 24-layer grid of the Sydney Harbour Estuary, bathymetry inputs, and the run-off coefficient formula used to convert rainfall readings provided by the Bureau of Meteorology into boundary input data.
    • The Bureau of Meteorology provides the following model inputs: rainfall from 13 individual rain gauges, air temperature, humidity, barometric pressure, cloud cover, evaporation, wind speed, wind direction, and forecast data.
    • Sydney Ports Authority provides tidal input data.
    • The Office of Environment and Heritage and the Manly Hydraulics Laboratory provide ocean boundary temperature input data.
    • Macquarie University provides solar radiation input data.

    The hydrodynamic modeling system models the following environmental variables:

    • Salinity
    • Temperature
    • Depth average salinity
    • Horizontal water velocity
    • Vertical water velocity
    • Depth average north velocity
    • Depth average east velocity
    • Water elevation

    This dataset is available in Network Common Data Form – Climate and Forecast (NetCDF-CF) format.

  11. Common Data Indicators American Community Survey Tracts (2018-2022)

    • denver-data-library-mappingjustice.hub.arcgis.com
    Updated Dec 31, 2023
    + more versions
    Cite
    geospatialDENVER: Putting Denver on the map. (2023). Common Data Indicators American Community Survey Tracts (2018-2022) [Dataset]. https://denver-data-library-mappingjustice.hub.arcgis.com/datasets/geospatialDenver::common-data-indicators-american-community-survey-tracts-2018-2022
    Explore at:
    Dataset updated
    Dec 31, 2023
    Dataset authored and provided by
    geospatialDENVER: Putting Denver on the map.
    Area covered
    Description

    These data are the common data fields from the American Community Survey agreed upon by the Data Indicators GIS Subcommittee. The data source is census-tract-level data from the American Community Survey 5-year averages for 2018-2022. The original census tract boundaries have been adjusted to various Denver GIS data layers to increase the spatial accuracy of the data. Although every effort was made to ensure accurate rectification, errors may exist due to geographic problems inherent in the original 2010 census block group data. The dataset does not contain data for any enclaves administered by other jurisdictions that are located within the City and County of Denver's boundary. These data are a sample, not a complete census; values should be considered estimates, and a margin-of-error table located on the city network can be used in conjunction with this dataset.

  12. Cooperative Election Study Common Content, 2020

    • dataverse.harvard.edu
    • search.dataone.org
    Updated Feb 14, 2022
    + more versions
    Cite
    Brian Schaffner; Stephen Ansolabehere; Sam Luks (2022). Cooperative Election Study Common Content, 2020 [Dataset]. http://doi.org/10.7910/DVN/E9N6PH
    Explore at:
    Croissant (a format for machine-learning datasets; learn more at mlcommons.org/croissant)
    Dataset updated
    Feb 14, 2022
    Dataset provided by
    Harvard Dataverse
    Authors
    Brian Schaffner; Stephen Ansolabehere; Sam Luks
    License

    CC0 1.0 Universal Public Domain Dedication: https://creativecommons.org/publicdomain/zero/1.0/
    License information was derived automatically

    Description

    This is the final release of the 2020 CES Common Content Dataset. The data includes a nationally representative sample of 61,000 American adults. This release includes the data from the survey, a full guide to the data, and the questionnaires. The dataset includes vote validation performed by Catalist. Please consult the guide and the study website (https://cces.gov.harvard.edu/frequently-asked-questions) if you have questions about the study. Special thanks to Marissa Shih and Rebecca Phillips for their work in preparing this data for release.

  13. English Chain of Thought Prompt & Response Dataset

    • futurebeeai.com
    wav
    Updated Aug 1, 2022
    + more versions
    Cite
    FutureBee AI (2022). English Chain of Thought Prompt & Response Dataset [Dataset]. https://www.futurebeeai.com/dataset/prompt-response-dataset/english-chain-of-thought-text-dataset
    Explore at:
    Available download formats: wav
    Dataset updated
    Aug 1, 2022
    Dataset provided by
    FutureBeeAI
    Authors
    FutureBee AI
    License

    AI Data License Agreement: https://www.futurebeeai.com/policies/ai-data-license-agreement

    Dataset funded by
    FutureBeeAI
    Description

    Welcome to the English Chain of Thought prompt-response dataset, a meticulously curated collection containing 3000 comprehensive prompt and response pairs. This dataset is an invaluable resource for training Language Models (LMs) to generate well-reasoned answers and minimize inaccuracies. Its primary utility lies in enhancing LLMs' reasoning skills for solving arithmetic, common sense, symbolic reasoning, and complex problems.

    Dataset Content

    This COT dataset comprises a diverse set of instructions and questions paired with corresponding answers and rationales in the English language. These prompts and completions cover a broad range of topics and questions, including mathematical concepts, common sense reasoning, complex problem-solving, scientific inquiries, puzzles, and more.

    Each prompt is meticulously accompanied by a response and rationale, providing essential information and insights to enhance the language model training process. These prompts, completions, and rationales were manually curated by native English people, drawing references from various sources, including open-source datasets, news articles, websites, and other reliable references.

    Our chain-of-thought prompt-completion dataset includes various prompt types, such as instructional prompts, continuations, and in-context learning (zero-shot, few-shot) prompts. Additionally, the dataset contains prompts and completions enriched with various forms of rich text, such as lists, tables, code snippets, JSON, and more, with proper markdown format.

    Prompt Diversity

    To ensure a wide-ranging dataset, we have included prompts from a plethora of topics related to mathematics, common sense reasoning, and symbolic reasoning. These topics encompass arithmetic, percentages, ratios, geometry, analogies, spatial reasoning, temporal reasoning, logic puzzles, patterns, and sequences, among others.

    These prompts vary in complexity, spanning easy, medium, and hard levels. Various question types are included, such as multiple-choice, direct queries, and true/false assessments.

    Response Formats

    To accommodate diverse learning experiences, our dataset incorporates different types of answers depending on the prompt and provides step-by-step rationales. The detailed rationale aids the language model in building a reasoning process for complex questions.

    These responses encompass text strings, numerical values, and date and time formats, enhancing the language model's ability to generate reliable, coherent, and contextually appropriate answers.

    Data Format and Annotation Details

    This fully labeled English Chain of Thought Prompt Completion Dataset is available in JSON and CSV formats. It includes annotation details such as a unique ID, prompt, prompt type, prompt complexity, prompt category, domain, response, rationale, response type, and rich text presence.
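    A hypothetical sketch of filtering the JSON release by the annotation fields listed above (the file name and exact field names are assumptions, not taken from the actual release):

    import json

    # Load the prompt-response records and keep hard arithmetic prompts.
    # "english_cot.json" and the field names are illustrative assumptions.
    with open("english_cot.json", "r", encoding="utf-8") as f:
        records = json.load(f)

    hard_math = [
        r for r in records
        if r.get("prompt_complexity") == "hard" and r.get("prompt_category") == "arithmetic"
    ]
    for r in hard_math[:3]:
        print(r["prompt"], "->", r["response"])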

    Quality and Accuracy

    Our dataset upholds the highest standards of quality and accuracy. Each prompt undergoes meticulous validation, and the corresponding responses and rationales are thoroughly verified. We prioritize inclusivity, ensuring that the dataset incorporates prompts and completions representing diverse perspectives and writing styles, maintaining an unbiased and discrimination-free stance.

    The English version is grammatically accurate without any spelling or grammatical errors. No copyrighted, toxic, or harmful content is used during the construction of this dataset.

    Continuous Updates and Customization

    The entire dataset was prepared with the assistance of human curators from the FutureBeeAI crowd community. Ongoing efforts are made to add more assets to this dataset, ensuring its growth and relevance. Additionally, FutureBeeAI offers the ability to gather custom chain of thought prompt completion data tailored to specific needs, providing flexibility and customization options.

    License

    The dataset, created by FutureBeeAI, is now available for commercial use. Researchers, data scientists, and developers can leverage this fully labeled and ready-to-deploy English Chain of Thought Prompt Completion Dataset to enhance the rationale and accurate response generation capabilities of their generative AI models and explore new approaches to NLP tasks.

  14. University of Cape Town Student Admissions Data 2015-2019 - South Africa

    • datafirst.uct.ac.za
    Updated Jul 28, 2020
    + more versions
    Cite
    UCT Student Administration (2020). University of Cape Town Student Admissions Data 2015-2019 - South Africa [Dataset]. https://www.datafirst.uct.ac.za/dataportal/index.php/catalog/787
    Explore at:
    Dataset updated
    Jul 28, 2020
    Dataset authored and provided by
    UCT Student Administration
    Time period covered
    2015 - 2019
    Area covered
    South Africa
    Description

    Abstract

    The dataset was generated from a set of Excel spreadsheets extracted from an Information and Communication Technology Services (ICTS) administrative database on student applications to the University of Cape Town (UCT). The data in this second part of the series contain information on applications to UCT made between January 2015 and September 2019.

    In the original form received by DataFirst the data were ill suited to research purposes. The series represents an attempt at cleaning and organizing the data into a more tractable format.

    Analysis unit

    Individuals, applications

    Universe

    All applications to study at the University of Cape Town

    Kind of data

    Administrative records data

    Mode of data collection

    Other [oth]

    Cleaning operations

    In order to lessen computation times the main applications file was split by year - this part contains the years 2014-2019. Note however that the other 3 files released with the application file (that can be merged into it for additional detail) did not need to be split. As such, the four files can be used to produce a series for 2014-2019 and are labelled as such, even though the person, secondary schooling and tertiary education files all span a longer time period.

    Here is additional information about the files:

    1. Application file: the "finest" or most disaggregated unit of analysis. Individuals may have multiple applications. Uniquely identified by an application ID variable. There are a total of 1,540,129 applications between 2015 and 2019. As mentioned, it was this application file that was split to reduce computation times. It was not necessary or logical to split the other files.
    2. Person file: Each individual is uniquely identified by an individual ID variable. Each individual is associated with information on "key subjects" from a separate data file also contained in the database. These key subjects are all separate variables in the individual level data file. It is important to note that because individuals may have multiple applications, potentially spanning over many years, it was decided not to split the person level datafile. Rather, the person file spans the full data range from 2006 to 2019.
    3. Secondary Education Information: Individuals can also be associated with row entries for each subject. This data file does not have a unique identifier. Instead, each row entry represents a specific secondary school subject for a specific individual. These subjects are quite specific and the data allows the user to distinguish between, for example, higher grade accounting and standard grade accounting. It also allows the user to identify the educational authority issuing the qualification e.g. Cambridge Internal Examinations (CIE) versus National Senior Certificate (NSC). This file spans 2006 to 2019.
    4. Tertiary Education Information: the smallest of the four data files. There are multiple entries for each individual in this dataset. Each row entry contains information on the year, institution, and transcript, and can be associated with individuals. This file spans 2006 to 2019.

    Further information on the processing of the original data files is summarised in a document entitled "Notes on preparing the UCT Student Admissions Data" accompanying the data.
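    A hypothetical pandas sketch of attaching person-level detail to the application file, as described above (file names and ID column names are illustrative assumptions; consult the accompanying notes document for the actual variable names):

    import pandas as pd

    # One row per application and one row per individual, respectively.
    applications = pd.read_csv("applications_2015_2019.csv")  # hypothetical name
    persons = pd.read_csv("persons.csv")                      # hypothetical name

    # Individuals may have multiple applications, so the merge is many-to-one
    # on the (assumed) individual ID column.
    merged = applications.merge(persons, on="individual_id", how="left", validate="m:1")
    print(merged.shape)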

  15. Sample predictors as presented in the common data element guidelines.

    • plos.figshare.com
    • datasetcatalog.nlm.nih.gov
    xls
    Updated Jun 1, 2023
    Cite
    Alishah Mawji; Edmond Li; Arjun Chandna; Teresa Kortz; Samuel Akech; Matthew O. Wiens; Niranjan Kissoon; Mark Ansermino (2023). Sample predictors as presented in the common data element guidelines. [Dataset]. http://doi.org/10.1371/journal.pone.0253051.t002
    Explore at:
    Available download formats: xls
    Dataset updated
    Jun 1, 2023
    Dataset provided by
    PLOS (http://plos.org/)
    Authors
    Alishah Mawji; Edmond Li; Arjun Chandna; Teresa Kortz; Samuel Akech; Matthew O. Wiens; Niranjan Kissoon; Mark Ansermino
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Sample predictors as presented in the common data element guidelines.

  16. Forest Inventory and Analysis Database

    • data-usfs.hub.arcgis.com
    • agdatacommons.nal.usda.gov
    • +8 more
    Updated Apr 14, 2017
    + more versions
    Cite
    U.S. Forest Service (2017). Forest Inventory and Analysis Database [Dataset]. https://data-usfs.hub.arcgis.com/documents/bc09d4e07dbb4d539a8e46dd3639b5fe
    Explore at:
    Dataset updated
    Apr 14, 2017
    Dataset provided by
    U.S. Department of Agriculture Forest Service (http://fs.fed.us/)
    Authors
    U.S. Forest Service
    License

    CC0 1.0 Universal Public Domain Dedication: https://creativecommons.org/publicdomain/zero/1.0/
    License information was derived automatically

    Area covered
    Description

    The Forest Inventory and Analysis (FIA) research program has been in existence since mandated by Congress in 1928. FIA's primary objective is to determine the extent, condition, volume, growth, and depletion of timber on the Nation's forest land. Before 1999, all inventories were conducted on a periodic basis. The passage of the 1998 Farm Bill requires FIA to collect data annually on plots within each State. This kind of up-to-date information is essential to frame realistic forest policies and programs. Summary reports for individual States are published but the Forest Service also provides data collected in each inventory to those interested in further analysis. Data is distributed via the FIA DataMart in a standard format. This standard format, referred to as the Forest Inventory and Analysis Database (FIADB) structure, was developed to provide users with as much data as possible in a consistent manner among States. A number of inventories conducted prior to the implementation of the annual inventory are available in the FIADB. However, various data attributes may be empty or the items may have been collected or computed differently. Annual inventories use a common plot design and common data collection procedures nationwide, resulting in greater consistency among FIA work units than earlier inventories. Links to field collection manuals and the FIADB user's manual are provided in the FIA DataMart.

  17. Defra Business Plan Quarterly Data Summary - Dataset - data.gov.uk

    • ckan.publishing.service.gov.uk
    Updated Sep 22, 2011
    + more versions
    Cite
    ckan.publishing.service.gov.uk (2011). Defra Business Plan Quarterly Data Summary - Dataset - data.gov.uk [Dataset]. https://ckan.publishing.service.gov.uk/dataset/defra-business-plan-quarterly-data-summary
    Explore at:
    Dataset updated
    Sep 22, 2011
    Dataset provided by
    CKAN (https://ckan.org/)
    License

    Open Government Licence 3.0: http://www.nationalarchives.gov.uk/doc/open-government-licence/version/3/
    License information was derived automatically

    Description

    The Business Plan Quarterly Data Summaries (QDS) are a core part of the transparency agenda. They provide the latest data on indicators included in Departmental Business Plans. The QDS comprises the QDS template and a complementary measurement annex. The QDS template includes metrics in three key headline areas: Spending, Results, and People.

    Spending: this section sets out the outturn (actual spending) for each department, along with details of common areas of spend, major projects, and some financial indicators relating to performance on financial management.

    Results: this section sets out each department's input and impact indicators, additional data sets showing progress against key reforms, and progress against the actions in the department's Structural Reform Plan.

    People: this section sets out information on each department's workforce in terms of its size, composition (including diversity), attendance, and people survey results.

    The measurement annex provides information on the indicator methodology, coverage, and the period the data relate to.

    September 2012: The QDS has been withdrawn from publication while the style and content are revised. In the meantime, the latest data for the Business Plan indicators will be published on this site quarterly.

    December 2012: Under the new QDS framework, departments' spending data is published every quarter to show the taxpayer how the Government is spending their money. The QDS grew out of commitments made in the 2011 Budget and the Written Ministerial Statement on Business Plans. For the financial year 2012/13 the QDS has been revised and improved in line with Action 9 of the Civil Service Reform Plan to provide a common set of data that will enable comparisons of operational performance across Government, so that departments and individuals can be held to account. The QDS breaks down the total spend of the department in three ways: by Budget, by Internal Operation, and by Transaction. At the moment this data is published by individual departments in Excel format; in the future the intention is to make the data available centrally through an online application. Over time further improvements will be made to the quality of the data and its timeliness. We expect that with time this process will allow the public to better understand the performance of each department and government operations in a meaningful way.

    The QDS template is the same for all departments, though the individual detail of grants and policy will differ from department to department. In using this data: 1. People should ensure they take full note of the caveats noted in each department's return. 2. As the improvement of the QDS is an ongoing process, data quality and completeness will develop over time, and necessary caution should be applied to any comparative analysis undertaken.

    Departmental commentary: This is the first time that data have been requested in this format, covering just the core Department. Defra is working closely with the Cabinet Office to ensure consistency and completeness of its data across the full range of revised requirements. Further improvements will be evident in future QDS publications.

  18. Data from: Training dataset from the Da Vinci Research Kit

    • portaldelainvestigacion.uma.es
    • data.niaid.nih.gov
    Updated 2020
    Cite
    Irene Rivas-Blanco; Carlos Pérez-del-Pulgar; Andrea Mariani; Giuseppe Tortora; Irene Rivas-Blanco; Carlos Pérez-del-Pulgar; Andrea Mariani; Giuseppe Tortora (2020). Training dataset from the Da Vinci Research Kit [Dataset]. https://portaldelainvestigacion.uma.es/documentos/67321ed6aea56d4af0485dfe
    Explore at:
    Dataset updated
    2020
    Authors
    Irene Rivas-Blanco; Carlos Pérez-del-Pulgar; Andrea Mariani; Giuseppe Tortora; Irene Rivas-Blanco; Carlos Pérez-del-Pulgar; Andrea Mariani; Giuseppe Tortora
    Description

    Data sets are gaining relevance in surgical robotics, since they can be used to recognise and automate tasks in the lab. A common data set also allows different algorithms and methods to be compared. The objective of this work is to provide a complete data set of several training tasks that surgeons perform to improve their skills. For this purpose, the Da Vinci Research Kit has been used to perform different training tasks. The resulting data set includes all the information provided by the da Vinci robot together with the corresponding video from the camera. Kinematic data were collected at 50 frames per second, and images at 15 frames per second. All the information has been carefully timestamped and provided in a readable CSV format. The application used to retrieve the information from the Da Vinci Research Kit, as well as tools to access the information, are also provided.

  19. Fundraising Data

    • kaggle.com
    zip
    Updated Aug 17, 2018
    Cite
    Michael Pawlus (2018). Fundraising Data [Dataset]. https://www.kaggle.com/michaelpawlus/fundraising-data
    Explore at:
    Available download formats: zip (1087024 bytes)
    Dataset updated
    Aug 17, 2018
    Authors
    Michael Pawlus
    Description

    Context

    This data set is a collection of anonymized sample fundraising data sets so that practitioners within our field can practice and share examples using a common data source.

    Open Call for More Content

    If you have any anonymous data that you would like to include here let me know: Michael Pawlus (pawlus@usc.edu)

    Acknowledgements

    Thanks to everyone who has shared data so far to make this possible.

  20. mmWave-based Fitness Activity Recognition Dataset

    • zenodo.org
    png, zip
    Updated Jul 12, 2024
    Share
    FacebookFacebook
    TwitterTwitter
    Email
    Click to copy link
    Link copied
    Close
    Cite
    Yucheng Xie; Xiaonan Guo; Yan Wang; Jerry Cheng; Yingying Chen; Yucheng Xie; Xiaonan Guo; Yan Wang; Jerry Cheng; Yingying Chen (2024). mmWave-based Fitness Activity Recognition Dataset [Dataset]. http://doi.org/10.5281/zenodo.7793613
    Explore at:
    Available download formats: zip, png
    Dataset updated
    Jul 12, 2024
    Dataset provided by
    Zenodo
    Authors
    Yucheng Xie; Xiaonan Guo; Yan Wang; Jerry Cheng; Yingying Chen; Yucheng Xie; Xiaonan Guo; Yan Wang; Jerry Cheng; Yingying Chen
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Description:

    These mmWave datasets are used for fitness activity identification. This dataset (FA Dataset) contains 14 common daily fitness activities. The data were captured by the TI-AWR1642 mmWave radar. The dataset can be used by fellow researchers to reproduce the original work or to further explore other machine-learning problems in the domain of mmWave signals.

    Format: .png

    Section 1: Device Configuration

    Section 2: Data Format

    We provide our mmWave data as heatmaps for this dataset. The data files are in PNG format. The details are as follows:

    • 14 activities are included in the FA Dataset.
    • 2 participants are included in the FA Dataset.
    • FA_d_p_i_u_j.png:
      • d represents the date the fitness data were collected.
      • p represents the environment in which the fitness data were collected.
      • i represents the fitness activity type index.
      • u represents the user ID.
      • j represents the sample index.
    • Example:
      • FA_20220101_lab_1_2_3 represents the 3rd data sample of user 2 performing activity 1, collected in the lab.
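    A minimal Python sketch of decoding this naming scheme (purely illustrative):

    import re

    # Parse FA_d_p_i_u_j.png into its five fields.
    name = "FA_20220101_lab_1_2_3.png"
    pattern = r"FA_(?P<date>\d+)_(?P<env>[^_]+)_(?P<activity>\d+)_(?P<user>\d+)_(?P<sample>\d+)\.png"
    m = re.match(pattern, name)
    if m:
        print(m.groupdict())
        # {'date': '20220101', 'env': 'lab', 'activity': '1', 'user': '2', 'sample': '3'}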

    Section 3: Experimental Setup

    • We place the mmWave device on a table with a height of 60 cm.
    • The participants are asked to perform fitness activities in front of the mmWave device at a distance of 2 m.
    • The data are collected in a lab with a size of 5.0 m × 3.0 m.

    Section 4: Data Description

    • We develop a spatial-temporal heatmap that integrates multiple activity features, including the range of movement, velocity, and time duration of each activity repetition.

    • We first derive the Doppler-range map of the users' activity by calculating the Range-FFT and Doppler-FFT. Then, we generate the spatial-temporal heatmap by accumulating the velocity at every distance across the Doppler-range maps. Next, we normalize the derived velocity information and present the velocity-distance relationship in the time dimension. In this way, we transform the original instantaneous velocity-distance relationship into a more comprehensive spatial-temporal heatmap that describes the process of a whole activity.

    • As shown in the attached figure, in each spatial-temporal heatmap the horizontal axis represents the time duration of an activity repetition, while the vertical axis represents the range of movement. Velocity is represented by color.

    • We create 14 zip files to store the dataset. The files start with "FA", and each contains repetitions of the same fitness activity.

    14 common daily activities and their corresponding files

    File Name   Activity Type            File Name   Activity Type
    FA1         Crunches                 FA8         Squats
    FA2         Elbow plank and reach    FA9         Burpees
    FA3         Leg raise                FA10        Chest squeezes
    FA4         Lunges                   FA11        High knees
    FA5         Mountain climber         FA12        Side leg raise
    FA6         Punches                  FA13        Side to side chops
    FA7         Push ups                 FA14        Turning kicks

    Section 5: Raw Data and Data Processing Algorithms

    • We also provide the mmWave raw data (.mat format), stored in the same zip file corresponding to the heatmap datasets. Each .mat file stores one set of activity repetitions (e.g., 4 repetitions) from the same user.
      • For example, FA_d_p_i_u_j.mat:
        • d represents the date of data collection.
        • p represents the environment of data collection.
        • i represents the activity type index.
        • u represents the user ID.
        • j represents the set index.
    • We plan to provide the data processing algorithms (heatmap_generation.py) to load the mmWave raw data and generate the corresponding heatmap data.
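    A minimal SciPy sketch of loading one raw .mat file (the file name follows the convention above; the variable names inside the file are not documented here, so only the keys are listed):

    from scipy.io import loadmat

    # Load one set of activity repetitions and list its data variables,
    # skipping MATLAB's internal "__header__"-style keys.
    raw = loadmat("FA_20220101_lab_1_2_3.mat")
    print([k for k in raw if not k.startswith("__")])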

    Section 6: Citations

    If your paper is related to our work, please cite our papers as follows.

    https://ieeexplore.ieee.org/document/9868878/

    Xie, Yucheng, Ruizhe Jiang, Xiaonan Guo, Yan Wang, Jerry Cheng, and Yingying Chen. "mmFit: Low-Effort Personalized Fitness Monitoring Using Millimeter Wave." In 2022 International Conference on Computer Communications and Networks (ICCCN), pp. 1-10. IEEE, 2022.

    Bibtex:

    @inproceedings{xie2022mmfit,
      title={mmFit: Low-Effort Personalized Fitness Monitoring Using Millimeter Wave},
      author={Xie, Yucheng and Jiang, Ruizhe and Guo, Xiaonan and Wang, Yan and Cheng, Jerry and Chen, Yingying},
      booktitle={2022 International Conference on Computer Communications and Networks (ICCCN)},
      pages={1--10},
      year={2022},
      organization={IEEE}
    }
