38 datasets found
  1. Count of CDEs from each initiative.

    • plos.figshare.com
    xls
    Updated Jul 7, 2023
    + more versions
    Cite
    Craig S. Mayer; Vojtech Huser (2023). Count of CDEs from each initiative. [Dataset]. http://doi.org/10.1371/journal.pone.0283601.t003
    Available download formats: xls
    Dataset updated
    Jul 7, 2023
    Dataset provided by
    PLOS ONE
    Authors
    Craig S. Mayer; Vojtech Huser
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    There are many initiatives attempting to harmonize data collection across human clinical studies using common data elements (CDEs). The increased use of CDEs in large prior studies can guide researchers planning new studies. For that purpose, we analyzed the All of Us (AoU) program, an ongoing US study intending to enroll one million participants and serve as a platform for numerous observational analyses. AoU adopted the OMOP Common Data Model to standardize both research (Case Report Form [CRF]) and real-world (imported from Electronic Health Records [EHRs]) data. AoU standardized specific data elements and values by including CDEs from terminologies such as LOINC and SNOMED CT. For this study, we defined all elements from established terminologies as CDEs and all custom concepts created in the Participant Provided Information (PPI) terminology as unique data elements (UDEs). We found 1 033 research elements, 4 592 element-value combinations and 932 distinct values. Most elements were UDEs (869, 84.1%), while most CDEs were from LOINC (103 elements, 10.0%) or SNOMED CT (60, 5.8%). Of the LOINC CDEs, 87 (53.1% of 164 CDEs) originated from previous data collection initiatives, such as PhenX (17 CDEs) and PROMIS (15 CDEs). On a CRF level, The Basics (12 of 21 elements, 57.1%) and Lifestyle (10 of 14, 71.4%) were the only CRFs with multiple CDEs. On a value level, 61.7% of distinct values are from an established terminology. AoU demonstrates the use of the OMOP model for integrating research and routine healthcare data (64 elements in both contexts), which allows for monitoring lifestyle and health changes outside the research setting. The increased inclusion of CDEs in large studies (like AoU) is important in facilitating the use of existing tools and improving the ease of understanding and analyzing the data collected, which is more challenging when using study specific formats.

  2. NINDS Common Data Elements

    • dknet.org
    • scicrunch.org
    • +1more
    Updated Mar 15, 2018
    Cite
    (2018). NINDS Common Data Elements [Dataset]. http://identifiers.org/RRID:SCR_006577
    Dataset updated
    Mar 15, 2018
    Description

    The purpose of the NINDS Common Data Elements (CDEs) Project is to standardize the collection of investigational data in order to facilitate comparison of results across studies and more effectively aggregate information into significant metadata results. The goal of the National Institute of Neurological Disorders and Stroke (NINDS) CDE Project specifically is to develop data standards for clinical research within the neurological community. Central to this Project is the creation of common definitions and data sets so that information (data) is consistently captured and recorded across studies. To harmonize data collected from clinical studies, the NINDS Office of Clinical Research is spearheading the effort to develop CDEs in neuroscience. This Web site outlines these data standards and provides accompanying tools to help investigators and research teams collect and record standardized clinical data. The Institute still encourages creativity and uniqueness by allowing investigators to independently identify and add their own critical variables. The CDEs have been identified through review of the documentation of numerous studies funded by NINDS, review of the literature and regulatory requirements, and review of other Institutes' common data efforts. Other data standards, such as those of the Clinical Data Interchange Standards Consortium (CDISC), the Clinical Data Acquisition Standards Harmonization (CDASH) Initiative, ClinicalTrials.gov, the NINDS Genetics Repository, and the NIH Roadmap efforts, have also been followed to ensure that the NINDS CDEs are comprehensive and as compatible as possible with those standards.

    CDEs now available: General (CDEs that cross diseases; updated Feb. 2011), Congenital Muscular Dystrophy, Epilepsy (updated Sept. 2011), Friedreich's Ataxia, Parkinson's Disease, Spinal Cord Injury, Stroke, and Traumatic Brain Injury.

    CDEs in development: Amyotrophic Lateral Sclerosis (public review Sept. 15 through Nov. 15), Frontotemporal Dementia, Headache, Huntington's Disease, Multiple Sclerosis, and Neuromuscular Diseases (adult and pediatric working groups are being finalized and will focus on Duchenne Muscular Dystrophy, Facioscapulohumeral Muscular Dystrophy, Myasthenia Gravis, Myotonic Dystrophy, and Spinal Muscular Atrophy).

    The following tools are available through this portal: the CDE Catalog, which includes the universe of all CDEs and lets users search it to isolate a subset of the CDEs (e.g., all stroke-specific CDEs, all pediatric epilepsy CDEs) and download details about those CDEs; the CRF Library (a.k.a. Library of Case Report Form Modules and Guidelines), which contains all the CRF Modules created through the NINDS CDE Project as well as various guideline documents and can be searched for CRF Modules and Guidelines of interest; and the Form Builder, which enables users to start assembling a CRF or form by choosing the CDEs they would like to include on it, and is intended to assist data managers and database developers in creating data dictionaries for their study forms.

  3. Cadastral PLSS Standardized Data - Statewide

    • gis-odnr.opendata.arcgis.com
    • hub.arcgis.com
    • +1more
    Updated Nov 6, 2024
    Cite
    Ohio Department of Natural Resources (2024). Cadastral PLSS Standardized Data - Statewide [Dataset]. https://gis-odnr.opendata.arcgis.com/documents/2743028ac0864ddda7841e73793ea311
    Dataset updated
    Nov 6, 2024
    Dataset authored and provided by
    Ohio Department of Natural Resources
    License

    MIT License: https://opensource.org/licenses/MIT
    License information was derived automatically

    Description

    This data set represents the GIS version of the Public Land Survey System (PLSS), including both rectangular and non-rectangular surveys. The metadata describes the lineage, sources and production methods for the data content. The definitions and structure of this data are compliant with FGDC Cadastral Data Content Standards and Guidelines for publication. This coverage was originally created for the accurate location of the oil and gas wells in the state of Ohio. The original data set was developed as an ArcInfo coverage containing the original land subdivision boundaries for Ohio. Ohio has had a long and varied history of its land subdivisions that has led to several subdivision strategies being applied. In general, these different schemes are composed of the Public Land Surveying System (PLSS) subdivisions and the irregular land subdivisions. The PLSS subdivisions contain townships, ranges, and sections. They are found in the following major land subdivisions: Old Seven Ranges, Between the Miamis (parts of which are known as the Symmes Purchase), Congress Lands East of Scioto River, Congress Lands North of Old Seven Ranges, Congress Lands West of Miami River, North and East of the First Principal Meridian, South and East of the First Principal Meridian, and the Michigan Meridian Survey. The irregular subdivisions include the Virginia Military District, the Ohio Company Purchase, the U.S. Military District, the Connecticut Western Reserve, the Twelve-Mile Square Reservation, the Two-Mile Square Reservation, the Refugee Lands, the French Grants, and the Donation Tract. The primary source for the data is local records and geographic control coordinates from states and counties, as well as federal agencies such as the BLM, USGS and USFS. The data has been converted from source documents to digital form and transferred into a GIS format that is compliant with FGDC Cadastral Data Content Standards and Guidelines for publication. This data set is optimized for data publication and sharing rather than for specific "production" or operation and maintenance. It includes the following: PLSS Fully Intersected (all of the PLSS features at the atomic or smallest polygon level); PLSS Townships, First Divisions and Second Divisions (the hierarchical breakdown of the PLSS rectangular surveys); PLSS Special Surveys (non-rectangular components of the PLSS); and Meandered Water, Corners and Conflicted Areas (known areas of gaps or overlaps between Townships or state boundaries). The Entity-Attribute section of this metadata describes these components in greater detail.

    Contact Information: GIS Support, ODNR GIS Services, Ohio Department of Natural Resources, Office of Information Technology, GIS Records, 2045 Morse Rd, Bldg I-2, Columbus, OH 43229. Telephone: 614-265-6462. Email: gis.support@dnr.ohio.gov
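    A note on working with the layers described above: the sketch below shows how the hierarchical PLSS layers could be loaded with geopandas. The geodatabase path and the layer names (modeled on the CadNSDI publication standard, e.g. PLSSTownship, PLSSFirstDivision) are assumptions for illustration only; check the actual layer names inside the ODNR download before running.

        import geopandas as gpd

        # Hypothetical path and layer names; inspect the downloaded .zip to
        # confirm how ODNR names the geodatabase and its feature classes.
        gdb = "Ohio_PLSS.gdb"
        townships = gpd.read_file(gdb, layer="PLSSTownship")
        first_div = gpd.read_file(gdb, layer="PLSSFirstDivision")
        second_div = gpd.read_file(gdb, layer="PLSSSecondDivision")

        print(len(townships), "townships,", len(first_div), "first divisions,",
              len(second_div), "second divisions")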

  4. SWISS MADE: Standardized WithIn Class Sum of Squares to Evaluate Methodologies and Dataset Elements

    • plos.figshare.com
    pdf
    Updated Jun 1, 2023
    Cite
    Christopher R. Cabanski; Yuan Qi; Xiaoying Yin; Eric Bair; Michele C. Hayward; Cheng Fan; Jianying Li; Matthew D. Wilkerson; J. S. Marron; Charles M. Perou; D. Neil Hayes (2023). SWISS MADE: Standardized WithIn Class Sum of Squares to Evaluate Methodologies and Dataset Elements [Dataset]. http://doi.org/10.1371/journal.pone.0009905
    Available download formats: pdf
    Dataset updated
    Jun 1, 2023
    Dataset provided by
    PLOS ONE
    Authors
    Christopher R. Cabanski; Yuan Qi; Xiaoying Yin; Eric Bair; Michele C. Hayward; Cheng Fan; Jianying Li; Matthew D. Wilkerson; J. S. Marron; Charles M. Perou; D. Neil Hayes
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Contemporary high-dimensional biological assays, such as mRNA expression microarrays, regularly involve multiple data processing steps, such as experimental processing, computational processing, sample selection, or feature selection (i.e. gene selection), prior to deriving any biological conclusions. These steps can dramatically change the interpretation of an experiment. Evaluation of processing steps has received limited attention in the literature. It is not straightforward to evaluate different processing methods, and investigators are often unsure of the best method. We present a simple statistical tool, Standardized WithIn class Sum of Squares (SWISS), that allows investigators to compare alternate data processing methods, such as different experimental methods, normalizations, or technologies, on a dataset in terms of how well they cluster a priori biological classes. SWISS uses Euclidean distance to determine which method does a better job of clustering the data elements based on a priori classifications. We apply SWISS to three different gene expression applications. The first application uses four different datasets to compare different experimental methods, normalizations, and gene sets. The second application, using data from the MicroArray Quality Control (MAQC) project, compares different microarray platforms. The third application compares different technologies: a single Agilent two-color microarray versus one lane of RNA-Seq. These applications give an indication of the variety of problems that SWISS can be helpful in solving. The SWISS analysis of one-color versus two-color microarrays provides investigators who use two-color arrays the opportunity to review their results in light of a single-channel analysis, with all of the associated benefits offered by this design. Analysis of the MAQC data shows differential intersite reproducibility by array platform. SWISS also shows that one lane of RNA-Seq clusters data by biological phenotypes as well as a single Agilent two-color microarray.
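    For readers who want a feel for the statistic, the sketch below is a minimal re-implementation of the general idea, assuming SWISS is the within-class sum of squared Euclidean distances to the class means divided by the total sum of squares (lower values meaning the a priori classes cluster more tightly); consult the article for the exact definition used by the authors.

        import numpy as np

        def swiss_score(X, labels):
            """Within-class sum of squares divided by total sum of squares.

            X      : (n_samples, n_features) processed data matrix
            labels : (n_samples,) a priori biological class labels
            """
            X = np.asarray(X, dtype=float)
            labels = np.asarray(labels)
            total_ss = ((X - X.mean(axis=0)) ** 2).sum()
            within_ss = sum(
                ((X[labels == c] - X[labels == c].mean(axis=0)) ** 2).sum()
                for c in np.unique(labels)
            )
            return within_ss / total_ss

        # Compare two hypothetical normalizations of the same samples:
        # the one with the smaller score clusters the known classes better.
        # swiss_score(X_norm_a, classes) < swiss_score(X_norm_b, classes)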

  5. Examples of non-standardized echocardiographic reporting that are not identified or extracted by EchoInfer.

    • plos.figshare.com
    xls
    Updated Jun 1, 2023
    Cite
    Chinmoy Nath; Mazen S. Albaghdadi; Siddhartha R. Jonnalagadda (2023). Examples of non-standardized echocardiographic reporting that are not identified or extracted by EchoInfer. [Dataset]. http://doi.org/10.1371/journal.pone.0153749.t006
    Available download formats: xls
    Dataset updated
    Jun 1, 2023
    Dataset provided by
    PLOS ONE
    Authors
    Chinmoy Nath; Mazen S. Albaghdadi; Siddhartha R. Jonnalagadda
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Examples of non-standardized echocardiographic reporting that are not identified or extracted by EchoInfer.

  6. Guidelines for the standardized collection of predictor variables in studies for pediatric sepsis

    • search.dataone.org
    • borealisdata.ca
    Updated Mar 16, 2024
    Cite
    Li, Edmond; Mawji, Alishah; Akech, Samuel; Chandna, Arjun; Kissoon, Niranjan; Kortz, Teresa; Lubell, Yoel; Turner, Paul; Wiens, Matthew; Ansermino, Mark (2024). Guidelines for the standardized collection of predictor variables in studies for pediatric sepsis (guidelines)~Pediatric Sepsis Predictors Standardization (PS2) Working Group [Dataset]. http://doi.org/10.5683/SP2/02LVVT
    Dataset updated
    Mar 16, 2024
    Dataset provided by
    Borealis
    Authors
    Li, Edmond; Mawji, Alishah; Akech, Samuel; Chandna, Arjun; Kissoon, Niranjan; Kortz, Teresa; Lubell, Yoel; Turner, Paul; Wiens, Matthew; Ansermino, Mark
    Time period covered
    Jan 9, 2020
    Description

    These guidelines aim to maximize the efficiency of data-sharing collaborations in pediatric sepsis research by facilitating the standardization of predictor data collected in future studies. Feedback may be submitted via a secure survey here: https://rc.bcchr.ca/redcap/surveys/?s=T849WRXYT8. Note for restricted files: if you are not yet a CoLab member, please complete our membership application survey to gain access to restricted files within 2 business days. Some files may remain restricted to CoLab members; these files are deemed more sensitive by the file owner and are meant to be shared on a case-by-case basis. Please contact the CoLab coordinator on this page under "collaborate with the pediatric sepsis colab."

  7. Search strategies in three different databases.

    • plos.figshare.com
    xls
    Updated Jan 7, 2025
    Cite
    Somayeh Paydar; Shahrbanoo Pahlevanynejad; Farkhondeh Asadi; Hamideh Ehtesham; Azam Sabahi (2025). Search strategies in three different databases. [Dataset]. http://doi.org/10.1371/journal.pone.0316791.t001
    Available download formats: xls
    Dataset updated
    Jan 7, 2025
    Dataset provided by
    PLOS ONE
    Authors
    Somayeh Paydar; Shahrbanoo Pahlevanynejad; Farkhondeh Asadi; Hamideh Ehtesham; Azam Sabahi
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    A Minimum Data Set (MDS) enables integration in data collection, uniform data reporting, and data exchange across clinical and research information systems. The current study was conducted to determine a comprehensive national MDS for the Epidermolysis Bullosa (EB) information management system in Iran. This cross-sectional descriptive study consists of three steps: systematic review, focus group discussion, and the Delphi technique. A systematic review was conducted using relevant databases. Then, a focus group discussion was held to determine the extracted data elements with the help of contributing multidisciplinary experts. Finally, MDSs were selected through the Delphi technique in two rounds. The collected data were analyzed using Microsoft Excel 2019. In total, 103 data elements were included in the Delphi survey. The data elements, based on the experts’ opinions, were classified into two main categories: administrative data and clinical data. The final categories of data elements consisted of 11 administrative items and 92 clinical items. The national MDS, as the core of the EB surveillance program, is essential for enabling appropriate and informed decisions by healthcare policymakers, physicians, and healthcare providers. In this study, an MDS was developed and internally validated for EB. This research generated new knowledge to enable healthcare professionals to collect relevant and meaningful data. The use of this standardized approach can help benchmark clinical practice and target improvements worldwide.

  8. Russian Short-Term Mortality Fluctuations database

    • zenodo.org
    • data.niaid.nih.gov
    csv
    Updated Dec 7, 2023
    Cite
    Aleksey Shchur; Sergei Timonin; Elena Churilova; Olga Rodina; Egor Sergeev; Dmitri Jdanov (2023). Russian Short-Term Mortality Fluctuations database [Dataset]. http://doi.org/10.5281/zenodo.10280664
    Available download formats: csv
    Dataset updated
    Dec 7, 2023
    Dataset provided by
    Zenodo: http://zenodo.org/
    Authors
    Aleksey Shchur; Sergei Timonin; Elena Churilova; Olga Rodina; Egor Sergeev; Dmitri Jdanov
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    1. Database contents

    The Russian Short-Term Mortality Fluctuations database (RusSTMF) contains a series of standardized and crude death rates for men, women and both sexes for Russia as a whole and its regions for the period from 2000 to 2021.

    All the output indicators presented in the database are calculated based on data of deaths registered by the Vital Registry Office. The weekly death counts are calculated based on depersonalized individual data provided by the Russian Federal State Statistics Service (Rosstat) at the request of the HSE. Time coverage: 03.01.2000 (Week 1) – 31.12.2021 (Week 1148)

    2. A brief description of the input data on deaths

    Date of death: date of occurrence

    Unit of time: week

    First and last days of the week: Monday – Sunday

    First and last week of the year: The weeks are organized according to ISO 8601:2004 guidelines. Each week of the year, including the first and last, contains 7 days. In order to get 7-day weeks, days from the adjacent year are included in the first week (if January 1 fell on a Tuesday, Wednesday or Thursday) or in the last calendar week (if December 31 fell on a Thursday, Friday or Saturday).

    Age groups: the entire population

    Sex: men, women, both sexes (men and women combined)

    Restrictions and data changes: data on deaths in the Pskov region were excluded for weeks 9-13 of 2012

    Note: Deaths with an unknown date of occurrence (unknown year, month, or day) account for about 0.3% of all deaths and are excluded from the calculation of week-age-specific and standardized death rates.

    3. Description of the week-specific mortality rates data file

    Week-specific standardized death rates for Russia as a whole and its regions are contained in a single data file presented in .csv format. The format of data allows its uploading into any system for statistical analysis. Each record (row) in the data file contains data for one calendar year, one week, one territory, one sex.

    The decimal point is dot (.)

    The first element of the row is the territory code ("PopCode" column), the second element is the year ("Year" column), the third element ("Week" column) is the week of the year, the fourth element ("Sex" column) is sex (F – female, M – male, B – both sexes combined). This is followed by a column "CDR" with the value of the crude death rate and "SDR" with the value of the standardized death rate. If the indicator cannot be calculated for some combination of year, sex, and territory, then the corresponding meaningful data elements in the data file are replaced with ".".
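    Given the column layout described above (PopCode, Year, Week, Sex, CDR, SDR, with "." marking values that could not be calculated), the file can be read with standard tools; the sketch below is a minimal example using pandas, with a hypothetical file name standing in for the CSV shipped in the download.

        import pandas as pd

        # File name is hypothetical; use the CSV from the RusSTMF download.
        df = pd.read_csv("russtmf_weekly_rates.csv", na_values=["."])

        # Weekly crude and standardized death rates for both sexes combined.
        both = df[df["Sex"] == "B"]
        print(both[["PopCode", "Year", "Week", "CDR", "SDR"]].head())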

  9. Open Data Inventory

    • ouvert.canada.ca
    • open.canada.ca
    csv, html, xls
    Updated Dec 9, 2024
    + more versions
    Cite
    Treasury Board of Canada Secretariat (2024). Open Data Inventory [Dataset]. https://ouvert.canada.ca/data/dataset/4ed351cf-95d8-4c10-97ac-6b3511f359b7
    Available download formats: html, csv, xls
    Dataset updated
    Dec 9, 2024
    Dataset provided by
    Treasury Board of Canada: https://www.canada.ca/en/treasury-board-secretariat/corporate/about-treasury-board.html
    Treasury Board of Canada Secretariat: http://www.tbs-sct.gc.ca/
    License

    Open Government Licence - Canada 2.0: https://open.canada.ca/en/open-government-licence-canada
    License information was derived automatically

    Description

    Building a comprehensive data inventory is required by section 6.3 of the Directive on Open Government: “Establishing and maintaining comprehensive inventories of data and information resources of business value held by the department to determine their eligibility and priority, and to plan for their effective release.” Creating a data inventory is among the first steps in identifying federal data that is eligible for release. Departmental data inventories have been published on the Open Government portal, Open.Canada.ca, so that Canadians can see what federal data is collected and have the opportunity to indicate what data is of most interest to them, helping departments to prioritize data releases based on both external demand and internal capacity. The objective of the inventory is to provide a landscape of all federal data. While it is recognized that not all data is eligible for release due to the nature of the content, departments are responsible for identifying and including all datasets of business value as part of the inventory exercise, with the exception of datasets whose title contains information that should not be released to the public due to security or privacy concerns. These titles have been excluded from the inventory. Departments were provided with an open data inventory template with standardized elements to populate and upload to the metadata catalogue, the Open Government Registry. These elements are described in the data dictionary file. Departments are responsible for maintaining up-to-date data inventories that reflect significant additions to their data holdings. For purposes of this open data inventory exercise, a dataset is defined as: “An organized collection of data used to carry out the business of a department or agency, that can be understood alone or in conjunction with other datasets”. Please note that the Open Data Inventory is no longer being maintained by Government of Canada organizations and is therefore not being updated. However, we will continue to provide access to the dataset for review and analysis.

  10. Example of a categorical element with standardized permissible values (marital status).

    • plos.figshare.com
    xls
    Updated Jul 7, 2023
    Cite
    Craig S. Mayer; Vojtech Huser (2023). Example of a categorical element with standardized permissible values (marital status). [Dataset]. http://doi.org/10.1371/journal.pone.0283601.t001
    Available download formats: xls
    Dataset updated
    Jul 7, 2023
    Dataset provided by
    PLOS ONE
    Authors
    Craig S. Mayer; Vojtech Huser
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Example of a categorical element with standardized permissible values (marital status).

  11. Data from: Standardized element data of sediment core PG1972-1 from Lake Bezrybnoe (Russia)

    • service.tib.eu
    • doi.pangaea.de
    Updated Nov 29, 2024
    Cite
    (2024). Standardized element data of sediment core PG1972-1 from Lake Bezrybnoe (Russia) [Dataset]. https://service.tib.eu/ldmservice/dataset/png-doi-10-1594-pangaea-953312
    Dataset updated
    Nov 29, 2024
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Area covered
    Russia
    Description

    This data set is part of a larger data harmonization effort to make lake sediment core data machine readable and comparable. Here we standardized X-ray fluorescence line scanning (XRF)-based element data of sediment core PG1972, retrieved in 2009 from Lake Bezrybnoe (Lena Delta, Russia) at 4.7 m water depth. The thermokarst lake Bezrybnoe is a small basin in the tundra region and has one outflow and three inflows. It lies at an elevation of ca. 6 m a.s.l. with a surface area of ca. 0.77 km2 and a maximum lake water depth of estimated 5.3 m. The 1.08 m sediment core was retrieved by a UWITEC hammer action gravity corer (60 mm) during the RU-Land_2009_Lena-transect expedition of the Alfred Wegener Institute Helmholtz Centre for Polar and Marine Research (AWI, Germany, Potsdam) in cooperation with the North Eastern Federal State University (NEFU, Russia, Yakutsk). The downcore elemental composition was measured using an AVAATECH x-ray fluorescence core scanner at AWI Bremerhaven.

  12. Data from: Standardized element data of sediment core PG2133 from Lake Bolshoe Toko (Yakutia, Russia)

    • doi.pangaea.de
    • service.tib.eu
    html, tsv
    Updated Jan 11, 2023
    Cite
    Boris K Biskaborn; Bernhard Diekmann; Luidmila A Pestryakova; Gregor Pfalz; Mareike Wieczorek; Birgit Heim; Thomas Löffler (2023). Standardized element data of sediment core PG2133 from Lake Bolshoe Toko (Yakutia, Russia) [Dataset]. http://doi.org/10.1594/PANGAEA.953889
    Available download formats: tsv, html
    Dataset updated
    Jan 11, 2023
    Dataset provided by
    PANGAEA
    Authors
    Boris K Biskaborn; Bernhard Diekmann; Luidmila A Pestryakova; Gregor Pfalz; Mareike Wieczorek; Birgit Heim; Thomas Löffler
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Area covered
    Variables measured
    Iron, chi-square, Lead, chi-square, Zinc, chi-square, Copper, chi-square, Sulfur, chi-square, Bismuth, chi-square, Bromine, chi-square, Calcium, chi-square, Gallium, chi-square, Niobium, chi-square, and 64 more
    Description

    This data set is part of a larger data harmonization effort to make lake sediment core data machine readable and comparable. Here we standardized X-ray fluorescence line scanning (XRF)-based element data of sediment core PG2133, retrieved in 2013 from Lake Bolshoe Toko (Yakutia, Russia) at 26 m water depth. The glacial lake Bolshoe Toko is in the deciduous forest mountain area. It lies at an elevation of ca. 919 m a.s.l. with a surface area of ca. 83.243 km2 and a maximum lake water depth of estimated 72.5 m. The 3.75 m sediment core was retrieved by a UWITEC piston corer during the RU-Land_2013_Yakutia expedition of the Alfred Wegener Institute Helmholtz Centre for Polar and Marine Research (AWI, Germany, Potsdam) in cooperation with the North Eastern Federal State University (NEFU, Russia, Yakutsk). The downcore elemental composition was measured using an AVAATECH x-ray fluorescence core scanner at AWI Bremerhaven.

  13. Standardized element data of sediment core PG2135 from Lake Ulakhan Chabyda (Yakutia, Russia)

    • service.tib.eu
    • doi.pangaea.de
    Updated Dec 1, 2024
    Cite
    (2024). Standardized element data of sediment core PG2135 from Lake Ulakhan Chabyda (Yakutia, Russia) [Dataset]. https://service.tib.eu/ldmservice/dataset/png-doi-10-1594-pangaea-954142
    Dataset updated
    Dec 1, 2024
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Area covered
    Sakha Republic, Russia
    Description

    This data set is part of a larger data harmonization effort to make lake sediment core data machine readable and comparable. Here we standardized X-ray fluorescence line scanning (XRF)-based element data of sediment core PG2135, retrieved in 2013 from Lake Ulakhan Chabyda (Yakutia, Russia) at 1.7 m water depth. The thermokarst lake Ulakhan Chabyda is in an exorheic basin in the coniferous forest area and has one outflow and several small inflows. It lies at an elevation of ca. 195 m a.s.l. with a surface area of ca. 2.1 km2 and a maximum lake water depth of estimated 2.1 m. The 6.51 m sediment core was retrieved by a Russian peat corer during the RU-Land_2013_Yakutia expedition of the Alfred Wegener Institute Helmholtz Centre for Polar and Marine Research (AWI, Germany, Potsdam) in cooperation with the North Eastern Federal State University (NEFU, Russia, Yakutsk). The downcore elemental composition was measured using an AVAATECH x-ray fluorescence core scanner at AWI Bremerhaven.

  14. Standardized element data of sediment core PG2360 from Lake Bety (Yakutia, Russia)

    • doi.pangaea.de
    html, tsv
    Updated Feb 22, 2024
    Cite
    Boris K Biskaborn; Gregor Pfalz; Yuriiy Kublitskii; Amy Forster; Mareike Wieczorek; Birgit Heim; Luidmila A Pestryakova; Ulrike Herzschuh (2024). Standardized element data of sediment core PG2360 from Lake Bety (Yakutia, Russia) [Dataset]. http://doi.org/10.1594/PANGAEA.965584
    Available download formats: html, tsv
    Dataset updated
    Feb 22, 2024
    Dataset provided by
    PANGAEA
    Authors
    Boris K Biskaborn; Gregor Pfalz; Yuriiy Kublitskii; Amy Forster; Mareike Wieczorek; Birgit Heim; Luidmila A Pestryakova; Ulrike Herzschuh
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Time period covered
    Sep 2, 2016
    Area covered
    Variables measured
    Iron (peak area), Iron, chi-square, Zinc (peak area), Zinc, chi-square, Copper (peak area), Copper, chi-square, Sulfur (peak area), Sulfur, chi-square, Bromine (peak area), Bromine, chi-square, and 37 more
    Description

    This data set is part of a larger data harmonization effort to make lake sediment core data machine readable and comparable. Here we standardized X-ray fluorescence line scanning (XRF)-based element data of sediment core PG2360, retrieved in 2016 from Lake Bety (Yakutia, Russia) at 1.5 m water depth. The thermokarst lake Bety is a small lake in the coniferous forest area and has one outflow and three inflows. It lies at an elevation of ca. 160 m a.s.l. with a surface area of ca. 0.66 km2 and a maximum lake water depth of estimated 1.5 m. The 2.31 m sediment core was retrieved by a Russian peat corer during the Expedition Yakutia 2016 of the Alfred Wegener Institute Helmholtz Centre for Polar and Marine Research (AWI, Germany, Potsdam) in cooperation with the North Eastern Federal State University (NEFU, Russia, Yakutsk). The downcore elemental composition was measured using an AVAATECH x-ray fluorescence core scanner at AWI Bremerhaven.

  15. Datasets from An Atlas of Plant Transposable Elements

    • data.niaid.nih.gov
    Updated Nov 8, 2021
    + more versions
    Cite
    Domingues, Douglas Silva (2021). Datasets from An Atlas of Plant Transposable Elements [Dataset]. https://data.niaid.nih.gov/resources?id=zenodo_5574527
    Dataset updated
    Nov 8, 2021
    Dataset provided by
    Pedro, Daniel Longhi Fernandes
    Varani, Alessandro de Mello
    Paschoal, Alexandre Rossi
    Amorim, Tharcisio Soares
    Guyot, Romain
    Domingues, Douglas Silva
    License

    Attribution 1.0 (CC BY 1.0): https://creativecommons.org/licenses/by/1.0/
    License information was derived automatically

    Description

    In this repository, we deposited support data for the article "An Atlas of Plant Transposable Elements", available at http://apte.cp.utfpr.edu.br/.

    Here, we included:

    1.) Supplementary material data: A) SuppMat_1.xlsx: the genome assembly reference accessions from Ensembl Plants for the species used. B) SuppMat_2.docx: a brief description of the transposable element annotation steps used in this work.

    2.) Code and software: all the scripts we created, the third-party software, and how we used them are detailed, using the Arabidopsis thaliana genome as an example, in the GitHub repository https://github.com/daniellonghi/te_pipeline under the MIT license (please see details in the licence.txt file). For third-party software, consult their respective terms.

    To report bugs, to ask for help, and to give any feedback, please contact Alexandre R. Paschoal (paschoal@utfpr.edu.br) or Douglas S. Domingues (douglas.domingues@unesp.br).

  16. Standardized element data of sediment core PG2023 from Lake Kyutyunda (Yakutia, Russia)

    • service.tib.eu
    • doi.pangaea.de
    • +1more
    Updated Nov 30, 2024
    Cite
    (2024). Standardized element data of sediment core PG2023 from Lake Kyutyunda (Yakutia, Russia) [Dataset]. https://service.tib.eu/ldmservice/dataset/png-doi-10-1594-pangaea-953827
    Dataset updated
    Nov 30, 2024
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Area covered
    Sakha Republic, Russia
    Description

    This data set is part of a larger data harmonization effort to make lake sediment core data machine readable and comparable. Here we standardized X-ray fluorescence line scanning (XRF)-based element data of sediment core PG2023, retrieved in 2010 from Lake Ozero Kyutyunda (Yakutia, Russia) at 2.9 m water depth. The thermokarst lake Ozero Kyutyunda is in an exorheic basin in the coniferous forest area and has one outflow, one inflow and several smaller inflows. It lies at an elevation of ca. 56 m a.s.l. with a surface area of ca. 4.9 km2 and a maximum lake water depth of estimated 3.5 m. The 7.1 m sediment core was retrieved by a UWITEC 60mm piston corer during the RU-Land_2010_Lena expedition of the Alfred Wegener Institute Helmholtz Centre for Polar and Marine Research (AWI, Germany, Potsdam) in cooperation with the North Eastern Federal State University (NEFU, Russia, Yakutsk). The downcore elemental composition was measured using an AVAATECH x-ray fluorescence core scanner at AWI Bremerhaven.

  17. Data from: Frequency lists of word parts from the GOS 1.0 corpus 1.1

    • live.european-language-grid.eu
    binary format
    Updated Oct 27, 2020
    + more versions
    Cite
    (2020). Frequency lists of word parts from the GOS 1.0 corpus 1.1 [Dataset]. https://live.european-language-grid.eu/catalogue/lcr/8351
    Available download formats: binary format
    Dataset updated
    Oct 27, 2020
    License

    Attribution-ShareAlike 4.0 (CC BY-SA 4.0): https://creativecommons.org/licenses/by-sa/4.0/
    License information was derived automatically

    Description

    Frequency lists of words split into word parts were extracted from the GOS 1.0 Corpus of Spoken Slovene (http://hdl.handle.net/11356/1040) using the LIST corpus extraction tool (http://hdl.handle.net/11356/1227). The lists contain all lemmas, lower-case word forms or standardized word forms occurring in the corpus, split into their initial or final part (i.e. the initial or final string of 1, 2, 3, 4 or 5 characters in the word) and the rest. In addition, the lists also contain absolute and relative frequencies, percentages, and distribution across the text-types included in the corpus taxonomy.

    The lists were extracted for each part-of-speech category. For each part-of-speech, a total of 30 lists were extracted:

    1) 10 lists for initial or final word parts extracted from lemmas,

    2) 10 lists for initial or final word parts extracted from lower-case word forms,

    3) 10 lists for initial or final word parts extracted from standardized word forms.

    In addition, 30 lists were extracted from all words (regardless of their part-of-speech category).

    Compared to the previous version (http://hdl.handle.net/11356/1270), this one includes fixes of several typos and substitutes all instances of "normalized forms" with the more adequate term "standardized forms" (as used in the SSJ project).
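    To illustrate the splitting scheme described above (the initial or final string of 1-5 characters plus the rest of the word), the toy sketch below counts initial 3-character parts for a handful of invented lower-case word forms; the published lists themselves were produced from GOS 1.0 with the LIST tool, not with this snippet.

        from collections import Counter

        def split_initial(word, n):
            """Return (initial n-character part, rest of the word)."""
            return word[:n], word[n:]

        def split_final(word, n):
            """Return (rest of the word, final n-character part)."""
            return word[:-n], word[-n:]

        # Toy word forms, not taken from the GOS corpus.
        tokens = ["govoriti", "govorim", "govoril", "delati", "delam"]
        initial_3 = Counter(split_initial(t, 3)[0] for t in tokens)
        print(initial_3.most_common())  # [('gov', 3), ('del', 2)]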

  18. Test Data Management Market Analysis, Size, and Forecast 2025-2029: North America (US and Canada), Europe (France, Germany, Italy, and UK), APAC (Australia, China, India, and Japan), and Rest of World (ROW)

    • technavio.com
    Cite
    Technavio, Test Data Management Market Analysis, Size, and Forecast 2025-2029: North America (US and Canada), Europe (France, Germany, Italy, and UK), APAC (Australia, China, India, and Japan), and Rest of World (ROW) [Dataset]. https://www.technavio.com/report/test-data-management-market-industry-analysis
    Dataset provided by
    TechNavio
    Authors
    Technavio
    Time period covered
    2021 - 2025
    Area covered
    United States, Global
    Description


    Test Data Management Market Size 2025-2029

    The test data management market size is forecast to increase by USD 727.3 million, at a CAGR of 10.5% between 2024 and 2029.

    The market is experiencing significant growth, driven by the increasing adoption of automation by enterprises to streamline their testing processes. The automation trend is fueled by the growing consumer spending on technological solutions, as businesses seek to improve efficiency and reduce costs. However, the market faces challenges, including the lack of awareness and standardization in test data management practices. This obstacle hinders the effective implementation of test data management solutions, requiring companies to invest in education and training to ensure successful integration. To capitalize on market opportunities and navigate challenges effectively, businesses must stay informed about emerging trends and best practices in test data management. By doing so, they can optimize their testing processes, reduce risks, and enhance overall quality.

    What will be the Size of the Test Data Management Market during the forecast period?

    Explore in-depth regional segment analysis with market size data - historical 2019-2023 and forecasts 2025-2029 - in the full report.
    The market continues to evolve, driven by the ever-increasing volume and complexity of data. Data exploration and analysis are at the forefront of this dynamic landscape, with data ethics and governance frameworks ensuring data transparency and integrity. Data masking, cleansing, and validation are crucial components of data management, enabling data warehousing, orchestration, and pipeline development. Data security and privacy remain paramount, with encryption, access control, and anonymization key strategies. Data governance, lineage, and cataloging facilitate data management software automation and reporting. Hybrid data management solutions, including artificial intelligence and machine learning, are transforming data insights and analytics. Data regulations and compliance are shaping the market, driving the need for data accountability and stewardship. Data visualization, mining, and reporting provide valuable insights, while data quality management, archiving, and backup ensure data availability and recovery. Data modeling, data integrity, and data transformation are essential for data warehousing and data lake implementations. Data management platforms are seamlessly integrated into these evolving patterns, enabling organizations to effectively manage their data assets and gain valuable insights. Data management services, cloud and on-premise, are essential for organizations to adapt to the continuous changes in the market and effectively leverage their data resources.

    How is this Test Data Management Industry segmented?

    The test data management industry research report provides comprehensive data (region-wise segment analysis), with forecasts and estimates in 'USD million' for the period 2025-2029, as well as historical data from 2019-2023, for the following segments. Application: On-premises, Cloud-based. Component: Solutions, Services. End-user: Information technology, Telecom, BFSI, Healthcare and life sciences, Others. Sector: Large enterprise, SMEs. Geography: North America (US, Canada), Europe (France, Germany, Italy, UK), APAC (Australia, China, India, Japan), Rest of World (ROW).

    By Application Insights

    The on-premises segment is estimated to witness significant growth during the forecast period. In the realm of data management, on-premises testing represents a popular approach for businesses seeking control over their infrastructure and testing process. This approach involves establishing testing facilities within an office or data center, necessitating a dedicated team with the necessary skills. The benefits of on-premises testing extend beyond control, as it enables organizations to upgrade and configure hardware and software at their discretion, providing opportunities for exploration testing. Furthermore, data security is a significant concern for many businesses, and on-premises testing alleviates the risk of compromising sensitive information to third-party companies. Data exploration, a crucial aspect of data analysis, can be carried out more effectively with on-premises testing, ensuring data integrity and security. Data masking, cleansing, and validation are essential data preparation techniques that can be executed efficiently in an on-premises environment. Data warehousing, data pipelines, and data orchestration are integral components of data management, and on-premises testing allows for seamless integration and management of these elements. Data governance frameworks, lineage, catalogs, and metadata are essential for maintaining data transparency and compliance. Data security, encryption, and access control are paramount, and on-premises testing offers greater control over these aspects. Data reporting

  19. Standardized element data of sediment core PG1984 from Lake Sysy-Kyuele (Yakutia, Russia)

    • service.tib.eu
    • doi.pangaea.de
    Updated Nov 30, 2024
    Cite
    (2024). Standardized element data of sediment core PG1984 from Lake Sysy-Kyuele (Yakutia, Russia) [Dataset]. https://service.tib.eu/ldmservice/dataset/png-doi-10-1594-pangaea-953688
    Dataset updated
    Nov 30, 2024
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Area covered
    Sakha Republic, Russia
    Description

    This data set is part of a larger data harmonization effort to make lake sediment core data machine readable and comparable. Here we standardized X-ray fluorescence line scanning (XRF)-based element data of sediment core PG1984, retrieved in 2009 from Lake Sysy-Kyuele (Yakutia, Russia) at 2.4 m water depth. The thermokarst lake Sysy-Kyuele is situated in an exorheic basin in a coniferous forest area and has one outflow and no visible inflow. It lies at an elevation of ca. 73 m a.s.l. with a surface area of ca. 1 km2 and a maximum lake water depth of estimated 2.5 m. The 1.2 m sediment core was retrieved by a UWITEC hammer action gravity corer during the RU-Land_2009_Lena-transect expedition of the Alfred Wegener Institute Helmholtz Centre for Polar and Marine Research (AWI, Germany, Potsdam) in cooperation with the North Eastern Federal State University (NEFU, Russia, Yakutsk). The downcore elemental composition was measured using an AVAATECH x-ray fluorescence core scanner at AWI Bremerhaven.

  20. Standardized element data of sediment core PG1975-1 from Lake Elgene-Kyuele (Russia)

    • service.tib.eu
    • doi.pangaea.de
    Updated Nov 30, 2024
    Cite
    (2024). Standardized element data of sediment core PG1975-1 from Lake Elgene-Kyuele (Russia) [Dataset]. https://service.tib.eu/ldmservice/dataset/png-doi-10-1594-pangaea-953322
    Dataset updated
    Nov 30, 2024
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Area covered
    Russia
    Description

    This data set is part of a larger data harmonization effort to make lake sediment core data machine readable and comparable. Here we standardized X-ray fluorescence line scanning (XRF)-based element data of sediment core PG1975-1, retrieved in 2009 from Lake Elgene-Kyuele (lower Lena River, Yakutia, Russia) at 4.8 m water depth. The thermokarst lake Elgene-Kyuele is situated in the forest tundra and has several small outflows and one visible inflow. It lies at an elevation of ca. 147 m a.s.l. with a surface area of ca. 1.34 km2 and a maximum lake water depth of estimated 10.5 m. The 1.26 m sediment core was retrieved by a UWITEC hammer action gravity corer during the RU-Land_2009_Lena-transect expedition of the Alfred Wegener Institute Helmholtz Centre for Polar and Marine Research (AWI, Germany, Potsdam) in cooperation with the North Eastern Federal State University (NEFU, Russia, Yakutsk). The downcore elemental composition was measured using an AVAATECH x-ray fluorescence core scanner at AWI Bremerhaven.
