92 datasets found
  1. Dataset for "Good practice versus reality: A landscape analysis of Research Software metadata adoption in European Open Science Clusters"

    • data.niaid.nih.gov
    • zenodo.org
    Updated Feb 5, 2025
    Cite
    El Hounsri, Anas (2025). Dataset for "Good practice versus reality: A landscape analysis of Research Software metadata adoption in European Open Science Clusters" [Dataset]. https://data.niaid.nih.gov/resources?id=zenodo_14770577
    Dataset updated
    Feb 5, 2025
    Dataset provided by
    El Hounsri, Anas
    Garijo, Daniel
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    This dataset was collected using the GitHub repository Repositories-Extraction, which gathers the links to the repositories from each scientific cluster; using the GitHub repository Metadata-Extraction, we then extracted the relevant information needed to answer our research questions (RQ):

    RQ1: How do communities describe Research Software metadata in their code repositories?

    RQ2: What is the adoption of archival infrastructures across disciplines?

    RQ3: How do software projects adopt versioning?

    RQ4: How comprehensive is the metadata provided in code repositories? Specifically:

    What is the adoption of open licenses?

    Do research projects include a description?

    How well documented are research projects? (i.e., in terms of installation instructions, requirements and documentation availability)

    RQ5: What are the most common citation practices among communities?

    The dataset contains two types of files per cluster. Taking the cluster "ENVRI" as an example: for each RQ you will find "analysis_envri_rq1.json", which contains the information extracted using SOMEF and processed to keep the relevant fields, and "results_envri_rq1.json", which contains the calculated percentages of the files relevant to that RQ.
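The per-cluster, per-RQ file layout described above can be traversed programmatically. A minimal sketch, assuming only the naming pattern given in the description (the helper names are mine, not part of the dataset):

```python
# Sketch: locate and load the per-cluster, per-RQ files described above.
# The naming pattern (analysis_<cluster>_rq<n>.json / results_<cluster>_rq<n>.json)
# comes from the dataset description; the function names are illustrative.
import json
from pathlib import Path

def rq_filenames(cluster: str, rq: int) -> tuple[str, str]:
    """Return the (analysis, results) file names for one cluster and one RQ."""
    return (f"analysis_{cluster}_rq{rq}.json", f"results_{cluster}_rq{rq}.json")

def load_results(root: Path, cluster: str, rq: int) -> dict:
    """Load the percentage calculations for one cluster/RQ pair, if present."""
    _, results_name = rq_filenames(cluster, rq)
    path = root / results_name
    return json.loads(path.read_text()) if path.exists() else {}

print(rq_filenames("envri", 1))
```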

  2. Describing data in image format: Proposal of a metadata model and controlled vocabularies

    • rdm.inesctec.pt
    Updated Aug 29, 2022
    Cite
    (2022). Describing data in image format: Proposal of a metadata model and controlled vocabularies - Dataset - CKAN [Dataset]. https://rdm.inesctec.pt/dataset/cs-2022-010
    Dataset updated
    Aug 29, 2022
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Research data management (RDM) involves people with different needs, specific scientific contexts, and diverse requirements. The description of data is a big RDM challenge. Metadata plays an essential role, allowing the inclusion of essential information for the interpretation of data and enhancing both its reuse and its preservation. The establishment of metadata models can facilitate the description process and contribute to an improvement in the quality of metadata. When we talk about image data, the task is even more difficult, as there are no explicit recommendations to guide image management. Taking all of this into account, in this dataset we present a proposal for a metadata model for image description. We also developed controlled vocabularies for some descriptors. These vocabularies aim to improve the image description process, facilitate metadata model interpretation, and reduce the time and effort devoted to data description.

  3. Visual exploration of the attribute space of DANS EASY metadata

    • datacatalogue.cessda.eu
    • ssh.datastations.nl
    Updated Apr 11, 2023
    Cite
    Olav ten Bosch; DrasticData (2023). Visual exploration of the attribute space of DANS EASY metadata [Dataset]. http://doi.org/10.17026/dans-zeq-q3b7
    Dataset updated
    Apr 11, 2023
    Authors
    Olav ten Bosch; DrasticData
    Description

    Study of the metadata of the Electronic Archiving System (EASY) of Data Archiving and Networked Services (DANS), carried out to gain insight into the internal structure of the collection. The visualization contains a dump of the EASY metadata set and all important data files that were generated during this analysis and used for the interactive website. It contains metadata extracted from EASY version I (before January 1, 2012) and from EASY II (extracted January 20th, 2012).

  4. Data warehouse and metadata holdings relevant to Australias North West Shelf

    • researchdata.edu.au
    Updated Nov 21, 2017
    Cite
    Australian Ocean Data Network (2017). Data warehouse and metadata holdings relevant to Australias North West Shelf [Dataset]. https://researchdata.edu.au/data-warehouse-metadata-west-shelf/false
    Dataset updated
    Nov 21, 2017
    Dataset provided by
    Australian Ocean Data Network
    Time period covered
    Jul 1, 2000 - Jun 30, 2007
    Area covered
    Description

    From the earliest stages of planning the North West Shelf Joint Environmental Management Study it was evident that good management of the scientific data to be used in the research would be important for the success of the Study. A comprehensive review of data sets and other information relevant to the marine ecosystems, the geology, infrastructure and industries of the North West Shelf area had been completed (Heyward et al. 2006). The Data Management Project was established to source and prepare existing data sets for use, requiring the development and use of a range of tools: metadata systems, data visualisation and data delivery applications. These were made available to collaborators to allow easy access to data obtained and generated by the Study. The CMAR MarLIN metadata system was used to document the 285 data sets identified as potentially useful for the Study, as well as the software and information products generated by and for the Study. This report represents a hard-copy atlas of all NWSJEMS data products and the existing data sets identified for potential use as inputs to the Study. It comprises summary metadata elements describing the data sets, their custodianship and how the data sets might be obtained. The identifiers of each data set can be used to refer to the full metadata records in the on-line MarLIN system.

  5. Open Data Portal Catalogue

    • open.canada.ca
    • datasets.ai
    • +1more
    csv, json, jsonl, png +2
    Updated Jul 13, 2025
    Cite
    Treasury Board of Canada Secretariat (2025). Open Data Portal Catalogue [Dataset]. https://open.canada.ca/data/en/dataset/c4c5c7f1-bfa6-4ff6-b4a0-c164cb2060f7
    Available download formats: csv, sqlite, json, png, jsonl, xlsx
    Dataset updated
    Jul 13, 2025
    Dataset provided by
    Treasury Board of Canada: https://www.canada.ca/en/treasury-board-secretariat/corporate/about-treasury-board.html
    Treasury Board of Canada Secretariat: http://www.tbs-sct.gc.ca/
    License

    Open Government Licence - Canada 2.0: https://open.canada.ca/en/open-government-licence-canada
    License information was derived automatically

    Description

    The open data portal catalogue is a downloadable dataset containing some key metadata for the general datasets available on the Government of Canada's Open Data portal. Resource 1 is generated using the ckanapi tool (external link). Resources 2-8 are generated using the Flatterer (external link) utility.

    Description of resources:

    1. Dataset - a JSON Lines (external link) file where the metadata of each Dataset/Open Information Record is one line of JSON. The file is compressed with GZip. The file is heavily nested and recommended for users familiar with working with nested JSON.
    2. Catalogue - an XLSX workbook where the nested metadata of each Dataset/Open Information Record is flattened into worksheets for each type of metadata.
    3. datasets metadata - contains metadata at the dataset level. This is also referred to as the package in some CKAN documentation. This is the main table/worksheet in the SQLite database and XLSX output.
    4. Resources Metadata - contains the metadata for the resources contained within each dataset.
    5. resource views metadata - contains the metadata for the views applied to each resource, if a resource has a view configured.
    6. datastore fields metadata - contains the DataStore information for CSV datasets that have been loaded into the DataStore. This information is displayed in the Data Dictionary for DataStore-enabled CSVs.
    7. Data Package Fields - contains a description of the fields available in each of the tables within the Catalogue, as well as the count of the number of records each table contains.
    8. data package entity relation diagram - displays the title and format for each column, in each table in the Data Package, in the form of an ERD diagram. The Data Package resource offers a text-based version.
    9. SQLite Database - a .db database, similar in structure to Catalogue. It can be queried with database or analytical software tools for analysis.
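Resource 1 above is a GZip-compressed JSON Lines file with one record per line. A minimal sketch of how such a file can be read, using an in-memory stand-in for the downloaded file (the sample record is invented for illustration):

```python
# Sketch: reading a GZip-compressed JSON Lines file like Resource 1 above
# (one Dataset/Open Information Record per line). The sample record is
# made up; the real file comes from the Open Data portal.
import gzip
import io
import json

def read_jsonl_gz(stream) -> list[dict]:
    """Parse a gzipped JSON Lines stream into a list of records."""
    with gzip.open(stream, mode="rt", encoding="utf-8") as fh:
        return [json.loads(line) for line in fh if line.strip()]

# Build a tiny in-memory example standing in for the downloaded file.
raw = io.BytesIO()
with gzip.open(raw, mode="wt", encoding="utf-8") as fh:
    fh.write('{"id": "abc", "title": "Example record"}\n')
raw.seek(0)

records = read_jsonl_gz(raw)
print(records[0]["title"])  # Example record
```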

  6. Data warehouse and metadata holdings relevant to Australias North West Shelf

    • gimi9.com
    Updated Sep 4, 2024
    + more versions
    Cite
    (2024). Data warehouse and metadata holdings relevant to Australias North West Shelf | gimi9.com [Dataset]. https://gimi9.com/dataset/au_data-warehouse-and-metadata-holdings-relevant-to-australias-north-west-shelf
    Dataset updated
    Sep 4, 2024
    Description

    From the earliest stages of planning the North West Shelf Joint Environmental Management Study it was evident that good management of the scientific data to be used in the research would be important for the success of the Study. A comprehensive review of data sets and other information relevant to the marine ecosystems, the geology, infrastructure and industries of the North West Shelf area had been completed (Heyward et al. 2006). The Data Management Project was established to source and prepare existing data sets for use, requiring the development and use of a range of tools: metadata systems, data visualisation and data delivery applications. These were made available to collaborators to allow easy access to data obtained and generated by the Study. The CMAR MarLIN metadata system was used to document the 285 data sets identified as potentially useful for the Study, as well as the software and information products generated by and for the Study. This report represents a hard-copy atlas of all NWSJEMS data products and the existing data sets identified for potential use as inputs to the Study. It comprises summary metadata elements describing the data sets, their custodianship and how the data sets might be obtained. The identifiers of each data set can be used to refer to the full metadata records in the on-line MarLIN system.

  7. MAGGOT : Metadata Management Tool for Data Storage Spaces

    • entrepot.recherche.data.gouv.fr
    Updated Feb 5, 2025
    + more versions
    Cite
    François EHRENMANN; François EHRENMANN; Daniel JACOB; Daniel JACOB; Philippe Chaumeil; Philippe Chaumeil (2025). MAGGOT : Metadata Management Tool for Data Storage Spaces [Dataset]. http://doi.org/10.15454/XF1NEY
    Dataset updated
    Feb 5, 2025
    Dataset provided by
    Recherche Data Gouv
    Authors
    François EHRENMANN; François EHRENMANN; Daniel JACOB; Daniel JACOB; Philippe Chaumeil; Philippe Chaumeil
    License

    Etalab Open License 2.0: https://spdx.org/licenses/etalab-2.0.html

    Description

    Sharing descriptive metadata is the first essential step towards Open Scientific Data. With this in mind, Maggot was specifically designed to annotate datasets by creating a metadata file to attach to the storage space. It allows users to easily add descriptive metadata to datasets produced within a collective of people (research unit, platform, multi-partner project, etc.). This approach fits perfectly into a data management plan, as it addresses the organization and documentation of data, data storage, and frictionless metadata sharing within this same collective and beyond.

    Main features of Maggot (established according to a well-defined need; see Background):

    • Document your datasets with metadata, making it possible to answer certain questions of the Data Management Plan (DMP) concerning the organization, documentation, storage and sharing of data in the data storage space, and to meet certain data and metadata requirements, listed for example by Open Research Europe in accordance with the FAIR principles.
    • Search datasets by their metadata: the descriptive metadata thus produced can be associated with the corresponding data directly in the storage space, and it is then possible to search on the metadata in order to find one or more data sets. Only descriptive metadata is accessible by default.
    • Publish the metadata of datasets along with their data files into a Europe-approved repository.

  8. US Restaurant POI dataset with metadata

    • datarade.ai
    .csv
    Updated Jul 30, 2022
    Cite
    Geolytica (2022). US Restaurant POI dataset with metadata [Dataset]. https://datarade.ai/data-products/us-restaurant-poi-dataset-with-metadata-geolytica
    Available download formats: .csv
    Dataset updated
    Jul 30, 2022
    Dataset authored and provided by
    Geolytica
    Area covered
    United States of America
    Description

    Point of Interest (POI) is defined as an entity (such as a business) at a ground location (point) which may be (of interest). We provide high-quality POI data that is fresh, consistent, customizable, easy to use and with high-density coverage for all countries of the world.

    This is our process flow:

    Our machine learning systems continuously crawl for new POI data
    Our geoparsing and geocoding calculates their geo locations
    Our categorization systems cleanup and standardize the datasets
    Our data pipeline API publishes the datasets on our data store
    

    A new POI comes into existence: it could be a bar, a stadium, a museum, a restaurant, a cinema, a store, etc. In today's interconnected world, its information will appear very quickly in social media, pictures, websites and press releases. Soon after that, our systems will pick it up.

    POI data is in constant flux. Every minute, worldwide, over 200 businesses move, over 600 new businesses open their doors and over 400 businesses cease to exist. Over 94% of all businesses have a public online presence of some kind that reflects such changes: when a business changes, its website and social media presence will change too. We then extract and merge the new information, thus creating the most accurate and up-to-date business information dataset across the globe.

    We offer our customers perpetual data licenses for any dataset representing this ever-changing information, downloaded at any given point in time. This makes our company's licensing model unique in the current Data as a Service (DaaS) industry. Our customers don't have to delete our data after the expiration of a certain "Term", regardless of whether the data was purchased as a one-time snapshot or via our data update pipeline.

    Customers requiring regularly updated datasets may subscribe to our Annual subscription plans. Our data is continuously being refreshed, therefore subscription plans are recommended for those who need the most up to date data. The main differentiators between us vs the competition are our flexible licensing terms and our data freshness.

    Data samples may be downloaded at https://store.poidata.xyz/us

  9. Hazardous Waste Portal Manifest Metadata

    • s.cnmilf.com
    • data.ct.gov
    • +2more
    Updated Jan 26, 2024
    + more versions
    Cite
    data.ct.gov (2024). Hazardous Waste Portal Manifest Metadata [Dataset]. https://s.cnmilf.com/user74170196/https/catalog.data.gov/dataset/hazardous-waste-portal-manifest-metadata
    Dataset updated
    Jan 26, 2024
    Dataset provided by
    data.ct.gov
    Description

    Note: Please use the following view to be able to see the entire Dataset Description: https://data.ct.gov/Environment-and-Natural-Resources/Hazardous-Waste-Portal-Manifest-Metadata/x2z6-swxe

    Dataset Description Outline (5 sections):
    • INTRODUCTION
    • WHY USE THE CONNECTICUT OPEN DATA PORTAL MANIFEST METADATA DATASET INSTEAD OF THE DEEP DOCUMENT ONLINE SEARCH PORTAL ITSELF?
    • WHAT MANIFESTS ARE INCLUDED IN DEEP'S MANIFEST PERMANENT RECORDS AND ARE ALSO AVAILABLE VIA THE DEEP DOCUMENT SEARCH PORTAL AND CT OPEN DATA?
    • HOW DOES THE PORTAL MANIFEST METADATA DATASET RELATE TO THE OTHER TWO MANIFEST DATASETS PUBLISHED IN CT OPEN DATA?
    • IMPORTANT NOTES

    INTRODUCTION
    • All of DEEP's paper hazardous waste manifest records were recently scanned and "indexed".
    • Indexing consisted of 6 basic pieces of information, or "metadata", taken from each manifest about the Generator and stored with the scanned image. The metadata enables searches by: Site Town, Site Address, Generator Name, Generator ID Number, Manifest ID Number and Date of Shipment.
    • All of the metadata and scanned images are available electronically via DEEP's Document Online Search Portal at: https://filings.deep.ct.gov/DEEPDocumentSearchPortal/
    • Therefore, it is no longer necessary to visit the DEEP Records Center in Hartford for manifest records or information.
    • This CT Data dataset "Hazardous Waste Portal Manifest Metadata" (or "Portal Manifest Metadata") was copied from the DEEP Document Online Search Portal, and includes only the metadata, not the images.

    WHY USE THE CONNECTICUT OPEN DATA PORTAL MANIFEST METADATA DATASET INSTEAD OF THE DEEP DOCUMENT ONLINE SEARCH PORTAL ITSELF?
    The Portal Manifest Metadata is a good search tool to use along with the Portal. Searching the Portal Manifest Metadata can provide the following advantages over searching the Portal:
    • faster searches, especially for "large searches" (those with a large number of search returns);
    • an unlimited number of search returns (the Portal is limited to 500);
    • a larger display of search returns;
    • search returns can be sorted and filtered online in CT Data;
    • search returns and the entire dataset can be downloaded from CT Data and used offline (e.g. in Excel format); and
    • metadata from searches can be copied from CT Data and pasted into the Portal search fields to quickly find single scanned images.
    The main advantages of the Portal are:
    • it provides access to scanned images of manifest documents (CT Data does not); and
    • images can be downloaded one or multiple at a time.

    WHAT MANIFESTS ARE INCLUDED IN DEEP'S MANIFEST PERMANENT RECORDS AND ARE ALSO AVAILABLE VIA THE DEEP DOCUMENT SEARCH PORTAL AND CT OPEN DATA?
    All hazardous waste manifest records received and maintained by the DEEP Manifest Program, including:
    • manifests originating from a Connecticut Generator or sent to a Connecticut Destination Facility, including manifests accompanying an exported shipment;
    • manifests with RCRA hazardous waste listed on them (such manifests may also have non-RCRA hazardous waste listed);
    • manifests from a Generator with a Connecticut Generator ID number (permanent or temporary);
    • manifests with sufficient quantities of RCRA hazardous waste listed for DEEP to consider the Generator to be a Small or Large Quantity Generator;
    • manifests with PCBs listed on them from 2016 to 6-29-2018.
    Note: manifests sent to a CT Destination Facility were indexed by the Connecticut or Out of State Generator. Searches by CT Designated Facility are not possible unless such facility is the Generator for the purposes of manifesting.
    All other manifests were considered "non-hazardous" manifests and were not scanned. They were discarded after 2 years in accordance with the DEEP records retention schedule. Non-hazardous manifests include:
    • manifests with only non-RCRA hazardous waste listed;
    • manifests from generators that did not have a permanent or temporary Generator ID number;
    • Sometimes non-hazardous manifests were considered "Hazar

  10. DCAT-AP API endpoints for data.public.lu

    • data.public.lu
    html, rdf, xlsx
    Updated May 27, 2024
    Cite
    Open Data Lëtzebuerg (2024). DCAT-AP API endpoints for data.public.lu [Dataset]. https://data.public.lu/en/datasets/dcat-ap-api-endpoints-for-data-public-lu/
    Available download formats: rdf, html, xlsx (16280)
    Dataset updated
    May 27, 2024
    Dataset authored and provided by
    Open Data Lëtzebuerg
    License

    CC0 1.0 Universal Public Domain Dedication: https://creativecommons.org/publicdomain/zero/1.0/
    License information was derived automatically

    Description

    Data.public.lu provides all its metadata in the DCAT and DCAT-AP formats, i.e. all data about the data stored or referenced on data.public.lu. DCAT (Data Catalog Vocabulary) is a specification designed to facilitate interoperability between data catalogs published on the Web. This specification has been extended via the DCAT-AP (DCAT Application Profile for data portals in Europe) standard, specifically for data portals in Europe. The serialisation of those vocabularies is mainly done in RDF (Resource Description Framework). The implementation of data.public.lu is based on that of the open source udata platform. This API enables the federation of multiple data portals: for example, all the datasets published on data.public.lu are also published on data.europa.eu. The DCAT API from data.public.lu is used by the European data portal to federate its metadata. The DCAT standard is thus very important to guarantee interoperability between all data portals in Europe.

    Usage

    Full catalog. Here are a few examples using the curl command line tool. To get all the metadata for the whole catalog hosted on data.public.lu:
    curl https://data.public.lu/catalog.rdf

    Metadata for an organization. To get the metadata of a specific organization, you first need to find its ID. The ID of an organization is the last part of its URL. For the organization "Open data Lëtzebuerg" the URL is https://data.public.lu/fr/organizations/open-data-letzebuerg/ and the ID is open-data-letzebuerg. To get all the metadata for a given organization, call the following URL, where {id} has been replaced by the correct ID: https://data.public.lu/api/1/organizations/{id}/catalog.rdf Example:
    curl https://data.public.lu/api/1/organizations/open-data-letzebuerg/catalog.rdf

    Metadata for a dataset. To get the metadata of a specific dataset, you first need to find its ID. The ID of a dataset is the last part of its URL. For the dataset "Digital accessibility monitoring report - 2020-2021" the URL is https://data.public.lu/fr/datasets/digital-accessibility-monitoring-report-2020-2021/ and the ID is digital-accessibility-monitoring-report-2020-2021. To get all the metadata for a given dataset, call the following URL, where {id} has been replaced by the correct ID: https://data.public.lu/api/1/datasets/{id}/rdf Example:
    curl https://data.public.lu/api/1/datasets/digital-accessibility-monitoring-report-2020-2021/rdf

    Compatibility with DCAT-AP 2.1.1. The DCAT-AP standard is in constant evolution, so the compatibility of the implementation should be regularly compared with the standard and adapted accordingly. In May 2023 we made this comparison, and the result is available in the resources below (see the document named "udata 6 dcat-ap implementation status"). In the DCAT-AP model, classes and properties have a priority level which should be respected in every implementation: mandatory, recommended and optional. Our goal is to implement all mandatory classes and properties, and if possible all recommended classes and properties which make sense in the context of our open data portal.
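The two endpoint patterns described above can be wrapped in small helpers. A minimal sketch: the URL templates come from the description, while the function names are illustrative only:

```python
# Sketch of the data.public.lu DCAT API URL patterns described above.
# The templates come from the dataset description; the helper names
# are illustrative, not an official client.
BASE = "https://data.public.lu"

def org_catalog_url(org_id: str) -> str:
    """RDF catalog for one organization (ID = last part of its portal URL)."""
    return f"{BASE}/api/1/organizations/{org_id}/catalog.rdf"

def dataset_rdf_url(dataset_id: str) -> str:
    """RDF metadata for one dataset (ID = last part of its portal URL)."""
    return f"{BASE}/api/1/datasets/{dataset_id}/rdf"

print(org_catalog_url("open-data-letzebuerg"))
```

Fetching a URL built this way (e.g. with curl, as in the examples above) returns the RDF serialisation of the corresponding catalog or dataset.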

  11. cses-problem-set-metadata

    • huggingface.co
    Cite
    cses-problem-set-metadata [Dataset]. https://huggingface.co/datasets/minhnguyent546/cses-problem-set-metadata
    Available download format: Croissant (a format for machine-learning datasets; learn more at mlcommons.org/croissant)
    Authors
    Minh-Thien Nguyen
    License

    Apache License, v2.0: https://www.apache.org/licenses/LICENSE-2.0
    License information was derived automatically

    Description

    This dataset contains the metadata for the CSES problem set (e.g. title, time limit, number of test cases, etc.). The data was crawled on December 28, 2024. Notes:

    • time_limit is in seconds
    • memory_limit is in MB

    Important: New tasks and categories were added to CSES, see here. This dataset is now outdated and will be updated soon.

  12. Youtube Videos - 5-Minute Crafts

    • kaggle.com
    Updated Dec 31, 2021
    Cite
    Mikit Kanakia (2021). Youtube Videos - 5-Minute Crafts [Dataset]. https://www.kaggle.com/datasets/mikitkanakia/youtube-videos-5minute-videos
    Available download format: Croissant (a format for machine-learning datasets; learn more at mlcommons.org/croissant)
    Dataset updated
    Dec 31, 2021
    Dataset provided by
    Kaggle
    Authors
    Mikit Kanakia
    License

    CC0 1.0 Universal Public Domain Dedication: https://creativecommons.org/publicdomain/zero/1.0/

    Area covered
    YouTube
    Description

    Context

    5-Minute Crafts is among the top 10 most viewed and subscribed YouTube channels, and this is what amazed me. I want to find the insights behind the channel's success.

    Content

    The data represents the video metadata: description, tags, and the most important statistics of each video.

    Acknowledgements

    Youtube and 5-Minute Crafts Channel

    Inspiration

    What is the most liked topic on the channel? How do view, like and comment counts vary with the video tags? What does the description say about the video? What are the most used tags?

  13. Dataset: Tracking transformative agreements through open metadata: method and validation using Dutch Research Council NWO funded papers

    • zenodo.org
    csv, js
    Updated Mar 10, 2025
    Cite
    Hans de Jonge; Hans de Jonge; Bianca Kramer; Bianca Kramer; Jeroen Sondervan; Jeroen Sondervan (2025). Dataset: Tracking transformative agreements through open metadata: method and validation using Dutch Research Council NWO funded papers [Dataset]. http://doi.org/10.5281/zenodo.15000633
    Available download formats: csv, js
    Dataset updated
    Mar 10, 2025
    Dataset provided by
    Zenodo: http://zenodo.org/
    Authors
    Hans de Jonge; Hans de Jonge; Bianca Kramer; Bianca Kramer; Jeroen Sondervan; Jeroen Sondervan
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Time period covered
    Feb 2025
    Description

    Data and code belonging to the manuscript:

    Tracking transformative agreements through open metadata: method and validation using Dutch Research Council NWO funded papers

    Abstract

    Transformative agreements have become an important strategy in the transition to open access, with almost 1,200 such agreements registered by 2025. Despite their prevalence, these agreements suffer from important transparency limitations, most notably article-level metadata indicating which articles are covered by these agreements. Typically, this data is available to libraries but not openly shared, making it difficult to study the impact of these agreements. In this paper, we present a novel, open, replicable method for analyzing transformative agreements using open metadata, specifically the Journal Checker tool provided by cOAlition S and OpenAlex. To demonstrate its potential, we apply our approach to a subset of publications funded by the Dutch Research Council (NWO) and its health research counterpart ZonMw. In addition, the results of this open method are compared with the actual publisher data reported to the Dutch university library consortium UKB. This validation shows that this open method accurately identified 89% of the publications covered by transformative agreements, while the 11% false positives shed an interesting light on the limitations of this method. In the absence of hard, openly available article-level data on transformative agreements, we provide researchers and institutions with a powerful tool to critically track and evaluate the impact of these agreements.

    This dataset contains the following files:

    • Dataset.csv - Data set of unique DOIs (n = 6,610) enriched with data from Crossref, Unpaywall, OpenAlex and the Journal Checker Tool.
    • Data dictionary.csv - description of the data in the dataset, its type and sources.
    • Google_Apps_Script.js - Google Apps Script for retrieving information from the Journal Checker Tool API.
    • Data comparison.csv - Data set of DOIs (n = 10,126) retrieved from the UKBsis datahub and used to establish the overlap with the original dataset.
    • Data dictionary data comparison.csv - description of the data in the data comparison data set, its type and sources.
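The overlap between the two DOI sets (the open-method results in Dataset.csv versus the publisher data in Data comparison.csv) can be sketched as a simple set intersection. A minimal sketch, not the authors' code: the function name and the sample DOIs are hypothetical; the 89%/11% figures in the abstract were computed on the real data.

```python
# Sketch: share of publisher-reported DOIs also found by the open method.
# Sample DOIs below are invented for illustration.
def doi_overlap(open_method: set[str], publisher_reported: set[str]) -> float:
    """Fraction of publisher-reported DOIs that the open method also identified."""
    if not publisher_reported:
        return 0.0
    return len(open_method & publisher_reported) / len(publisher_reported)

found = {"10.1000/a", "10.1000/b", "10.1000/c"}
reported = {"10.1000/a", "10.1000/b", "10.1000/d", "10.1000/e"}
print(doi_overlap(found, reported))  # 0.5
```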
  14. Asset database for the Cooper subregion on 27 August 2015

    • data.gov.au
    • researchdata.edu.au
    • +1more
    Updated Aug 9, 2023
    + more versions
    Cite
    Bioregional Assessment Program (2023). Asset database for the Cooper subregion on 27 August 2015 [Dataset]. https://data.gov.au/data/dataset/0b122b2b-e5fe-4166-93d1-3b94fc440c82
    Dataset updated
    Aug 9, 2023
    Dataset authored and provided by
    Bioregional Assessment Program
    Description

    Abstract

    The public version of this Asset database can be accessed via the following dataset:

    Asset database for the Cooper subregion on 27 August 2015 Public (526707e0-9d32-47de-a198-9c8f35761a7e)

    The dataset was derived by the Bioregional Assessment Programme from multiple source datasets. The source datasets are identified in the Lineage field in this metadata statement. The processes undertaken to produce this derived dataset are described in the History field in this metadata statement.

    The asset database for the Cooper subregion (v3) supersedes the previous version (v2) of the Cooper Asset database (Asset database for the Cooper subregion on 14 August 2015, 5c3697e6-8077-4de7-b674-e0dfc33b570c). The M2_Reason field in the Assetlist table and the DecisionBrief field in the AssetDecisions table have been updated with short descriptions (<255 characters) provided by the project team on 21/8, and the draft "water-dependent asset register and asset list" (BA-LEB-COO-130-WaterDependentAssetRegister-AssetList-V20150827) was also updated accordingly. This change was made to avoid truncation in the brief-reasons fields of the database and asset register. There have been no changes to assets or asset numbers.

    This dataset contains a combination of spatial and non-spatial (attribute) components of the Cooper subregion Asset List - an mdb file (readable as an MS Access database or as an ESRI personal geodatabase) holds the non-spatial tabular attribute data, and an ESRI file geodatabase contains the spatial data layers, which are attributed only with unique identifiers ("AID" for assets, and "ElementID" for elements). The dataset also contains an update of the draft "Water-dependent asset register and asset list" spreadsheet (BA-NIC-COO-130-WaterDependentAssetRegister-AssetList-V20150827.xlsx).

    The tabular attribute data can be joined in a GIS to the "Assetlist" table in the mdb database using the "AID" field to view asset attributes (BA attribution). To view the more detailed attribution at the element-level, the intermediate table "Element_to_asset" can be joined to the assets spatial datasets using AID, and then joining the individual attribute tables from the Access database using the common "ElementID" fields. Alternatively, the spatial feature layers representing elements can be linked directly to the individual attribute tables in the Access database using "ElementID", but this arrangement will not provide the asset-level groupings.
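    The two join paths above can be sketched with pandas (toy tables standing in for the real .mdb contents; the table and field names follow the text, but the records are invented for illustration):

```python
import pandas as pd

# Toy stand-ins (invented records) for the tables described above; the
# real data lives in the Access .mdb and the ESRI file geodatabase, with
# "AID" keying assets and "ElementID" keying elements.
assets = pd.DataFrame({"AID": [1, 2], "AssetName": ["Wetland A", "Bore B"]})
element_to_asset = pd.DataFrame({"AID": [1, 1, 2], "ElementID": [10, 11, 20]})
element_attrs = pd.DataFrame({"ElementID": [10, 11, 20],
                              "Attribute": ["x", "y", "z"]})

# Asset-level join (spatial layer -> Assetlist on AID), then the
# element-level detail via the Element_to_asset bridge on ElementID.
detailed = (assets
            .merge(element_to_asset, on="AID")
            .merge(element_attrs, on="ElementID"))
print(detailed)
```

    Joining the element attribute tables directly on ElementID works too, but, as noted above, that route loses the asset-level groupings.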

    Further information is provided in the accompanying document, "COO_asset_database_doc20150827.doc" located within this dataset.

    Dataset History

    Version ID Date Notes

    1.0 27/03/2015 Initial database

    2.0 14/08/2015 "(1) Updated the database for M2 test results provided by the COO assessment team and created the draft BA-LEB-COO-130-WaterDependentAssetRegister-AssetList-V20150814.xlsx

    (2) Updated the group, subgroup, class and depth for (up to) 2 NRM WAIT assets to incorporate the feedback to OWS from the relevant SA NRM regional office (whose staff missed the asset workshop). The AIDs and names of those assets are listed in table LUT_changed_asset_class_20150814 in COO_asset_database_20150814.mdb

    (3) As a result of (2), added one new asset separated from an existing asset. This asset and its parent are listed in table LUT_ADD_1_asste_20150814 in COO_asset_database_20150814.mdb. The M2 test result for this asset is inherited from its parent in this version

    (5) Added Appendix C in COO_asset_database_doc_201500814.doc, covering total elements/assets in the current Group and subgroup

    (6) Added four SQL queries (Find_All_Used_Assets, Find_All_WD_Assets, Find_Amount_Asset_in_Class and Find_Amount_Elements_in_Class) in COO_asset_database_20150814.mdb for total assets and total numbers

    (7) The databases, especially the spatial database (COO_asset_database_20150814Only.gdb), were changed: duplicated attribute fields in the spatial data were removed and only the ID field is kept. The user needs to join the Assetlist or Elementlist table to the relevant spatial data"

    3.0 27/08/2015 M2_Reason in the Assetlist table and DecisionBrief in the AssetDecisions table have been updated with short descriptions (<255 characters) provided by project team 21/8, and the draft "water-dependent asset register and asset list" (BA-LEB-COO-130-WaterDependentAssetRegister-AssetList-V20150827) also updated accordingly. No changes to asset numbers.

    Dataset Citation

    Bioregional Assessment Programme (2014) Asset database for the Cooper subregion on 27 August 2015. Bioregional Assessment Derived Dataset. Viewed 27 November 2017, http://data.bioregionalassessments.gov.au/dataset/0b122b2b-e5fe-4166-93d1-3b94fc440c82.

    Dataset Ancestors

  15. N

    Meta, MO Population Breakdown by Gender Dataset: Male and Female Population...

    • neilsberg.com
    csv, json
    Updated Feb 24, 2025
    + more versions
    Cite
    Neilsberg Research (2025). Meta, MO Population Breakdown by Gender Dataset: Male and Female Population Distribution // 2025 Edition [Dataset]. https://www.neilsberg.com/research/datasets/b2440a08-f25d-11ef-8c1b-3860777c1fe6/
    Explore at:
    csv, json (available download formats)
    Dataset updated
    Feb 24, 2025
    Dataset authored and provided by
    Neilsberg Research
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Area covered
    Missouri, Meta
    Variables measured
    Male Population, Female Population, Male Population as Percent of Total Population, Female Population as Percent of Total Population
    Measurement technique
    The data presented in this dataset is derived from the latest U.S. Census Bureau American Community Survey (ACS) 2019-2023 5-Year Estimates. To measure the two variables, namely (a) population and (b) population as a percentage of the total population, we initially analyzed and categorized the data for each of the gender classifications (biological sex) reported by the US Census Bureau. For further information regarding these estimates, please feel free to reach out to us via email at research@neilsberg.com.
    Dataset funded by
    Neilsberg Research
    Description
    About this dataset

    Context

    The dataset tabulates the population of Meta by gender, including both male and female populations. This dataset can be utilized to understand the population distribution of Meta across both sexes and to determine which sex constitutes the majority.

    Key observations

    There is a slight female majority, with 50.78% of the total population being female. Source: U.S. Census Bureau American Community Survey (ACS) 2019-2023 5-Year Estimates.

    Content

    When available, the data consists of estimates from the U.S. Census Bureau American Community Survey (ACS) 2019-2023 5-Year Estimates.

    Scope of gender :

    Please note that the American Community Survey asks a question about the respondent's current sex, but not about gender, sexual orientation, or sex at birth. The question is intended to capture data for biological sex, not gender. Respondents are expected to answer either Male or Female. Our research and this dataset mirror the data reported as Male and Female for gender distribution analysis. No further analysis is done on the data reported by the Census Bureau.

    Variables / Data Columns

    • Gender: This column displays the Gender (Male / Female)
    • Population: The population of the gender in the Meta is shown in this column.
    • % of Total Population: This column displays the percentage distribution of each gender as a proportion of Meta total population. Please note that the sum of all percentages may not equal one due to rounding of values.
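    The three columns above can be derived from raw counts as follows (a minimal sketch; the counts are illustrative, not the published Meta, MO figures, and the rounding step is why the shares may not sum to exactly 100):

```python
# Derive "% of Total Population" from raw gender counts (illustrative
# counts, not the published Meta, MO figures), rounded to two decimals.
population = {"Male": 63, "Female": 65}
total = sum(population.values())
share = {g: round(100 * n / total, 2) for g, n in population.items()}
print(share)  # {'Male': 49.22, 'Female': 50.78}
```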

    Good to know

    Margin of Error

    Data in the dataset are based on estimates and are subject to sampling variability and thus a margin of error. Neilsberg Research recommends using caution when presenting these estimates in your research.

    Custom data

    If you need custom data for your research project, report, or presentation, contact our research staff at research@neilsberg.com to discuss the feasibility of a custom tabulation on a fee-for-service basis.

    Inspiration

    The Neilsberg Research team curates, analyzes, and publishes demographic and economic data from a variety of public and proprietary sources, each of which often includes multiple surveys and programs. The large majority of Neilsberg Research's aggregated datasets and insights are made available for free download at https://www.neilsberg.com/research/.

    Recommended for further research

    This dataset is a part of the main dataset for Meta Population by Race & Ethnicity. You can refer to it here

  16. N

    Meta, MO Age Group Population Dataset: A Complete Breakdown of Meta Age...

    • neilsberg.com
    csv, json
    Updated Feb 22, 2025
    + more versions
    Cite
    Neilsberg Research (2025). Meta, MO Age Group Population Dataset: A Complete Breakdown of Meta Age Demographics from 0 to 85 Years and Over, Distributed Across 18 Age Groups // 2025 Edition [Dataset]. https://www.neilsberg.com/research/datasets/45364374-f122-11ef-8c1b-3860777c1fe6/
    Explore at:
    csv, json (available download formats)
    Dataset updated
    Feb 22, 2025
    Dataset authored and provided by
    Neilsberg Research
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Area covered
    Missouri, Meta
    Variables measured
    Population Under 5 Years, Population over 85 years, Population Between 5 and 9 years, Population Between 10 and 14 years, Population Between 15 and 19 years, Population Between 20 and 24 years, Population Between 25 and 29 years, Population Between 30 and 34 years, Population Between 35 and 39 years, Population Between 40 and 44 years, and 9 more
    Measurement technique
    The data presented in this dataset is derived from the latest U.S. Census Bureau American Community Survey (ACS) 2019-2023 5-Year Estimates. To measure the two variables, namely (a) population and (b) population as a percentage of the total population, we initially analyzed and categorized the data for each of the age groups. For age groups we divided it into roughly a 5 year bucket for ages between 0 and 85. For over 85, we aggregated data into a single group for all ages. For further information regarding these estimates, please feel free to reach out to us via email at research@neilsberg.com.
    Dataset funded by
    Neilsberg Research
    Description
    About this dataset

    Context

    The dataset tabulates the Meta population distribution across 18 age groups. It lists the population in each age group along with that group's percentage of the total population of Meta. The dataset can be utilized to understand the population distribution of Meta by age. For example, using this dataset, we can identify the largest age group in Meta.

    Key observations

    The largest age group in Meta, MO was 65 to 69 years, with a population of 18 (14.06%), according to the ACS 2019-2023 5-Year Estimates. At the same time, the smallest age group in Meta, MO was 20 to 24 years, with a population of 0 (0%). Source: U.S. Census Bureau American Community Survey (ACS) 2019-2023 5-Year Estimates

    Content

    When available, the data consists of estimates from the U.S. Census Bureau American Community Survey (ACS) 2019-2023 5-Year Estimates

    Age groups:

    • Under 5 years
    • 5 to 9 years
    • 10 to 14 years
    • 15 to 19 years
    • 20 to 24 years
    • 25 to 29 years
    • 30 to 34 years
    • 35 to 39 years
    • 40 to 44 years
    • 45 to 49 years
    • 50 to 54 years
    • 55 to 59 years
    • 60 to 64 years
    • 65 to 69 years
    • 70 to 74 years
    • 75 to 79 years
    • 80 to 84 years
    • 85 years and over

    Variables / Data Columns

    • Age Group: This column displays the age group in consideration
    • Population: The population for the specific age group in the Meta is shown in this column.
    • % of Total Population: This column displays the population of each age group as a proportion of Meta total population. Please note that the sum of all percentages may not equal one due to rounding of values.
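    Identifying the largest and smallest of the 18 buckets, as in the key observations above, is a one-liner once the table is loaded (a sketch with invented counts, not the published Meta, MO figures):

```python
# Find the largest and smallest age-group buckets (invented counts).
age_groups = {"Under 5 years": 5, "20 to 24 years": 0,
              "65 to 69 years": 18, "85 years and over": 3}
largest = max(age_groups, key=age_groups.get)
smallest = min(age_groups, key=age_groups.get)
print(largest, "/", smallest)  # 65 to 69 years / 20 to 24 years
```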

    Good to know

    Margin of Error

    Data in the dataset are based on estimates and are subject to sampling variability and thus a margin of error. Neilsberg Research recommends using caution when presenting these estimates in your research.

    Custom data

    If you need custom data for your research project, report, or presentation, contact our research staff at research@neilsberg.com to discuss the feasibility of a custom tabulation on a fee-for-service basis.

    Inspiration

    The Neilsberg Research team curates, analyzes, and publishes demographic and economic data from a variety of public and proprietary sources, each of which often includes multiple surveys and programs. The large majority of Neilsberg Research's aggregated datasets and insights are made available for free download at https://www.neilsberg.com/research/.

    Recommended for further research

    This dataset is a part of the main dataset for Meta Population by Age. You can refer to it here

  17. COVID-19 ORDC Cleaned Metadata

    • kaggle.com
    Updated Apr 14, 2020
    Cite
    Róbert Lakatos (2020). COVID-19 ORDC Cleaned Metadata [Dataset]. https://www.kaggle.com/robertlakatos/covid19-ordc-cleaned-metadata/metadata
    Explore at:
    Croissant (a format for machine-learning datasets; learn more at mlcommons.org/croissant)
    Dataset updated
    Apr 14, 2020
    Dataset provided by
    Kaggle (http://kaggle.com/)
    Authors
    Róbert Lakatos
    Description

    Context

    The cleaned dataset was created for contextual analysis in the COVID-19 Open Research Dataset Challenge. The file includes only the columns that are useful for contextual clustering.

    Content

    COVID-19-ORDC-cleaned-metadata.xlsx

    The dataset consists of 10 columns.

    There are 4 columns for document identification:

    • cord_uid
    • sha
    • journal
    • authors

    There are 6 columns intended for NLP contextual analysis:

    • title
    • abstract
    • common_words
    • vectors
    • clusters
    • words_of_topic_in_clusters

    vecs.tsv

    vecs.tsv contains all of the vectors. The file can be used in the embedding projector.

    meta.tsv

    meta.tsv contains the metadata for vecs.tsv. There are 3 columns.

    • title: title of documents
    • clusters: the cluster number of the document
    • words of topic: Topic words for clusters generated by LDA analysis.
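    The vecs.tsv / meta.tsv pair can be loaded together as follows (a hedged sketch: it assumes one tab-separated vector per line in vecs.tsv and a header row with the 3 columns listed above in meta.tsv; the sample rows are invented):

```python
import csv
import io

# Parse vecs.tsv (tab-separated vectors, no header) and meta.tsv
# (tab-separated, 3-column header row); sample contents are invented.
vecs_tsv = "0.1\t0.2\t0.3\n0.4\t0.5\t0.6\n"
meta_tsv = ("title\tclusters\twords of topic\n"
            "Paper A\t0\tvirus spread\n"
            "Paper B\t1\tvaccine trial\n")

vectors = [[float(x) for x in row]
           for row in csv.reader(io.StringIO(vecs_tsv), delimiter="\t")]
metadata = list(csv.DictReader(io.StringIO(meta_tsv), delimiter="\t"))

assert len(vectors) == len(metadata)  # one metadata row per vector
print(metadata[0]["title"], vectors[0])
```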
  18. o

    Zillow Properties Listing Information Dataset

    • opendatabay.com
    .undefined
    Updated Jun 26, 2025
    Cite
    Bright Data (2025). Zillow Properties Listing Information Dataset [Dataset]. https://www.opendatabay.com/data/premium/0bdd01d7-1b5b-4005-bb73-345bc710c694
    Explore at:
    .undefined (available download formats)
    Dataset updated
    Jun 26, 2025
    Dataset authored and provided by
    Bright Data
    Area covered
    Urban Planning & Infrastructure
    Description

    Use the Zillow Properties Listing dataset to access detailed real estate listings, including property prices, locations, and features. Popular use cases include market analysis, property valuation, and investment decision-making in the real estate sector.

    Use our Zillow Properties Listing Information dataset to access detailed real estate listings, including property features, pricing trends, and location insights. This dataset is perfect for real estate agents, investors, market analysts, and property developers looking to analyze housing markets, identify investment opportunities, and assess property values.

    Leverage this dataset to track pricing patterns, compare property features, and forecast market trends across different regions. Whether you're evaluating investment prospects or optimizing property listings, the Zillow Properties dataset offers essential information for making data-driven real estate decisions.

    Dataset Features

    • zpid: Unique property identifier assigned by Zillow.
    • city: The name of the city where the property is located.
    • state: The state in which the property is located.
    • homeStatus: Indicates the current status of the property
    • address: The full address of the property, including street, city, and state.
    • isListingClaimedByCurrentSignedInUser: This field shows if the current Zillow user has claimed ownership of the listing.
    • isCurrentSignedInAgentResponsible: This field indicates whether the currently signed-in real estate agent is responsible for the listing.
    • bedrooms: Number of bedrooms in the property.
    • bathrooms: Number of bathrooms in the property.
    • price: Current asking price of the property.
    • yearBuilt: The year the home was originally constructed.
    • streetAddress: Specific street address (usually excludes city/state/zip).
    • zipcode: The postal ZIP code of the property.
    • isCurrentSignedInUserVerifiedOwner: This field indicates if the signed-in user has verified ownership of the property on Zillow.
    • isVerifiedClaimedByCurrentSignedInUser: Indicates whether the user has claimed and verified the listing as the current owner.
    • listingDataSource: The original source of the listing. Important for data lineage and trustworthiness.
    • longitude: The longitudinal geographic coordinate of the property.
    • latitude: The latitudinal geographic coordinate of the property.
    • hasBadGeocode: This indicates whether the geolocation data is incorrect or problematic.
    • streetViewMetadataUrlMediaWallLatLong: A URL or reference to the Street View media wall based on latitude and longitude.
    • streetViewMetadataUrlMediaWallAddress: A similar URL reference to the Street View, but based on the property’s address.
    • streetViewServiceUrl: The base URL to Google Street View or similar services. Enables interactive visuals of the property’s surroundings.
    • livingArea: Total internal living area of the home, typically in square feet.
    • homeType: The category/type of the home.
    • lotSize: The size of the entire lot or land the home is situated on.
    • lotAreaValue: The numerical value representing the lot area, usually tied to a measurement unit.
    • lotAreaUnits: Units in which the lot area is measured (e.g., sqft, acres).
    • livingAreaValue: The numeric value of the property's interior living space.
    • livingAreaUnitsShort: Abbreviated unit for living area (e.g., sqft), useful for compact displays.
    • isUndisclosedAddress: Boolean indicating if the full property address is hidden, typically used for privacy reasons.
    • zestimate: Zillow’s estimated market value of the home, generated via its proprietary model.
    • rentZestimate: Zillow’s estimated rental price per month; helpful for rental market analysis.
    • currency: Currency used for price, Zestimate, and rent estimate (e.g., USD).
    • hideZestimate: Indicates whether the Zestimate is hidden from public view.
    • dateSoldString: The date when the property was last sold, in string format (e.g., 2022-06-15).
    • taxAssessedValue: The most recent assessed value of the property for tax purposes.
    • taxAssessedYear: The year in which the property was last assessed.
    • country: The country where the property is located.
    • propertyTaxRate: The most recent tax rate.
    • photocount: This column provides a photo count of the property.
    • isPremierBuilder: Boolean indicating whether the builder is listed as a premier (trusted) builder on Zillow.
    • isZillowOwned: Indicates whether the property is owned or managed directly by Zillow.
    • ssid: A unique internal Zillow identifier for the listing (not to be confused with network SSID).
    • hdpUrl: URL to the home’s detail page on Zillow (Home Details Page).
    • tourViewCount: Number of times users have viewed the property tour.
    • hasPublicVideo: This
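    As a small illustration of how a few of these fields combine in practice (invented records; only the field names come from the list above), one can flag listings priced below their Zestimate:

```python
# Flag listings whose asking price is below Zillow's estimated value
# (records are invented for illustration).
listings = [
    {"zpid": 1, "city": "Austin", "price": 300_000, "zestimate": 350_000},
    {"zpid": 2, "city": "Austin", "price": 400_000, "zestimate": 390_000},
]
below_estimate = [l for l in listings if l["price"] < l["zestimate"]]
print([l["zpid"] for l in below_estimate])  # [1]
```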
  19. f

    Data from: HOW TO PERFORM A META-ANALYSIS: A PRACTICAL STEP-BY-STEP GUIDE...

    • scielo.figshare.com
    tiff
    Updated Jun 4, 2023
    Cite
    Diego Ariel de Lima; Camilo Partezani Helito; Lana Lacerda de Lima; Renata Clazzer; Romeu Krause Gonçalves; Olavo Pires de Camargo (2023). HOW TO PERFORM A META-ANALYSIS: A PRACTICAL STEP-BY-STEP GUIDE USING R SOFTWARE AND RSTUDIO [Dataset]. http://doi.org/10.6084/m9.figshare.19899537.v1
    Explore at:
    tiff (available download formats)
    Dataset updated
    Jun 4, 2023
    Dataset provided by
    SciELO journals
    Authors
    Diego Ariel de Lima; Camilo Partezani Helito; Lana Lacerda de Lima; Renata Clazzer; Romeu Krause Gonçalves; Olavo Pires de Camargo
    License

    Attribution-NonCommercial 4.0 (CC BY-NC 4.0): https://creativecommons.org/licenses/by-nc/4.0/
    License information was derived automatically

    Description

    ABSTRACT Meta-analysis is an adequate statistical technique to combine results from different studies, and its use has been growing in the medical field. Thus, not only knowing how to interpret meta-analysis, but also knowing how to perform one, is fundamental today. Therefore, the objective of this article is to present the basic concepts and serve as a guide for conducting a meta-analysis using R and RStudio software. For this, the reader has access to the basic commands in the R and RStudio software, necessary for conducting a meta-analysis. The advantage of R is that it is a free software. For a better understanding of the commands, two examples were presented in a practical way, in addition to revising some basic concepts of this statistical technique. It is assumed that the data necessary for the meta-analysis has already been collected, that is, the description of methodologies for systematic review is not a discussed subject. Finally, it is worth remembering that there are many other techniques used in meta-analyses that were not addressed in this work. However, with the two examples used, the article already enables the reader to proceed with good and robust meta-analyses. Level of Evidence V, Expert Opinion.
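    The article works in R/RStudio; as a language-neutral illustration of the core computation a fixed-effect meta-analysis performs, here is the standard inverse-variance pooling sketched in Python (the effect sizes and standard errors are invented):

```python
import math

# Fixed-effect (inverse-variance) pooling: w_i = 1/SE_i^2,
# pooled effect = sum(w_i * e_i) / sum(w_i), SE = 1/sqrt(sum(w_i)).
effects = [0.30, 0.55, 0.40]   # per-study effect sizes (invented)
ses = [0.10, 0.20, 0.15]       # per-study standard errors (invented)

weights = [1 / se ** 2 for se in ses]
pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
pooled_se = 1 / math.sqrt(sum(weights))
ci = (pooled - 1.96 * pooled_se, pooled + 1.96 * pooled_se)  # 95% CI
print(round(pooled, 3), round(pooled_se, 3), [round(x, 3) for x in ci])
```

    Dedicated packages (such as the R `meta` and `metafor` libraries the guide relies on) add heterogeneity statistics, random-effects models, and forest plots on top of this basic calculation.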

  20. a

    18e24b - Minimum Essential Dataset (MED) Live Feed and Hazard Data for US&R

    • hub.arcgis.com
    • prep-response-portal.napsgfoundation.org
    • +1more
    Updated Dec 7, 2023
    Cite
    NAPSG Foundation (2023). 18e24b - Minimum Essential Dataset (MED) Live Feed and Hazard Data for US&R [Dataset]. https://hub.arcgis.com/content/0c25d3330e0542d4aca4038cb318e24b
    Explore at:
    Dataset updated
    Dec 7, 2023
    Dataset authored and provided by
    NAPSG Foundation
    License

    MIT License: https://opensource.org/licenses/MIT
    License information was derived automatically

    Area covered
    Description

    This is a group layer item for the live feed & hazard data layers most commonly requested by Urban Search & Rescue stakeholders. Created on 12/07/2023. These are natural hazard data; some are static and some are live/dynamic. Whenever possible, use incident-specific forecast and impact models. For ease of use, the layers are grouped as follows:

    • Current Weather
    • Tropical Cyclones
    • Severe Weather & Tornadoes
    • Flooding
    • Earthquakes
    • Wildfire

    See the links below for more detailed information and metadata.
