This dataset contains the metadata of the datasets published in 77 Dataverse installations, information about each installation's metadata blocks, and the list of standard licenses that dataset depositors can apply to the datasets they publish in the 36 installations running more recent versions of the Dataverse software. The data is useful for reporting on the quality of dataset- and file-level metadata within and across Dataverse installations. Curators and other researchers can use this dataset to explore how well Dataverse software and the repositories using the software help depositors describe data.

How the metadata was downloaded

The dataset metadata and metadata block JSON files were downloaded from each installation on October 2 and October 3, 2022 using a Python script kept in a GitHub repo at https://github.com/jggautier/dataverse-scripts/blob/main/other_scripts/get_dataset_metadata_of_all_installations.py. In order to get the metadata from installations that require an installation account API token to use certain Dataverse software APIs, I created a CSV file with two columns: one named "hostname", listing each installation URL at which I was able to create an account, and another named "apikey", listing my accounts' API tokens. The Python script reads the API tokens from this CSV file and uses them to get metadata and other information from installations that require them.

How the files are organized

├── csv_files_with_metadata_from_most_known_dataverse_installations
│   ├── author(citation).csv
│   ├── basic.csv
│   ├── contributor(citation).csv
│   ├── ...
│   └── topic_classification(citation).csv
├── dataverse_json_metadata_from_each_known_dataverse_installation
│   ├── Abacus_2022.10.02_17.11.19.zip
│   │   ├── dataset_pids_Abacus_2022.10.02_17.11.19.csv
│   │   ├── Dataverse_JSON_metadata_2022.10.02_17.11.19
│   │   │   ├── hdl_11272.1_AB2_0AQZNT_v1.0.json
│   │   │   └── ...
│   │   └── metadatablocks_v5.6
│   │       ├── astrophysics_v5.6.json
│   │       ├── biomedical_v5.6.json
│   │       ├── citation_v5.6.json
│   │       ├── ...
│   │       └── socialscience_v5.6.json
│   ├── ACSS_Dataverse_2022.10.02_17.26.19.zip
│   ├── ADA_Dataverse_2022.10.02_17.26.57.zip
│   ├── Arca_Dados_2022.10.02_17.44.35.zip
│   ├── ...
│   └── World_Agroforestry_-_Research_Data_Repository_2022.10.02_22.59.36.zip
├── dataset_pids_from_most_known_dataverse_installations.csv
├── licenses_used_by_dataverse_installations.csv
└── metadatablocks_from_most_known_dataverse_installations.csv

This dataset contains two directories and three CSV files not in a directory.

One directory, "csv_files_with_metadata_from_most_known_dataverse_installations", contains 18 CSV files with the values from common metadata fields of all 77 Dataverse installations. For example, author(citation)_2022.10.02-2022.10.03.csv contains the "Author" metadata for all published, non-deaccessioned versions of all datasets in the 77 installations, with a row for each author name, affiliation, identifier type, and identifier.

The other directory, "dataverse_json_metadata_from_each_known_dataverse_installation", contains 77 zipped files, one for each of the 77 Dataverse installations whose dataset metadata I was able to download using Dataverse APIs. Each zip file contains a CSV file and two sub-directories. The CSV file contains the persistent IDs and URLs of each published dataset in the Dataverse installation, as well as a column to indicate whether or not the Python script was able to download the Dataverse JSON metadata for each dataset. For Dataverse installations running software versions whose Search APIs include each dataset's owning Dataverse collection name and alias, the CSV files also indicate which Dataverse collection (within the installation) each dataset was published in. One sub-directory contains a JSON file for each of the installation's published, non-deaccessioned dataset versions; the JSON files contain the metadata in the "Dataverse JSON" metadata schema. The other sub-directory contains information about the metadata models (the "metadata blocks" in JSON files) that the installation was using when the dataset metadata was downloaded. I saved them so that they can be used when extracting metadata from the Dataverse JSON files.

The dataset_pids_from_most_known_dataverse_installations.csv file contains the dataset PIDs of all published datasets in the 77 Dataverse installations, with a column to indicate if the Python script was able to download the dataset's metadata. It is a union of all of the "dataset_pids_..." files in each of the 77 zip files.

The licenses_used_by_dataverse_installations.csv file contains information about the licenses that a number of the installations let depositors choose when creating datasets. When I collected ...

Visit https://dataone.org/datasets/sha256%3Ad27d528dae8cf01e3ea915f450426c38fd6320e8c11d3e901c43580f997a3146 for complete metadata about this dataset.
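As a hedged illustration of the kind of download described above (not the actual get_dataset_metadata_of_all_installations.py script), the Python sketch below reads a CSV with "hostname" and "apikey" columns and pages through each installation's Search API to list published dataset PIDs; the file name and helper function are assumptions introduced here for illustration.

import csv
import requests

def list_dataset_pids(hostname, api_key=None, per_page=100):
    # Page through a Dataverse installation's Search API and yield dataset persistent IDs.
    headers = {"X-Dataverse-key": api_key} if api_key else {}
    start = 0
    while True:
        resp = requests.get(
            f"{hostname}/api/search",
            params={"q": "*", "type": "dataset", "per_page": per_page, "start": start},
            headers=headers,
            timeout=60,
        )
        resp.raise_for_status()
        data = resp.json()["data"]
        for item in data.get("items", []):
            yield item.get("global_id")
        start += per_page
        if start >= data.get("total_count", 0):
            break

# "installations.csv" is a hypothetical file with the two columns described above.
with open("installations.csv", newline="") as f:
    for row in csv.DictReader(f):
        for pid in list_dataset_pids(row["hostname"].rstrip("/"), row.get("apikey")):
            print(row["hostname"], pid)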
A codebook for CrowdTangle, a social media content discovery and analytics platform. The data describes aggregated interactions with Facebook and Instagram posts from public pages, public groups, or public people, including user reactions, shares, comments, and comparisons to a benchmark. Pages are included if they exceed 110k likes/followers, or if a user has previously added them to CrowdTangle.
The Harvard Art Museums API is a REST-style service designed for developers who wish to explore and integrate the museums' collections in their projects. The API provides direct access to JSON formatted data that describes many aspects of the museums. Details at http://www.harvardartmuseums.org/collections/api and https://github.com/harvardartmuseums/api-docs.
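As a brief, hedged example (endpoint and field names per the linked documentation; the API key is a placeholder), a Python request for a few object records might look like this:

import requests

API_KEY = "YOUR_API_KEY"  # placeholder; request a key via the museums' API sign-up page

resp = requests.get(
    "https://api.harvardartmuseums.org/object",
    params={"apikey": API_KEY, "size": 5},
    timeout=30,
)
resp.raise_for_status()
for record in resp.json().get("records", []):
    print(record.get("objectnumber"), "-", record.get("title"))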
SUPER DADA is a bash script that adapts XML-DDI metadata files produced by Dataverse in order to make them compliant with the technical requirements of the CESSDA Data Catalogue (CDC). This version of the script is geared towards versions 5+ of Dataverse. In its current state, SUPER DADA modifies XML-DDI files produced by a version 5+ Dataverse installation so that the files become fully compliant with the 'BASIC' level of validation (or 'validation gate') of the CESSDA Metadata Validator against the CESSDA Data Catalogue (CDC) DDI 2.5 Profile 1.0.4. See the README file for technical details and specifications.
CC0 1.0 Universal Public Domain Dedication (https://creativecommons.org/publicdomain/zero/1.0/)
License information was derived automatically
Study information

Design ideation study (N = 24) using eye tracking technology. Participants solved a total of twelve design problems while receiving inspirational stimuli on a monitor. Their task was to generate as many solutions to each problem as possible and to briefly explain each solution by thinking aloud. The study allows for further insight into how inspirational stimuli improve idea fluency during design ideation. This dataset features processed data from the experiment. Eye tracking data includes gaze data, fixation data, blink data, and pupillometry data for all participants. The study is based on the following research paper and follows the same experimental setup: Goucher-Lambert, K., Moss, J., & Cagan, J. (2019). A neuroimaging investigation of design ideation with and without inspirational stimuli—understanding the meaning of near and far stimuli. Design Studies, 60, 1-38. DOI

Dataset

Most files in the dataset are saved as CSV files or other human-readable file formats. Large files are saved in Hierarchical Data Format (HDF5/H5) to allow for smaller file sizes and higher compression. All data is described thoroughly in 00_ReadMe.txt. The following processed data is included in the dataset:

Concatenated annotations file of the experimental flow for all participants (CSV).
All eye tracking raw data in concatenated files, annotated with only the participant ID (CSV/HDF5).
Annotated eye tracking data for ideation routines only, a subset of the files above (CSV/HDF5).
Audio transcriptions of each recording, with annotations, from the Google Cloud Speech-to-Text API (CSV).
Raw API response for each transcription, including the time offset for each word in a recording (JSON).
Data for questionnaire feedback and ideas generated during the experiment (CSV).
Data for the post-experiment survey, including demographic information (TSV).

Python code used for the open-source experimental setup and dataset construction is hosted on GitHub. The repository also includes the code used to further process the dataset.
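Since the large files are distributed as HDF5 and the rest mostly as CSV, a minimal, hedged loading sketch in Python is shown below; the file names are placeholders, and 00_ReadMe.txt documents the dataset's actual layout.

import h5py
import pandas as pd

# Placeholder file names; see 00_ReadMe.txt for the real file names and structure.
annotations = pd.read_csv("annotations_all_participants.csv")
print(annotations.head())

with h5py.File("eyetracking_all_participants.h5", "r") as f:
    print(list(f.keys()))  # inspect the stored groups/datasets before reading them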
The Dataverse extension for CKAN facilitates integration and interaction with Dataverse installations. It likely empowers users to connect their CKAN instance with Dataverse repositories, potentially allowing for the discovery, harvesting, and management of datasets residing in Dataverse. Given the limited information, the exact features and capabilities will need to be derived from the source code.

Key Features (Assumed Based on Extension Name):
Dataverse Integration: Likely provides functionality to connect to and interact with remote Dataverse instances, potentially including retrieving metadata about published datasets.
Dataset Discovery: May include tools to search and discover datasets within connected Dataverse repositories directly from the CKAN interface.
Data Harvesting (Potential): Could offer data harvesting capabilities, making it possible to import datasets from Dataverse into CKAN for centralized management.

Technical Integration (Limited Information): The exact integration methods are unclear, but the extension likely uses CKAN's plugin system and API to add functionality for managing Dataverse interactions, and may involve configuration settings to specify Dataverse endpoints and credentials. Given that it is a GeoSolutions extension, there may be related GeoServer functionality if CKAN and Dataverse can be integrated or configured to share common workflows.

Benefits & Impact (Inferred): Connecting CKAN with Dataverse could promote data accessibility and interoperability between the platforms. It lets users take advantage of both systems' capabilities, potentially enabling seamless transfer of datasets and catalog information and broader collaboration across a wide variety of systems.
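Purely as an illustration of the kind of harvesting described above, and not the extension's actual mechanism, the Python sketch below pulls basic dataset metadata from a Dataverse installation's Search API and registers it in CKAN via the ckanapi client; the hostnames, collection alias, and API key are placeholders.

import requests
from ckanapi import RemoteCKAN

DATAVERSE = "https://demo.dataverse.org"   # placeholder Dataverse installation
CKAN_URL = "https://ckan.example.org"      # placeholder CKAN instance
CKAN_KEY = "CKAN_API_KEY"                  # placeholder CKAN API key

# Ask the Dataverse Search API for datasets in a given collection ("subtree" alias).
items = requests.get(
    f"{DATAVERSE}/api/search",
    params={"q": "*", "type": "dataset", "subtree": "demo", "per_page": 10},
    timeout=30,
).json()["data"]["items"]

ckan = RemoteCKAN(CKAN_URL, apikey=CKAN_KEY)
for item in items:
    # CKAN package names must be lowercase and URL-safe.
    name = item["global_id"].lower().replace(":", "-").replace("/", "-").replace(".", "-")
    ckan.action.package_create(
        name=name,
        title=item["name"],
        notes=item.get("description", ""),
        extras=[{"key": "dataverse_pid", "value": item["global_id"]}],
    )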
https://dataverse-training.tdl.org/api/datasets/:persistentId/versions/1.0/customlicense?persistentId=doi:10.5072/FK2/GTP7VO
Test data set for training purposes
ADE & API data and Appendix for ADE. Visit https://dataone.org/datasets/sha256%3A1c1bfe7650e379e0780425b4ba4ad4df193c7e63985eae0c7acefa6aec57edbb for complete metadata about this dataset.
https://spdx.org/licenses/etalab-2.0.html
The Sicpa_OpenData libraries facilitate the publication of data to the INRAE Dataverse in a transparent way: 1) by simplifying the creation of the metadata document from data already present in the information systems, and 2) by simplifying the use of the dataverse.org APIs. Available as a DLL, the SicpaOpenData for .NET library can be used from any development targeting the Microsoft .NET platform.
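For illustration only (in Python rather than .NET), the sketch below shows the kind of raw Dataverse native API call that such a library is described as simplifying, creating a dataset from a prepared metadata document; the installation URL, collection alias, token, and metadata file are placeholders.

import json
import requests

DATAVERSE = "https://data.inrae.fr"   # placeholder installation URL
COLLECTION = "my-collection"          # placeholder Dataverse collection alias
API_TOKEN = "XXXX-XXXX-XXXX"          # placeholder API token

# "dataset-metadata.json" stands in for a Dataverse-JSON metadata document
# built from records already present in the information system.
with open("dataset-metadata.json") as f:
    metadata = json.load(f)

resp = requests.post(
    f"{DATAVERSE}/api/dataverses/{COLLECTION}/datasets",
    headers={"X-Dataverse-key": API_TOKEN},
    json=metadata,
    timeout=60,
)
resp.raise_for_status()
print("Created dataset:", resp.json()["data"]["persistentId"])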
CC0 1.0 Universal Public Domain Dedication (https://creativecommons.org/publicdomain/zero/1.0/)
License information was derived automatically
Choice and no-choice experimental data with nurse bees
This article describes novel open source tools for open data publication in open access journal workflows. These comprise a plugin for Open Journal Systems that supports a data submission, citation, review, and publication workflow, and an extension to the Dataverse system that provides a standard deposit API. We describe the function and design of these tools, provide examples of their use, and summarize their initial reception. We conclude by discussing future plans and potential impact.
https://dataverse-training.tdl.org/api/datasets/:persistentId/versions/1.0/customlicense?persistentId=doi:10.33536/FK2/4S3YRO
A young man's life story.
The Course Planner API allows developers to create applications that interact with Course Planner data. Using this API, you can build applications that allow your users (who are enrolled Harvard College/GSAS students) to add courses to their Course Planner, view the courses that are in the Course Planner, and remove courses.
https://dataverse.unimi.it/api/datasets/:persistentId/versions/1.1/customlicense?persistentId=doi:10.13130/RD_UNIMI/CE0D2S
This dataset regards the data management plan. The first draft was prepared by UMIL and circulated among partners to be discussed and amended until complete consensus was reached. The plan is regularly updated and modified as necessary.
https://dataverse.unimi.it/api/datasets/:persistentId/versions/1.1/customlicense?persistentId=doi:10.13130/RD_UNIMI/YMN1UZ
This dataset collects the experimental design, the raw and processed data, and the results regarding the mechanical properties of hydrogels.
This document describes the CrowdTangle API and user interface being provided to researchers by Social Science One under its collaboration framework with Facebook. CrowdTangle is a content discovery and analytics platform designed to give content creators the data and insights they need to succeed. The CrowdTangle API surfaces stories and data to measure their social performance and identify influencers. This codebook describes the data's scope, structure, and fields.
DASH is Harvard's digital repository for scholarly articles, theses and dissertations, and other literature generated by Harvard affiliates. Harvard Library makes the bibliographic data openly available for all uses, with a standard set of APIs.
CC0 1.0 Universal Public Domain Dedication (https://creativecommons.org/publicdomain/zero/1.0/)
License information was derived automatically
This data set contains the IDs of the 1,186,322 tweets used in "Climate Nags: Affect and the Convergence of Global Risk in Online Networks" (published in Continuum, 2023). The data was collected from Twitter's Streaming API using DMI-TCAT during the first four months of the Coronavirus pandemic, during the 2020 U.S. presidential race, and during the early stages of the 2022 Russia–Ukraine War. These collections were then filtered based on keywords related to climate change (see the README file for more details).
https://dataverse-unimi-restore2.4science.cloud/api/datasets/:persistentId/versions/1.1/customlicense?persistentId=doi:10.13130/RD_UNIMI/OUUEA2
This dataset gathers the experimental design, the raw and processed data, and the results regarding the permeability assay of large and small compounds through hydrogel films.
https://dataverse.nl/api/datasets/:persistentId/versions/2.0/customlicense?persistentId=doi:10.34894/AFYDEK
Abstract and poster of paper 0681 presented at the Digital Humanities Conference 2019 (DH2019), Utrecht, the Netherlands, 9-12 July 2019.