The Archival Descriptions data set from the National Archives Catalog provides archival descriptions of the permanent holdings of the federal government in the custody of the National Archives and Records Administration. The archival descriptions include information on traditional paper holdings, logical data records (electronic records), and artifacts.
https://market.us/privacy-policy/
The Health Data Archiving Market is expected to grow from USD 1.4 billion in 2023 to around USD 3.4 billion by 2033.
To preserve NEFSC historical data, images of biological and oceanographic data sheets (1948-1975) were scanned to digital format and can be queried through a portal on the NEFSC website. Images may include: cruise instructions, cruise tracks, original trawl logs, length frequency data sheets, cruise notes, tagging information and fisherman reports.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Open access and open data are becoming more prominent on the global research agenda. Funders are increasingly requiring grantees to deposit their raw research data in appropriate public archives or stores in order to facilitate the validation of results and further work by other researchers.
While the rise of open access has fundamentally changed the academic publishing landscape, policies around data are reigniting the conversation about what universities can and should do to protect the assets generated at their institutions. The main difference between an open access policy and an open data policy is that there is no existing precedent or status quo for how academia disseminates research outputs that are not traditional 'paper' publications.
As governments and research funders see the benefits of open content, recommendations, mandates, and the enforcement of mandates are coming thick and fast.
This archive publishes and preserves short- and long-term research data collected from studies funded by:
Each archived data set (i.e., a 'data publication') contains at least one data set, complete metadata for the data set(s), and any other documentation the researcher deemed important for understanding the data set(s). The data catalog entries present the metadata and a link to the data; in some cases the data link points to a different archive.
This is an export of the data archived from the 2022 National Incident Feature Service. Sensitive fields and features have been removed. Each edit to a feature is captured in the Archive; the GDB_FROM and GDB_TO fields show the date range during which the feature existed in the National Incident Feature Service. The National Incident Feature Service is based on the National Wildfire Coordinating Group (NWCG) data standard for Wildland Fire Event. The Wildland Fire Event data standard defines the minimum attributes necessary for the collection, storage, and dissemination of incident-based data on wildland fires (wildfires and prescribed fires). The standard is not intended for long-term data storage; rather, it is meant to assist in the creation of incident-based data management tools, to set minimum standards for data exchange, and to help users meet the NWCG Standards for Geospatial Operations (PMS 936).
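As a minimal sketch of how the GDB_FROM and GDB_TO archive fields can be used, the snippet below filters the exported features to those that existed in the service on a chosen date. The file name, the GeoJSON format, and the geopandas workflow are assumptions for illustration; the actual export is a geodatabase whose layers may need to be converted first.

```python
# A minimal sketch, assuming the geodatabase export has been converted to
# GeoJSON (file name is hypothetical). Select the features that existed in
# the National Incident Feature Service on a chosen date, using the
# GDB_FROM / GDB_TO archive fields described above.
import geopandas as gpd
import pandas as pd

features = gpd.read_file("national_incident_archive_2022.geojson")

# Parse the archive date-range fields; a missing GDB_TO is treated as "still present".
features["GDB_FROM"] = pd.to_datetime(features["GDB_FROM"], errors="coerce")
features["GDB_TO"] = pd.to_datetime(features["GDB_TO"], errors="coerce")

as_of = pd.Timestamp("2022-08-15")
active = features[
    (features["GDB_FROM"] <= as_of)
    & (features["GDB_TO"].isna() | (features["GDB_TO"] >= as_of))
]
print(f"{len(active)} feature versions were present in the service on {as_of.date()}")
```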
Attribution-NonCommercial 4.0 (CC BY-NC 4.0): https://creativecommons.org/licenses/by-nc/4.0/
License information was derived automatically
Microdata of the ANTIELAB Research Data Archive - Teargas Map
Field definitions:
- event_id: a unique event identifier
- time_interval: the time interval within which the event happened
- district: one of the 18 Hong Kong districts (in Chinese)
- location1-3: specific locations identified in the post(s) (in Chinese)
The shapefile contains the WGS84 coordinates of each identified location of every event. You can look up the geographical coordinates of the events by matching the field 'event_id' in the shapefile with that in the CSV file. The coordinates are provided by Google's Geocoding API and Places API.
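As a rough sketch of the look-up described above, the snippet below joins the geocoded points in the shapefile to the event records in the CSV on the shared event identifier. The file names are hypothetical, and the column names follow the field definitions given earlier.

```python
# A minimal sketch, assuming hypothetical file names. Join the geocoded points
# in the shapefile to the event records in the CSV via the shared event_id
# field (adjust the column name if the CSV spells it 'event-id').
import geopandas as gpd
import pandas as pd

events = pd.read_csv("antielab_teargas_events.csv")
locations = gpd.read_file("antielab_teargas_locations.shp")

# The shapefile may hold several identified locations per event.
merged = locations.merge(events, on="event_id", how="left")

# WGS84 longitude/latitude of each identified location.
merged["lon"] = merged.geometry.x
merged["lat"] = merged.geometry.y
print(merged[["event_id", "time_interval", "district", "lon", "lat"]].head())
```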
December 7, 2022: This updated version fixes a data format issue that occurred when exporting the coordinates to the data repository. It also makes the date format of the events consistent to avoid misinterpretation by users. These changes do not affect the analysis or the results of the original paper.
Reference: Teo, E., Fu, KW. (2021) A novel systematic approach of constructing protests repertoires from social media: comparing the roles of organizational and non-organizational actors in social movement. J Comput Soc Sc. https://link.springer.com/article/10.1007/s42001-021-00101-3
This is an export of the data archived from the 2024 National Incident Feature Service. Sensitive fields and features have been removed. Each edit to a feature is captured in the Archive; the GDB_FROM and GDB_TO fields show the date range during which the feature existed in the National Incident Feature Service. The National Incident Feature Service is based on the National Wildfire Coordinating Group (NWCG) data standard for Wildland Fire Event. The Wildland Fire Event data standard defines the minimum attributes necessary for the collection, storage, and dissemination of incident-based data on wildland fires (wildfires and prescribed fires). The standard is not intended for long-term data storage; rather, it is meant to assist in the creation of incident-based data management tools, to set minimum standards for data exchange, and to help users meet the NWCG Standards for Geospatial Operations (PMS 936).
https://collections.albert-kahn.hauts-de-seine.fr/cms/cgu_media
The musée départemental Albert-Kahn preserves the Archives de la Planète, a collection of still and moving images produced at the beginning of the 20th century and devoted to the diversity of peoples and cultures.
Images in the service of the pursuit of peace and dialogue between cultures. "Stereoscopic photography, projections, and above all the cinematograph: this is what I would like to operate on a large scale in order to fix, once and for all, aspects, practices, and modes of human activity whose inevitable disappearance is only a question of time." Albert Kahn, January 1912. Albert Kahn was driven by an ideal of universal peace. His conviction: knowledge of foreign cultures encourages respect and peaceful relations between peoples. He also perceived very early on that his era would witness the accelerated transformation of societies and the disappearance of certain ways of life. He therefore created the Archives de la Planète, the fruit of the work of a dozen camera operators sent into the field between 1909 and 1931 to capture the various cultural realities in some fifty countries. The ambition of the project led him to entrust its scientific direction to the geographer Jean Brunhes (1869-1930), one of the promoters of human geography in France. Two inventions by the Lumière brothers were put to use: the cinematograph (1895) and the autochrome, the first natural-colour photographic process (1907). The Archives de la Planète comprise around one hundred hours of film, 4,000 stereoscopic plates, and 72,000 autochromes, making it the largest collection of its kind in the world.
Special notes
Access to the fonds is subject to certain licence conditions. These are described in the general terms of use via the link below:
Fonds under a CC0 licence
Fonds under a CC-BY licence
Fonds not available for distribution
Fonds not freely reusable (rights reserved)
Fonds that cannot be distributed or freely reused are not visible; only the information describing the fonds is available.
https://www.icpsr.umich.edu/web/ICPSR/studies/38858/terms
These datasets contain measures of weather by county in the United States for the years 2003-2016. Measures include average daily temperature, freezing days, cold days, hot days, rainy days, and snowy days.
CC0 1.0 Universal Public Domain Dedication: https://creativecommons.org/publicdomain/zero/1.0/
License information was derived automatically
Abstract: The data underlying scientific papers should be accessible to researchers both now and in the future, but how best can we ensure that these data are available? Here we examine the effectiveness of four approaches to data archiving: no stated archiving policy, recommending (but not requiring) archiving, and two versions of mandating data deposition at acceptance. We control for differences between data types by trying to obtain data from papers that use a single, widespread population genetic analysis, STRUCTURE. At one extreme, we found that mandated data archiving policies that require the inclusion of a data availability statement in the manuscript improve the odds of finding the data online almost 1000-fold compared to having no policy. However, archiving rates at journals with less stringent policies were only very slightly higher than those with no policy at all. We also assessed the effectiveness of asking for data directly from authors and obtained over half of the requested datasets, albeit with ∼8 d delay and some disagreement with authors. Given the long-term benefits of data accessibility to the academic community, we believe that journal-based mandatory data archiving policies and mandatory data availability statements should be more widely adopted.
Usage notes:
- Journal policies: a file giving the data archiving policies of the journals covered in the study.
- Data request protocol: the sequence of emails used to request data from authors.
- Vines_et_al_Rcode_4th_Jan: the R code used in the statistical analyses.
- Vinesetal_data_4th Jan: the data used in the statistical analyses.
Data archive to assist in the sharing of research grade information pertaining to the social and economic sciences. The majority of digital content currently consists of social science research data from experiments, program files with the code for analyzing the data, requisite documentation to use and understand the data, and associated files. Access to the ISPS Data Archive is provided at no cost and is granted for scholarship and research purposes only. When possible, Data is linked to Projects and Publications, via the ISPS KnowledgeBase. ISPS operates in accordance with the prevailing standards and practices of the digital preservation community including the Open Archival Information System (OAIS) Reference Model (ISO 14721:2003) and the Data Documentation Initiative (DDI) standard. Accordingly, ISPS supports digital life-cycle management, interoperability, and preferred methods of preservation. The ISPS Data Archive is intended for use by social science researchers, policy-makers, and practitioners who are conducting or analyzing field (and other) experiments in various social science disciplines. Currently, Replication Files originate with ISPS-affiliated scholars.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Datasets provided to the DASSH Data Archive Centre from Statutory Agencies. Data comply with MEDIN standards (www.oceannet.org) and further assets (images, video, bathymetry) may be available.
https://www.verifiedmarketresearch.com/privacy-policy/
Data Archiving Software Market size was valued at USD 8.29 Billion in 2023 and is projected to reach USD 14.36 Billion by 2031, growing at a CAGR of 11.6% during the forecast period 2024-2031.
Global Data Archiving Software Market Drivers
The Data Archiving Software Market is influenced by various factors. These may include:
Regulatory Compliance: Regulatory compliance is a significant driver in the data archiving software market due to the increasing number of laws and guidelines that mandate proper data management and retention. Organizations across various industries, such as finance, healthcare, and legal sectors, must adhere to stringent regulations like GDPR, HIPAA, and Sarbanes-Oxley. These regulations require businesses to maintain accurate records, make data easily accessible for audits, and protect sensitive information. Non-compliance can result in hefty fines and legal repercussions, thus creating a robust demand for data archiving solutions that facilitate regulatory adherence. Data archiving software offers features like tamper-proof storage, data encryption, audit trails, and compliance reporting, ensuring that organizations can meet regulatory standards effectively and efficiently. As regulations continue to evolve and become more complex, the need for sophisticated, reliable data archiving solutions will only increase, driving further growth in the market.
Data Growth: Exponential data growth is another crucial driver for the data archiving software market. With the proliferation of digital technologies, social media, IoT devices, and advanced analytics, the volume of data generated by organizations is skyrocketing. Managing this vast amount of data presents a significant challenge. Companies need efficient ways to store, organize, and retrieve data not only for operational needs but also for historical reference, legal requirements, and business analytics. Data archiving software helps organizations tackle this challenge by systematically storing infrequently accessed or inactive data, thereby freeing up primary storage systems and optimizing overall data management. The software ensures that archived data remains accessible and secure over time while also enabling organizations to meet long-term data retention policies. As data generation continues to accelerate, the demand for efficient data archiving solutions that can handle large volumes of diverse data types will continue to rise.
Cost Reduction: Cost reduction is a compelling driver in the adoption of data archiving software. Storing vast amounts of data on high-performance primary storage systems can be prohibitively expensive. Data archiving software allows organizations to move less frequently accessed data to more cost-effective, long-term storage solutions without compromising accessibility or data integrity. This process not only frees up valuable primary storage resources for more critical applications but also significantly reduces overall storage costs. Additionally, it helps minimize the need for frequent hardware upgrades and lowers data management overheads. By streamlining data storage and optimizing storage infrastructure, companies can achieve substantial savings while ensuring that valuable data is preserved and remains retrievable. In an economic environment where cost-efficiency is paramount, the financial benefits provided by data archiving solutions are a strong incentive for their adoption.
Disaster Recovery: Disaster recovery is a fundamental driver for the data archiving software market. Organizations must be prepared for unexpected events such as data breaches, natural disasters, or system failures that can result in data loss or corruption. Effective disaster recovery strategies require that critical data is stored in secure, redundant locations and can be quickly restored to maintain business continuity. Data archiving software plays a vital role in these strategies by ensuring that historical and essential data is preserved in a secure, off-site location, often in the cloud. This redundancy ensures that a copy of the organization’s valuable data is always available, facilitating quick recovery in the event of a disaster. The software’s ability to support regular backups, data integrity checks, and rapid restoration processes enhances an organization’s resilience against data loss incidents. With increasing threats to data from both cyber-attacks and environmental factors, robust disaster recovery capabilities are becoming a top priority, driving the need for advanced data archiving solutions.
Data Management and Analytics: Enhanced data management and analytics capabilities for historical data.
Cloud Adoption: Growing adoption of cloud-based archiving solutions for scalability and flexibility.
Technological Advancements: Advancements in data archiving technologies and integration capabilities.
CC0 1.0 Universal Public Domain Dedication: https://creativecommons.org/publicdomain/zero/1.0/
License information was derived automatically
Link to official OMI data archive at NASA. https://disc.gsfc.nasa.gov/datasets?keywords=omi&page=1
This Data Guide is aimed at those engaged in the cycle of social science research, from applying for a research grant, through the data collection phase, and ultimately to preparation of the data for deposit in the DANS data archive, or any other data repository.
This publication is an adaptation of the fourth edition (2009) of the Guide to Social Science Data Preparation and Archiving, published by the Inter-university Consortium for Political and Social Research (ICPSR) at the University of Michigan in the United States.
The publication is intended to help researchers manage and document their data to prepare them for archival deposit, as well as think more broadly about the types of digital content that should be deposited in an archive.
Neutron scattering data from NCNR's thermal and cold neutron scattering instruments.
https://www.nist.gov/open/license
ThermoML is an XML-based IUPAC standard for the storage and exchange of experimental thermophysical and thermochemical property data. The ThermoML archive is a subset of Thermodynamics Research Center (TRC) data holdings corresponding to cooperation between NIST TRC and five journals: Journal of Chemical & Engineering Data (ISSN: 1520-5134), The Journal of Chemical Thermodynamics (ISSN: 1096-3626), Fluid Phase Equilibria (ISSN: 0378-3812), Thermochimica Acta (ISSN: 0040-6031), and International Journal of Thermophysics (ISSN: 1572-9567). Data from the start of the cooperation (around 2003) through the 2019 calendar year are included.

The original scope of the archive has been expanded to include JSON files. The JSON files are structured according to the ThermoML.xsd (available below) and are rendered from the same experimental thermophysical and thermochemical property data reported in the corresponding articles as the ThermoML files. In fact, the ThermoML files are generated from the JSON files to keep the information in sync. The JSON files may contain additional information not supported by the ThermoML schema; for example, each JSON file contains the MD5 checksum of the ThermoML file (THERMOML_MD5_CHECKSUM), which may be used to validate the ThermoML download.

This data.nist.gov resource provides a .tgz file download containing the JSON and ThermoML files for each version of the archive. Data from the start of the cooperation (around 2003) through the 2019 calendar year are provided below (ThermoML.v2020-09.30.tgz). The dates of the extraction from TRC databases, as specified in the dateCit field of the XML files, are 2020-09-29 and 2020-09-30. The .tgz file contains a directory tree that maps to the DOI prefix/suffix of the entries; e.g., unzipping the .tgz file creates a directory for each of the prefixes (10.1007, 10.1016, and 10.1021) that contains all the .json and .xml files.

The data and other information throughout this digital resource (including the website, API, JSON, and ThermoML files) have been carefully extracted from the original articles by NIST/TRC personnel. Neither the journal publishers, nor their editors, nor NIST/TRC warrant or represent, expressly or implied, the correctness or accuracy of the content of information contained throughout this digital resource, nor its fitness for any use or for any purpose, nor can they, or will they, accept any liability or responsibility whatever for the consequences of its use or misuse by anyone. In any individual case of application, the respective user must check the correctness by consulting other relevant sources of information.
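As an illustrative sketch (not part of the archive documentation), the snippet below uses the THERMOML_MD5_CHECKSUM field from a JSON entry to validate the corresponding ThermoML (.xml) download. The file paths are hypothetical examples of entries unpacked from the .tgz.

```python
# A minimal sketch with hypothetical file paths. Validate a ThermoML (.xml)
# file against the MD5 checksum stored in its companion JSON entry
# (THERMOML_MD5_CHECKSUM, as described above).
import hashlib
import json

json_path = "10.1021/example_entry.json"  # hypothetical entry from the unpacked .tgz
xml_path = "10.1021/example_entry.xml"

with open(json_path) as fh:
    expected = json.load(fh)["THERMOML_MD5_CHECKSUM"]

with open(xml_path, "rb") as fh:
    actual = hashlib.md5(fh.read()).hexdigest()

print("checksum OK" if actual == expected else f"mismatch: {actual} != {expected}")
```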