Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
The FAIR principles were published in 2016 in a Scientific Data article titled ‘The FAIR Guiding Principles for scientific data management and stewardship’. They were developed to aid the discovery and reuse of research data. FAIR stands for Findable, Accessible, Interoperable, and Reusable. Data that meet these principles are easier to discover and reuse, which in turn increases your research’s exposure. Here’s how your data is more FAIR when it’s on Figshare. Illustration by Jason McDermott of RedPenBlackPen.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
This record contains the published article ‘The FAIR Guiding Principles for scientific data management and stewardship’.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
As part of the EOSC project family the FAIRsFAIR - Fostering Fair Data Practices in Europe - project aims to supply practical solutions for the use of the FAIR data principles throughout the research data life cycle. The work package "WP2 FAIR Practices: Semantics, Interoperability, and Services" will produce three reports on FAIR requirements for persistence and interoperability to identify domain-specific standards and practices in use. These will review and document commonalities and possible gaps regarding semantic interoperability, and the use of metadata and persistent identifiers across infrastructures. They will also look into differences in terms of standards, vocabularies and ontologies. The collected information will be updated during the course of the project in cooperation with other tasks and EOSC projects.
This survey was done to complement and validate the information from desk research for the first of these reports. It was aimed at data managers and data support experts. We hoped to get information about tools and services we might have missed, but also some reflections on the thinking around identifiers and ontologies and other semantic artefacts. The information was also collected to support preparing workshops on semantics and interoperability that are forthcoming in the project, as well as the work on software and services. The survey covers questions about metadata, use of persistent identifiers, use of semantic artefacts and handling research software.
The survey was conducted as a joint effort with WP3, FAIR Policy and Practice, and its open consultation, and was disseminated on the fairsfair.eu web pages, social media channels and via email lists. We received 66 answers during the period the survey was open, between 15 July and 2 October 2019.
Introductory lecture, given on April 12th 2021 at The Empirical Study of Literature Innovative Training Network (ELIT) training program - H2020-MSCA-ITN-2019. This project has received funding from the European Union’s Horizon 2020 research and innovation programme under the Marie Skłodowska-Curie grant agreement No 860516.
References:
- E. Lazzeri, F. Di Donato, "Open Science: Why it is important", doi:10.5281/zenodo.4317277
- OpenAIRE Guidelines, "How do I know if my research data is protected", https://www.openaire.eu/how-do-i-know-if-my-research-data-is-protected
- E. Giglia, "Open Science, FAIR data, data management: what's in it for me?", 2019, doi:10.5281/zenodo.3618364
The U.S. Merit Systems Protection Board (MSPB) has the statutory responsibility to assess the health of Federal merit systems and the authority to conduct special studies of the Federal civil service (see 5 U.S.C. 1204(a)(3) and 5 U.S.C. 1204(e)(3)). MSPB administers a periodic Merit Principles Survey (MPS) to help carry out those studies. Those studies, including summaries and analyses of data from the MPS, are officially submitted to the President and Congress and shared with Federal policymakers and agencies.
Objective: To assess the use of Health Level Seven Fast Healthcare Interoperability Resources (FHIR®) for implementing the Findable, Accessible, Interoperable, and Reusable guiding principles for scientific data (FAIR). Additionally, to present a list of FAIR implementation choices for supporting future FAIR implementations that use FHIR.
Material and Methods: A case study was conducted on the Medical Information Mart for Intensive Care-IV Emergency Department dataset (MIMIC-ED), a deidentified clinical dataset converted into FHIR. The FAIRness of this dataset was assessed using a set of common FAIR assessment indicators.
Results: The FHIR distribution of MIMIC-ED, comprising an implementation guide and demo data, was more FAIR compared to the non-FHIR distribution. The FAIRness score increased from 60 to 82 out of 95 points, a relative improvement of 37%. The most notable improvements were observed in interoperability, with a score increase from 5 to 19 out of 19 points, and reusability, wit...
The authors of the paper collected the dataset.
Software: Microsoft Word (.docx files) or Microsoft Excel (.csv files); open-source alternatives: LibreOffice, OpenOffice. The data files (.csv) can also be opened using any text editor, R, etc.
# FAIR Indicator Scores and Qualitative Comments
This dataset belongs as supplementary material to the paper entitled "Assessing the Use of HL7 FHIR for Implementing the FAIR Guiding Principles: A Case Study of the MIMIC-IV Emergency Department Module".
This dataset describes the indicator scores and qualitative comments of the FAIR data assessment of the Medical Information Mart for Intensive Care (MIMIC)-IV Emergency Department Module. Two distributions of the Emergency Department module were assessed, the PhysioNet distribution and the Fast Healthcare Interoperability Resources (FHIR) distribution. This dataset consists of two files: (1) PhysioNet.csv containing the data of the PhysioNet distribution; and (2) FHIR.csv containing the data of the FHIR distribution. Both files share the same structure and fields.
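The relative improvement quoted in the abstract can be verified with a quick calculation; this minimal sketch uses only the scores stated above (60 and 82 out of 95 points):

```python
# FAIRness scores quoted in the abstract (each out of 95 points)
before = 60  # non-FHIR (PhysioNet) distribution
after = 82   # FHIR distribution

# Relative improvement is measured against the starting score
relative_improvement = (after - before) / before * 100
print(round(relative_improvement))  # rounds to 37, matching the reported 37%
```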
This statistic ranks the share of internet users worldwide who are aware of their country's data protection and privacy rules as of February 2019, sorted by country. During the survey period, 59 percent of respondents in Germany were very or somewhat aware of their domestic data protection and privacy rules.
MPS contains a combination of core items that MSPB tracks over time and special-purpose items developed to support a particular special study. This survey differs from the Federal Employee Viewpoint Survey administered by OPM in several respects, including: a focus on merit system principles and Governmentwide civil service issues; administration every few years instead of annually; and a smaller sample. Agency participation in the MPS was mandatory, but individual response to the survey was voluntary.
U.S. Government Works: https://www.usa.gov/government-works
License information was derived automatically
Eindhoven open data principles
a. Data in the public space (hereinafter: “data”) belongs to everyone. This data is public good. Data that is collected, generated or measured (for example by sensors placed in public spaces) must be made available so that everyone can use it for commercial and non-commercial purposes. A privacy and security consideration must be made.
b. Data may contain personal data. This data can therefore affect the lives of people. The rules of the Personal Data Protection Act apply to this. This data must only be made available after this data has been processed in such a way (for example anonymized or aggregated) that there are no longer any privacy risks.
c. Data that does pose privacy or security risks may only be processed within the framework of privacy legislation. Storage and processing of data must be carried out in accordance with existing legislation.
d. Data that no longer contains personal data must be made available in such a way that everyone has equal access to it (for example via an Open Data portal). We call this opening up data. No technical or legal barriers may be put in place that prevent, restrict or discriminate against access to the data.
e. Data is always made available free of charge, without unnecessary processing (where possible in raw form) and according to functional and technical requirements to be determined.
f. A distinction is made with personal data (such as an e-mail address or payment details) that are collected with the conscious knowledge and after explicit consent of people. Use of this data is determined by an agreement between the parties involved within the framework of privacy legislation (such as a user agreement).
g. The municipality always has insight into which data is collected in the public space, regardless of whether the data has been made available or not.
h. The municipality remains in dialogue with the parties that contribute to the data infrastructure in the city and strives to create earning opportunities and a fertile economic climate.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Data sets accompanying the paper "The FAIR Assessment Conundrum: Reflections on Tools and Metrics", an analysis of a comprehensive set of FAIR assessment tools and the metrics used by these tools for the assessment.
The data set "metrics.csv" consists of the metrics collected from several sources linked to the analysed FAIR assessments tools. It is structured into 11 columns: (i) tool_id, (ii) tool_name, (iii) metric_discarded, (iv) metric_fairness_scope_declared, (v) metric_fairness_scope_observed, (vi) metric_id, (vii) metric_text, (viii) metric_technology, (ix) metric_approach, (x) last_accessed_date, and (xi) provenance.
The columns tool_id and tool_name are used for the identifier we assigned to each tool analysed and the full name of the tool respectively.
The metric_discarded column records the selection we applied to the collected metrics: we excluded metrics created for testing purposes or written in a language other than English. The values are boolean; we assigned TRUE if the metric was discarded.
The columns metric_fairness_scope_declared and metric_fairness_scope_observed are used for indicating the declared intent of the metrics, with respect to the FAIR principle assessed, and the one we observed respectively. Possible values are: (a) a letter of the FAIR acronym (for the metrics without a link declared to a specific FAIR principle), (b) one or more identifiers of the FAIR principles (F1, F2…), (c) n/a, if no FAIR references were declared, or (d) none, if no FAIR references were observed.
The metric_id and metric_text columns are used for the identifiers of the metrics and the textual and human-oriented content of the metrics respectively.
The column metric_technology is used for enumerating the technologies (a term used here in its widest sense) mentioned or used by the metrics for the specific assessment purpose. These technologies are very diverse, ranging from (meta)data formats to standards, semantic technologies, protocols, and services. For tools implementing automated assessments, the technologies listed also take into account the available code and documentation, not just the metric text.
The column metric_approach is used for identifying the type of implementation observed in the assessments. The identification of the implementation types followed a bottom-up approach applied to the metrics organised by the metric_fairness_scope_declared values. Consequently, while the labels used for creating the implementation type strings are the same, their combination and specialisation vary based on the characteristics of the actual set of metrics analysed. The main labels used are: (a) 3rd party service-based, (b) documentation-centred, (c) format-centred, (d) generic, (e) identifier-centred, (f) policy-centred, (g) protocol-centred, (h) metadata element-centred, (i) metadata schema-centred, (j) metadata value-centred, (k) service-centred, and (l) na.
The columns provenance and last_accessed_date are used for the main source of information about each metric (at least with regard to the text) and the date we last accessed it respectively.
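The selection described above (keeping only metrics with metric_discarded set to FALSE) can be sketched with the standard library's csv module. This is a minimal illustration: the column names follow the structure documented above, but the sample rows and their values are hypothetical, not taken from the actual data set.

```python
import csv
import io

# Hypothetical sample rows following the 11-column structure of metrics.csv
# described above (values are illustrative only, not from the real data set).
sample = """tool_id,tool_name,metric_discarded,metric_fairness_scope_declared,metric_fairness_scope_observed,metric_id,metric_text,metric_technology,metric_approach,last_accessed_date,provenance
T01,ToolA,FALSE,F1,F1,M01,Data has a persistent identifier,DOI,identifier-centred,2023-01-10,https://example.org/m01
T01,ToolA,TRUE,F,none,M02,Test metric,,generic,2023-01-10,https://example.org/m02
"""

rows = list(csv.DictReader(io.StringIO(sample)))

# Keep only metrics that were not discarded (metric_discarded == "FALSE")
kept = [r for r in rows if r["metric_discarded"] == "FALSE"]
print(len(kept))  # 1 metric survives the selection in this toy sample
```

In practice the file itself would be opened with `csv.DictReader(open("metrics.csv"))` instead of the in-memory sample.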
The data set "classified_technologies.csv" consists of the technologies mentioned or used by the metrics for the specific assessment purpose. It is structured into 3 columns: (i) technology, (ii) class, and (iii) discarded.
The column technology is used for the names of the different technologies mentioned or used by the metrics.
The column class is used for specifying the type of technology used. Possible values are: (a) application programming interface, (b) format, (c) identifier, (d) library, (e) licence, (f) protocol, (g) query language, (h) registry, (i) repository, (j) search engine, (k) semantic artefact, and (l) service.
The discarded column refers to the exclusion of the value 'linked data' from the accepted technologies since it is too generic. The possible values are boolean. We assigned TRUE if the technology was discarded.
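A similar filter applies to classified_technologies.csv: drop discarded entries, then summarise the remaining technologies per class. Again, the column names follow the structure documented above, while the sample rows are hypothetical illustrations.

```python
import csv
import io
from collections import Counter

# Hypothetical rows following the 3-column structure of
# classified_technologies.csv described above (illustrative values only).
sample = """technology,class,discarded
DOI,identifier,FALSE
OAI-PMH,protocol,FALSE
linked data,semantic artefact,TRUE
Dublin Core,semantic artefact,FALSE
"""

rows = list(csv.DictReader(io.StringIO(sample)))

# Drop discarded entries (e.g. 'linked data'), then count technologies per class
kept = [r for r in rows if r["discarded"] == "FALSE"]
per_class = Counter(r["class"] for r in kept)
print(per_class["semantic artefact"])  # 1 remains after 'linked data' is dropped
```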
https://doi.org/10.4121/resource:terms_of_use
Corresponding data-set to IDCC 2017 Practice Paper 'Are the FAIR-Principles fair?'. Excel Spreadsheet with overview and categories, frequency and proportion statistics, and graphs of 37 data repositories in the Netherlands and Europe. Compliance Evaluation is based on the principles and facets of the FAIR principles; re3data.org is the source for the data repositories.
This implementation story discusses approaches in the data archive world towards tackling the challenges of making data as FAIR as possible, when there are compelling reasons for the data to be restricted or unavailable. This topic was included in FAIRsFAIR deliverable D3.4, Recommendations on practice to support FAIR data principles, under the theme "Ensuring trusted curation of data". Within that set of recommendations, the FAIRsFAIR project committed to supporting change in good practice for researchers, repositories and ethics committees on selecting and preparing sensitive data to be FAIR. This implementation story aims to support that goal.
This chapter described at a very high level some of the considerations that need to be made when designing algorithms for a vehicle health management application. The choices made here affect the quality of the diagnosis and prognosis (covered in Chapter 7). Therefore, the algorithmic design choices are made in conjunction with the design choices for diagnostics and prognostics to optimally support these tasks. Furthermore, additional considerations imposed by computational constraints, resource availability, algorithm maintenance, need for algorithm re-tuning, etc. will impact the solutions. It should also be noted that technological advances, both in hardware and software, impose the need for new solutions. For example, as new materials and new sensors are being developed, the algorithmic solutions will need to follow suit. In general, there seems to be a trend to have more sensor data available. While this is potentially a good thing, sensor data provides value only when it is being processed and interpreted properly, in part by the techniques described here. Testing of the methods, however, requires the “right” kind of data. Generally, there is a lack of seeded fault data which are required to train and validate algorithms. It is also important to migrate information from the component to the subsystem to the system levels so that health management technologies can be applied effectively and efficiently at the vehicle level. It may be required to perform elements described in this chapter between different levels of the vehicle architecture.
This webinar is the first in a series of three Blue-Cloud 2026 Training Academy webinars on FAIR Data Principles, to be held from September 2023 to March 2024. The FAIR Guiding Principles for scientific data management and stewardship were first published in Scientific Data [Wilkinson et al (2016)]. The principles apply to both data and metadata and are: Findable, Accessible, Interoperable, Reusable. They put specific emphasis on enhancing the ability of machines to automatically find and use the data, in addition to supporting its reuse by individuals. In pursuit of FAIR, there is a need for increased efforts in optimisation, standardisation, best practices and harmonisation across methodologies for ocean data management, applications and digital assets. Webinar 1 explored the challenges and solutions in applying the FAIR foundational components on the journey from FAIR Principles to FAIR Practices to achieve FAIRification in the marine data community. It looked at the standards and practices supporting interoperability and efficiency which focus on the findability and accessibility of data/metadata. Standards and best practices play a crucial role in implementing the FAIR Principles by providing a consistent framework and guidelines to achieve FAIRness targets. Approaches varied, depending on the data lake, data space or repository structure and capabilities.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
This repository contains the datasets and documentation for the FAIR assessment conducted as part of the FAIR Phytoliths Project. This is the revised version after the integration of the reviewers' comments.
As a service manager how may I assist my organisation to make research data we hold both FAIR and “as open as possible, as closed as necessary”? The FAIR Data Point is a protocol for (meta)data provision championed by GO-FAIR as a solution to this need. In this story we describe how two organisations have applied the FAIR Data Point (FDP) to provide FAIR data or metadata in two contexts. In Leiden University Medical Centre the FDP is used to make metadata about COVID patient data as open as possible in the interest of research, while the data is necessarily closed and held in a variety of different systems. By contrast, Dutch data service provider SURF is applying the FDP to improve the FAIRness of an extensive dataset repository that is openly accessible by default. Based on interviews with the lead protagonists in both organisations' FDP implementations we compare their rationales and approaches, and how they expect this FAIR-enabling technology to benefit their user communities.
In a November 2023 survey, only half of data privacy professionals in European companies thought that most companies that they knew of complied with the core principles of GDPR. Data transfer compliance was the most problematic area, with nearly 45 percent of respondents stating that most companies were still having problems and around 24 percent saying that most were not complying at all.
Attribution-NonCommercial-ShareAlike 4.0 (CC BY-NC-SA 4.0): https://creativecommons.org/licenses/by-nc-sa/4.0/
License information was derived automatically
This research aims to develop a principle-based framework for audit analytics (AA) implementation, which addresses the challenges of AA implementation and acknowledges its socio-technical complexities and the interdependencies among challenges. The research relies on mixed methods to capture the phenomena from the research participants through various approaches, i.e., MICMAC-ISM, a case study, and interviews with practitioners, with literature exploration as the starting point. The raw data collected consist of multimedia data (audio and video recordings of interviews and a focus group discussion), which were then transformed into text files (transcripts), complemented with softcopies of the documents from the case study object.
The published data in this dataset consist of the summarized or analyzed data, as the raw data (including transcripts) may not be published according to the decision of the Human Research Ethics Committee pertinent to this research (Approval #1979, 14 February 2022). This dataset's published data are text files representing the summarized/analyzed raw data, serving as online appendices to the thesis.
The objective of this study was to systematically review and statistically synthesize all available research that, at a minimum, compared participants in a restorative justice program to participants processed in a more traditional way using meta-analytic methods. Ideally, these studies would include research designs with random assignment to condition groups, as this provides the most credible evidence of program effectiveness. The systematic search identified 99 publications, both published and unpublished, reporting on the results of 84 evaluations nested within 60 unique research projects or studies. Results were extracted from these studies, related to delinquency, non-delinquency, and victim outcomes for the youth and victims participating in these programs.
Open Government Licence - Canada 2.0: https://open.canada.ca/en/open-government-licence-canada
License information was derived automatically
The following concepts detailed in the publication were taken from an article by Howard Zehr and Harry Mika (1998), "Fundamental Concepts of Restorative Justice", in Contemporary Justice Review, Vol. 1. At the primary level, restorative justice in Canada is guided by recognizing the need for victims to heal and to put right the wrongs. Restorative justice also grounds itself in engaging with the community and recognizing the need for dialogue between victims and offenders as appropriate.