Attribution-ShareAlike 4.0 (CC BY-SA 4.0): https://creativecommons.org/licenses/by-sa/4.0/
License information was derived automatically
The presentation explains in the simplest possible way what you need to know about open source licenses when starting from scratch. It also sums up the course "Open Source Licensing Basics for Software Developers (LFC191)" by the Linux Foundation.
The openscience.eu datahub is a data portal platform based on the open source software CKAN. Here we share and publish project data from our projects as well as from our partners. If you are interested in participating with your own organisation, please contact us.
The ISRIC – World Soil Information Soil Data Hub is a central location for searching and downloading soil data from around the world.
ISRIC is a regular member of the International Science Council (ISC) World Data System. We support Open Data whenever possible, while respecting inherited rights (licences).
We make our own soil information products available to data users under Creative Commons licenses (CC BY-NC or CC BY for datasets, and CC BY 4.0 for derived predictions and visualisations). Details are provided in the ISRIC Data and Software Policy.
Can’t find what you are looking for? Please take a look at our collection of soil geographical databases to explore soil data available outside ISRIC-World Soil Information via https://www.isric.org/explore/soil-geographic-databases.
Disclaimer: By using the ISRIC data and web services, the user accepts the ISRIC data and software policy in full. In order to acknowledge the scientists and/or organisations that have provided data or products, ISRIC requests that data users include a bibliographic citation to all materials supplied through ISRIC in output products, websites, and publications.
Software Health Management (SWHM) is a new field concerned with developing tools and technologies for the automated detection, diagnosis, prediction, and mitigation of adverse events caused by software anomalies. Significant effort has been expended over the last several decades on verification and validation methods for software-intensive systems, but it is increasingly apparent that this alone cannot guarantee that a complex software system meets all safety and reliability requirements.
Modern software systems can exhibit a variety of failure modes that go undetected during verification and validation. While standard techniques for error handling and fault detection and isolation benefit many systems, new methods are needed to detect, diagnose, predict, and mitigate adverse events in software that has already undergone extensive verification and validation. Such faults often arise from the interaction between the software and its operating environment: unanticipated environmental changes lead to software anomalies that can significantly affect the overall success of the mission. Because software is ubiquitous, detecting errors only after they occur is not sufficient; software must be instrumented and monitored so that impending failures can be recognized before they happen. This prognostic capability will yield safer and more dependable systems for the future. This paper addresses the motivation, needs, and requirements of software health management as a new discipline.
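The kind of runtime instrumentation the abstract argues for can be illustrated with a minimal sketch: a monitor that tracks a health metric (here, a generic numeric sample) and flags values that deviate sharply from the recent baseline. The class name, window size, and threshold rule are illustrative assumptions, not part of the paper.

```python
from collections import deque


class HealthMonitor:
    """Minimal runtime-monitoring sketch: flag a sample as anomalous
    when it deviates from the rolling mean of recent samples by more
    than `k` standard deviations. Illustrative only; not taken from
    the SWHM paper."""

    def __init__(self, window=50, k=3.0):
        self.samples = deque(maxlen=window)  # rolling baseline
        self.k = k

    def observe(self, value):
        """Record one sample; return True if it looks anomalous."""
        anomalous = False
        if len(self.samples) >= 10:  # wait for a baseline first
            mean = sum(self.samples) / len(self.samples)
            var = sum((s - mean) ** 2 for s in self.samples) / len(self.samples)
            std = var ** 0.5
            if std > 0 and abs(value - mean) > self.k * std:
                anomalous = True
        self.samples.append(value)
        return anomalous


monitor = HealthMonitor()
flags = [monitor.observe(v) for v in [10.0] * 20 + [10.5, 9.8, 55.0]]
print(flags[-1])  # prints True: the 55.0 spike breaks the ~10.0 baseline
```

The point of the sketch is prognosis over post-mortem: the monitor raises a flag at observation time, before the anomaly propagates into a mission-level failure.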
Published in the Proceedings of the IEEE Conference on Space Mission Challenges for Information Technology, Palo Alto, CA, August 2011.
The Emergency Department Integration Software (EDIS) is a VHA-wide application used to track and manage the delivery of care to patients in the Emergency Care System (ECS). This critical IT solution directly supports the Major Initiative 'Enhance the Veteran Experience and Access to Healthcare' (EVEAH). EDIS v1.0 is currently being deployed in each VHA hospital and is scheduled for deployment completion by 7/31/2011. The application improves emergency department care by introducing the systematic collection, display, and reporting of patient status information. It is able to integrate with Appointment Management and Patient Care Encounter within VistA. EDIS v2.0 is currently being planned and will provide enhancements, better functionality, and improved reports.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
This dataset contains the supplements for the survey article about the extraction of mention statements of scientific artefacts.
Frank Krüger and David Schindler
Institute of Communications Engineering, University of Rostock, Rostock, Germany
Frank.Krueger@uni-rostock.de
Assess the current state of the art for the identification of research artefacts in scientific publications by systematically scanning available literature.
U.S. Government Works: https://www.usa.gov/government-works
License information was derived automatically
PDS Software Release Product Tools (3.4.0)
Computer programs and software supported by NEH grants
For the purposes of this Agreement the following terms shall, unless the context otherwise requires, have the meanings indicated below: (a) “Information and Communications Technology” (ICT) shall refer to infrastructure, hardware and software systems, needed to capture, process and disseminate information to generate information-based products and services; (b) “ICT products” shall mean the products in the WTO Information Technology Agreement (ITA1) and related products which Member States may agree to add later; (c) “ICT services” shall mean the Information and Communications-related services listed in the Central Product Classification (CPC) and any additional related services which Member States may agree to add later; and (d) “Investment(s)” shall mean direct investment(s) related to the production of ICT products and the provision of ICT services.
Size of the delivered software, measured in International Function Point Users Group (IFPUG) function points.
https://www.nist.gov/open/license
The STEP File Analyzer is a software tool that generates a spreadsheet or a set of CSV (comma-separated value) files from a STEP (ISO 10303 Standard for Exchange of Product model data) Part 21 file. STEP files are used to represent product and manufacturing information (PMI) and for data exchange and interoperability between Computer-Aided Design (CAD), Manufacturing (CAM), Analysis (CAE), and Inspection (CMM) software related to the smart manufacturing digital thread. STEP is also used for the long-term archiving and retrieval of product data.
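A STEP Part 21 file stores each entity instance as a numbered record of the form `#<id>=<ENTITY>(<args>);`, which is what makes the spreadsheet/CSV conversion possible. The snippet below is not the NIST tool itself; it is a toy sketch of that instance syntax, flattening a few made-up records into CSV rows (it ignores multi-line instances and the HEADER section of a real file).

```python
import csv
import io
import re

# Toy ISO 10303-21 (STEP Part 21) fragment; a real file also has
# ISO-10303-21; and HEADER; sections and an END-ISO-10303-21; footer.
step_data = """DATA;
#10=PRODUCT('bracket','bracket','',(#20));
#20=PRODUCT_CONTEXT('',#30,'mechanical');
#30=APPLICATION_CONTEXT('automotive design');
ENDSEC;
"""

# Each instance is "#<id>=<ENTITY>(<args>);" — pull out the id, the
# entity name, and the raw argument list (one instance per line here).
pattern = re.compile(r"#(\d+)\s*=\s*([A-Z0-9_]+)\s*\((.*)\)\s*;")
rows = [m.groups() for m in pattern.finditer(step_data)]

# Flatten the instances into a small CSV table, one row per entity.
buf = io.StringIO()
writer = csv.writer(buf)
writer.writerow(["id", "entity", "arguments"])
writer.writerows(rows)
print(buf.getvalue())
```

The real analyzer resolves the `#20`-style cross-references and maps entities onto the STEP schema; this sketch only shows the surface syntax such a tool consumes.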
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
(i) The Community Innovation Survey (CIS) is a survey about innovation activities in enterprises. The survey is designed to capture information on different types of innovation, to enable analysis of innovation drivers, and to assess innovation outcomes. (ii) Indicators related to the enterprises are classified by country, economic activity (NACE Rev. 2), size class of enterprises, and type of innovation. (iii) EU Member States, Norway, Iceland, Switzerland, Serbia, Macedonia, Turkey and Montenegro. (iv) All aggregations and indicators presented in CIS collections are based on data from national CIS data collections. Countries generally carry out a stratified sample survey in order to collect the data, whilst a limited number of countries use a census or a mix of census and sample survey.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
A collection of 22 data sets of 50+ requirements each, expressed as user stories.
The dataset has been created by gathering data from web sources, and we are not aware of license agreements or intellectual property rights on the requirements/user stories. The curator took utmost diligence in minimizing the risk of copyright infringement by using non-recent data that is less likely to be critical, by sampling a subset of the original requirements collection, and by qualitatively analyzing the requirements. In case of copyright infringement, please contact the dataset curator (Fabiano Dalpiaz, f.dalpiaz@uu.nl) to discuss the possibility of removal of that dataset (see Zenodo's policies).
The data sets have been originally used to conduct experiments about ambiguity detection with the REVV-Light tool: https://github.com/RELabUU/revv-light
This collection has been originally published in Mendeley data: https://data.mendeley.com/datasets/7zbk8zsd8y/1
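User stories in collections like this one commonly follow the Connextra template "As a <role>, I want <goal>, so that <benefit>". As a minimal sketch of that data shape (not part of REVV-Light; the regex and function name are assumptions), one line of such a file can be split into its three parts:

```python
import re

# Hypothetical helper: split a user story written in the common
# Connextra template into role / goal / optional benefit.
STORY_RE = re.compile(
    r"As an? (?P<role>.+?), I want(?: to)? (?P<goal>.+?)"
    r"(?:, so that (?P<benefit>.+?))?\.?$",
    re.IGNORECASE,
)


def parse_story(text):
    """Return {'role', 'goal', 'benefit'} for a templated story, else None."""
    m = STORY_RE.match(text.strip())
    return m.groupdict() if m else None


story = "As a researcher, I want to upload datasets, so that colleagues can reuse them."
print(parse_story(story))
# {'role': 'researcher', 'goal': 'upload datasets', 'benefit': 'colleagues can reuse them'}
```

Stories that deviate from the template (a frequent source of the ambiguity these datasets are used to study) simply fail to parse and return None.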
The following text provides a description of the datasets, including links to the systems and websites, when available. The datasets are organized by macro-category and then by identifier.
g02-federalspending.txt
(2018) originates from early data in the Federal Spending Transparency project, which pertains to the website used to publicly share the spending data of the U.S. government. The website was created because of the Digital Accountability and Transparency Act of 2014 (DATA Act). The specific dataset pertains to a system called DAIMS or Data Broker, where DAIMS stands for DATA Act Information Model Schema. The sample that was gathered refers to a sub-project that allows the government to act as a data broker, thereby providing data to third parties. The data for the Data Broker project is currently not available online, although the backend seems to be hosted on GitHub under a CC0 1.0 Universal license. Current and recent snapshots of federal spending related websites, including many more projects than the one described in the shared collection, can be found here.
g03-loudoun.txt
(2018) is a set of requirements extracted from a document by Loudoun County, Virginia, that describes the to-be user stories and use cases for a land management readiness assessment system called Loudoun County LandMARC. The source document can be found here; it is part of the Electronic Land Management System and EPlan Review Project - RFP RFQ issued in March 2018. More information about the overall LandMARC system and services can be found here.
g04-recycling.txt
(2017) concerns a web application through which recycling and waste disposal facilities can be searched and located. The application operates through the visualization of a map that the user can interact with. The dataset was obtained from a GitHub repository and forms the basis of a students' project on web site design; the code is available (no license).
g05-openspending.txt
(2018) is about the OpenSpending project (www), a project of the Open Knowledge Foundation which aims at transparency about how local governments spend money. At the time of the collection, the data was retrieved from a Trello board that is currently unavailable. The sample focuses on publishing, importing and editing datasets, and on how the data should be presented. Currently, OpenSpending is managed via a GitHub repository which contains multiple sub-projects with unknown licenses.
g11-nsf.txt
(2018) is a collection of user stories for the NSF Site Redesign & Content Discovery project, which originates from a publicly accessible GitHub repository (GPL 2.0 license). In particular, the user stories refer to an early version of the NSF's website. The user stories can be found as closed issues.
g08-frictionless.txt
(2016) regards the Frictionless Data project, which offers open source tooling for building data infrastructures, to be used by researchers, data scientists, and data engineers. Links to the many projects within the Frictionless Data initiative are on GitHub (with a mix of Unlicense and MIT licenses) and the web. The specific set of user stories was collected in 2016 by GitHub user @danfowler and is stored in a Trello board.
g14-datahub.txt
(2013) concerns the open source project DataHub, which is currently developed via a GitHub repository (the code has Apache License 2.0). DataHub is a data discovery platform which has been developed over multiple years. The specific data set is an initial set of user stories, which we can date back to 2013 thanks to a comment therein.
g16-mis.txt
(2015) is a collection of user stories that pertains to a repository for researchers and archivists. The source of the dataset is a public Trello repository. Although the user stories do not have explicit links to projects, it can be inferred that the stories originate from some project related to the library of Duke University.
g17-cask.txt
(2016) refers to the Cask Data Application Platform (CDAP). CDAP is an open source application platform (GitHub, under Apache License 2.0) that can be used to develop applications within the Apache Hadoop ecosystem, an open-source framework which can be used for distributed processing of large datasets. The user stories are extracted from a document that includes requirements regarding dataset management for Cask 4.0, which includes the scenarios, user stories and a design for the implementation of these user stories. The raw data is available in the following environment.
g18-neurohub.txt
(2012) is concerned with the NeuroHub platform, a neuroscience data management, analysis, and collaboration platform for researchers in neuroscience to collect, store, and share data with colleagues or with the research community. The user stories were collected at a time when NeuroHub was still a research project sponsored by the UK Joint Information Systems Committee (JISC). For information about the research project from which the requirements were collected, see the following record.
g22-rdadmp.txt
(2018) is a collection of user stories from the Research Data Alliance's working group on DMP Common Standards. Their GitHub repository contains a collection of user stories that were created by asking the community to suggest functionality that should be part of a website that manages data management plans. Each user story is stored as an issue on the project's GitHub page.
g23-archivesspace.txt
(2012-2013) refers to ArchivesSpace: an open source web application for managing archives information. The application is designed to support core functions in archives administration such as accessioning; description and arrangement of processed materials including analog, hybrid, and born-digital content; management of authorities and rights; and reference service. The application supports collection management through collection management records, tracking of events, and a growing number of administrative reports.
The Converged Registries Solution (CRS) has been replaced by the Veterans Integrated Registries Platform (VIRP). The information contained in this entry discusses the CRS prior to its replacement. The Converged Registries platform was a hardware and software architecture designed to host individual patient registries and eliminate duplicative development effort while maximizing VA's ability to create new patient registries. The common platform included a relational database, software classes, security modules, extraction services, and other components. The Converged Registries obtained data from the Corporate Data Warehouse (CDW), directly from the Veterans Health Information Systems and Technology Architecture (VistA), as well as by direct user input. Registry projects: Embedded Fragment Registry (EFR), Eye Injury Data Store, Traumatic Brain Injury (TBI) Registry, and Veterans Implant Tracking and Alert System (VITAS).
Open source GIS software available for download
U.S. Government Works: https://www.usa.gov/government-works
License information was derived automatically
Public data set for NASA Agency Intellectual Property (IP). The distribution contains both Patent information as well as General Release of Open Source Software.
The "EMODnet Digital Bathymetry (DTM)" is a multilayer bathymetric product for Europe's sea basins covering:
• the Greater North Sea, including the Kattegat and stretches of water such as Fair Isle, Cromarty, Forth, Forties, Dover, Wight, and Portland
• the English Channel and Celtic Seas
• Western and Central Mediterranean Sea and Ionian Sea
• Bay of Biscay, Iberian coast and North-East Atlantic
• Adriatic Sea
• Aegean - Levantine Sea (Eastern Mediterranean)
• Azores - Madeira EEZ
• Canary Islands
• Baltic Sea
• Black Sea
• Norwegian - Icelandic seas
The DTM is based upon more than 7700 bathymetric survey data sets and Composite DTMs that have been gathered from 27 data providers from 18 European countries, involving 169 data originators. The gathered survey data sets can be discovered and requested for access through the Common Data Index (CDI) data discovery and access service, which also contains additional European survey data sets for global waters. The Composite DTMs can be discovered through the Sextant Catalogue service. Both discovery services make use of SeaDataNet standards and services and have been integrated in the EMODnet Bathymetry web portal (http://www.emodnet-bathymetry.eu). In addition, the Bathymetry Viewing and Download service of the EMODnet Bathymetry portal gives users wide functionality for viewing and downloading the EMODnet digital bathymetry, such as:
• water depth (referring to the Lowest Astronomical Tide datum - LAT) in gridded form on a DTM grid of 1/8 * 1/8 arc minute of longitude and latitude (ca 230 * 230 meters)
• an option to view depth parameters of individual DTM cells and references to source data
• an option to download the DTM in 16 tiles in different formats: ESRI ASCII, XYZ, EMODnet CSV, NetCDF (CF), GeoTiff and SD
• a layer with a number of high resolution DTMs for coastal regions
• a layer with wrecks from the UKHO Wrecks database
The NetCDF (CF) DTM files are fit for use in a special 3D viewer software package based on the existing open source NASA World Wind Java SDK application. It has been developed in the frame of the EU FP7 Geo-Seas project (a sibling of SeaDataNet for marine geological and geophysical data) and is freely available. The 3D viewer also supports the ingestion of WMS overlay maps. The SD files can also be used for 3D viewing by means of the freely available iView4D (Fledermaus) software. The original datasets themselves are not distributed but are described in the metadata services, giving clear information about the background survey data used for the DTM, their access restrictions, originators and distributors, and facilitating requests by users to the originators.
U.S. Government Works: https://www.usa.gov/government-works
License information was derived automatically
This dataset lists all software in use by NASA.
https://project-open-data.cio.gov/unknown-license/#v1-legacy/public
This White Paper presents a survey of high-confidence software and systems research needs. It has been prepared by the High Confidence Software and Systems Coordinating Group (HCSS CG) of the Interagency Working Group on Information Technology Research and Development (IWG/IT R&D)...