ODC Public Domain Dedication and Licence (PDDL) v1.0: http://www.opendatacommons.org/licenses/pddl/1.0/
License information was derived automatically
The dataset contains results from 2019 phenotypic evaluations of multi-location field trials. Also included is field level weather and soil data for the multiple locations where field testing was conducted. Data was collected by collaborators in the Genomes to Fields (G2F) initiative. G2F is an umbrella initiative to support translation of maize (Zea mays) genomic information for the benefit of growers, consumers, and society. This public-private partnership is building on publicly funded corn genome sequencing projects to develop approaches to understand the functions of corn genes and specific alleles across environments. Ultimately this information will be used to enable accurate prediction of the phenotypes of corn plants in diverse environments.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
This dataset contains the geodata needed to replicate the analysis and results of the research article "On Cost-effective, Reliable Coverage for LoS Communications in Urban Areas", published in IEEE Transactions on Network and Service Management.
It is divided into three main archives:
The data are licensed as follows:
The DSM and DTM are licensed depending on the area:
The OpenStreetMap data were obtained from geofabrik.de and are released under the Open Data Commons Open Database License (ODbL).
All the results can be replicated using our code, available on GitHub.
To cite this dataset, please cite the original research article:
@article{9828530,
  author={Gemmi, Gabriele and Cigno, Renato Lo and Maccari, Leonardo},
  journal={IEEE Transactions on Network and Service Management},
  title={On Cost-effective, Reliable Coverage for LoS Communications in Urban Areas},
  year={2022},
  volume={},
  number={},
  pages={1-1},
  doi={10.1109/TNSM.2022.3190634}
}
This data set contains Raman-shifted Eye-safe Aerosol Lidar (REAL) data collected during the Terrain-induced Rotor Experiment (T-REX) from March 3, 2006 to April 30, 2006. The data represent high-resolution two-dimensional images of the atmosphere. The REAL lidar is unique because it is safe for use in highly populated areas. The data set includes tar.gz files spanning 30 minutes each; each hour's worth of data in netCDF format is contained in 2 tar.gz files totalling ~1.5 GB in size.
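The 30-minute tar.gz chunks can be unpacked with Python's standard library alone. A minimal sketch, assuming each archive contains flat netCDF member names (the directory layout and filename patterns here are hypothetical):

```python
import tarfile
from pathlib import Path

def extract_real_archives(archive_dir: str, out_dir: str) -> list:
    """Extract every .tar.gz archive in archive_dir and return the
    paths of the netCDF (.nc) files that were unpacked."""
    out = Path(out_dir)
    out.mkdir(parents=True, exist_ok=True)
    extracted = []
    for archive in sorted(Path(archive_dir).glob("*.tar.gz")):
        with tarfile.open(archive, "r:gz") as tar:
            tar.extractall(out)  # unpack this 30-minute chunk
            # assumes flat member names; adjust if the tars contain subfolders
            extracted += [str(out / m.name) for m in tar.getmembers()
                          if m.name.endswith(".nc")]
    return extracted
```

Opening the unpacked files would then require a netCDF reader such as the netCDF4 package.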
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Reprocessed counts were generated using our GDC RNA-seq workflow implementation. An NA rank change indicates that the DEG cannot be found in the other DEG list. (CSV)
When a natural disaster or disease outbreak occurs, there is a rush to establish accurate health care location data that can be used to support people on the ground, as demonstrated by events such as the Haiti earthquake and the Ebola epidemic in West Africa. As a result, valuable time is wasted establishing accurate and accessible baseline data. Healthsites.io establishes this data and the tools necessary to upload, manage and make the data easily accessible.

Global scope: The Global Healthsites Mapping Project is an initiative to create an online map of every health facility in the world and make the details of each location easily accessible.

Open data collaboration: Through collaborations with users, trusted partners and OpenStreetMap, the project will capture and validate the location and contact details of every facility and make this data freely available under an Open Data License (ODbL).

Accessible: The project will make the data accessible over the Internet through an API and other formats such as GeoJSON, Shapefile, KML and CSV.

Focus on health care location data: The project's design philosophy is the long-term curation and validation of health care location data. The healthsites.io map will enable users to discover which health care facilities exist at any global location, along with the associated services and resources.
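Since GeoJSON is among the offered formats, a facility feature can be inspected with nothing but the standard library. The sample feature below is illustrative only; the actual healthsites.io property schema may differ:

```python
import json

# Illustrative GeoJSON Point feature for a health facility; the real
# healthsites.io properties may use different keys.
sample = """{
  "type": "Feature",
  "geometry": {"type": "Point", "coordinates": [31.05, -17.83]},
  "properties": {"name": "Example Clinic", "amenity": "clinic"}
}"""

def facility_summary(feature_json: str) -> dict:
    """Pull the name and lon/lat out of a GeoJSON Point feature."""
    f = json.loads(feature_json)
    lon, lat = f["geometry"]["coordinates"]
    return {"name": f["properties"].get("name"), "lon": lon, "lat": lat}
```

Note that GeoJSON orders coordinates as [longitude, latitude], which the sketch relies on.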
ODC Public Domain Dedication and Licence (PDDL) v1.0: http://www.opendatacommons.org/licenses/pddl/1.0/
License information was derived automatically
The dataset contains results from 2020 phenotypic evaluations of multi-location field trials of Maize GxE within the Genomes to Fields (G2F) initiative. Collaborators also collected field-level weather and soil data for the multiple field testing locations. G2F is an umbrella initiative to support the translation of maize (Zea mays) genomic information for the benefit of growers, consumers, and society. This public-private partnership is building on publicly funded corn genome sequencing projects to develop approaches to understand the functions of corn genes and specific alleles across environments. The use of large-scale datasets contributes to the accurate prediction of the phenotypes of corn plants in diverse environments.
This dataset contains measurements of CO2 taken aboard the R/V Discoverer during ACE-1. Richard Feely (PI) reports that the air values are not trustworthy south of 40 degrees south latitude. The air data are noisy, which happens when the air lines get filled with sea spray during bad weather. The water data look fine according to R. Feely.
ODC Public Domain Dedication and Licence (PDDL) v1.0: http://www.opendatacommons.org/licenses/pddl/1.0/
License information was derived automatically
The dataset contains 2021 phenotypic evaluations, field-level weather, and soil data from multi-location field trials of the Maize GxE Project within the Genomes to Fields (G2F) initiative. G2F is an umbrella initiative to support the translation of maize (Zea mays) genomic information for the benefit of growers, consumers, and society. This public-private partnership is building on publicly funded corn genome sequencing projects to develop approaches to understand the functions of corn genes and specific alleles across environments and contribute to the accurate prediction of the phenotypes of corn plants in diverse environments.
CDP liquid water content, as well as manually corrected Nevzorov liquid, total, and ice water content, are contained in this dataset. There is one file for each UW King Air (UWKA) research flight from SNOWIE. Data quality flags are included for each of the corrected variables (0 = raw, 1 = good/corrected, 2 = bad). Data coinciding with a bad data quality flag (2) were not omitted from the corrected variables. Detailed information on variables, naming conventions, and missing data is listed in the readme file.
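Because flagged-bad samples are left in place rather than omitted, downstream users typically mask them before analysis. A minimal sketch of that flag convention (variable names are hypothetical; the actual files are netCDF, so a reader such as the netCDF4 package would supply the arrays):

```python
import math

def mask_bad(values, flags, bad_flag=2, fill=float("nan")):
    """Replace values whose quality flag marks them bad (flag == 2) with a
    fill value; flags 0 (raw) and 1 (good/corrected) pass through."""
    return [fill if f == bad_flag else v for v, f in zip(values, flags)]
```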
Ethical Data Management

Executive Summary

In the age of data and information, it is imperative that the City of Virginia Beach strategically utilize its data assets. Through expanding data access, improving quality, maintaining pace with advanced technologies, and strengthening capabilities, IT will ensure that the city remains at the forefront of digital transformation and innovation. The Data and Information Management team works under the purpose: “To promote a data-driven culture at all levels of the decision-making process by supporting and enabling business capabilities with relevant and accurate information that can be accessed securely anytime, anywhere, and from any platform.” To fulfill this mission, IT will implement and utilize new and advanced technologies, enhance data management and infrastructure, and expand internal capabilities and regional collaboration.

Introduction and Justification

The Information Technology (IT) department’s resources are integral features of the social, political and economic welfare of the City of Virginia Beach residents. In regard to local administration, the IT department makes it possible for the Data and Information Management Team to provide the general public with high-quality services, generate and disseminate knowledge, and facilitate growth through improved productivity. For the Data and Information Management Team, it is important to maximize the quality and security of the City’s data; to develop and apply coherent information-management policies that aim to keep the general public constantly informed, protect their rights as data subjects, and improve the productivity, efficiency, effectiveness and public return of its projects; and to promote responsible innovation.

Furthermore, as technology evolves, it is important for public institutions to manage their information systems in such a way as to identify and minimize the security and privacy risks associated with the new capacities of those systems. The strategy for the responsible and ethical use of data is part of the City’s Master Technology Plan 2.0 (MTP), which establishes the roadmap designed to improve data and information accessibility, quality, and capabilities throughout the entire City. The strategy is being put into practice as a plan that involves various programs. Although this plan was specifically conceived as a conceptual framework for achieving a cultural change in the public perception of data, it covers essentially all the aspects of the MTP that concern data, in particular the open-data and data-commons strategies and data-driven projects, with the aim of providing better urban services and interoperability based on metadata schemes and open-data formats, permanent access, and data use and reuse, with the minimum possible legal, economic and technological barriers within current legislation.

Fundamental Values

The City of Virginia Beach’s data is a strategic asset and a valuable resource that enables our local government to carry out its mission and its programs effectively. Appropriate access to municipal data significantly improves the value of the information and the return on the investment involved in generating it. In accordance with the Master Technology Plan 2.0 and its emphasis on public innovation, the digital economy and empowering city residents, this data-management strategy is based on the following considerations. Within this context, this new management and use of data has to respect and comply with the essential values applicable to data. For the Data and Information Team, these values are:

Shared municipal knowledge. Municipal data, in its broadest sense, has a significant social dimension and provides the general public with past, present and future knowledge concerning the government, the city, society, the economy and the environment.

The strategic value of data. The team must manage data as a strategic asset, with an innovative vision, in order to turn it into an intellectual asset for the organization.

Geared towards results. Municipal data is also a means of ensuring the administration’s accountability and transparency, for managing services and investments, and for maintaining and improving the performance of the economy, wealth and the general public’s well-being.

Data as a common asset. City residents and the common good have to be the central focus of the City of Virginia Beach’s plans and technological platforms. Data is a source of wealth that empowers people who have access to it. Making it possible for city residents to control the data, minimizing the digital gap and preventing discriminatory or unethical practices is the essence of municipal technological sovereignty.

Transparency and interoperability. Public institutions must be open, transparent and responsible towards the general public. Promoting openness and interoperability, subject to technical and legal requirements, increases the efficiency of operations, reduces costs, improves services, supports needs and increases public access to valuable municipal information. In this way, it also promotes public participation in government.

Reuse and open-source licenses. Making municipal information accessible, usable by everyone by default, without having to ask for prior permission, and analyzable by anyone who wishes to do so can foster entrepreneurship, social and digital innovation, jobs and excellence in scientific research, as well as improving the lives of Virginia Beach residents and making a significant contribution to the city’s stability and prosperity.

Quality and security. The city government must take firm steps to ensure and maximize the quality, objectivity, usefulness, integrity and security of municipal information before disclosing it, and maintain processes to effectuate requests for amendments to the publicly available information.

Responsible organization. Adding value to the data and turning it into an asset, with the aim of promoting accountability and citizens’ rights, requires new actions and new integrated procedures, so that the new platforms can grow in an organic, transparent and cross-departmental way. A comprehensive governance strategy makes it possible to promote this revision and avoid redundancies, increased costs, inefficiency and bad practices.

Care throughout the data’s life cycle. Paying attention to the management of municipal registers, from when they are created to when they are destroyed or preserved, is an essential part of data management and of promoting public responsibility. Careful stewardship of data throughout its life cycle, combined with activities that ensure continued access to digital materials for as long as necessary, helps with the analytic exploitation of the data, as well as with the responsible protection of historic municipal government registers and the safeguarding of the economic and legal rights of the municipal government and the city’s residents.

Privacy “by design”. Protecting privacy is of maximum importance. The Data and Information Management Team has to consider and protect individual and collective privacy during the data life cycle, systematically and verifiably, as specified in the general regulation for data protection.

Security. Municipal information is a strategic asset subject to risks, and it has to be managed in such a way as to minimize those risks. This includes privacy, data protection, algorithmic discrimination and cybersecurity risks, which must be specifically addressed by promoting ethical and responsible data architecture, techniques for improving privacy, and evaluation of social effects. Although security and privacy are two separate, independent fields, they are closely related, and it is essential for the units to take a coordinated approach in order to identify and manage cybersecurity and privacy risks with applicable requirements and standards.

Open source. The Data and Information Management Team is obligated to maintain its Open Data / Open Source platform. The platform allows citizens to access open data from multiple cities in a central location, enables regional universities and colleges to foster continuing education, and aids in the development of data analytics skills for citizens. Continuing to uphold the Open Source platform will allow the City to continually offer citizens the ability to provide valuable input on the structure and availability of its data.

Strategic Areas

In order to deploy the strategy for the responsible and ethical use of data, the following areas of action have been established, which are detailed below together with the actions and emblematic projects associated with them. In general, the strategy pivots on the following general principles, which form the basis for the strategic areas described in this section:

Data sovereignty
Open data and transparency
The exchange and reuse of data
Political decision-making informed by data
The life cycle of data and continual or permanent access

Data Governance

Data quality and accessibility are crucial for meaningful data analysis, and must be ensured through the implementation of data governance. IT will establish a Data Governance Board, a collaborative organizational capability made up of the city’s data and analytics champions, who will work together to develop policies and practices to treat and use data as a strategic asset. Data governance is the overall management of the availability, usability, integrity and security of data used in the city. Increased data quality will positively impact overall trust in data, resulting in increased use and adoption. The ownership, accessibility, security and quality of the data are defined and maintained by the Data Governance Board. To improve operational efficiency, an enterprise-wide data catalog will be created to inventory data and track metadata from various data sources to allow for rapid data asset discovery. Through the data catalog, the city will
ODC Public Domain Dedication and Licence (PDDL) v1.0: http://www.opendatacommons.org/licenses/pddl/1.0/
License information was derived automatically
Data types in this directory tree are: hybrid and inbred agronomic and performance traits; inbred genotypic data; and environmental data collected from the Genomes To Fields (G2F) project cooperators. G2F is an umbrella initiative to support translation of maize genomic information for the benefit of growers, consumers and society. This public-private partnership is building on publicly funded corn genome sequencing projects to develop approaches to understand the functions of corn genes and specific alleles across environments. Ultimately this information will be used to enable accurate prediction of the phenotypes of corn plants in diverse environments. There are many dimensions to the over-arching goal of understanding genotype-by-environment (GxE) interactions, including which genes impact which traits and trait components, how genes interact among themselves (GxG), the relevance of specific genes under different growing conditions, and how these genes influence plant growth during various stages of development.
Scotland’s Common Good Act, passed in 1491, creates a legal distinction for historical property owned by local authorities — often land and buildings, but also “moveable assets” such as paintings, chains of office, and furniture. CommonGood.scot, launched in April by investigative journalism cooperative The Ferret, presents a searchable, browsable, and downloadable dataset of 2,900+ of these common good assets, compiled largely through freedom of information requests.
A subset of ~30 inbreds was evaluated in 2014 and 2015 to develop an image-based ear phenotyping tool. The data are stored in CyVerse. Data types in this directory tree are: dimension and width profile data collected from scanned images of ears, cobs, and kernels collected from the Genomes To Fields (G2F) project cooperators. G2F is an umbrella initiative to support translation of maize (Zea mays) genomic information for the benefit of growers, consumers and society. This public-private partnership is building on publicly funded corn genome sequencing projects to develop approaches to understand the functions of corn genes and specific alleles across environments. Ultimately this information will be used to enable accurate prediction of the phenotypes of corn plants in diverse environments. There are many dimensions to the over-arching goal of understanding genotype-by-environment (GxE) interactions, including which genes impact which traits and trait components, how genes interact among themselves (GxG), the relevance of specific genes under different growing conditions, and how these genes influence plant growth during various stages of development.

Resources in this dataset:
Resource Title: CyVerse Genomes To Fields Inbred Ear Imaging 2017 dataset download.
File Name: Web Page, url: http://datacommons.cyverse.org/browse/iplant/home/shared/commons_repo/curated/Edgar_Spalding_G2F_Inbred_Ear_Imaging_June_2017
Dataset (csv, tar.gz) and metadata (BibTex/EndNote) downloads. See _readme.txt for file contents.
Attribution-NonCommercial 3.0 (CC BY-NC 3.0): https://creativecommons.org/licenses/by-nc/3.0/
License information was derived automatically
Citation metrics are widely used and misused. We have created a publicly available database of top-cited scientists that provides standardized information on citations, h-index, co-authorship-adjusted hm-index, citations to papers in different authorship positions, and a composite indicator (c-score). Data are shown separately for career-long impact and for single recent year impact. Metrics with and without self-citations and the ratio of citations to citing papers are given, and data on retracted papers (based on the Retraction Watch database) as well as citations to/from retracted papers have been added in the most recent iteration. Scientists are classified into 22 scientific fields and 174 sub-fields according to the standard Science-Metrix classification. Field- and subfield-specific percentiles are also provided for all scientists with at least 5 papers. Career-long data are updated to end-of-2023, and single recent year data pertain to citations received during calendar year 2023. The selection is based on the top 100,000 scientists by c-score (with and without self-citations) or a percentile rank of 2% or above in the sub-field. This version (7) is based on the August 1, 2024 snapshot from Scopus, updated to the end of citation year 2023. This work uses Scopus data; calculations were performed using all Scopus author profiles as of August 1, 2024. If an author is not on the list, it is simply because the composite indicator value was not high enough to appear on the list; it does not mean that the author does not do good work.

PLEASE ALSO NOTE THAT THE DATABASE HAS BEEN PUBLISHED IN AN ARCHIVAL FORM AND WILL NOT BE CHANGED. The published version reflects Scopus author profiles at the time of calculation. We thus advise authors to ensure that their Scopus profiles are accurate. REQUESTS FOR CORRECTIONS OF THE SCOPUS DATA (INCLUDING CORRECTIONS IN AFFILIATIONS) SHOULD NOT BE SENT TO US. They should be sent directly to Scopus, preferably by use of the Scopus to ORCID feedback wizard (https://orcid.scopusfeedback.com/), so that the correct data can be used in any future annual updates of the citation indicator databases.

The c-score focuses on impact (citations) rather than productivity (number of publications), and it also incorporates information on co-authorship and author positions (single, first, last author). If you have additional questions, see the attached file of frequently asked questions. Finally, we alert users that all citation metrics have limitations and their use should be tempered and judicious. For more reading, we refer to the Leiden Manifesto: https://www.nature.com/articles/520429a
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
As policies, good practices and funder mandates on research data management evolve, more emphasis has been put on the licencing of data. Licencing information allows potential re-users to quickly identify what they can do with the data in question and is therefore an important component in ensuring the reusability of research.
In my research I analyse a pre-existing collection of 840 Horizon 2020 public data management plans (DMPs), available on the repository of the University of Vienna, Phaidra, to determine which ones mention Creative Commons licences and, among those that do, what licences are being used.
This Excel file contains the data underlying the publication "Uncommon Commons? Creative Commons licencing in Horizon 2020 Data Management Plans".
Sheet 1 contains the data collected in the previous "Data Re-Use" project: 840 DMPs downloaded from CORDIS and vetted to ensure they are public documents and not copyrighted.
Sheet 2 contains the same data as Sheet 1, with columns D to Q hidden (for better reading) and an added column R which contains the CC licencing information (where available).
Sheet 3 is filtered so that only the projects containing CC BY relevant licencing are shown.
Sheet 4 is filtered so that only the projects containing CC BY-SA relevant licencing are shown.
Sheet 5 is filtered so that only the projects containing CC BY-NC relevant licencing are shown.
Sheet 6 is filtered so that only the projects containing CC BY-ND relevant licencing are shown.
Sheet 7 is filtered so that only the projects containing CC BY-NC-ND relevant licencing are shown.
Sheet 8 is filtered so that only the projects containing CC BY-NC-SA relevant licencing are shown.
Sheet 9 is filtered so that only the projects containing CC0 relevant information are shown.
Sheet 10 provides an overview table of the relevant licences (manual entry).
Sheets 11 and 12 contain graphic visualisations of the data as used in the article.
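The per-licence sheets amount to substring filters over the licence text. A toy sketch of that filtering step (the rows below are invented placeholders, not actual DMP entries):

```python
# Hypothetical (project, licence-text) rows standing in for the DMP
# spreadsheet; the real workbook has many more columns.
rows = [
    ("ProjA", "Data will be released under CC BY 4.0"),
    ("ProjB", "CC BY-NC-ND applies to all deliverables"),
    ("ProjC", "No licence specified"),
    ("ProjD", "Datasets dedicated to the public domain via CC0"),
]

def with_licence(rows, tag):
    """Return the projects whose licence text mentions the given tag,
    matching case-insensitively."""
    return [proj for proj, text in rows if tag.lower() in text.lower()]
```

Note that naive substring matching lets "CC BY" also match its NC/ND variants, which is one reason an overview table compiled by manual entry (Sheet 10) is still needed.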
IDEA Section 618 Data Products: Data Displays - Part B. IDEA Part B Data Displays present annual data related to children with disabilities, based on data from various sources including IDEA Section 618 data, IDEA Annual Performance Report data, NAEP data, Census data, Common Core Data, and Consolidated State Performance Report data. The data displays are used to provide a clear, quick and accurate snapshot of each State's/entity's education data regarding children served under the IDEA. Part B Data Displays are available for 2024, 2023, 2022, and 2021.
This dataset contains Raman-shifted Eye-safe Aerosol Lidar (REAL) final data collected during the Canopy Horizontal Array Turbulence Study (CHATS) field experiment from 14 March 2007 to 11 June 2007. The data are in netCDF format and are available as ten-minute gzipped tar files.
U.S. Government Works: https://www.usa.gov/government-works
License information was derived automatically
Phenotypic, genotypic, and environment data for the 2016 field season. The data are stored in CyVerse. Data types in this directory tree are: hybrid and inbred agronomic and performance traits; inbred genotypic data; and environmental (soil, weather) data collected from the Genomes To Fields (G2F) project cooperators. G2F is an umbrella initiative to support translation of maize (Zea mays) genomic information for the benefit of growers, consumers and society. This public-private partnership is building on publicly funded corn genome sequencing projects to develop approaches to understand the functions of corn genes and specific alleles across environments. Ultimately this information will be used to enable accurate prediction of the phenotypes of corn plants in diverse environments. There are many dimensions to the over-arching goal of understanding genotype-by-environment (GxE) interactions, including which genes impact which traits and trait components, how genes interact among themselves (GxG), the relevance of specific genes under different growing conditions, and how these genes influence plant growth during various stages of development.

Resources in this dataset:
Resource Title: CyVerse Genomes To Fields 2016 dataset download.
File Name: Web Page, url: http://datacommons.cyverse.org/browse/iplant/home/shared/commons_repo/curated/GenomesToFields_G2F_2016_Data_Mar_2018
Dataset (csv) and metadata (BibTex, EndNote) downloads. See _readme.txt for file contents.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
The Imaging Data Commons (IDC) (https://imaging.datacommons.cancer.gov/) [1] connects researchers with publicly available cancer imaging data, often linked with other types of cancer data. Many of the collections have limited annotations due to the expense and effort required to create these manually. The increased capabilities of AI analysis of radiology images provide an opportunity to augment existing IDC collections with new annotation data. To further this goal, we trained several nnUNet [2] based models for a variety of radiology segmentation tasks from public datasets and used them to generate segmentations for IDC collections.
To validate the model's performance, roughly 10% of the AI predictions were assigned to a validation set. For this set, a board-certified radiologist graded the quality of AI predictions on a Likert scale. If they did not 'strongly agree' with the AI output, the reviewer corrected the segmentation.
This record provides the AI segmentations, manually corrected segmentations, and manual scores for the inspected IDC collection images.
Only 10% of the AI-derived annotations provided in this dataset are verified by expert radiologists. More details on model training and annotation are provided in the associated manuscript to ensure transparency and reproducibility.
This work was done in two stages. Versions 1.x of this record are from the first stage; Versions 2.x added additional records. In the Version 1.x collections, a medical student (non-expert) reviewed all the AI predictions and rated them on a 5-point Likert scale; for any AI predictions in the validation set that they did not 'strongly agree' with, the non-expert provided corrected segmentations. No non-expert was involved in the Version 2.x additional records.
Guidelines for reviewers to grade the quality of AI segmentations.
Each zip file in the collection correlates to a specific segmentation task. The common folder structure is
The qa-results.csv file contains metadata about the segmentations, their related IDC case image, as well as the Likert ratings and comments by the reviewers.
| Column | Description |
| --- | --- |
| Collection | The name of the IDC collection for this case |
| PatientID | PatientID in the DICOM metadata of the scan; also called Case ID in the IDC |
| StudyInstanceUID | StudyInstanceUID in the DICOM metadata of the scan |
| SeriesInstanceUID | SeriesInstanceUID in the DICOM metadata of the scan |
| Validation | true/false: whether this scan was manually reviewed |
| Reviewer | Coded ID of the reviewer. Radiologist IDs start with 'rad'; non-expert IDs start with 'ne' |
| AimiProjectYear | 2023 or 2024. The work was split over two years; the main methodological difference is that in 2023 a non-expert also reviewed the AI output, while no non-expert was utilized in 2024 |
| AISegmentation | The filename of the AI prediction file in DICOM-seg format, located in the ai-segmentations-dcm folder |
| CorrectedSegmentation | The filename of the reviewer-corrected prediction file in DICOM-seg format, located in the qa-segmentations-dcm folder. If the reviewer strongly agreed with the AI for all segments, no correction file was provided |
| Was the AI predicted ROIs accurate? | Appears once per task for images from AimiProjectYear 2023. The reviewer rates segmentation quality on a Likert scale; in tasks with multiple output labels, a single rating covers them all |
| Was the AI predicted {SEGMENT_NAME} label accurate? | Appears once for each segment in the task for images from AimiProjectYear 2024. The reviewer rates each segment's quality on a Likert scale |
| Do you have any comments about the AI predicted ROIs? | Open-ended question for the reviewer |
| Do you have any comments about the findings from the study scans? | Open-ended question for the reviewer |
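Given those columns, the manually reviewed subset of qa-results.csv can be pulled out with the standard csv module. A sketch assuming the true/false Validation values and reviewer-ID prefixes described for the file:

```python
import csv

def validated_rows(path: str):
    """Read qa-results.csv and keep only scans that were manually reviewed
    (Validation == "true"), split by reviewer type: radiologist IDs start
    with 'rad', non-expert IDs with 'ne'."""
    expert, non_expert = [], []
    with open(path, newline="") as fh:
        for row in csv.DictReader(fh):
            if row.get("Validation", "").lower() != "true":
                continue
            if row.get("Reviewer", "").startswith("rad"):
                expert.append(row)
            else:
                non_expert.append(row)
    return expert, non_expert
```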
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
This dataset provides test data for the FMCIB model integrated within the MHub platform, a robust solution for deploying, managing, and testing deep learning models tailored for medical imaging.
The sample images used in this dataset are sourced from public datasets available through the Imaging Data Commons (IDC), a repository that provides access to a wide range of medical imaging data. This ensures that the test cases reflect real-world clinical scenarios, facilitating robust validation of model performance.
The primary objective of this dataset is to enable the rigorous testing and validation of model performance within MHub workflows. To assess the performance of a model, users can process the sample data and compare the resulting output to the reference data. Additionally, users may inspect the sample and reference data independently to better understand the input-output structure that defines each model’s workflow.
This dataset streamlines the process of model validation. By providing a standardized testing framework, the dataset facilitates reproducible results and accelerates the development of reliable AI models for medical imaging.
MHub (mhub.ai) is an innovative platform designed to simplify the deployment, management, and testing of deep learning models for medical imaging. It enables researchers and clinicians to integrate AI-based solutions into clinical workflows while ensuring reproducibility and scalability. The platform provides a modular framework where users can execute complex workflows, such as image segmentation, classification, and registration, leveraging state-of-the-art AI models. MHub's goal is to accelerate the development and clinical adoption of medical imaging models by providing a streamlined, user-friendly environment for testing and validating new algorithms.
For more information on the platform and its capabilities, visit mhub.ai.