Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
RCV1
The open dataset, software, and other files accompanying the manuscript "An Open Combinatorial Diffraction Dataset Including Consensus Human and Machine Learning Labels with Quantified Uncertainty for Training New Machine Learning Models," submitted for publication to Integrated Materials and Manufacturing Innovations.

Machine learning and autonomy are increasingly prevalent in materials science, but existing models are often trained or tuned using idealized data as absolute ground truths. In actual materials science, "ground truth" is often a matter of interpretation and is more readily determined by consensus. Here we present the data, software, and other files for a study using as-obtained diffraction data as a test case for evaluating the performance of machine learning models in the presence of differing expert opinions. We demonstrate that experts with similar backgrounds can disagree greatly even for something as intuitive as using diffraction to identify the start and end of a phase transformation. We then use a logarithmic likelihood method to evaluate the performance of machine learning models in relation to the consensus expert labels and their variance. We further illustrate this method's efficacy in ranking a number of state-of-the-art phase mapping algorithms. We propose a materials data challenge centered around the problem of evaluating models based on consensus with uncertainty. The data, labels, and code used in this study are all available online at data.gov, and the interested reader is encouraged to replicate and improve the existing models or to propose alternative methods for evaluating algorithmic performance.
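The abstract does not spell out how the logarithmic likelihood comparison is computed; the sketch below is one plausible reading, assuming each consensus label is summarized by a mean and a standard deviation across experts and that a model's predictions are scored under a Gaussian centred on that consensus. The function name and the toy numbers are illustrative, not values from the dataset.

```python
import numpy as np

def gaussian_log_likelihood(predictions, consensus_mean, consensus_std):
    """Score model predictions against consensus labels with quantified uncertainty.

    Each prediction is evaluated under a Gaussian centred on the expert consensus,
    so a model is penalised less for errors on labels where the experts themselves
    disagree (large consensus_std).
    """
    predictions = np.asarray(predictions, dtype=float)
    mu = np.asarray(consensus_mean, dtype=float)
    sigma = np.asarray(consensus_std, dtype=float)
    log_p = -0.5 * np.log(2.0 * np.pi * sigma**2) - (predictions - mu) ** 2 / (2.0 * sigma**2)
    return log_p.sum()

# Toy comparison of two hypothetical models against three consensus labels
# (e.g. transformation onset temperatures); larger std where experts disagreed more.
consensus_mean = [410.0, 532.5, 617.0]
consensus_std = [5.0, 20.0, 8.0]
model_a = [412.0, 540.0, 650.0]
model_b = [430.0, 535.0, 620.0]
print(gaussian_log_likelihood(model_a, consensus_mean, consensus_std))
print(gaussian_log_likelihood(model_b, consensus_mean, consensus_std))
```

A higher total log-likelihood means a model's predictions are more consistent with the experts, after weighting each label by how much the experts agreed on it.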
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
The multi-label code-smell dataset for studies related to multi-label classification
https://dataintelo.com/privacy-and-policy
The global image data labeling service market size was valued at approximately USD 1.5 billion in 2023 and is projected to reach around USD 6.1 billion by 2032, exhibiting a robust CAGR of 17.1% during the forecast period. The exponential growth of this market is driven by the increasing demand for high-quality labeled data for machine learning and artificial intelligence applications across various industries.
One of the primary growth factors of the image data labeling service market is the surge in the adoption of artificial intelligence (AI) and machine learning (ML) technologies across multiple sectors. Organizations are increasingly relying on AI and ML to enhance operational efficiency, improve customer experience, and gain competitive advantages. As a result, there is a rising need for accurately labeled data to train these AI and ML models, driving the demand for image data labeling services. Furthermore, advancements in computer vision technology have expanded the scope of image data labeling, making it essential for applications such as autonomous vehicles, facial recognition, and medical imaging.
Another significant factor contributing to market growth is the proliferation of big data. The massive volume of data generated from various sources, including social media, surveillance cameras, and IoT devices, necessitates effective data labeling solutions. Companies are leveraging image data labeling services to manage and analyze these vast datasets efficiently. Additionally, the growing focus on personalized customer experiences in sectors like retail and e-commerce is fueling the demand for labeled data, which helps in understanding customer preferences and behaviors.
Investment in research and development (R&D) activities by key players in the market is also a crucial growth driver. Companies are continuously innovating and developing new techniques to enhance the accuracy and efficiency of image data labeling processes. These advancements not only improve the quality of labeled data but also reduce the time and cost associated with manual labeling. The integration of AI and machine learning algorithms in the labeling process is further boosting the market growth by automating repetitive tasks and minimizing human errors.
From a regional perspective, North America holds the largest market share due to early adoption of advanced technologies and the presence of major AI and ML companies. The region is expected to maintain its dominance during the forecast period, driven by continuous technological advancements and substantial investments in AI research. Asia Pacific is anticipated to witness the highest growth rate due to the rising adoption of AI technologies in countries like China, Japan, and India. The increasing focus on digital transformation and government initiatives to promote AI adoption are significant factors contributing to the regional market growth.
The image data labeling service market is segmented into three primary types: manual labeling, semi-automatic labeling, and automatic labeling. Manual labeling, which involves human annotators tagging images, is essential for ensuring high accuracy, especially in complex tasks. Despite being time-consuming and labor-intensive, manual labeling is widely used in applications where nuanced understanding and precision are paramount. This segment continues to hold a significant market share due to the reliability it offers. However, the cost and time constraints associated with manual labeling are driving the growth of more advanced labeling techniques.
Semi-automatic labeling combines human intervention with automated processes, providing a balance between accuracy and efficiency. In this approach, algorithms perform initial labeling, and human annotators refine and validate the results. This method significantly reduces the time required for data labeling while maintaining high accuracy levels. The semi-automatic labeling segment is gaining traction as it offers a scalable and cost-effective solution, particularly beneficial for industries dealing with large volumes of data, such as retail and IT.
Automatic labeling, driven by AI and machine learning algorithms, represents the most advanced segment of the market. This approach leverages sophisticated models to autonomously label image data with minimal human intervention. The continuous improvement in AI algorithms, along with the availability of large datasets for training, has enhanced the accuracy and reliability of automatic labeling.
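As an illustration of the human-in-the-loop pattern described for semi-automatic and automatic labeling, the minimal sketch below routes model pre-labels either to an auto-accept queue or to a human-review queue based on a confidence threshold. The PreLabel structure, the threshold value, and the item names are hypothetical; real pipelines typically also audit a sample of auto-accepted items.

```python
from dataclasses import dataclass

@dataclass
class PreLabel:
    item_id: str
    label: str
    confidence: float

def route_prelabels(prelabels, threshold=0.9):
    """Split model pre-labels into auto-accepted items and a human-review queue."""
    accepted, review = [], []
    for p in prelabels:
        (accepted if p.confidence >= threshold else review).append(p)
    return accepted, review

# Hypothetical model output: confident labels pass through automatically,
# uncertain ones are queued for a human annotator to refine and validate.
batch = [
    PreLabel("img_001", "car", 0.97),
    PreLabel("img_002", "pedestrian", 0.62),
    PreLabel("img_003", "bicycle", 0.91),
]
auto, needs_review = route_prelabels(batch)
print([p.item_id for p in auto], [p.item_id for p in needs_review])
```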
https://www.datainsightsmarket.com/privacy-policy
The Data Labeling Solutions and Services market is experiencing robust growth, driven by the escalating demand for high-quality training data to fuel the advancement of artificial intelligence (AI) and machine learning (ML) technologies. The market, estimated at $10 billion in 2025, is projected to expand at a Compound Annual Growth Rate (CAGR) of 25% from 2025 to 2033, reaching an estimated $45 billion by 2033. This significant growth is fueled by several key factors. The increasing adoption of AI across diverse sectors, including automotive, healthcare, and finance, is creating a massive need for labeled datasets. Furthermore, the complexity of AI models is constantly increasing, requiring larger and more sophisticated labeled datasets. The emergence of new data labeling techniques, such as synthetic data generation and automated labeling tools, is also accelerating market expansion. However, challenges remain, including the high cost and time associated with data labeling, the need for skilled professionals, and concerns surrounding data privacy and security. This necessitates innovative solutions and collaborative efforts to address these limitations and fully realize the potential of AI.

The market segmentation reveals a diverse landscape. The automotive sector is a significant driver, heavily relying on data labeling for autonomous driving systems and advanced driver-assistance systems (ADAS). Healthcare is another key segment, leveraging data labeling for medical image analysis, diagnostics, and drug discovery. Financial services utilize data labeling for fraud detection, risk assessment, and algorithmic trading. While these sectors dominate currently, the "Others" segment, encompassing various emerging applications, is poised for substantial growth.

Geographically, North America currently holds the largest market share, attributed to the high concentration of AI companies and technological advancements. However, the Asia-Pacific region is projected to witness the fastest growth rate due to the increasing adoption of AI and the availability of a large, skilled workforce. Competition within the market is fierce, with established players and emerging startups vying for market share. This competitive landscape drives innovation and offers diverse solutions to meet the evolving needs of the industry.
https://www.archivemarketresearch.com/privacy-policy
Market Analysis for Data Labeling Software

The global data labeling software market is expected to reach a valuation of USD 53 million by 2033, exhibiting a remarkable CAGR of 16.6% over the forecast period (2025-2033). This growth is attributed to the surging demand for accurately labeled data for AI model training and the proliferation of machine learning and deep learning applications across various industries.

Key Drivers, Trends, and Restraints

The major drivers fueling market growth include the increasing adoption of AI and ML in enterprise operations, the growing volume of unstructured data, and the need for high-quality labeled data for model training. Other significant trends include the rise of cloud-based data labeling platforms, the integration of automation technologies, and the emergence of specialized data labeling tools for specific industry verticals. However, the market faces certain restraints, such as data privacy concerns, the cost and complexity of data labeling, and the shortage of skilled data labelers.

Data labeling software is essential for training machine learning models. It enables users to annotate data with labels that identify the objects or concepts present, which helps the model learn to recognize and classify them. The market for data labeling software is growing rapidly, driven by the increasing demand for machine learning and AI applications.
At the same time, the manual application of data labeling remains labor-intensive.
https://www.marketresearchforecast.com/privacy-policy
The open-source data labeling tool market is experiencing robust growth, driven by the increasing demand for high-quality training data in machine learning and artificial intelligence applications. The market's expansion is fueled by several factors: the rising adoption of AI across various sectors (including IT, automotive, healthcare, and finance), the need for cost-effective data annotation solutions, and the inherent flexibility and customization offered by open-source tools. While cloud-based solutions currently dominate the market due to scalability and accessibility, on-premise deployments remain significant, particularly for organizations with stringent data security requirements. The market's growth is further propelled by advancements in automation and semi-supervised learning techniques within data labeling, leading to increased efficiency and reduced annotation costs. Geographic distribution shows a strong concentration in North America and Europe, reflecting the higher adoption of AI technologies in these regions; however, Asia-Pacific is emerging as a rapidly growing market due to increasing investment in AI and the availability of a large workforce for data annotation. Despite the promising outlook, certain challenges restrain market growth. The complexity of implementing and maintaining open-source tools, along with the need for specialized technical expertise, can pose barriers to entry for smaller organizations. Furthermore, the quality control and data governance aspects of open-source annotation require careful consideration. The potential for data bias and the need for robust validation processes necessitate a strategic approach to ensure data accuracy and reliability. Competition is intensifying with both established and emerging players vying for market share, forcing companies to focus on differentiation through innovation and specialized functionalities within their tools. The market is anticipated to maintain a healthy growth trajectory in the coming years, with increasing adoption across diverse sectors and geographical regions. The continued advancements in automation and the growing emphasis on data quality will be key drivers of future market expansion.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Context: Exception handling (EH) bugs stem from incorrect usage of exception handling mechanisms (EHMs) and often incur severe consequences (e.g., system downtime, data loss, and security risk). Tracking EH bugs is particularly relevant for contemporary systems (e.g., cloud- and AI-based systems), in which the software's sophisticated logic is an additional threat to the correct use of the EHM. On top of that, bug reporters can seldom tag EH bugs, since doing so may require encompassing knowledge of the software's EH strategy. Surprisingly, to the best of our knowledge, there is no automated procedure to identify EH bugs from report descriptions.

Objective: First, we aim to evaluate the extent to which Natural Language Processing (NLP) and Machine Learning (ML) can be used to reliably label EH bugs using the text fields from bug reports (e.g., summary, description, and comments). Second, we aim to provide a reliably labeled dataset that the community can use in future endeavors. Overall, we expect our work to raise the community's awareness regarding the importance of EH bugs.

Method: We manually analyzed 4,516 bug reports from the four main components of Apache's Hadoop project, labeling ~20% (943) of them as EH bugs. We also labeled 2,584 non-EH bugs by analyzing their bug-fixing code, creating a dataset composed of 7,100 bug reports. We then used text representation techniques (Bag-of-Words and TF-IDF) to summarize the textual fields of bug reports, used these representations to fit five classes of ML methods, and evaluated them on unseen data. We also evaluated a pre-trained transformer-based model using the complete textual fields, and assessed whether considering only EH keywords is enough to achieve high predictive performance.

Results: Our results show that a pre-trained DistilBERT with a linear layer, trained on our proposed dataset, can reasonably label EH bugs, achieving ROC-AUC scores of up to 0.88. The combination of traditional NLP and ML techniques achieved ROC-AUC scores of up to 0.74 and recall of up to 0.56. As a sanity check, we also evaluated methods using representations extracted solely from keywords. With ROC-AUC as the primary concern, for the majority of ML methods tested, the analysis suggests that keywords alone are not sufficient to characterize reports of EH bugs, although this can change under other metrics (such as recall and precision) or ML methods (e.g., Random Forest).

Conclusions: To the best of our knowledge, this is the first study addressing the problem of automatic labeling of EH bugs. Based on our results, we conclude that ML techniques, especially transformer-based models, are promising for automating the task of labeling EH bugs. Overall, we hope (i) that our work will contribute towards raising awareness around EH bugs; and (ii) that our (publicly available) dataset will serve as a benchmark, paving the way for follow-up works. Additionally, our findings can be used to build tools that help maintainers flag EH bugs during the triage process.
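The study's exact pipeline, features, and hyper-parameters are not reproduced here; the sketch below shows the general TF-IDF-plus-classifier recipe the abstract describes, scored with ROC-AUC on a held-out split. The toy bug-report texts and labels are placeholders, not items from the dataset.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline

# Placeholder bug-report texts (summary + description) and labels (1 = EH bug).
reports = [
    "NullPointerException swallowed in catch block of DataNode shutdown",
    "UI misaligns column headers on the jobs overview page",
    "IOException rethrown as RuntimeException loses root cause in NameNode",
    "Typo in configuration documentation for replication factor",
] * 25  # repeated so the toy split has enough samples
labels = [1, 0, 1, 0] * 25

X_train, X_test, y_train, y_test = train_test_split(
    reports, labels, test_size=0.3, stratify=labels, random_state=0
)

# TF-IDF features feeding a linear classifier, evaluated on unseen reports.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression(max_iter=1000))
model.fit(X_train, y_train)
scores = model.predict_proba(X_test)[:, 1]
print("ROC-AUC:", roc_auc_score(y_test, scores))
```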
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Multi-Label Web Page Classification Dataset
Dataset Description
The Multi-Label Web Page Classification Dataset is a curated dataset containing web page titles and snippets, extracted from the CC-Meta25-1M dataset. Each entry has been automatically categorized into multiple predefined categories using ChatGPT-4o-mini. This dataset is designed for multi-label text classification tasks, making it ideal for training and evaluating machine learning models for web content categorization. See the full description on the dataset page: https://huggingface.co/datasets/tshasan/multi-label-web-categorization.
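A minimal way to consume such a dataset for multi-label training is sketched below, assuming the Hugging Face datasets library; the split and column names ("title", "snippet", "labels") are assumptions that should be checked against the dataset card.

```python
from datasets import load_dataset
from sklearn.preprocessing import MultiLabelBinarizer

# Split and column names are assumptions; verify them on the dataset page.
ds = load_dataset("tshasan/multi-label-web-categorization", split="train")

texts = [f"{row['title']} {row['snippet']}" for row in ds]
mlb = MultiLabelBinarizer()
Y = mlb.fit_transform([row["labels"] for row in ds])  # one binary column per category

print(len(texts), "examples,", len(mlb.classes_), "categories")
```

The binarized label matrix Y can then be fed to any multi-label classifier, for example a one-vs-rest linear model over TF-IDF features of the concatenated title and snippet.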
Modeling data and analysis scripts generated during the current study are available in the GitHub repository: https://github.com/USEPA/CompTox-MIEML. RefChemDB is available for download as supplemental material from its original publication (PMID: 30570668). LINCS gene expression data are publicly available and accessible through the Gene Expression Omnibus (GSE92742 and GSE70138) at https://www.ncbi.nlm.nih.gov/geo/. This dataset is associated with the following publication: Bundy, J., R. Judson, A. Williams, C. Grulke, I. Shah, and L. Everett. Predicting Molecular Initiating Events Using Chemical Target Annotations and Gene Expression. BioData Mining, BioMed Central Ltd, London, UK, article 7 (2022).
https://www.datainsightsmarket.com/privacy-policy
The data labeling market is experiencing robust growth, projected to reach $3.84 billion in 2025 and maintain a Compound Annual Growth Rate (CAGR) of 28.13% from 2025 to 2033. This expansion is fueled by the increasing demand for high-quality training data across various sectors, including healthcare, automotive, and finance, which heavily rely on machine learning and artificial intelligence (AI). The surge in AI adoption, particularly in areas like autonomous vehicles, medical image analysis, and fraud detection, necessitates vast quantities of accurately labeled data. The market is segmented by sourcing type (in-house vs. outsourced), data type (text, image, audio), labeling method (manual, automatic, semi-supervised), and end-user industry. Outsourcing is expected to dominate the sourcing segment due to cost-effectiveness and access to specialized expertise. Similarly, image data labeling is likely to hold a significant share, given the visual nature of many AI applications. The shift towards automation and semi-supervised techniques aims to improve efficiency and reduce labeling costs, though manual labeling will remain crucial for tasks requiring high accuracy and nuanced understanding.

Geographical distribution shows strong potential across North America and Europe, with Asia-Pacific emerging as a key growth region driven by increasing technological advancements and digital transformation. Competition in the data labeling market is intense, with a mix of established players like Amazon Mechanical Turk and Appen, alongside emerging specialized companies. The market's future trajectory will likely be shaped by advancements in automation technologies, the development of more efficient labeling techniques, and the increasing need for specialized data labeling services catering to niche applications. Companies are focusing on improving the accuracy and speed of data labeling through innovations in AI-powered tools and techniques. Furthermore, the rise of synthetic data generation offers a promising avenue for supplementing real-world data, potentially addressing data scarcity challenges and reducing labeling costs in certain applications. This will, however, require careful attention to ensure that the synthetic data generated is representative of real-world data to maintain model accuracy.

This comprehensive report provides an in-depth analysis of the global data labeling market, offering invaluable insights for businesses, investors, and researchers. The study period covers 2019-2033, with 2025 as the base and estimated year, and a forecast period of 2025-2033. We delve into market size, segmentation, growth drivers, challenges, and emerging trends, examining the impact of technological advancements and regulatory changes on this rapidly evolving sector. The market is projected to reach multi-billion dollar valuations by 2033, fueled by the increasing demand for high-quality data to train sophisticated machine learning models.

Recent developments include:

September 2024: The National Geospatial-Intelligence Agency (NGA) is poised to invest heavily in artificial intelligence, earmarking up to USD 700 million for data labeling services over the next five years. This initiative aims to enhance NGA's machine-learning capabilities, particularly in analyzing satellite imagery and other geospatial data. The agency has opted for a multi-vendor indefinite-delivery/indefinite-quantity (IDIQ) contract, emphasizing the importance of annotating raw data, whether images or videos, to render it understandable for machine learning models. For instance, when dealing with satellite imagery, the focus could be on labeling distinct entities such as buildings, roads, or patches of vegetation.

October 2023: Refuel.ai unveiled a new platform, Refuel Cloud, and a specialized large language model (LLM) for data labeling. Refuel Cloud harnesses advanced LLMs, including its proprietary model, to automate data cleaning, labeling, and enrichment at scale, catering to diverse industry use cases. Recognizing that clean data underpins modern AI and data-centric software, Refuel Cloud addresses the historical challenge of human labor bottlenecks in data production. With Refuel Cloud, enterprises can swiftly generate the expansive, precise datasets they require in mere minutes, a task that traditionally spanned weeks.

Key drivers for this market are: Rising Penetration of Connected Cars and Advances in Autonomous Driving Technology, Advances in Big Data Analytics based on AI and ML. Notable trends are: Healthcare is Expected to Witness Remarkable Growth.
https://www.datainsightsmarket.com/privacy-policy
The open-source data labeling tool market is experiencing robust growth, driven by the increasing demand for high-quality training data in various AI applications. The market's expansion is fueled by several key factors: the rising adoption of machine learning and deep learning algorithms across industries, the need for efficient and cost-effective data annotation solutions, and a growing preference for customizable and flexible tools that can adapt to diverse data types and project requirements. While proprietary solutions exist, the open-source ecosystem offers advantages including community support, transparency, cost-effectiveness, and the ability to tailor tools to specific needs, fostering innovation and accessibility. The market is segmented by tool type (image, text, video, audio), deployment model (cloud, on-premise), and industry (automotive, healthcare, finance). We project a market size of approximately $500 million in 2025, with a compound annual growth rate (CAGR) of 25% from 2025 to 2033, reaching approximately $2.7 billion by 2033. This growth is tempered by challenges such as the complexities associated with data security, the need for skilled personnel to manage and use these tools effectively, and the inherent limitations of certain open-source solutions compared to their commercial counterparts. Despite these restraints, the open-source model's inherent flexibility and cost advantages will continue to attract a significant user base. The market's competitive landscape includes established players like Alecion and Appen, alongside numerous smaller companies and open-source communities actively contributing to the development and improvement of these tools. Geographical expansion is expected across North America, Europe, and Asia-Pacific, with the latter projected to witness significant growth due to the increasing adoption of AI and machine learning in developing economies. Future market trends point towards increased integration of automated labeling techniques within open-source tools, enhanced collaborative features to improve efficiency, and further specialization to cater to specific data types and industry-specific requirements. Continuous innovation and community contributions will remain crucial drivers of growth in this dynamic market segment.
https://www.marketresearchforecast.com/privacy-policy
The global market for data labeling tools is experiencing robust growth, driven by the escalating demand for high-quality training data in the burgeoning fields of artificial intelligence (AI) and machine learning (ML). The market, estimated at $2 billion in 2025, is projected to exhibit a Compound Annual Growth Rate (CAGR) of approximately 25% from 2025 to 2033, reaching an estimated market value of $10 billion by 2033. This expansion is fueled by several key factors, including the increasing adoption of AI across diverse industries like automotive, healthcare, and finance, the rising complexity of AI models requiring larger and more meticulously labeled datasets, and the emergence of innovative data labeling techniques like active learning and transfer learning. The market is segmented by tool type (e.g., image annotation, text annotation, video annotation), deployment mode (cloud, on-premise), and end-user industry. Competitive landscape analysis reveals a mix of established players like Amazon, Google, and Lionbridge, alongside emerging innovative startups offering specialized solutions. Despite the significant growth potential, the market faces certain challenges. The high cost of data labeling, particularly for complex datasets, can be a barrier to entry for smaller companies. Ensuring data quality and accuracy remains a crucial concern, as errors in labeled data can significantly impact the performance of AI models. Furthermore, the need for skilled data annotators and the ethical considerations surrounding data privacy and bias in labeled datasets pose ongoing challenges to market expansion. To overcome these hurdles, market players are focusing on developing automated labeling tools, improving data quality control mechanisms, and prioritizing data privacy and ethical labeling practices. The future of the data labeling tools market is bright, with continued innovation and increasing demand expected to drive significant growth throughout the forecast period.
https://www.datainsightsmarket.com/privacy-policy
The Data Collection and Labeling market is experiencing robust growth, driven by the increasing demand for high-quality training data to fuel the advancements in artificial intelligence (AI) and machine learning (ML) technologies. The market's expansion is fueled by the burgeoning adoption of AI across diverse sectors, including healthcare, automotive, finance, and retail. Companies are increasingly recognizing the critical role of accurate and well-labeled data in developing effective AI models. This has led to a surge in outsourcing data collection and labeling tasks to specialized companies, contributing to the market's expansion. The market is segmented by data type (image, text, audio, video), labeling technique (supervised, unsupervised, semi-supervised), and industry vertical. We project a steady CAGR of 20% for the period 2025-2033, reflecting continued strong demand across various applications. Key trends include the increasing use of automation and AI-powered tools to streamline the data labeling process, resulting in higher efficiency and lower costs. The growing demand for synthetic data generation is also emerging as a significant trend, alleviating concerns about data privacy and scarcity. However, challenges remain, including data bias, ensuring data quality, and the high cost associated with manual labeling for complex datasets. These restraints are being addressed through technological innovations and improvements in data management practices. The competitive landscape is characterized by a mix of established players and emerging startups. Companies like Scale AI, Appen, and others are leading the market, offering comprehensive solutions that span data collection, annotation, and model validation. The presence of numerous companies suggests a fragmented yet dynamic market, with ongoing competition driving innovation and service enhancements. The geographical distribution of the market is expected to be broad, with North America and Europe currently holding significant market share, followed by Asia-Pacific showing robust growth potential. Future growth will depend on technological advancements, increasing investment in AI, and the emergence of new applications that rely on high-quality data.
https://www.marketresearchforecast.com/privacy-policy
The data collection and labeling market is experiencing robust growth, fueled by the escalating demand for high-quality training data in artificial intelligence (AI) and machine learning (ML) applications. The market, estimated at $15 billion in 2025, is projected to achieve a Compound Annual Growth Rate (CAGR) of 25% over the forecast period (2025-2033), reaching approximately $75 billion by 2033. This expansion is primarily driven by the increasing adoption of AI across diverse sectors, including healthcare (medical image analysis, drug discovery), automotive (autonomous driving systems), finance (fraud detection, risk assessment), and retail (personalized recommendations, inventory management). The rising complexity of AI models and the need for more diverse and nuanced datasets are significant contributing factors to this growth. Furthermore, advancements in data annotation tools and techniques, such as active learning and synthetic data generation, are streamlining the data labeling process and making it more cost-effective. However, challenges remain. Data privacy concerns and regulations like GDPR necessitate robust data security measures, adding to the cost and complexity of data collection and labeling. The shortage of skilled data annotators also hinders market growth, necessitating investments in training and upskilling programs. Despite these restraints, the market’s inherent potential, coupled with ongoing technological advancements and increased industry investments, ensures sustained expansion in the coming years. Geographic distribution shows strong concentration in North America and Europe initially, but Asia-Pacific is poised for rapid growth due to increasing AI adoption and the availability of a large workforce. This makes strategic partnerships and global expansion crucial for market players aiming for long-term success.
Supervised machine learning methods are increasingly employed in political science. Such models require costly manual labeling of documents. In this paper we introduce active learning, a framework in which data to be labeled by human coders are not chosen at random but rather targeted in such a way that the required amount of data to train a machine learning model can be minimized. We study the benefits of active learning using text data examples. We perform simulation studies that illustrate conditions where active learning can reduce the cost of labeling text data. We perform these simulations on three corpora that vary in size, document length, and domain. We find that in cases where the document class of interest is not balanced, researchers can label a fraction of the documents one would need using random sampling (or 'passive' learning) to achieve equally performing classifiers. We further investigate how varying levels of inter-coder reliability affect the active learning procedures and find that, even with low reliability, active learning performs more efficiently than random sampling.
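The paper's own simulations are not reproduced here; the sketch below illustrates the pool-based uncertainty-sampling idea it describes, using an assumed TF-IDF representation and a logistic regression classifier on binary labels. The function name, seed size, and batch size are illustrative choices, not the study's settings.

```python
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

def uncertainty_sampling(documents, oracle_labels, seed_size=10, n_rounds=5, batch_size=10):
    """Pool-based active learning (binary case): repeatedly ask human coders to
    label the documents the current classifier is least certain about."""
    X = TfidfVectorizer().fit_transform(documents)
    rng = np.random.default_rng(0)
    # Seed set drawn at random; in practice it should contain both classes.
    labeled = list(rng.choice(len(documents), size=seed_size, replace=False))
    for _ in range(n_rounds):
        clf = LogisticRegression(max_iter=1000)
        clf.fit(X[labeled], [oracle_labels[i] for i in labeled])
        pool = [i for i in range(len(documents)) if i not in labeled]
        margins = np.abs(clf.predict_proba(X[pool])[:, 1] - 0.5)  # small margin = uncertain
        queries = [pool[j] for j in np.argsort(margins)[:batch_size]]
        labeled.extend(queries)  # these are the documents sent to human coders next
    return labeled
```

Compared with 'passive' random sampling, the queried documents concentrate near the decision boundary, which is where additional labels help most when the class of interest is rare.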
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Automated classification of metadata of research data by their discipline(s) of research can be used in scientometric research, by repository service providers, and in the context of research data aggregation services. Openly available metadata of the DataCite index for research data were used to compile a large training and evaluation set comprised of 609,524 records. This publication contains aggregated data for the paper. It also contains the evaluation data of all model/hyper-parameter training and test runs.
These images and associated binary labels were collected from collaborators across multiple universities to serve as a diverse representation of biomedical images of vessel structures, for use in the training and validation of machine learning tools for vessel segmentation. The dataset contains images from a variety of imaging modalities, at different resolutions, using different sources of contrast and featuring different organs and pathologies. These data were used to train, test, and validate a foundational model for 3D vessel segmentation, tUbeNet, which can be found on GitHub. The paper describing the training and validation of the model can be found here.

Filenames are structured as follows:
Data - [Modality][species Organ][resolution].tif
Labels - [Modality][species Organ][resolution]labels.tif
Sub-volumes of larger dataset - [Modality][species Organ]_subvolume[dimensions in pixels].tif

Manual labelling of blood vessels was carried out using Amira (2020.2, Thermo-Fisher, UK).

Training data:
opticalHREM_murineLiver_2.26x2.26x1.75um.tif: A high resolution episcopic microscopy (HREM) dataset, acquired in house by staining a healthy mouse liver with Eosin B and imaged using a standard HREM protocol. NB: 25% of this image volume was withheld from training, for use as test data.
CT_murineTumour_20x20x20um.tif: X-ray microCT images of a microvascular cast, taken from a subcutaneous mouse model of colorectal cancer (acquired in house). NB: 25% of this image volume was withheld from training, for use as test data.
RSOM_murineTumour_20x20um.tif: Raster-Scanning Optoacoustic Mesoscopy (RSOM) data from a subcutaneous tumour model (provided by Emma Brown, Bohndiek Group, University of Cambridge). The image data has undergone filtering to reduce the background (Brown et al., 2019).
OCTA_humanRetina_24x24um.tif: Retinal angiography data obtained using Optical Coherence Tomography Angiography (OCT-A) (provided by Dr Ranjan Rajendram, Moorfields Eye Hospital).

Test data:
MRI_porcineLiver_0.9x0.9x5mm.tif: T1-weighted Balanced Turbo Field Echo Magnetic Resonance Imaging (MRI) data from a machine-perfused porcine liver, acquired in-house.
MFHREM_murineTumourLectin_2.76x2.76x2.61um.tif: A subcutaneous colorectal tumour mouse model imaged in house using Multi-fluorescence HREM, with Dylight 647 conjugated lectin staining the vasculature (Walsh et al., 2021). The image data has been processed using an asymmetric deconvolution algorithm described by Walsh et al., 2020. NB: A sub-volume of 480x480x640 voxels was manually labelled (MFHREM_murineTumourLectin_subvolume480x480x640.tif).
MFHREM_murineBrainLectin_0.85x0.85x0.86um.tif: An MF-HREM image of the cortex of a mouse brain, stained with Dylight-647 conjugated lectin, acquired in house (Walsh et al., 2021). The image data has been downsampled and processed using an asymmetric deconvolution algorithm described by Walsh et al., 2020. NB: A sub-volume of 1000x1000x99 voxels was manually labelled. This sub-volume is provided at full resolution and without preprocessing (MFHREM_murineBrainLectin_subvol_0.57x0.57x0.86um.tif).
2Photon_murineOlfactoryBulbLectin_0.2x0.46x5.2um.tif: Two-photon data of mouse olfactory bulb blood vessels, labelled with sulforhodamine 101, kindly provided by Yuxin Zhang at the Sensory Circuits and Neurotechnology Lab, the Francis Crick Institute (Bosch et al., 2022). NB: A sub-volume of 500x500x79 voxels was manually labelled (2Photon_murineOlfactoryBulbLectin_subvolume500x500x79.tif).
References:
Bosch, C., Ackels, T., Pacureanu, A., Zhang, Y., Peddie, C. J., Berning, M., Rzepka, N., Zdora, M. C., Whiteley, I., Storm, M., Bonnin, A., Rau, C., Margrie, T., Collinson, L., & Schaefer, A. T. (2022). Functional and multiscale 3D structural investigation of brain tissue through correlative in vivo physiology, synchrotron microtomography and volume electron microscopy. Nature Communications, 13(1), 1–16. https://doi.org/10.1038/s41467-022-30199-6
Brown, E., Brunker, J., & Bohndiek, S. E. (2019). Photoacoustic imaging as a tool to probe the tumour microenvironment. DMM Disease Models and Mechanisms, 12(7). https://doi.org/10.1242/DMM.039636
Walsh, C., Holroyd, N. A., Finnerty, E., Ryan, S. G., Sweeney, P. W., Shipley, R. J., & Walker-Samuel, S. (2021). Multifluorescence High-Resolution Episcopic Microscopy for 3D Imaging of Adult Murine Organs. Advanced Photonics Research, 2(10), 2100110. https://doi.org/10.1002/ADPR.202100110
Walsh, C., Holroyd, N., Shipley, R., & Walker-Samuel, S. (2020). Asymmetric Point Spread Function Estimation and Deconvolution for Serial-Sectioning Block-Face Imaging. Communications in Computer and Information Science, 1248 CCIS, 235–249. https://doi.org/10.1007/978-3-030-52791-4_19
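A loading sketch for this dataset under the stated filename convention: it pairs each label volume with its image volume and reads both with the tifffile package (an assumed dependency). The directory path, the exact label suffix, and the helper name are illustrative.

```python
from pathlib import Path

import tifffile  # assumed dependency for reading the .tif volumes

def load_volume_pairs(data_dir):
    """Yield (name, image, labels) volume pairs following the naming convention above.

    Assumes label files end in 'labels.tif'; adjust the suffix handling if the
    actual files use a separator such as '_labels.tif'.
    """
    data_dir = Path(data_dir)
    for label_path in sorted(data_dir.glob("*labels.tif")):
        image_path = data_dir / (label_path.name[: -len("labels.tif")] + ".tif")
        if not image_path.exists():
            continue  # skip labels without a matching image volume
        image = tifffile.imread(image_path)                # 3D greyscale volume
        labels = tifffile.imread(label_path).astype(bool)  # binary vessel mask
        yield image_path.name, image, labels

# Placeholder usage (directory name is hypothetical):
# for name, image, labels in load_volume_pairs("vessel_dataset/"):
#     print(name, image.shape, labels.mean())
```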
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
The RTAnews dataset is a collection of multi-label Arabic texts, collected from the Russia Today in Arabic news portal. It consists of 23,837 texts (news articles) distributed over 40 categories, divided into 15,001 texts for training and 8,836 texts for testing.
The original dataset (without preprocessing), a preprocessed version, versions in MEKA and Mulan formats, a single-label version, and a WEAK version are all available.
For any enquiry or support regarding the dataset, please feel free to contact us via bassalemi at gmail dot com
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
RCV1