Attribution-NonCommercial 4.0 (CC BY-NC 4.0): https://creativecommons.org/licenses/by-nc/4.0/
License information was derived automatically
In the field of polymer informatics, utilizing machine learning (ML) techniques to evaluate the glass transition temperature Tg and other properties of polymers has attracted extensive attention. This data-centric approach is much more efficient and practical than laborious experimental measurements when confronted with a daunting number of polymer structures. Various ML models have been demonstrated to perform well for Tg prediction. Nevertheless, they are trained on different data sets, use different structure representations, and rely on different feature engineering methods. Thus, the critical question arises of how to select a proper ML model that handles Tg prediction with good generalization ability. To provide a fair comparison of different ML techniques and examine the key factors that affect model performance, we carry out a systematic benchmark study by compiling 79 different ML models and training them on a large and diverse data set. The three major components in setting up an ML model are the structure representation, the feature representation, and the ML algorithm. In terms of polymer structure representation, we consider the polymer monomer, the repeat unit, and oligomers with longer chain structures. Based on these, feature representations are calculated, including Morgan fingerprints with or without substructure frequency, RDKit descriptors, molecular embeddings, molecular graphs, etc. Afterward, models are trained on the obtained feature inputs using different ML algorithms, such as deep neural networks, convolutional neural networks, random forests, support vector machines, LASSO regression, and Gaussian process regression. We evaluate the performance of these ML models using a holdout test set and an extra unlabeled data set from high-throughput molecular dynamics simulation. We focus especially on the models’ generalization ability on the unlabeled data set, and we also consider their sensitivity to the topology and molecular weight of the polymers. This benchmark study provides not only a guideline for the Tg prediction task but also a useful reference for other polymer informatics tasks.
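To make the feature/algorithm pairing concrete, here is a minimal, hedged sketch of one such combination: Morgan fingerprints computed with RDKit fed into a scikit-learn random forest regressor. It is an illustration only, not the benchmarked pipeline; the file name polymers.csv and its columns smiles and tg are hypothetical.
# Illustrative only: Morgan-fingerprint features + random forest Tg regressor.
# "polymers.csv" with columns "smiles" and "tg" is a hypothetical input file.
import numpy as np
import pandas as pd
from rdkit import Chem
from rdkit.Chem import AllChem
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

def morgan_fp(smiles, radius=2, n_bits=2048):
    mol = Chem.MolFromSmiles(smiles)
    if mol is None:
        return None
    bv = AllChem.GetMorganFingerprintAsBitVect(mol, radius, nBits=n_bits)
    return np.array(list(bv), dtype=np.uint8)

df = pd.read_csv("polymers.csv")
fps = [morgan_fp(s) for s in df["smiles"]]
keep = [fp is not None for fp in fps]
X = np.stack([fp for fp in fps if fp is not None])
y = df.loc[keep, "tg"].to_numpy()

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
model = RandomForestRegressor(n_estimators=500, random_state=0).fit(X_tr, y_tr)
print("holdout R^2:", model.score(X_te, y_te))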
The STL-10 dataset is an image recognition dataset for developing unsupervised feature learning, deep learning, self-taught learning algorithms. It is inspired by the CIFAR-10 dataset but with some modifications. In particular, each class has fewer labeled training examples than in CIFAR-10, but a very large set of unlabeled examples is provided to learn image models prior to supervised training. The primary challenge is to make use of the unlabeled data (which comes from a similar but different distribution from the labeled data) to build a useful prior. All images were acquired from labeled examples on ImageNet.
To use this dataset:
import tensorflow_datasets as tfds
ds = tfds.load('stl10', split='train')
for ex in ds.take(4):
  print(ex)
See the guide for more information on tensorflow_datasets.
Visualization: https://storage.googleapis.com/tfds-data/visualization/fig/stl10-1.0.0.png
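Since the stated challenge is to exploit the unlabeled data, here is a small follow-up sketch; the split and feature names are taken from the TFDS catalog and should be verified against your installed version.
import tensorflow_datasets as tfds
# Load the large unlabelled split for unsupervised pre-training and the labelled train split.
unlabelled = tfds.load('stl10', split='unlabelled')
labelled = tfds.load('stl10', split='train')
for ex in labelled.take(1):
  print(ex['image'].shape, ex['label'].numpy())  # (96, 96, 3) image and its class id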
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
To analyse large numbers of texts, social science researchers are increasingly confronting the challenge of text classification. When manual labeling is not possible and researchers have to find automatized ways to classify texts, computer science provides a useful toolbox of machine-learning methods whose performance remains understudied in the social sciences. In this article, we compare the performance of the most widely used text classifiers by applying them to a typical research scenario in social science research: a relatively small labeled dataset with infrequent occurrence of categories of interest, which is a part of a large unlabeled dataset. As an example case, we look at Twitter communication regarding climate change, a topic of increasing scholarly interest in interdisciplinary social science research. Using a novel dataset including 5,750 tweets from various international organizations regarding the highly ambiguous concept of climate change, we evaluate the performance of methods in automatically classifying tweets based on whether they are about climate change or not. In this context, we highlight two main findings. First, supervised machine-learning methods perform better than state-of-the-art lexicons, in particular as class balance increases. Second, traditional machine-learning methods, such as logistic regression and random forest, perform similarly to sophisticated deep-learning methods, whilst requiring much less training time and computational resources. The results have important implications for the analysis of short texts in social science research.
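As a hedged illustration of the kind of "traditional" classifier the comparison refers to (not the authors' exact configuration), a TF-IDF plus logistic regression baseline can be set up in a few lines; the two example tweets below are made up.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

texts = ["We must act on climate change now.", "Our new office opens on Monday."]  # made-up examples
labels = [1, 0]  # 1 = about climate change, 0 = not

clf = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2), min_df=1),
    LogisticRegression(max_iter=1000, class_weight="balanced"),
)
clf.fit(texts, labels)
print(clf.predict(["climate policy update"]))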
https://dataintelo.com/privacy-and-policy
The global market size of Machine Learning (ML) courses is witnessing substantial growth, with market valuation expected to reach $3.1 billion in 2023 and projected to soar to $12.6 billion by 2032, exhibiting a robust CAGR of 16.5% over the forecast period. This rapid expansion is fueled by the increasing adoption of artificial intelligence (AI) and machine learning technologies across various industries, the rising need for upskilling and reskilling in the workforce, and the growing penetration of online education platforms.
One of the most significant growth factors driving the ML courses market is the escalating demand for AI and ML expertise in the job market. As industries increasingly integrate AI and machine learning into their operations to enhance efficiency and innovation, there is a burgeoning need for professionals with relevant skills. Companies across sectors such as finance, healthcare, retail, and manufacturing are investing heavily in training programs to bridge the skills gap, thus driving the demand for ML courses. Additionally, the rapid evolution of technology necessitates continuous learning, further bolstering market growth.
Another crucial factor contributing to the market's expansion is the proliferation of online education platforms that offer flexible and affordable ML courses. Platforms like Coursera, Udacity, edX, and Khan Academy have made high-quality education accessible to a global audience. These platforms offer an array of courses tailored to different skill levels, from beginners to advanced learners, making it easier for individuals to pursue continuous learning and career advancement. The convenience and flexibility of online learning are particularly appealing to working professionals and students, thereby driving the market's growth.
The increasing collaboration between educational institutions and technology companies is also playing a pivotal role in the growth of the ML courses market. Many universities and colleges are partnering with leading tech firms to develop specialized curricula that align with industry requirements. These collaborations help ensure that the courses offered are up-to-date with the latest technological advancements and industry standards. As a result, students and professionals are better equipped with the skills needed to thrive in a technology-driven job market, further propelling the demand for ML courses.
On a regional level, North America holds a significant share of the ML courses market, driven by the presence of numerous leading tech companies and educational institutions, as well as a highly skilled workforce. The region's strong emphasis on innovation and technological advancement is a key driver of market growth. Additionally, Asia Pacific is emerging as a lucrative market for ML courses, with countries like China, India, and Japan witnessing increased investments in AI and ML education and training. The rising internet penetration, growing popularity of online education, and government initiatives to promote digital literacy are some of the factors contributing to the market's growth in this region.
Self-Supervised Learning, a cutting-edge approach in the realm of machine learning, is gaining traction as a pivotal element in the development of more autonomous AI systems. Unlike traditional supervised learning, which relies heavily on labeled data, self-supervised learning leverages unlabeled data to train models, significantly reducing the dependency on human intervention for data annotation. This method is particularly advantageous in scenarios where acquiring labeled data is costly or impractical. By enabling models to learn from vast amounts of unlabeled data, self-supervised learning enhances the ability of AI systems to generalize from limited labeled examples, thereby improving their performance in real-world applications. The integration of self-supervised learning techniques into machine learning courses is becoming increasingly important, as it equips learners with the knowledge to tackle complex AI challenges and develop more robust models.
The Machine Learning Courses market is segmented by course type into online courses, offline courses, bootcamps, and workshops. Online courses dominate the segment due to their accessibility, flexibility, and cost-effectiveness. Platforms like Coursera and Udacity have democratized access to high-quality ML education, enabling lear
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
These four labeled data sets are targeted at ordinal quantification. The goal of quantification is not to predict the label of each individual instance, but the distribution of labels in unlabeled sets of data. With the scripts provided, you can extract CSV files from the UCI machine learning repository and from OpenML. The ordinal class labels stem from a binning of a continuous regression label.
We complement this data set with the indices of data items that appear in each sample of our evaluation. Hence, you can precisely replicate our samples by drawing the specified data items. The indices stem from two evaluation protocols that are well suited for ordinal quantification. To this end, each row in the files app_val_indices.csv, app_tst_indices.csv, app-oq_val_indices.csv, and app-oq_tst_indices.csv represents one sample.
Our first protocol is the artificial prevalence protocol (APP), where all possible distributions of labels are drawn with an equal probability. The second protocol, APP-OQ, is a variant thereof, where only the smoothest 20% of all APP samples are considered. This variant is targeted at ordinal quantification tasks, where classes are ordered and a similarity of neighboring classes can be assumed.
Usage
You can extract four CSV files through the provided script extract-oq.jl, which is conveniently wrapped in a Makefile. The Project.toml and Manifest.toml specify the Julia package dependencies, similar to a requirements file in Python.
Preliminaries: You have to have a working Julia installation. We have used Julia v1.6.5 in our experiments.
Data Extraction: In your terminal, you can call either
make
(recommended), or
julia --project="." --eval "using Pkg; Pkg.instantiate()"
julia --project="." extract-oq.jl
Outcome: The first row in each CSV file is the header. The first column, named "class_label", is the ordinal class.
Further Reading
Implementation of our experiments: https://github.com/mirkobunse/regularized-oq
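As a usage illustration following the extraction and replication steps above, one evaluation sample could be reconstructed roughly as follows; the extracted file name is a placeholder, and whether the index files carry a header is an assumption.
import pandas as pd
data = pd.read_csv("extracted_dataset.csv")                   # placeholder name for one extracted CSV
indices = pd.read_csv("app_tst_indices.csv", header=None)
sample_0 = data.iloc[indices.iloc[0].dropna().astype(int)]    # draw the items of the first APP sample
print(sample_0["class_label"].value_counts(normalize=True))   # label distribution to be quantified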
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
This data repository contains the OCT images and binary annotations for segmentation of retinal tissue using deep learning. To use, please refer to the Github repository https://github.com/theislab/DeepRT.
#######
Access to large, annotated samples represents a considerable challenge for training accurate deep-learning models in medical imaging. While current leading-edge transfer learning from pre-trained models can help with cases lacking data, it limits design choices and generally results in the use of unnecessarily large models. We propose a novel, self-supervised training scheme for obtaining high-quality, pre-trained networks from unlabeled, cross-modal medical imaging data, which will allow for creating accurate and efficient models. We demonstrate this by accurately predicting optical coherence tomography (OCT)-based retinal thickness measurements from simple infrared (IR) fundus images. Subsequently, learned representations outperformed advanced classifiers on a separate diabetic retinopathy classification task in a scenario of scarce training data. Our cross-modal, three-staged scheme effectively replaced 26,343 diabetic retinopathy annotations with 1,009 semantic segmentations on OCT and reached the same classification accuracy using only 25% of fundus images, without any drawbacks, since OCT is not required for predictions. We expect this concept will also apply to other multimodal clinical data (imaging, health records, and genomics data) and be applicable to corresponding sample-starved learning problems.
#######
https://academictorrents.com/nolicensespecified
The STL-10 dataset is an image recognition dataset for developing unsupervised feature learning, deep learning, self-taught learning algorithms. It is inspired by the CIFAR-10 dataset but with some modifications. In particular, each class has fewer labeled training examples than in CIFAR-10, but a very large set of unlabeled examples is provided to learn image models prior to supervised training. The primary challenge is to make use of the unlabeled data (which comes from a similar but different distribution from the labeled data) to build a useful prior. We also expect that the higher resolution of this dataset (96x96) will make it a challenging benchmark for developing more scalable unsupervised learning methods.
Overview
10 classes: airplane, bird, car, cat, deer, dog, horse, monkey, ship, truck.
Images are 96x96 pixels, color.
500 training images (10 pre-defined folds), 800 test images per class.
100000 unlabeled images for uns
We use open source human gut microbiome data to learn a microbial "language" model by adapting techniques from Natural Language Processing (NLP). Our microbial "language" model is trained in a self-supervised fashion (i.e., without additional external labels) to capture the interactions among different microbial species and the common compositional patterns in microbial communities. The learned model produces contextualized taxa representations that allow a single bacteria species to be represented differently according to the specific microbial environment it appears in. The model further provides a sample representation by collectively interpreting different bacteria species in the sample and their interactions as a whole. We show that, compared to baseline representations, our sample representation consistently leads to improved performance for multiple prediction tasks including predicting Irritable Bowel Disease (IBD) and diet patterns. Coupled with a simple ensemble strategy, it p...
No additional raw data was collected for this project. All inputs are available publicly. American Gut Project, Halfvarson, and Schirmer raw data are available from the NCBI database (accession numbers PRJEB11419, PRJEB18471, and PRJNA398089, respectively). We used the curated data produced by Tataru and David, 2020.
# Code and data for "Learning a deep language model for microbiomes: the power of large scale unlabeled microbiome data"
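A minimal sketch of the general idea only, not the authors' implementation: treat a microbiome sample as a "sentence" of taxon tokens and obtain contextualized taxon embeddings plus a pooled sample representation from a small transformer encoder. All sizes below are arbitrary placeholders.
import torch
import torch.nn as nn

VOCAB_SIZE, D_MODEL = 1000, 128                    # placeholder taxon vocabulary and embedding size
embed = nn.Embedding(VOCAB_SIZE, D_MODEL)
encoder = nn.TransformerEncoder(
    nn.TransformerEncoderLayer(d_model=D_MODEL, nhead=4, batch_first=True),
    num_layers=2,
)

taxa_ids = torch.randint(0, VOCAB_SIZE, (1, 50))   # one sample as a sequence of 50 taxon tokens
contextual = encoder(embed(taxa_ids))              # (1, 50, 128): context-aware taxon representations
sample_repr = contextual.mean(dim=1)               # (1, 128): whole-sample representation
print(sample_repr.shape)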
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Data and metadata used in "Machine learning reveals the waggle drift’s role in the honey bee dance communication system"
All timestamps are given in ISO 8601 format.
The following files are included:
Berlin2019_waggle_phases.csv, Berlin2021_waggle_phases.csv
Automatic individual detections of waggle phases during our recording periods in 2019 and 2021.
timestamp: Date and time of the detection.
cam_id: Camera ID (0: left side of the hive, 1: right side of the hive).
x_median, y_median: Median position of the bee during the waggle phase (for 2019 given in millimeters after applying a homography, for 2021 in the original image coordinates).
waggle_angle: Body orientation of the bee during the waggle phase in radians (0: oriented to the right, PI / 4: oriented upwards).
Berlin2019_dances.csv
Automatic detections of dance behavior during our recording period in 2019.
dancer_id: Unique ID of the individual bee.
dance_id: Unique ID of the dance.
ts_from, ts_to: Date and time of the beginning and end of the dance.
cam_id: Camera ID (0: left side of the hive, 1: right side of the hive).
median_x, median_y: Median position of the individual during the dance.
feeder_cam_id: ID of the feeder that the bee was detected at prior to the dance.
Berlin2019_followers.csv
Automatic detections of attendance and following behavior, corresponding to the dances in Berlin2019_dances.csv.
dance_id: Unique ID of the dance being attended or followed.
follower_id: Unique ID of the individual attending or following the dance.
ts_from, ts_to: Date and time of the beginning and end of the interaction.
label: “attendance” or “follower”
cam_id: Camera ID (0: left side of the hive, 1: right side of the hive).
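For orientation, a hedged pandas sketch that joins the two files above via dance_id (column names as documented; date parsing assumes the ISO 8601 timestamps noted above):
import pandas as pd
dances = pd.read_csv("Berlin2019_dances.csv", parse_dates=["ts_from", "ts_to"])
followers = pd.read_csv("Berlin2019_followers.csv", parse_dates=["ts_from", "ts_to"])
# Attach dancer and feeder information to each attendance/following interaction.
merged = followers.merge(dances[["dance_id", "dancer_id", "feeder_cam_id"]], on="dance_id", how="left")
print(merged.groupby("dance_id")["follower_id"].nunique().describe())  # distinct followers per dance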
Berlin2019_dances_with_manually_verified_times.csv
A sample of dances from Berlin2019_dances.csv where the exact timestamps have been manually verified to correspond to the beginning of the first and last waggle phase down to a precision of ca. 166 ms (video material was recorded at 6 FPS).
dance_id: Unique ID of the dance.
dancer_id: Unique ID of the dancing individual.
cam_id: Camera ID (0: left side of the hive, 1: right side of the hive).
feeder_cam_id: ID of the feeder that the bee was detected at prior to the dance.
dance_start, dance_end: Manually verified date and times of the beginning and end of the dance.
Berlin2019_dance_classifier_labels.csv
Manually annotated waggle phases or following behavior for our recording season in 2019 that was used to train the dancing and following classifier. Can be merged with the supplied individual detections.
timestamp: Timestamp of the individual frame the behavior was observed in.
frame_id: Unique ID of the video frame the behavior was observed in.
bee_id: Unique ID of the individual bee.
label: One of “nothing”, “waggle”, “follower”
Berlin2019_dance_classifier_unlabeled.csv
Additional unlabeled samples of timestamp and individual ID with the same format as Berlin2019_dance_classifier_labels.csv, but without a label. The data points have been sampled close to detections of our waggle phase classifier, so behaviors related to the waggle dance are likely overrepresented in that sample.
Berlin2021_waggle_phase_classifier_labels.csv
Manually annotated detections of our waggle phase detector (bb_wdd2) that were used to train the neural network filter (bb_wdd_filter) for the 2021 data.
detection_id: Unique ID of the waggle phase.
label: One of “waggle”, “activating”, “ventilating”, “trembling”, “other”. Here, “waggle” denotes a waggle phase, “activating” the shaking signal, and “ventilating” a bee fanning her wings. “Trembling” denotes a tremble dance, but the distinction from the “other” class was often not clear, so “trembling” was merged into “other” for training.
orientation: The body orientation of the bee that triggered the detection in radians (0: facing to the right, PI /4: facing up).
metadata_path: Path to the individual detection in the same directory structure as created by the waggle dance detector.
Berlin2021_waggle_phase_classifier_ground_truth.zip
The output of the waggle dance detector (bb_wdd2) that corresponds to Berlin2021_waggle_phase_classifier_labels.csv and is used for training. The archive includes a directory structure as output by the bb_wdd2 and each directory includes the original image sequence that triggered the detection in an archive and the corresponding metadata. The training code supplied in bb_wdd_filter directly works with this directory structure.
Berlin2019_tracks.zip
Detections and tracks from the recording season in 2019 as produced by our tracking system. As the full data is several terabytes in size, we include the subset of our data here that is relevant for our publication which comprises over 46 million detections. We included tracks for all detected behaviors (dancing, following, attending) including one minute before and after the behavior. We also included all tracks that correspond to the labeled and unlabeled data that was used to train the dance classifier including 30 seconds before and after the data used for training. We grouped the exported data by date to make the handling easier, but to efficiently work with the data, we recommend importing it into an indexable database.
The individual files contain the following columns:
cam_id: Camera ID (0: left side of the hive, 1: right side of the hive).
timestamp: Date and time of the detection.
frame_id: Unique ID of the video frame of the recording from which the detection was extracted.
track_id: Unique ID of an individual track (short motion path from one individual). For longer tracks, the detections can be linked based on the bee_id.
bee_id: Unique ID of the individual bee.
bee_id_confidence: Confidence between 0 and 1 that the bee_id is correct as output by our tracking system.
x_pos_hive, y_pos_hive: Spatial position of the bee in the hive on the side indicated by cam_id. Given in millimeters after applying a homography on the video material.
orientation_hive: Orientation of the bees’ thorax in the hive in radians (0: oriented to the right, PI / 4: oriented upwards).
Berlin2019_feeder_experiment_log.csv
Experiment log for our feeder experiments in 2019.
date: Date given in the format year-month-day.
feeder_cam_id: Numeric ID of the feeder.
coordinates: Longitude and latitude of the feeder. For feeders 1 and 2 this is only given once and held constant. Feeder 3 had varying locations.
time_opened, time_closed: Date and time when the feeder was set up or closed again.
sucrose_solution: Concentration of the sucrose solution given as sugar:water (in terms of weight). On days where feeder 3 was open, the other two feeders offered water without sugar.
Software used to acquire and analyze the data:
bb_pipeline: Tag localization and decoding pipeline
bb_pipeline_models: Pretrained localizer and decoder models for bb_pipeline
bb_binary: Raw detection data storage format
bb_irflash: IR flash system schematics and arduino code
bb_imgacquisition: Recording and network storage
bb_behavior: Database interaction and data (pre)processing, feature extraction
bb_tracking: Tracking of bee detections over time
bb_wdd2: Automatic detection and decoding of honey bee waggle dances
bb_wdd_filter: Machine learning model to improve the accuracy of the waggle dance detector
bb_dance_networks: Detection of dancing and following behavior from trajectories
This data set comprises a labeled training set, validation samples, and testing samples for ordinal quantification. The goal of quantification is not to predict the class label of each individual instance, but the distribution of labels in unlabeled sets of data.
The data is extracted from the McAuley data set of product reviews in Amazon, where the goal is to predict the 5-star rating of each textual review. We have sampled this data according to three protocols that are suited for quantification research.
The first protocol is the artificial prevalence protocol (APP), where all possible distributions of labels are drawn with an equal probability. The second protocol, APP-OQ(50%), is a variant thereof, where only the smoothest 50% of all APP samples are considered. This variant is targeted at ordinal quantification, where classes are ordered and a similarity of neighboring classes can be assumed. 5-star ratings of product reviews lie on an ordinal scale and, hence, pose such an ordinal quantification task. The third protocol considers "real" distributions of labels. These distributions stem from actual products in the original data set.
The data is represented by a RoBERTa embedding. In our experience, logistic regression classifiers work well with this representation.
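Following the remark above, a hedged sketch of fitting such a classifier on the provided embeddings; the file name and the assumption that all non-label columns are embedding dimensions are illustrative only.
import pandas as pd
from sklearn.linear_model import LogisticRegression

train = pd.read_csv("training_data.csv")                   # placeholder file name
X = train.drop(columns=["class_label"]).to_numpy()         # RoBERTa embedding dimensions (assumed layout)
y = train["class_label"].to_numpy()                        # 1..5 star ratings
clf = LogisticRegression(max_iter=1000).fit(X, y)
print(clf.predict_proba(X[:3]))                            # per-class probabilities, useful for quantification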
You can extract our data sets yourself, for instance, if you require a raw textual representation. The original McAuley data set is public already and we provide all of our extraction scripts.
Extraction scripts and experiments: https://github.com/mirkobunse/regularized-oq
Original data by McAuley: https://jmcauley.ucsd.edu/data/amazon/
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Despite the considerable progress in automatic abdominal multi-organ segmentation from CT/MRI scans in recent years, a comprehensive evaluation of the models' capabilities is hampered by the lack of a large-scale benchmark from diverse clinical scenarios. Constrained by the high cost of collecting and labeling 3D medical data, most of the deep learning models to date are driven by datasets with a limited number of organs of interest or samples, which still limits the power of modern deep models and makes it difficult to provide a fully comprehensive and fair estimate of various methods. To mitigate these limitations, we present AMOS, a large-scale, diverse, clinical dataset for abdominal organ segmentation. AMOS provides 500 CT and 100 MRI scans collected from multi-center, multi-vendor, multi-modality, multi-phase, multi-disease patients, each with voxel-level annotations of 15 abdominal organs, providing challenging examples and a test-bed for studying robust segmentation algorithms under diverse targets and scenarios. We further benchmark several state-of-the-art medical segmentation models to evaluate the status of the existing methods on this new challenging dataset. We have made our datasets, benchmark servers, and baselines publicly available, and hope to inspire future research. The paper can be found at https://arxiv.org/pdf/2206.08023.pdf
In addition to providing the labeled 600 CT and MRI scans, we expect to provide 2000 CT and 1200 MRI scans without labels to support more learning tasks (semi-supervised, unsupervised, domain adaptation, ...). The link can be found in:
If you find this dataset useful for your research, please cite:
@article{ji2022amos,
title={AMOS: A Large-Scale Abdominal Multi-Organ Benchmark for Versatile Medical Image Segmentation},
author={Ji, Yuanfeng and Bai, Haotian and Yang, Jie and Ge, Chongjian and Zhu, Ye and Zhang, Ruimao and Li, Zhen and Zhang, Lingyan and Ma, Wanling and Wan, Xiang and others},
journal={arXiv preprint arXiv:2206.08023},
year={2022}
}
Bluesky Journalist Classification Dataset
Dataset Description
This dataset contains Bluesky user profiles for training and evaluating journalist classification models. Created for the CSH Vienna Machine Learning Workshop, it includes comprehensive user data with human-verified labels for binary classification tasks.
Dataset Summary
Total Examples: 1,189
Test Split: 229 labeled examples
Unlabeled Split: 960 unlabeled examples
Languages: Primarily English… See the full description on the dataset page: https://huggingface.co/datasets/ruggsea/bluesky-journalist-classification.
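A hedged loading sketch with the Hugging Face datasets library; the split names follow the summary above ("test" and "unlabeled") but should be verified against the dataset page.
from datasets import load_dataset

ds = load_dataset("ruggsea/bluesky-journalist-classification")
print(ds)                      # shows the available splits and features
labeled = ds["test"]           # 229 human-labeled examples (per the summary above)
unlabeled = ds["unlabeled"]    # 960 unlabeled profiles
print(labeled[0])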
As per our latest research, the global market size for Self-Supervised Learning for Robotic Grasping stood at USD 1.24 billion in 2024, demonstrating remarkable momentum in the field of intelligent automation. The market is projected to expand at a robust CAGR of 23.8% from 2025 to 2033, reaching an estimated USD 10.89 billion by 2033. This significant growth is being driven by the increasing adoption of AI-powered robotics across diverse sectors, the urgent need for enhanced automation in manufacturing and logistics, and advances in machine learning algorithms that enable robots to learn complex manipulation tasks with minimal human intervention.
A principal growth factor for the Self-Supervised Learning for Robotic Grasping market is the rapid evolution of artificial intelligence and deep learning technologies. Self-supervised learning algorithms empower robots to autonomously understand and interpret their environment, improving their ability to grasp and manipulate objects with high precision. Unlike traditional supervised learning, which relies heavily on labeled datasets, self-supervised approaches enable robots to learn from vast amounts of unlabeled data, significantly reducing the time and cost associated with data annotation. This paradigm shift is fostering a new era of scalable and adaptive robotic systems, particularly in industries where object diversity and unpredictability are high, such as logistics, e-commerce, and advanced manufacturing.
Another key driver is the surge in demand for automation across industrial and service sectors. As global supply chains become more complex, there is a pressing need for robotic systems that can handle a wide variety of objects and tasks without extensive reprogramming. Self-supervised learning for robotic grasping addresses this challenge by enabling robots to continuously improve their performance through experience, even in dynamic and unstructured environments. This capability is proving invaluable in applications ranging from automated warehouses and smart factories to healthcare robotics, where robots assist in tasks such as surgical tool handling or patient care. The push for higher efficiency, safety, and operational flexibility is further accelerating market growth.
The proliferation of collaborative robots (cobots) and advancements in sensor technologies are also fueling the expansion of the Self-Supervised Learning for Robotic Grasping market. Modern cobots are designed to work alongside human operators, and self-supervised learning enhances their ability to adapt to new objects and tasks on the fly. Integration with advanced sensors, such as 3D cameras and tactile feedback devices, allows these robots to perceive their environment with unprecedented accuracy. This synergy is opening up new possibilities for deployment in sectors like retail, automotive, and consumer electronics, where customization and rapid adaptation are critical. As companies invest in next-generation automation solutions, the demand for self-supervised learning capabilities is expected to soar.
From a regional perspective, North America and Asia Pacific are leading the adoption of self-supervised robotic grasping technologies, driven by strong investments in R&D, a robust industrial base, and supportive government initiatives. North America, particularly the United States, benefits from a mature robotics ecosystem and close collaboration between academia and industry. Meanwhile, Asia Pacific, led by China, Japan, and South Korea, is experiencing rapid growth due to the presence of major manufacturing hubs and a rising focus on industrial automation. Europe follows closely, with a strong emphasis on innovation and quality standards. Other regions, such as Latin America and the Middle East & Africa, are gradually catching up as automation becomes a strategic imperative for economic development.
The Self-Su
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
This labeled data set is targeted at ordinal quantification. It appears in our research paper "Ordinal Quantification Through Regularization", which we have published at ECML-PKDD 2022. The goal of quantification is not to predict the label of each individual instance, but the distribution of labels in unlabeled sets of data.
With the scripts provided, you can extract the relevant features and labels from the public data set of the FACT Cherenkov telescope. These features are precisely the ones that domain experts from astro-particle physics employ in their analyses. The labels stem from a binning of a continuous energy label, which is common practice in these analyses.
We complement this data set with the indices of data items that appear in each sample of our evaluation. Hence, you can precisely replicate our samples by drawing the specified data items. The indices stem from two evaluation protocols that are well suited for ordinal quantification. To this end, each row in the files app_val_indices.csv, app_tst_indices.csv, app-oq_val_indices.csv, and app-oq_tst_indices.csv represents one sample.
Our first protocol is the artificial prevalence protocol (APP), where all possible distributions of labels are drawn with an equal probability. The second protocol, APP-OQ, is a variant thereof, where only the smoothest 20% of all APP samples are considered. This variant is targeted at ordinal quantification tasks, where classes are ordered and a similarity of neighboring classes can be assumed. The labels of the FACT data lie on an ordinal scale and, hence, pose such an ordinal quantification task.
Usage
You can extract the data fact.csv through the provided script extract-fact.jl, which is conveniently wrapped in a Makefile. The Project.toml and Manifest.toml specify the Julia package dependencies, similar to a requirements file in Python.
Preliminaries: You have to have a working Julia installation. We have used Julia v1.6.5 in our experiments.
Data Extraction: In your terminal, you can call either
make
(recommended) or
curl --fail -o fact.hdf5 https://factdata.app.tu-dortmund.de/dl2/FACT-Tools/v1.1.2/gamma_simulations_facttools_dl2.hdf5
julia --project="." --eval "using Pkg; Pkg.instantiate()"
julia --project="." extract-fact.jl
Outcome: The first row in the resulting `fact.csv` file is the header. The first column, named "class_label", is the ordinal class.
Further Reading
Implementation of our experiments: https://github.com/mirkobunse/ecml22
Original data repository: https://factdata.app.tu-dortmund.de/
Reference analysis by astro-particle physicists: https://github.com/fact-project/open_crab_sample_analysis
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
The prediction of response to drugs before initiating therapy based on transcriptome data is a major challenge. However, identifying effective drug response label data costs time and resources. Available methods often predict poorly and fail to identify robust biomarkers due to the curse of dimensionality: high dimensionality and low sample size. This necessitates the development of predictive models that effectively predict the response to drugs using limited labeled data while remaining interpretable. In this study, we report a novel Hierarchical Graph Random Neural Networks (HiRAND) framework to predict drug response from transcriptome data using few labeled samples and additional unlabeled data. HiRAND integrates information from the gene graph and the sample graph via graph convolutional networks (GCN). The innovation of our model is leveraging a data augmentation strategy to address the dilemma of limited labeled data and using consistency regularization to optimize the prediction consistency of unlabeled data across different data augmentations. The results showed that HiRAND achieved better performance than competing methods in various prediction scenarios, including both simulation data and multiple drug response data sets. Among all 62 drugs, the prediction ability of HiRAND was best for the drug vorinostat. In addition, HiRAND was interpreted to identify the key genes most important to vorinostat response, highlighting critical roles for ribosomal protein-related genes in the response to histone deacetylase inhibition. Our HiRAND could be utilized as an efficient framework for improving drug response prediction performance using few labeled data.
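A hedged sketch of the consistency-regularization idea described above (not the authors' HiRAND code): a supervised loss on labeled samples plus a penalty that makes predictions agree across two random augmentations of each unlabeled sample. Dimensions, the toy predictor, and the noise augmentation are placeholders.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(2000, 256), nn.ReLU(), nn.Linear(256, 2))  # toy predictor
sup_loss = nn.CrossEntropyLoss()

def augment(x, noise=0.1):
    # simple stand-in augmentation: additive Gaussian noise on expression values
    return x + noise * torch.randn_like(x)

x_lab = torch.randn(16, 2000); y_lab = torch.randint(0, 2, (16,))
x_unlab = torch.randn(64, 2000)

logits = model(x_lab)
p1 = model(augment(x_unlab)).softmax(dim=1)
p2 = model(augment(x_unlab)).softmax(dim=1)

loss = sup_loss(logits, y_lab) + 1.0 * ((p1 - p2) ** 2).mean()   # consistency weight is a hyperparameter
loss.backward()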
This software repository contains a Python package, Aegis (Active Evaluator Germane Interactive Selector), that allows us to evaluate a machine learning system's performance (according to a metric such as accuracy) by adaptively sampling trials to label from an unlabeled test set, minimizing the number of labels needed. This includes sample (public) data as well as a simulation script that tests different label-selecting strategies on already-labelled test sets. The software is configured so that users can add their own data and system outputs to test evaluation.
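A heavily hedged illustration of the general label-efficient evaluation idea (not the Aegis API, and omitting the bias correction a real estimator would apply when items are selected non-uniformly): repeatedly query the label of the least-confident remaining item and update a running accuracy tally.
import numpy as np

rng = np.random.default_rng(0)
probs = rng.dirichlet(np.ones(3), size=500)   # toy model: predicted class probabilities per test item
true = rng.integers(0, 3, size=500)           # oracle labels, revealed only when queried

labeled_idx, correct = [], []
for _ in range(50):                           # label budget of 50 items
    conf = probs.max(axis=1)
    conf[labeled_idx] = np.inf                # never re-select an already-labeled item
    i = int(conf.argmin())                    # least-confident remaining item
    labeled_idx.append(i)
    correct.append(int(probs[i].argmax() == true[i]))

print("accuracy estimate after 50 labels:", np.mean(correct))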
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Data sharing restrictions are common in NLP datasets. For example, Twitter policies do not allow sharing of tweet text, though tweet IDs may be shared. The situation is even more common in clinical NLP, where patient health information must be protected, and annotations over health text, when released at all, often require the signing of complex data use agreements. The SemEval-2021 Task 10 framework asks participants to develop semantic annotation systems in the face of data sharing constraints. A participant's goal is to develop an accurate system for a target domain when annotations exist for a related domain but cannot be distributed. Instead of annotated training data, participants are given a model trained on the annotations. Then, given unlabeled target domain data, they are asked to make predictions.
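One common family of approaches in this setting is self-training with pseudo-labels; the sketch below is a generic, hedged illustration only (a scikit-learn classifier stands in for the released source model, and the synthetic data is purely for shape).
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X_source = rng.normal(size=(100, 20))
y_source = (X_source[:, 0] > 0).astype(int)
source_model = LogisticRegression().fit(X_source, y_source)   # stands in for the provided model

X_target = rng.normal(loc=0.3, size=(200, 20))                # unlabeled target-domain data
proba = source_model.predict_proba(X_target)
pseudo = proba.argmax(axis=1)
confident = proba.max(axis=1) > 0.8                           # keep only confident pseudo-labels

adapted = LogisticRegression().fit(X_target[confident], pseudo[confident])
predictions = adapted.predict(X_target)
print(predictions[:10])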
Website: https://machine-learning-for-medical-language.github.io/source-free-domain-adaptation/
CodaLab site: https://competitions.codalab.org/competitions/26152
Github repository: https://github.com/Machine-Learning-for-Medical-Language/source-free-domain-adaptation
Attribution-NonCommercial 4.0 (CC BY-NC 4.0): https://creativecommons.org/licenses/by-nc/4.0/
License information was derived automatically
🌱 GreenHyperSpectra: A multi-source hyperspectral dataset for global vegetation trait prediction 🌱
GreenHyperSpectra is a collection of hyperspectral reflectance data of vegetation from different sources. It is intended for regression machine learning tasks for plant trait prediction with self- and semi-supervised learning.
📁 Configurations
1. GreenHyperSpectra: Unlabeled set
Files: all CSVs under unlabeled/ Contains: Sample ID Spectral bands (400-2450… See the full description on the dataset page: https://huggingface.co/datasets/Avatarr05/GreenHyperSpectra.
Attribution-ShareAlike 4.0 (CC BY-SA 4.0): https://creativecommons.org/licenses/by-sa/4.0/
License information was derived automatically
Cry sense dataset
Overview
The Cry Sense Dataset is a curated audio dataset containing 1,044 infant cry recordings, carefully labeled into 8 emotional and physical categories. The dataset is designed to support research and development in baby cry analysis, especially for machine learning and deep learning applications. Each audio sample reflects a specific condition or need expressed through a baby’s crying, such as hunger, pain, discomfort, or tiredness.
This dataset is particularly suited for training classification models, analyzing audio patterns, and building intelligent systems that interpret infant vocal cues to assist parents, caregivers, and healthcare providers.
Category Number of Files Description
belly_pain 124 files Crying due to stomach pain, colic, or digestive discomfort.
burping 118 files Crying associated with the need to burp after feeding.
cold_hot 115 files Crying due to feeling too cold or hot.
discomfort 138 files Crying from general discomfort (e.g., wet diaper or tight clothes).
hungry 382 files Crying due to hunger; this is the most frequent cry type.
lonely 11 files Crying due to separation from caregivers.
scared 20 files Crying triggered by fear or sudden stimuli.
tired 136 files Crying caused by sleepiness or exhaustion.
Data Characteristics
Total files: 1,044 audio recordings
File format: .wav and other formats (standard high-quality audio)
Structure: Each category stored in its own directory (folder-per-class structure)
Average duration: Varies by recording (most under 10 seconds)
Labeling: All files are pre-labeled and ready for supervised learning tasks
Dataset Purpose
To assist researchers in studying infant vocal behavior and its correlation with physical/emotional needs
To support the development of AI-powered systems for automatic classification of baby cries
To enable the creation of smart childcare applications that respond to specific infant needs through sound interpretation
Potential Uses
Training deep learning models for cry classification and emotion recognition
Enhancing baby monitor devices with intelligent cry analysis features
Supporting pediatric studies related to early detection of abnormal crying patterns
Building parent-assist apps that interpret baby cries in real time
Enabling research in sound signal processing, bioacoustics, and health informatics
Notes
The dataset includes only categorized and labeled files; no raw or unlabeled data are included
The dataset is ideal for classification tasks using CNNs, LSTMs, or hybrid models
File names are consistent and traceable to their labeled categories
All recordings are anonymized and ethically sourced
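A hedged sketch of how the folder-per-class layout described above could be turned into model-ready features (log-mel spectrograms via librosa); the root directory name is a placeholder.
import pathlib
import librosa

ROOT = pathlib.Path("cry_sense_dataset")              # placeholder root directory
features, labels = [], []
for wav_path in ROOT.rglob("*.wav"):
    y, sr = librosa.load(wav_path, sr=16000, mono=True)
    mel = librosa.feature.melspectrogram(y=y, sr=sr, n_mels=64)
    features.append(librosa.power_to_db(mel))         # log-mel spectrogram for a CNN/LSTM classifier
    labels.append(wav_path.parent.name)               # folder name is the class, e.g. "hungry"
print(len(features), "clips,", len(set(labels)), "classes")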
License
This dataset is shared under the CC BY-SA 4.0 license. You are free to use, adapt, and redistribute the dataset for any purpose, including commercial use, provided that proper credit is given and derivative works are shared under the same terms.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Deep learning technology has been used in the medical field to produce devices for clinical practice. Deep learning methods in cytology offer the potential to enhance cancer screening while also providing quantitative, objective, and highly reproducible testing. However, constructing high-accuracy deep learning models necessitates a significant amount of manually labeled data, which takes time. To address this issue, we used the Noisy Student Training technique to create a binary classification deep learning model for cervical cytology screening, which reduces the quantity of labeled data necessary. We used 140 whole-slide images from liquid-based cytology specimens, 50 of which were low-grade squamous intraepithelial lesions, 50 were high-grade squamous intraepithelial lesions, and 40 were negative samples. We extracted 56,996 images from the slides and then used them to train and test the model. We trained the EfficientNet using 2,600 manually labeled images to generate additional pseudo labels for the unlabeled data and then self-trained it within a student-teacher framework. Based on the presence or absence of abnormal cells, the created model was used to classify the images as normal or abnormal. The Grad-CAM approach was used to visualize the image components that contributed to the classification. The model achieved an area under the curve of 0.908, accuracy of 0.873, and F1-score of 0.833 with our test data. We also explored the optimal confidence threshold score and optimal augmentation approaches for low-magnification images. Our model efficiently classified normal and abnormal images at low magnification with high reliability, making it a promising screening tool for cervical cytology.
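A hedged outline of the Noisy Student idea described above (not the authors' training code): a teacher trained on the labeled patches pseudo-labels the unlabeled patches, and a noised student is then trained on the labeled plus confidently pseudo-labeled data. The input size, confidence threshold, and placeholder arrays are illustrative assumptions.
import numpy as np
import tensorflow as tf

def make_model():
    base = tf.keras.applications.EfficientNetB0(include_top=False, weights=None,
                                                input_shape=(224, 224, 3), pooling="avg")
    return tf.keras.Sequential([base,
                                tf.keras.layers.Dropout(0.3),   # noise applied to the student
                                tf.keras.layers.Dense(1, activation="sigmoid")])

teacher = make_model()
teacher.compile(optimizer="adam", loss="binary_crossentropy")
# teacher.fit(x_labeled, y_labeled, ...)                        # the manually labeled patches

x_unlabeled = np.zeros((8, 224, 224, 3), dtype=np.float32)      # placeholder unlabeled batch
pseudo = teacher.predict(x_unlabeled).ravel()
keep = (pseudo < 0.1) | (pseudo > 0.9)                          # confidence threshold on pseudo-labels

student = make_model()
student.compile(optimizer="adam", loss="binary_crossentropy")
# student.fit(np.concatenate([x_labeled, x_unlabeled[keep]]),
#             np.concatenate([y_labeled, (pseudo[keep] > 0.5).astype(np.float32)]), ...)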