ODC Public Domain Dedication and Licence (PDDL) v1.0 (http://www.opendatacommons.org/licenses/pddl/1.0/)
License information was derived automatically
This is a beginner's introduction to machine learning basics. Machine learning is a subfield of artificial intelligence (AI) that focuses on enabling computers to learn and make predictions or decisions without being explicitly programmed. Here are some key concepts and terms to help you get started:
Supervised Learning: In supervised learning, the machine learning algorithm learns from labeled training data. The training data consists of input examples and their corresponding correct output or target values. The algorithm learns to generalize from this data and make predictions or classify new, unseen examples.
Unsupervised Learning: Unsupervised learning involves learning patterns and relationships from unlabeled data. Unlike supervised learning, there are no target values provided. Instead, the algorithm aims to discover inherent structures or clusters in the data.
Training Data and Test Data: Machine learning models require a dataset to learn from. The dataset is typically split into two parts: the training data and the test data. The model learns from the training data, and the test data is used to evaluate its performance and generalization ability.
Features and Labels: In supervised learning, the input examples are often represented by features or attributes. For example, in a spam email classification task, features might include the presence of certain keywords or the length of the email. The corresponding output or target values are called labels, indicating the class or category to which the example belongs (e.g., spam or not spam).
Model Evaluation Metrics: To assess the performance of a machine learning model, various evaluation metrics are used. Common metrics include accuracy (the proportion of correctly predicted examples), precision (the proportion of true positives among all positive predictions), recall (the proportion of actual positives that are correctly identified), and F1 score (the harmonic mean of precision and recall).
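To make these metrics concrete, here is a minimal sketch using scikit-learn (assuming it is installed); the label vectors are toy values made up for illustration:

from sklearn.metrics import accuracy_score, precision_score, recall_score, f1_score

# Toy ground-truth labels and model predictions (1 = spam, 0 = not spam)
y_true = [1, 0, 1, 1, 0, 0, 1, 0]
y_pred = [1, 0, 0, 1, 0, 1, 1, 0]

print("Accuracy: ", accuracy_score(y_true, y_pred))   # correct predictions / all predictions
print("Precision:", precision_score(y_true, y_pred))  # true positives / predicted positives
print("Recall:   ", recall_score(y_true, y_pred))     # true positives / actual positives
print("F1 score: ", f1_score(y_true, y_pred))         # harmonic mean of precision and recall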
Overfitting and Underfitting: Overfitting occurs when a model becomes too complex and learns to memorize the training data instead of generalizing well to unseen examples. On the other hand, underfitting happens when a model is too simple and fails to capture the underlying patterns in the data. Balancing the complexity of the model is crucial to achieve good generalization.
Feature Engineering: Feature engineering involves selecting or creating relevant features that can help improve the performance of a machine learning model. It often requires domain knowledge and creativity to transform raw data into a suitable representation that captures the important information.
Bias and Variance Trade-off: The bias-variance trade-off is a fundamental concept in machine learning. Bias refers to the errors introduced by the model's assumptions and simplifications, while variance refers to the model's sensitivity to small fluctuations in the training data. Reducing bias may increase variance and vice versa. Finding the right balance is important for building a well-performing model.
Supervised Learning Algorithms: There are various supervised learning algorithms, including linear regression, logistic regression, decision trees, random forests, support vector machines (SVM), and neural networks. Each algorithm has its own strengths, weaknesses, and specific use cases.
Unsupervised Learning Algorithms: Unsupervised learning algorithms include clustering algorithms like k-means clustering and hierarchical clustering, dimensionality reduction techniques like principal component analysis (PCA) and t-SNE, and anomaly detection algorithms, among others.
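As a small illustration of the unsupervised algorithms named above, the following sketch clusters synthetic data with k-means after a PCA projection (scikit-learn assumed; the data is generated purely for the example):

import numpy as np
from sklearn.datasets import make_blobs
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

# Synthetic unlabeled data: 300 points around 3 hidden centers in 5 dimensions
X, _ = make_blobs(n_samples=300, centers=3, n_features=5, random_state=0)

# Dimensionality reduction with PCA, then clustering with k-means
X_2d = PCA(n_components=2).fit_transform(X)
clusters = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X_2d)

print(np.bincount(clusters))  # sizes of the discovered clusters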
These concepts provide a starting point for understanding the basics of machine learning. As you delve deeper, you can explore more advanced topics such as deep learning, reinforcement learning, and natural language processing. Remember to practice hands-on with real-world datasets to gain practical experience and further refine your skills.
Attribution 4.0 (CC BY 4.0) (https://creativecommons.org/licenses/by/4.0/)
License information was derived automatically
These four labeled data sets are targeted at ordinal quantification. The goal of quantification is not to predict the label of each individual instance, but the distribution of labels in unlabeled sets of data.
With the scripts provided, you can extract CSV files from the UCI machine learning repository and from OpenML. The ordinal class labels stem from a binning of a continuous regression label.
We complement this data set with the indices of data items that appear in each sample of our evaluation. Hence, you can precisely replicate our samples by drawing the specified data items. The indices stem from two evaluation protocols that are well suited for ordinal quantification. To this end, each row in the files app_val_indices.csv, app_tst_indices.csv, app-oq_val_indices.csv, and app-oq_tst_indices.csv represents one sample.
Our first protocol is the artificial prevalence protocol (APP), where all possible distributions of labels are drawn with an equal probability. The second protocol, APP-OQ, is a variant thereof, where only the smoothest 20% of all APP samples are considered. This variant is targeted at ordinal quantification tasks, where classes are ordered and a similarity of neighboring classes can be assumed.
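The data set itself does not ship quantification code, but the task can be illustrated with a minimal classify-and-count sketch in Python (a toy classifier stands in for a real one; the paper's regularized methods are more sophisticated than this baseline):

import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

def classify_and_count(clf, X_sample, n_classes):
    # Estimate the label distribution of an unlabeled sample by classifying
    # every instance and counting the predictions. This is the simplest
    # quantification baseline (CC), not the paper's regularized method.
    predictions = clf.predict(X_sample)
    counts = np.bincount(predictions, minlength=n_classes)
    return counts / counts.sum()

# Toy stand-in for one ordinal data set: 3 classes, synthetic features
X, y = make_classification(n_samples=1000, n_classes=3, n_informative=6, random_state=0)
clf = LogisticRegression(max_iter=1000).fit(X[:800], y[:800])
print(classify_and_count(clf, X[800:], n_classes=3))  # estimated class prevalences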
Usage
You can extract four CSV files through the provided script extract-oq.jl, which is conveniently wrapped in a Makefile. The Project.toml and Manifest.toml specify the Julia package dependencies, similar to a requirements file in Python.
Preliminaries: You have to have a working Julia installation. We have used Julia v1.6.5 in our experiments.
Data Extraction: In your terminal, you can call either
make
(recommended), or
julia --project="." --eval "using Pkg; Pkg.instantiate()"
julia --project="." extract-oq.jl
Outcome: The first row in each CSV file is the header. The first column, named "class_label", is the ordinal class.
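As a sketch of downstream use in Python (pandas assumed; the file name here is hypothetical, since the actual CSV names depend on the extraction):

import pandas as pd

# Hypothetical file name; extract-oq.jl produces four CSV files of this shape
df = pd.read_csv("training_data.csv")

# The first column, "class_label", holds the ordinal class
print(df["class_label"].value_counts().sort_index())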
Further Reading
Implementation of our experiments: https://github.com/mirkobunse/regularized-oq
Attribution-NonCommercial 4.0 (CC BY-NC 4.0) (https://creativecommons.org/licenses/by-nc/4.0/)
License information was derived automatically
In the field of polymer informatics, utilizing machine learning (ML) techniques to evaluate the glass transition temperature Tg and other properties of polymers has attracted extensive attention. This data-centric approach is much more efficient and practical than laborious experimental measurements when encountering a daunting number of polymer structures. Various ML models have been demonstrated to perform well for Tg prediction. Nevertheless, they are trained on different data sets, using different structure representations, and based on different feature engineering methods. Thus, the critical question arises of how to select a proper ML model that handles Tg prediction with good generalization ability. To provide a fair comparison of different ML techniques and examine the key factors that affect model performance, we carry out a systematic benchmark study by compiling 79 different ML models and training them on a large and diverse data set. The three major components in setting up an ML model are structure representation, feature representation, and the ML algorithm. In terms of polymer structure representation, we consider the polymer monomer, repeat unit, and oligomer with longer chain structure. Based on these, feature representations are calculated, including Morgan fingerprints with or without substructure frequency, RDKit descriptors, molecular embeddings, molecular graphs, etc. Afterward, the obtained feature input is trained using different ML algorithms, such as deep neural networks, convolutional neural networks, random forest, support vector machine, LASSO regression, and Gaussian process regression. We evaluate the performance of these ML models using a holdout test set and an extra unlabeled data set from high-throughput molecular dynamics simulation. We focus especially on the ML models' generalization ability on the unlabeled data set, and also take into consideration the models' sensitivity to polymer topology and molecular weight. This benchmark study provides not only a guideline for the Tg prediction task but also a useful reference for other polymer informatics tasks.
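As one illustration of the fingerprint-based feature representations mentioned above (not the authors' exact pipeline), a Morgan fingerprint can be computed with RDKit roughly as follows; the SMILES string is a made-up stand-in for a repeat unit:

import numpy as np
from rdkit import Chem
from rdkit.Chem import AllChem

# Illustrative SMILES for a styrene-like repeat unit (hypothetical input)
mol = Chem.MolFromSmiles("C(C)c1ccccc1")

# 2048-bit Morgan fingerprint with radius 2, a common default choice
fp = AllChem.GetMorganFingerprintAsBitVect(mol, 2, nBits=2048)
features = np.array(list(fp))  # binary feature vector for any downstream ML model
print(int(features.sum()), "bits set")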
These datasets were used while writing the following work:
Polo, F. M., Ciochetti, I., and Bertolo, E. (2021). Predicting legal proceedings status: approaches based on sequential text data. In Proceedings of the Eighteenth International Conference on Artificial Intelligence and Law, pages 264–265.
Please cite us if you use our datasets in your academic work:
@inproceedings{polo2021predicting,
title={Predicting legal proceedings status: approaches based on sequential text data},
author={Polo, Felipe Maia and Ciochetti, Itamar and Bertolo, Emerson},
booktitle={Proceedings of the Eighteenth International Conference on Artificial Intelligence and Law},
pages={264--265},
year={2021}
}
More details below!
Every legal proceeding in Brazil falls into one of three possible status classes: (i) archived, (ii) active, and (iii) suspended. The status holds at a specific instant in time and may be temporary or permanent. Statuses are decided by the courts to organize their workflow, which in Brazil may reach thousands of simultaneous cases per judge. Developing machine learning models to classify legal proceedings according to their status can assist public and private institutions in managing large portfolios of legal proceedings, providing gains in scale and efficiency.
In this dataset, each proceeding is made up of a sequence of short texts called “motions” written in Portuguese by the courts’ administrative staff. The motions relate to the proceedings, but not necessarily to their legal status.
Our data is composed of two datasets: a dataset of ~3*10^6 unlabeled motions, and a dataset containing 6,449 legal proceedings, each with its own variable number of motions, that have been labeled by lawyers. Among the labeled data, 47.14% of proceedings are classified as archived (class 1), 45.23% as active (class 2), and 7.63% as suspended (class 3).
The datasets we use are representative samples from the first (São Paulo) and third (Rio de Janeiro) most significant state courts in Brazil. State courts handle the widest variety of case types throughout Brazil and are responsible for 80% of the total volume of lawsuits. These datasets are therefore a good representation of a very significant portion of the language and expressions used in Brazilian legal vocabulary.
Regarding the labeled dataset, the key "-1" denotes the most recent motion, "-2" the second most recent, and so on.
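As a hedged baseline sketch (the file and column names here are hypothetical; the actual dataset layout may differ), one could concatenate each proceeding's motions and train a simple text classifier:

import pandas as pd
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline

# Hypothetical loading: assume each row holds one proceeding's motions
# joined into a single string, plus its status label (1, 2, or 3)
df = pd.read_csv("labeled_proceedings.csv")  # hypothetical file name
texts, labels = df["motions_text"], df["status"]

X_train, X_test, y_train, y_test = train_test_split(
    texts, labels, test_size=0.2, stratify=labels, random_state=0)

# TF-IDF over the Portuguese motions + multinomial logistic regression
model = make_pipeline(TfidfVectorizer(max_features=50000),
                      LogisticRegression(max_iter=1000))
model.fit(X_train, y_train)
print("held-out accuracy:", model.score(X_test, y_test))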
We would like to thank Ana Carolina Domingues Borges, Andrews Adriani Angeli, and Nathália Caroline Juarez Delgado from Tikal Tech for helping us to obtain the datasets. This work would not be possible without their efforts.
Can you develop good machine learning classifiers for text sequences? :)
STL-10 is an image recognition dataset inspired by the CIFAR-10 dataset, with some improvements. With a corpus of 100,000 unlabeled images and 500 labeled training images per class, this dataset is well suited for developing unsupervised feature learning, deep learning, and self-taught learning algorithms. Unlike CIFAR-10, the dataset has a higher resolution, which makes it a challenging benchmark for developing more scalable unsupervised learning methods.
Data overview:
The original data source recommends the following standardized testing protocol for reporting results: perform unsupervised training on the unlabeled data; perform supervised training on the labeled data using the 10 pre-defined folds of 100 examples each from the training data; and report the average accuracy across folds on the full test set.
Original data source and banner image: https://cs.stanford.edu/~acoates/stl10/
Please cite the following reference when using this dataset:
Adam Coates, Honglak Lee, and Andrew Y. Ng. An Analysis of Single Layer Networks in Unsupervised Feature Learning. AISTATS, 2011.
Attribution 4.0 (CC BY 4.0) (https://creativecommons.org/licenses/by/4.0/)
License information was derived automatically
To analyse large numbers of texts, social science researchers are increasingly confronting the challenge of text classification. When manual labeling is not possible and researchers have to find automatized ways to classify texts, computer science provides a useful toolbox of machine-learning methods whose performance remains understudied in the social sciences. In this article, we compare the performance of the most widely used text classifiers by applying them to a typical research scenario in social science research: a relatively small labeled dataset with infrequent occurrence of categories of interest, which is a part of a large unlabeled dataset. As an example case, we look at Twitter communication regarding climate change, a topic of increasing scholarly interest in interdisciplinary social science research. Using a novel dataset including 5,750 tweets from various international organizations regarding the highly ambiguous concept of climate change, we evaluate the performance of methods in automatically classifying tweets based on whether they are about climate change or not. In this context, we highlight two main findings. First, supervised machine-learning methods perform better than state-of-the-art lexicons, in particular as class balance increases. Second, traditional machine-learning methods, such as logistic regression and random forest, perform similarly to sophisticated deep-learning methods, whilst requiring much less training time and computational resources. The results have important implications for the analysis of short texts in social science research.
Attribution 4.0 (CC BY 4.0) (https://creativecommons.org/licenses/by/4.0/)
License information was derived automatically
This data repository contains the OCT images and binary annotations for segmentation of retinal tissue using deep learning. To use, please refer to the Github repository https://github.com/theislab/DeepRT.
#######
Access to large, annotated samples represents a considerable challenge for training accurate deep-learning models in medical imaging. While current leading-edge transfer learning from pre-trained models can help with cases lacking data, it limits design choices and generally results in the use of unnecessarily large models. We propose a novel, self-supervised training scheme for obtaining high-quality, pre-trained networks from unlabeled, cross-modal medical imaging data, which will allow for creating accurate and efficient models. We demonstrate this by accurately predicting optical coherence tomography (OCT)-based retinal thickness measurements from simple infrared (IR) fundus images. Subsequently, learned representations outperformed advanced classifiers on a separate diabetic retinopathy classification task in a scenario of scarce training data. Our cross-modal, three-staged scheme effectively replaced 26,343 diabetic retinopathy annotations with 1,009 semantic segmentations on OCT and reached the same classification accuracy using only 25% of fundus images, without any drawbacks, since OCT is not required for predictions. We expect this concept will also apply to other multimodal clinical data (imaging, health records, and genomics data) and be applicable to corresponding sample-starved learning problems.
#######
According to our latest research, the global self-supervised learning market size reached USD 10.2 billion in 2024, demonstrating rapid adoption across multiple sectors. The market is set to expand at a strong CAGR of 33.1% from 2025 to 2033, propelled by the growing need for advanced artificial intelligence solutions that minimize dependency on labeled data. By 2033, the market is forecasted to achieve an impressive size of USD 117.2 billion, underscoring the transformative potential of self-supervised learning in revolutionizing data-driven decision-making and automation across industries. This growth trajectory is supported by increasing investments in AI research, the proliferation of big data, and the urgent demand for scalable machine learning models.
The primary growth driver for the self-supervised learning market is the exponential surge in data generation across industries and the corresponding need for efficient data labeling techniques. Traditional supervised learning requires vast amounts of labeled data, which is both time-consuming and expensive to annotate. Self-supervised learning, by contrast, leverages unlabeled data to train models, significantly reducing operational costs and accelerating the deployment of AI systems. This paradigm shift is particularly critical in sectors like healthcare, finance, and autonomous vehicles, where large datasets are abundant but labeled examples are scarce. As organizations seek to unlock value from their data assets, self-supervised learning is emerging as a cornerstone technology, enabling more robust, scalable, and generalizable AI applications.
Another significant factor fueling market expansion is the rapid advancement in computing infrastructure and algorithmic innovation. The availability of high-performance hardware, such as GPUs and TPUs, coupled with breakthroughs in neural network architectures, has made it feasible to train complex self-supervised models on massive datasets. Additionally, the open-source movement and collaborative research have democratized access to state-of-the-art self-supervised learning frameworks, fostering innovation and lowering barriers to entry for enterprises of all sizes. These technological advancements are empowering organizations to experiment with self-supervised learning at scale, driving adoption across a wide range of applications, from natural language processing to computer vision and robotics.
The market is also benefiting from the growing emphasis on ethical AI and data privacy. Self-supervised learning methods, which minimize the need for sensitive labeled data, are increasingly being adopted to address privacy concerns and regulatory compliance requirements. This is particularly relevant in regions with stringent data protection regulations, such as the European Union. Furthermore, the ability of self-supervised learning to generalize across domains and tasks is enabling businesses to build more resilient and adaptable AI systems, further accelerating market growth. The convergence of these factors is positioning self-supervised learning as a key enabler of next-generation AI solutions.
Transfer Learning is emerging as a pivotal technique in the realm of self-supervised learning, offering a bridge between different domains and tasks. By leveraging knowledge from pre-trained models, transfer learning allows for the adaptation of AI systems to new, related tasks with minimal additional data. This approach is particularly beneficial in scenarios where labeled data is scarce, enabling models to generalize better and learn more efficiently. The integration of transfer learning into self-supervised frameworks is enhancing the ability of AI systems to tackle complex problems across various industries, from healthcare diagnostics to autonomous driving. As the demand for versatile and efficient AI solutions grows, transfer learning is set to play a crucial role in the evolution of self-supervised learning technologies.
From a regional perspective, North America currently leads the self-supervised learning market, accounting for the largest share due to its robust AI research ecosystem, significant investments from technology giants, and early adoption across verticals. However, Asia Pacific is projected to witness the fastest growth over the forecast period, driven by the rapid digital tran
Apache License, v2.0 (https://www.apache.org/licenses/LICENSE-2.0)
License information was derived automatically
This dataset, titled "Network Anomaly Dataset," is designed for the development and evaluation of machine learning models focused on network anomaly detection. The dataset is available in two versions: a labeled version where each instance is marked as "Anomaly" or "Normal," and an unlabeled version that can be used for unsupervised learning techniques.
Dataset Features:
- Throughput: The amount of data successfully transmitted over a network in a given period.
- Congestion: The degree of network traffic load, potentially leading to delays or packet loss.
- Packet Loss: The percentage of packets that fail to reach their destination, indicative of network issues.
- Latency: The time taken for data to travel from the source to the destination, crucial for time-sensitive applications.
- Jitter: The variation in packet arrival times, affecting the quality of real-time communications.
Applications:
- Supervised Learning: Use the labeled dataset to train and evaluate models such as Random Forest, SVM, and Logistic Regression for anomaly detection.
- Unsupervised Learning: Apply techniques like clustering and change point detection on the unlabeled dataset to discover hidden patterns and anomalies.
This dataset is ideal for practitioners and researchers aiming to explore network security, develop robust anomaly detection models, or conduct comparative analysis between supervised and unsupervised learning methods.
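For instance, a minimal unsupervised sketch on the unlabeled version might look as follows (the file and column names are assumptions based on the feature list above):

import pandas as pd
from sklearn.ensemble import IsolationForest

# Hypothetical loading of the unlabeled version; column names follow
# the feature list described above
df = pd.read_csv("network_anomaly_unlabeled.csv")  # hypothetical file name
features = df[["Throughput", "Congestion", "Packet Loss", "Latency", "Jitter"]]

# Unsupervised anomaly detection: -1 marks suspected anomalies
iso = IsolationForest(contamination=0.05, random_state=0)
df["anomaly_flag"] = iso.fit_predict(features)
print(df["anomaly_flag"].value_counts())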
Attribution 4.0 (CC BY 4.0) (https://creativecommons.org/licenses/by/4.0/)
License information was derived automatically
Reaching the performance of fully supervised learning with unlabeled data and only one labeled sample per class would be ideal for deep learning applications. We demonstrate for the first time the potential of building one-shot semi-supervised (BOSS) learning on CIFAR-10 and SVHN that attains test accuracies comparable to fully supervised learning. Our method combines class prototype refining, class balancing, and self-training. A good prototype choice is essential, and we propose a technique for obtaining iconic examples. In addition, we demonstrate that class balancing methods substantially improve accuracy in semi-supervised learning, to levels that allow self-training to reach fully supervised performance. Our experiments demonstrate the value of computing and analyzing test accuracies for every class, rather than only a total test accuracy. We show that our BOSS methodology can obtain total test accuracies of up to 95% on CIFAR-10 with only one labeled sample per class (compared to 94.5% for fully supervised). Similarly, on SVHN it obtains a test accuracy of 97.8%, compared to 98.27% for fully supervised. These rigorous empirical evaluations provide evidence that labeling large datasets is not necessary for training deep neural networks. Our code is available at https://github.com/lnsmith54/BOSS to facilitate replication.
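The paper's code is linked above; for orientation only, here is a generic self-training loop in the spirit of the method (a textbook sketch, not the BOSS implementation, which additionally uses prototype refining and class balancing):

import numpy as np

def self_train(model, X_labeled, y_labeled, X_unlabeled, threshold=0.95, rounds=5):
    # Generic self-training: repeatedly add high-confidence pseudo-labels.
    # Inputs are NumPy arrays; model is any scikit-learn-style classifier
    # exposing fit() and predict_proba().
    X, y = X_labeled.copy(), y_labeled.copy()
    pool = X_unlabeled.copy()
    for _ in range(rounds):
        model.fit(X, y)
        if len(pool) == 0:
            break
        proba = model.predict_proba(pool)
        confident = proba.max(axis=1) >= threshold
        if not confident.any():
            break
        # Move confident predictions into the labeled set as pseudo-labels
        X = np.vstack([X, pool[confident]])
        y = np.concatenate([y, proba[confident].argmax(axis=1)])
        pool = pool[~confident]
    return model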
Attribution 4.0 (CC BY 4.0) (https://creativecommons.org/licenses/by/4.0/)
License information was derived automatically
Data and metadata used in "Machine learning reveals the waggle drift’s role in the honey bee dance communication system"
All timestamps are given in ISO 8601 format.
The following files are included:
Berlin2019_waggle_phases.csv, Berlin2021_waggle_phases.csv
Automatic individual detections of waggle phases during our recording periods in 2019 and 2021.
timestamp: Date and time of the detection.
cam_id: Camera ID (0: left side of the hive, 1: right side of the hive).
x_median, y_median: Median position of the bee during the waggle phase (for 2019 given in millimeters after applying a homography, for 2021 in the original image coordinates).
waggle_angle: Body orientation of the bee during the waggle phase in radians (0: oriented to the right, PI / 2: oriented upwards).
Berlin2019_dances.csv
Automatic detections of dance behavior during our recording period in 2019.
dancer_id: Unique ID of the individual bee.
dance_id: Unique ID of the dance.
ts_from, ts_to: Date and time of the beginning and end of the dance.
cam_id: Camera ID (0: left side of the hive, 1: right side of the hive).
median_x, median_y: Median position of the individual during the dance.
feeder_cam_id: ID of the feeder that the bee was detected at prior to the dance.
Berlin2019_followers.csv
Automatic detections of attendance and following behavior, corresponding to the dances in Berlin2019_dances.csv.
dance_id: Unique ID of the dance being attended or followed.
follower_id: Unique ID of the individual attending or following the dance.
ts_from, ts_to: Date and time of the beginning and end of the interaction.
label: “attendance” or “follower”
cam_id: Camera ID (0: left side of the hive, 1: right side of the hive).
Berlin2019_dances_with_manually_verified_times.csv
A sample of dances from Berlin2019_dances.csv where the exact timestamps have been manually verified to correspond to the beginning of the first and last waggle phase down to a precision of ca. 166 ms (video material was recorded at 6 FPS).
dance_id: Unique ID of the dance.
dancer_id: Unique ID of the dancing individual.
cam_id: Camera ID (0: left side of the hive, 1: right side of the hive).
feeder_cam_id: ID of the feeder that the bee was detected at prior to the dance.
dance_start, dance_end: Manually verified date and times of the beginning and end of the dance.
Berlin2019_dance_classifier_labels.csv
Manually annotated waggle phases or following behavior for our recording season in 2019 that was used to train the dancing and following classifier. Can be merged with the supplied individual detections.
timestamp: Timestamp of the individual frame the behavior was observed in.
frame_id: Unique ID of the video frame the behavior was observed in.
bee_id: Unique ID of the individual bee.
label: One of “nothing”, “waggle”, “follower”
Berlin2019_dance_classifier_unlabeled.csv
Additional unlabeled samples of timestamp and individual ID with the same format as Berlin2019_dance_classifier_labels.csv, but without a label. The data points have been sampled close to detections of our waggle phase classifier, so behaviors related to the waggle dance are likely overrepresented in that sample.
Berlin2021_waggle_phase_classifier_labels.csv
Manually annotated detections of our waggle phase detector (bb_wdd2) that were used to train the neural network filter (bb_wdd_filter) for the 2021 data.
detection_id: Unique ID of the waggle phase.
label: One of “waggle”, “activating”, “ventilating”, “trembling”, “other”, where “waggle” denotes a waggle phase, “activating” the shaking signal, and “ventilating” a bee fanning her wings. “trembling” denotes a tremble dance, but the distinction from the “other” class was often not clear, so “trembling” was merged into “other” for training.
orientation: The body orientation of the bee that triggered the detection in radians (0: facing to the right, PI / 2: facing up).
metadata_path: Path to the individual detection in the same directory structure as created by the waggle dance detector.
Berlin2021_waggle_phase_classifier_ground_truth.zip
The output of the waggle dance detector (bb_wdd2) that corresponds to Berlin2021_waggle_phase_classifier_labels.csv and is used for training. The archive includes a directory structure as output by the bb_wdd2 and each directory includes the original image sequence that triggered the detection in an archive and the corresponding metadata. The training code supplied in bb_wdd_filter directly works with this directory structure.
Berlin2019_tracks.zip
Detections and tracks from the recording season in 2019 as produced by our tracking system. As the full data is several terabytes in size, we include the subset of our data here that is relevant for our publication which comprises over 46 million detections. We included tracks for all detected behaviors (dancing, following, attending) including one minute before and after the behavior. We also included all tracks that correspond to the labeled and unlabeled data that was used to train the dance classifier including 30 seconds before and after the data used for training. We grouped the exported data by date to make the handling easier, but to efficiently work with the data, we recommend importing it into an indexable database.
The individual files contain the following columns:
cam_id: Camera ID (0: left side of the hive, 1: right side of the hive).
timestamp: Date and time of the detection.
frame_id: Unique ID of the video frame of the recording from which the detection was extracted.
track_id: Unique ID of an individual track (short motion path from one individual). For longer tracks, the detections can be linked based on the bee_id.
bee_id: Unique ID of the individual bee.
bee_id_confidence: Confidence between 0 and 1 that the bee_id is correct as output by our tracking system.
x_pos_hive, y_pos_hive: Spatial position of the bee in the hive on the side indicated by cam_id. Given in millimeters after applying a homography on the video material.
orientation_hive: Orientation of the bee's thorax in the hive in radians (0: oriented to the right, PI / 2: oriented upwards).
Berlin2019_feeder_experiment_log.csv
Experiment log for our feeder experiments in 2019.
date: Date given in the format year-month-day.
feeder_cam_id: Numeric ID of the feeder.
coordinates: Longitude and latitude of the feeder. For feeders 1 and 2 this is only given once and held constant. Feeder 3 had varying locations.
time_opened, time_closed: Date and time when the feeder was set up or closed again.
sucrose_solution: Concentration of the sucrose solution given as sugar:water (in terms of weight). On days where feeder 3 was open, the other two feeders offered water without sugar.
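A minimal pandas sketch for combining these files via their shared keys (under the column layout described above):

import pandas as pd

# Link each dance to the individuals that attended or followed it
dances = pd.read_csv("Berlin2019_dances.csv")
followers = pd.read_csv("Berlin2019_followers.csv")
merged = followers.merge(dances, on="dance_id", suffixes=("_follow", "_dance"))

# Count attendance vs. following interactions across all dances
print(merged.groupby("label").size())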
Software used to acquire and analyze the data:
bb_pipeline: Tag localization and decoding pipeline
bb_pipeline_models: Pretrained localizer and decoder models for bb_pipeline
bb_binary: Raw detection data storage format
bb_irflash: IR flash system schematics and arduino code
bb_imgacquisition: Recording and network storage
bb_behavior: Database interaction and data (pre)processing, feature extraction
bb_tracking: Tracking of bee detections over time
bb_wdd2: Automatic detection and decoding of honey bee waggle dances
bb_wdd_filter: Machine learning model to improve the accuracy of the waggle dance detector
bb_dance_networks: Detection of dancing and following behavior from trajectories
We use open source human gut microbiome data to learn a microbial “language” model by adapting techniques from Natural Language Processing (NLP). Our microbial “language” model is trained in a self-supervised fashion (i.e., without additional external labels) to capture the interactions among different microbial taxa and the common compositional patterns in microbial communities. The learned model produces contextualized taxon representations that allow a single microbial taxon to be represented differently according to the specific microbial environment in which it appears. The model further provides a sample representation by collectively interpreting different microbial taxa in the sample and their interactions as a whole. We demonstrate that, while our sample representation performs comparably to baseline models in in-domain prediction tasks such as predicting Irritable Bowel Disease (IBD) and diet patterns, it significantly outperforms them when generalizing to test data from indep...

No additional raw data was collected for this project. All inputs are available publicly. American Gut Project, Halfvarson, and Schirmer raw data are available from the NCBI database (accession numbers PRJEB11419, PRJEB18471, and PRJNA398089, respectively). We used the curated data produced by Tataru and David, 2020.

Code and data for "Learning a deep language model for microbiomes: the power of large scale unlabeled microbiome data"
This data set comprises a labeled training set, validation samples, and testing samples for ordinal quantification. It appears in our research paper "Ordinal Quantification Through Regularization", which we have published at ECML-PKDD 2022.
The data is extracted from the McAuley data set of product reviews in Amazon, where the goal is to predict the 5-star rating of each textual review. We have sampled this data according to two protocols that are suited for quantification research. The goal of quantification is not to predict the star rating of each individual instance, but the distribution of ratings in sets of textual reviews. More generally speaking, quantification aims at estimating the distribution of labels in unlabeled samples of data.
The first protocol is the artificial prevalence protocol (APP), where all possible distributions of labels are drawn with an equal probability. The second protocol, APP-OQ, is a variant thereof, where only the smoothest 20% of all APP samples are considered. This variant is targeted at ordinal quantification, where classes are ordered and a similarity of neighboring classes can be assumed. 5-star ratings of product reviews lie on an ordinal scale and, hence, pose such an ordinal quantification task.
This data set comprises two representations of the McAuley data. The first representation consists of TF-IDF features. The second representation is a RoBERTa embedding. This second representation is dense, while the first is sparse. In our experience, logistic regression classifiers work well with both representations. RoBERTa embeddings yield more accurate predictors than the TF-IDF features.
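In that spirit, here is a hedged sketch of such a classifier on placeholder features (the shapes are illustrative; the shipped TF-IDF or RoBERTa matrices would take the place of X_train):

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Placeholder matrices standing in for the TF-IDF or RoBERTa features
X_train = rng.normal(size=(1000, 64))
y_train = rng.integers(0, 5, size=1000)   # five ordinal classes: 1-5 star ratings
X_sample = rng.normal(size=(200, 64))

# Logistic regression works well with both representations (see above)
clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# Quantification view: estimate the rating distribution of an unlabeled sample
prevalences = np.bincount(clf.predict(X_sample), minlength=5) / len(X_sample)
print(prevalences)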
You can extract our data sets yourself, for instance, if you require a raw textual representation. The original McAuley data set is public already and we provide all of our extraction scripts.
Extraction scripts and experiments: https://github.com/mirkobunse/ecml22
Original data by McAuley: https://jmcauley.ucsd.edu/data/amazon/
The prediction of response to drugs before initiating therapy, based on transcriptome data, is a major challenge. However, identifying effective drug response label data costs time and resources. Available methods often predict poorly and fail to identify robust biomarkers due to the curse of dimensionality: high dimensionality and low sample size. This necessitates the development of predictive models that can effectively predict drug response from limited labeled data while remaining interpretable. In this study, we report a novel Hierarchical Graph Random Neural Networks (HiRAND) framework to predict drug response using transcriptome data with few labeled samples and additional unlabeled data. HiRAND integrates the information of the gene graph and the sample graph with a graph convolutional network (GCN). The innovation of our model is leveraging a data augmentation strategy to solve the dilemma of limited labeled data and using consistency regularization to optimize the prediction consistency of unlabeled data across different data augmentations. The results showed that HiRAND achieved better performance than competitive methods in various prediction scenarios, including both simulation data and multiple drug response data sets. HiRAND's predictions were best for the drug vorinostat across all 62 drugs. In addition, HiRAND was interpreted to identify the key genes most important to vorinostat response, highlighting critical roles for ribosomal protein-related genes in the response to histone deacetylase inhibition. HiRAND can thus serve as an efficient framework for improving drug response prediction using few labeled data.
No license specified (https://academictorrents.com/nolicensespecified)
The STL-10 dataset is an image recognition dataset for developing unsupervised feature learning, deep learning, and self-taught learning algorithms. It is inspired by the CIFAR-10 dataset but with some modifications. In particular, each class has fewer labeled training examples than in CIFAR-10, but a very large set of unlabeled examples is provided to learn image models prior to supervised training. The primary challenge is to make use of the unlabeled data (which comes from a similar but different distribution from the labeled data) to build a useful prior. We also expect that the higher resolution of this dataset (96x96) will make it a challenging benchmark for developing more scalable unsupervised learning methods.
Overview:
- 10 classes: airplane, bird, car, cat, deer, dog, horse, monkey, ship, truck.
- Images are 96x96 pixels, color.
- 500 training images (10 pre-defined folds) and 800 test images per class.
- 100,000 unlabeled images for unsupervised learning.
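For convenience, the dataset can also be loaded through torchvision (assuming PyTorch and torchvision are installed); this is one access path, not the official distribution:

import torchvision
import torchvision.transforms as T

# Labeled training split (500 images per class) and the 100,000-image
# unlabeled split used for unsupervised pre-training
train = torchvision.datasets.STL10(root="data", split="train",
                                   download=True, transform=T.ToTensor())
unlabeled = torchvision.datasets.STL10(root="data", split="unlabeled",
                                       download=True, transform=T.ToTensor())
print(len(train), len(unlabeled))  # 5000 and 100000 images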
Dataset Source: https://www.aicrowd.com/challenges/data-purchasing-challenge-2022
🕵️ Introduction
Data for machine learning tasks usually does not come for free but has to be purchased. The costs and benefits of data have to be weighed against each other. This is challenging. First, data usually has combinatorial value. For instance, different observations might complement or substitute each other for a given machine learning task. In such cases, the decision to purchase one group of observations has to be made conditional on the decision to purchase another group of observations. If these relationships are high-dimensional, finding the optimal bundle becomes computationally hard. Second, data comes at different levels of quality, for instance, with different levels of noise. Third, data has to be acquired under the assumption of being valuable out-of-sample. Distribution shifts have to be anticipated.
In this competition, you face these data purchasing challenges in the context of a multi-label image classification task in a quality control setting.
📑 Problem Statement
In short: You have to classify images. Some images in your training set are labelled but most of them aren't. How do you decide which images to label if you have a limited budget to do so?
In more detail: You face a multi-label image classification task. The dataset consists of synthetically generated images of painted metal sheets. A classifier is meant to predict whether the sheets have production damages and if so which ones. You have access to a set of images, a subset of which are labelled with respect to production damages. Because labeling is costly and your budget is limited, you have to decide for which of the unlabelled images labels should be purchased in order to maximize prediction accuracy.
Each image has a 4-dimensional label representing the presence or absence of ['scratch_small', 'scratch_large', 'dent_small', 'dent_large'] in the image.
You are required to submit code, which can be run in three different phases:
Pre-Training Phase
In the Pre-Training Phase, your code will have access to 5,000 labelled images on a multi-label image classification task with 4 classes. It is up to you how you wish to use this data. For instance, you might want to pre-train a classification model.
Purchase Phase
In the Purchase Phase, your code, after going through the Pre-Training Phase, will have access to an unlabelled dataset of 10,000 images. You will have a budget of 3,000 label purchases that you can freely use across any of the images in the unlabelled dataset to obtain their labels. You are tasked with designing your own approach for selecting the optimal subset of 3,000 images in the unlabelled dataset that would help you optimize your model's performance on the prediction task. You can then continue training your model (which has been pre-trained in the Pre-Training Phase) using the newly purchased labels.
Prediction Phase
In the Prediction Phase, your code will have access to a test set of 3,000 unlabelled images, for which you have to generate and submit predictions. Your submission will be evaluated based on the performance of your predictions on this test set. Your code will have access to a node with 4 CPUs, 16 GB RAM, 1 NVIDIA T4 GPU, and 3 hours of runtime per submission. In the final round of this challenge, your code will be evaluated across multiple budget-runtime constraints.
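As orientation for the Purchase Phase, here is an uncertainty-sampling sketch (the probability shapes and interface are assumptions; this is an illustrative baseline, not a reference solution):

import numpy as np

def select_purchases(proba, budget=3000):
    # proba: predicted probabilities of shape (n_images, 4), one column per
    # damage type, produced by a model from the Pre-Training Phase.
    # Uncertainty sampling: buy labels for images whose most ambiguous
    # damage probability lies closest to the 0.5 decision boundary.
    uncertainty = np.abs(proba - 0.5).min(axis=1)
    return np.argsort(uncertainty)[:budget]  # indices of images to label

# Toy usage with random probabilities in place of real model outputs
rng = np.random.default_rng(0)
picks = select_purchases(rng.uniform(size=(10000, 4)), budget=3000)
print(len(picks))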
💾 Dataset
The datasets for this challenge can be accessed in the Resources Section.
training.tar.gz: The training set containing 5,000 images with their associated labels. During your local experiments you are allowed to use the data as you please.
unlabelled.tar.gz: The unlabelled set containing 10,000 images, and their associated labels. During your local experiments you are only allowed to access the labels through the provided purchase_label function.
validation.tar.gz: The validation set containing 3,000 images, and their associated labels. During your local experiments you are only allowed to use the labels of the validation set to measure the performance of your models and experiments.
debug.tar.gz: A small set of 100 images with their associated labels, that you can use for integration testing, and for trying out the provided starter kit.
NOTE: While you run your local experiments on this dataset, your submissions will be evaluated on a dataset which might be sampled from a different distribution, and is not the same as this publicly released version.
👥 Participation
🖊 Evaluation Criteria
The challenge will use the Accuracy Score, Hamming Loss, and the Exact Match Ratio during evaluation. The primary score will be the Accuracy Score.
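These metrics are available in scikit-learn; here is a small sketch on toy multi-label arrays (the exact evaluation configuration used by the challenge is an assumption):

import numpy as np
from sklearn.metrics import accuracy_score, hamming_loss

# Toy multi-label ground truth and predictions for the 4 damage classes
y_true = np.array([[1, 0, 0, 1], [0, 1, 0, 0], [1, 1, 0, 0]])
y_pred = np.array([[1, 0, 0, 0], [0, 1, 0, 0], [1, 1, 0, 0]])

print("Exact match ratio:", accuracy_score(y_true, y_pred))  # all 4 labels must match
print("Hamming loss:     ", hamming_loss(y_true, y_pred))    # fraction of wrong labels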
📅 Timeline
This challenge has two Rounds.
Round 1 : Feb 4th – Feb 28th, 2022
The first round submissions will be evaluated based on one budget-compute constraint pair (max. of 3,00...
Attribution 4.0 (CC BY 4.0) (https://creativecommons.org/licenses/by/4.0/)
License information was derived automatically
Despite the considerable progress in automatic abdominal multi-organ segmentation from CT/MRI scans in recent years, a comprehensive evaluation of the models' capabilities is hampered by the lack of a large-scale benchmark from diverse clinical scenarios. Constrained by the high cost of collecting and labeling 3D medical data, most deep learning models to date are driven by datasets with a limited number of organs of interest or samples, which still limits the power of modern deep models and makes it difficult to provide a fully comprehensive and fair estimate of various methods. To mitigate these limitations, we present AMOS, a large-scale, diverse, clinical dataset for abdominal organ segmentation. AMOS provides 500 CT and 100 MRI scans collected from multi-center, multi-vendor, multi-modality, multi-phase, multi-disease patients, each with voxel-level annotations of 15 abdominal organs, providing challenging examples and a test-bed for studying robust segmentation algorithms under diverse targets and scenarios. We further benchmark several state-of-the-art medical segmentation models to evaluate the status of the existing methods on this new challenging dataset. We have made our datasets, benchmark servers, and baselines publicly available, and hope to inspire future research. The paper can be found at https://arxiv.org/pdf/2206.08023.pdf
In addition to providing the labeled 600 CT and MRI scans, we expect to provide 2000 CT and 1200 MRI scans without labels to support more learning tasks (semi-supervised, un-supervised, domain adaption, ...). The link can be found in:
If you find this dataset useful for your research, please cite:
@article{ji2022amos,
title={AMOS: A Large-Scale Abdominal Multi-Organ Benchmark for Versatile Medical Image Segmentation},
author={Ji, Yuanfeng and Bai, Haotian and Yang, Jie and Ge, Chongjian and Zhu, Ye and Zhang, Ruimao and Li, Zhen and Zhang, Lingyan and Ma, Wanling and Wan, Xiang and others},
journal={arXiv preprint arXiv:2206.08023},
year={2022}
}
As per our latest research, the global market size for Self-Supervised Learning for Robotic Grasping stood at USD 1.24 billion in 2024, demonstrating remarkable momentum in the field of intelligent automation. The market is projected to expand at a robust CAGR of 23.8% from 2025 to 2033, reaching an estimated USD 10.89 billion by 2033. This significant growth is being driven by the increasing adoption of AI-powered robotics across diverse sectors, the urgent need for enhanced automation in manufacturing and logistics, and advances in machine learning algorithms that enable robots to learn complex manipulation tasks with minimal human intervention.
A principal growth factor for the Self-Supervised Learning for Robotic Grasping market is the rapid evolution of artificial intelligence and deep learning technologies. Self-supervised learning algorithms empower robots to autonomously understand and interpret their environment, improving their ability to grasp and manipulate objects with high precision. Unlike traditional supervised learning, which relies heavily on labeled datasets, self-supervised approaches enable robots to learn from vast amounts of unlabeled data, significantly reducing the time and cost associated with data annotation. This paradigm shift is fostering a new era of scalable and adaptive robotic systems, particularly in industries where object diversity and unpredictability are high, such as logistics, e-commerce, and advanced manufacturing.
Another key driver is the surge in demand for automation across industrial and service sectors. As global supply chains become more complex, there is a pressing need for robotic systems that can handle a wide variety of objects and tasks without extensive reprogramming. Self-supervised learning for robotic grasping addresses this challenge by enabling robots to continuously improve their performance through experience, even in dynamic and unstructured environments. This capability is proving invaluable in applications ranging from automated warehouses and smart factories to healthcare robotics, where robots assist in tasks such as surgical tool handling or patient care. The push for higher efficiency, safety, and operational flexibility is further accelerating market growth.
The proliferation of collaborative robots (cobots) and advancements in sensor technologies are also fueling the expansion of the Self-Supervised Learning for Robotic Grasping market. Modern cobots are designed to work alongside human operators, and self-supervised learning enhances their ability to adapt to new objects and tasks on the fly. Integration with advanced sensors, such as 3D cameras and tactile feedback devices, allows these robots to perceive their environment with unprecedented accuracy. This synergy is opening up new possibilities for deployment in sectors like retail, automotive, and consumer electronics, where customization and rapid adaptation are critical. As companies invest in next-generation automation solutions, the demand for self-supervised learning capabilities is expected to soar.
From a regional perspective, North America and Asia Pacific are leading the adoption of self-supervised robotic grasping technologies, driven by strong investments in R&D, a robust industrial base, and supportive government initiatives. North America, particularly the United States, benefits from a mature robotics ecosystem and close collaboration between academia and industry. Meanwhile, Asia Pacific, led by China, Japan, and South Korea, is experiencing rapid growth due to the presence of major manufacturing hubs and a rising focus on industrial automation. Europe follows closely, with a strong emphasis on innovation and quality standards. Other regions, such as Latin America and the Middle East & Africa, are gradually catching up as automation becomes a strategic imperative for economic development.
The Self-Su
Attribution 4.0 (CC BY 4.0) (https://creativecommons.org/licenses/by/4.0/)
License information was derived automatically
Data sharing restrictions are common in NLP datasets. For example, Twitter policies do not allow sharing of tweet text, though tweet IDs may be shared. The situation is even more common in clinical NLP, where patient health information must be protected, and annotations over health text, when released at all, often require the signing of complex data use agreements. The SemEval-2021 Task 10 framework asks participants to develop semantic annotation systems in the face of data sharing constraints. A participant's goal is to develop an accurate system for a target domain when annotations exist for a related domain but cannot be distributed. Instead of annotated training data, participants are given a model trained on the annotations. Then, given unlabeled target domain data, they are asked to make predictions.
Website: https://machine-learning-for-medical-language.github.io/source-free-domain-adaptation/
CodaLab site: https://competitions.codalab.org/competitions/26152
Github repository: https://github.com/Machine-Learning-for-Medical-Language/source-free-domain-adaptation
The COUGHVID dataset is one of the largest crowdsourced cough audio collections available for research and development in cough sound classification. This dataset is particularly valuable for respiratory disease detection, including COVID-19 screening, using Machine Learning (ML) techniques.
The third version of the COUGHVID dataset includes thousands of additional recordings obtained through October 2021. Additionally, cough recordings were re-labeled using a semi-supervised learning algorithm that combined user-provided labels with expert physician annotations, extending labels to previously unlabeled data and improving the dataset's accuracy. These newly generated labels can be found in the "status_SSL" column of the "metadata_compiled.csv" file.
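These labels can be read directly from the metadata file, for example with pandas (a minimal sketch, assuming the file has been downloaded):

import pandas as pd

# Semi-supervised labels live in the status_SSL column of the compiled metadata
meta = pd.read_csv("metadata_compiled.csv")
print(meta["status_SSL"].value_counts(dropna=False))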
If you use this dataset in your research or project, please cite:
Orlandic, L., Teijeiro, T., & Atienza, D. (2021). The COUGHVID crowdsourcing dataset: A corpus for the study of large-scale cough analysis algorithms (3.0) [Data set]. Zenodo. DOI: 10.5281/zenodo.7024894
Creative Commons Attribution 4.0 International (CC BY 4.0).
Researchers who wish to test their models on the private test dataset should contact the COUGHVID team at coughvid@epfl.ch with a brief explanation of the type of validation they intend to conduct and the results obtained through cross-validation with the public data. After reviewing the request, access to the unlabeled recordings will be provided. The predictions on these recordings should then be sent to the team for performance evaluation.
The dataset is also available at https://zenodo.org/records/7024894