License: https://brightdata.com/license
Utilize our machine learning datasets to develop and validate your models. Our datasets are designed to support a variety of machine learning applications, from image recognition to natural language processing and recommendation systems. You can access a comprehensive dataset or tailor a subset to fit your specific requirements, using data from a combination of various sources and websites, including custom ones. Popular use cases include model training and validation, where the dataset can be used to ensure robust performance across different applications. Additionally, the dataset helps in algorithm benchmarking by providing extensive data to test and compare various machine learning algorithms, identifying the most effective ones for tasks such as fraud detection, sentiment analysis, and predictive maintenance. Furthermore, it supports feature engineering by allowing you to uncover significant data attributes, enhancing the predictive accuracy of your machine learning models for applications like customer segmentation, personalized marketing, and financial forecasting.
License: ODC Public Domain Dedication and Licence (PDDL) v1.0, http://www.opendatacommons.org/licenses/pddl/1.0/
License information was derived automatically
Here is an introduction to machine learning basics for beginners. Machine learning is a subfield of artificial intelligence (AI) that focuses on enabling computers to learn and make predictions or decisions without being explicitly programmed. The following key concepts and terms will help you get started:
Supervised Learning: In supervised learning, the machine learning algorithm learns from labeled training data. The training data consists of input examples and their corresponding correct output or target values. The algorithm learns to generalize from this data and make predictions or classify new, unseen examples.
Unsupervised Learning: Unsupervised learning involves learning patterns and relationships from unlabeled data. Unlike supervised learning, there are no target values provided. Instead, the algorithm aims to discover inherent structures or clusters in the data.
Training Data and Test Data: Machine learning models require a dataset to learn from. The dataset is typically split into two parts: the training data and the test data. The model learns from the training data, and the test data is used to evaluate its performance and generalization ability.
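A minimal scikit-learn sketch of this split-train-evaluate workflow, using the bundled iris data as a stand-in dataset:

```python
# Split a dataset into training and test data, learn from the former,
# and evaluate generalization on the latter.
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression

X, y = load_iris(return_X_y=True)

# Hold out 25% of the examples as test data; the model never sees them during training.
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=42)

model = LogisticRegression(max_iter=1000)
model.fit(X_train, y_train)                           # learn from the training data
print("test accuracy:", model.score(X_test, y_test))  # evaluate generalization
```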
Features and Labels: In supervised learning, the input examples are often represented by features or attributes. For example, in a spam email classification task, features might include the presence of certain keywords or the length of the email. The corresponding output or target values are called labels, indicating the class or category to which the example belongs (e.g., spam or not spam).
Model Evaluation Metrics: To assess the performance of a machine learning model, various evaluation metrics are used. Common metrics include accuracy (the proportion of correctly predicted examples), precision (the proportion of true positives among all positive predictions), recall (the proportion of actual positives that are correctly identified), and F1 score (the harmonic mean of precision and recall).
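These metrics are straightforward to compute with scikit-learn; a minimal sketch on hypothetical binary labels (1 = spam):

```python
# Compute the four evaluation metrics defined above on made-up
# true labels and predictions for a binary classification task.
from sklearn.metrics import accuracy_score, precision_score, recall_score, f1_score

y_true = [1, 0, 1, 1, 0, 1, 0, 0]
y_pred = [1, 0, 1, 0, 0, 1, 1, 0]

print("accuracy: ", accuracy_score(y_true, y_pred))   # correct predictions / all predictions
print("precision:", precision_score(y_true, y_pred))  # true positives / predicted positives
print("recall:   ", recall_score(y_true, y_pred))     # true positives / actual positives
print("F1:       ", f1_score(y_true, y_pred))         # harmonic mean of precision and recall
```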
Overfitting and Underfitting: Overfitting occurs when a model becomes too complex and learns to memorize the training data instead of generalizing well to unseen examples. On the other hand, underfitting happens when a model is too simple and fails to capture the underlying patterns in the data. Balancing the complexity of the model is crucial to achieve good generalization.
Feature Engineering: Feature engineering involves selecting or creating relevant features that can help improve the performance of a machine learning model. It often requires domain knowledge and creativity to transform raw data into a suitable representation that captures the important information.
Bias and Variance Trade-off: The bias-variance trade-off is a fundamental concept in machine learning. Bias refers to the errors introduced by the model's assumptions and simplifications, while variance refers to the model's sensitivity to small fluctuations in the training data. Reducing bias may increase variance and vice versa. Finding the right balance is important for building a well-performing model.
Supervised Learning Algorithms: There are various supervised learning algorithms, including linear regression, logistic regression, decision trees, random forests, support vector machines (SVM), and neural networks. Each algorithm has its own strengths, weaknesses, and specific use cases.
Unsupervised Learning Algorithms: Unsupervised learning algorithms include clustering algorithms like k-means clustering and hierarchical clustering, dimensionality reduction techniques like principal component analysis (PCA) and t-SNE, and anomaly detection algorithms, among others.
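A minimal scikit-learn sketch of two of these techniques, k-means clustering and PCA, applied to synthetic unlabeled data:

```python
# Discover groups with k-means and reduce dimensionality with PCA;
# note that no labels are used anywhere.
from sklearn.datasets import make_blobs
from sklearn.cluster import KMeans
from sklearn.decomposition import PCA

X, _ = make_blobs(n_samples=300, centers=3, n_features=5, random_state=0)

clusters = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)  # cluster assignments
X_2d = PCA(n_components=2).fit_transform(X)  # project 5 features down to 2 dimensions

print(clusters[:10])
print(X_2d[:3])
```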
These concepts provide a starting point for understanding the basics of machine learning. As you delve deeper, you can explore more advanced topics such as deep learning, reinforcement learning, and natural language processing. Remember to practice hands-on with real-world datasets to gain practical experience and further refine your skills.
License: Attribution 4.0 (CC BY 4.0), https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
The MLCommons Dollar Street Dataset is a collection of images of everyday household items from homes around the world that visually captures socioeconomic diversity of traditionally underrepresented populations. It consists of public domain data, licensed for academic, commercial and non-commercial usage, under CC-BY and CC-BY-SA 4.0. The dataset was developed because similar datasets lack socioeconomic metadata and are not representative of global diversity.
This is a subset of the original dataset that can be used for multiclass classification with 10 categories. It is designed to be used in teaching, similar to the widely used, but unlicensed CIFAR-10 dataset.
These are the preprocessing steps that were performed:
This is the label mapping:
| Category | Label |
| --- | --- |
| day bed | 0 |
| dishrag | 1 |
| plate | 2 |
| running shoe | 3 |
| soap dispenser | 4 |
| street sign | 5 |
| table lamp | 6 |
| tile roof | 7 |
| toilet seat | 8 |
| washing machine | 9 |
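For convenience, the mapping can be expressed as a Python dict to decode integer predictions back into category names (a minimal sketch; the predictions shown are hypothetical):

```python
# The 10-class label mapping from the table above.
label_names = {
    0: "day bed", 1: "dishrag", 2: "plate", 3: "running shoe",
    4: "soap dispenser", 5: "street sign", 6: "table lamp",
    7: "tile roof", 8: "toilet seat", 9: "washing machine",
}

predictions = [3, 7, 0]  # hypothetical integer model outputs
print([label_names[p] for p in predictions])  # ['running shoe', 'tile roof', 'day bed']
```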
Check out this notebook to see how the subset was created: https://github.com/carpentries-lab/deep-learning-intro/blob/main/instructors/prepare-dollar-street-data.ipynb
The original dataset was downloaded from https://www.kaggle.com/datasets/mlcommons/the-dollar-street-dataset. See https://mlcommons.org/datasets/dollar-street/ for more information.
License: Apache License, v2.0, https://www.apache.org/licenses/LICENSE-2.0
License information was derived automatically
The Learning Path Index Dataset is a comprehensive collection of byte-sized courses and learning materials tailored for individuals eager to delve into Data Science, Machine Learning, and Artificial Intelligence (AI). It is an indispensable reference for students, professionals, and educators in the Data Science and AI communities.
This Kaggle Dataset, along with the KaggleX Learning Path Index GitHub Repo, was created by the mentors and mentees of Cohort 3 of the KaggleX BIPOC Mentorship Program (August 2023 to November 2023). See the Credits section at the bottom of the long description.
This dataset was created out of a commitment to facilitate learning and growth within the Data Science, Machine Learning, and AI communities. It started as an idea during the brainstorming and feedback session at the end of Cohort 2 of the KaggleX BIPOC Mentorship Program: create byte-sized learning material to help KaggleX mentees learn faster. It aspires to simplify the process of finding, evaluating, and selecting the most fitting educational resources.
This dataset was meticulously curated to assist learners in navigating the vast landscape of Data Science, Machine Learning, and AI education. It serves as a compass for those aiming to develop their skills and expertise in these rapidly evolving fields.
The mentors and mentees communicated via Discord, Trello, Google Hangouts, and other channels to put together these artifacts, and made them public for everyone to use and contribute back to.
The dataset compiles data from a curated selection of reputable sources, including leading educational platforms such as Google Developer, Google Cloud Skill Boost, IBM, and Fast AI. By drawing from these trusted sources, we ensure that the data is both accurate and pertinent. The raw data and other artifacts resulting from this exercise can be found in the KaggleX Learning Path Index GitHub Repo.
The dataset encompasses the following attributes:
The Learning Path Index Dataset is openly shared under a permissive license, allowing users to utilize the data for educational, analytical, and research purposes within the Data Science, Machine Learning, and AI domains. Feel free to fork the dataset and make it your own; we would be delighted if you contributed back to the dataset and/or our KaggleX Learning Path Index GitHub Repo as well.
Credits for all the work done to create this Kaggle Dataset and the KaggleX [Learnin...
The global machine learning market is projected to witness a remarkable growth trajectory, with the market size estimated to reach USD 21.17 billion in 2023 and anticipated to expand to USD 209.91 billion by 2032, growing at a compound annual growth rate (CAGR) of 29.2% over the forecast period. This extraordinary growth is primarily propelled by the escalating demand for artificial intelligence-driven solutions across various industries. As businesses seek to leverage machine learning for improving operational efficiency, enhancing customer experience, and driving innovation, the market is poised to expand rapidly. Key factors contributing to this growth include advancements in data generation, increasing computational power, and the proliferation of big data analytics.
A pivotal growth factor for the machine learning market is the ongoing digital transformation across industries. Enterprises globally are increasingly adopting machine learning technologies to optimize their operations, streamline processes, and make data-driven decisions. The healthcare sector, for example, leverages machine learning for predictive analytics to improve patient outcomes, while the finance sector uses machine learning algorithms for fraud detection and risk assessment. The retail industry is also utilizing machine learning for personalized customer experiences and inventory management. The ability of machine learning to analyze vast amounts of data in real-time and provide actionable insights is fueling its adoption across various applications, thereby driving market growth.
Another significant growth driver is the increasing integration of machine learning with the Internet of Things (IoT). The convergence of these technologies enables the creation of smarter, more efficient systems that enhance operational performance and productivity. In manufacturing, for instance, IoT devices equipped with machine learning capabilities can predict equipment failures and optimize maintenance schedules, leading to reduced downtime and costs. Similarly, in the automotive industry, machine learning algorithms are employed in autonomous vehicles to process and analyze sensor data, improving navigation and safety. The synergistic relationship between machine learning and IoT is expected to further propel market expansion during the forecast period.
Moreover, the rising investments in AI research and development by both public and private sectors are accelerating the advancement and adoption of machine learning technologies. Governments worldwide are recognizing the potential of AI and machine learning to transform industries, leading to increased funding for research initiatives and innovation centers. Companies are also investing heavily in developing cutting-edge machine learning solutions to maintain a competitive edge. This robust investment landscape is fostering an environment conducive to technological breakthroughs, thereby contributing to the growth of the machine learning market.
Supervised Learning, a subset of machine learning, plays a crucial role in the advancement of AI-driven solutions. It involves training algorithms on a labeled dataset, allowing the model to learn and make predictions or decisions based on new, unseen data. This approach is particularly beneficial in applications where the desired output is known, such as in classification or regression tasks. For instance, in the healthcare sector, supervised learning algorithms are employed to analyze patient data and predict health outcomes, thereby enhancing diagnostic accuracy and treatment efficacy. Similarly, in finance, these algorithms are used for credit scoring and fraud detection, providing financial institutions with reliable tools for risk assessment. As the demand for precise and efficient AI applications grows, the significance of supervised learning in driving innovation and operational excellence across industries becomes increasingly evident.
From a regional perspective, North America holds a dominant position in the machine learning market due to the early adoption of advanced technologies and the presence of major technology companies. The region's strong focus on R&D and innovation, coupled with a well-established IT infrastructure, further supports market growth. In addition, Asia Pacific is emerging as a lucrative market for machine learning, driven by rapid industrialization, increasing digitalization, and government initiatives promoting AI adoption. The region is witnessing significant investments in AI technologies, particu
License: Attribution 4.0 (CC BY 4.0), https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Improving the accuracy of predictions of future values based on past and current observations has been pursued by enhancing prediction methods, combining those methods, or performing data pre-processing. In this paper, another approach is taken, namely increasing the number of inputs in the dataset. This approach is useful especially for shorter time series data. By filling in the in-between values in the time series, the size of the training set can be increased, thus increasing the generalization capability of the predictor. The algorithm used to make predictions is a Neural Network, as it is widely used in the literature for time series tasks. For comparison, Support Vector Regression is also employed. The dataset used in the experiment is the frequency of USPTO patents and PubMed scientific publications in the field of health, namely on Apnea, Arrhythmia, and Sleep Stages. Another time series dataset, designated for the NN3 Competition in the field of transportation, is also used for benchmarking. The experimental results show that prediction performance can be significantly increased by filling in in-between data in the time series. Furthermore, the use of detrending and deseasonalization, which separates the data into trend, seasonal, and stationary components, also improves prediction performance on both the original and the filled dataset. The optimal increase in dataset size in this experiment is about five times the length of the original dataset.
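The abstract does not detail the filling method; one simple way to fill in-between values is linear interpolation after upsampling, sketched here with pandas (illustrative frequencies and values, not the paper's data):

```python
# Upsample a short yearly series to quarterly resolution and fill the
# in-between points by linear interpolation, enlarging the training set.
import pandas as pd

yearly = pd.Series(
    [10, 14, 13, 19, 25],
    index=pd.date_range("2015", periods=5, freq="YS"),
)

filled = yearly.resample("QS").interpolate(method="linear")
print(len(yearly), "->", len(filled))  # five observations become seventeen
```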
The data preparation tools market is experiencing robust growth, driven by the exponential increase in data volume and velocity across various industries. The rising need for data quality and consistency, coupled with the increasing adoption of advanced analytics and business intelligence solutions, fuels this expansion. An assumed CAGR of 15% between 2019 and 2024 (a reasonable estimate given the rapid technological advancements in this space) suggests significant market expansion. This growth is further amplified by the increasing demand for self-service data preparation tools that empower business users to access and prepare data without needing extensive technical expertise.

Major players like Microsoft, Tableau, and Alteryx are leading the charge, continuously innovating and expanding their offerings to cater to diverse industry needs. The market is segmented based on deployment type (cloud, on-premise), organization size (small, medium, large enterprises), and industry vertical (BFSI, healthcare, retail, etc.), creating lucrative opportunities across various segments.

However, challenges remain. The complexity of integrating data preparation tools with existing data infrastructures can pose implementation hurdles for certain organizations. Furthermore, the need for skilled professionals to manage and utilize these tools effectively presents a potential restraint to wider adoption. Despite these obstacles, the long-term outlook for the data preparation tools market remains highly positive, with continuous innovation in areas like automated data preparation, machine learning-powered data cleansing, and enhanced collaboration features driving further growth throughout the forecast period (2025-2033). We project a market size of approximately $15 billion in 2025, considering a realistic growth trajectory and the significant investment made by both established players and emerging startups.
License: MIT License, https://opensource.org/licenses/MIT
License information was derived automatically
Daily Machine Learning Practice – 1 Commit per Day
Author: Astrid Villalobos
Location: Montréal, QC
LinkedIn: https://www.linkedin.com/in/astridcvr/
Objective
The goal of this project is to strengthen Machine Learning and data analysis skills through small, consistent daily contributions. Each commit focuses on a specific aspect of data processing, feature engineering, or modeling using Python, Pandas, and Scikit-learn.
Dataset
Source: Kaggle – Sample Sales Data
File: data/sales_data_sample.csv
Variables: ORDERNUMBER, QUANTITYORDERED, PRICEEACH, SALES, COUNTRY, etc.
Goal: Analyze e-commerce performance, predict sales trends, segment customers, and forecast demand.
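A minimal sketch of what a first daily commit against this dataset might look like, assuming the file path and column names listed above (the encoding is a common requirement for this Kaggle file, not confirmed here):

```python
# Charger et résumer les ventes / load and summarize the sales data
# (bilingual comments, per the project rules).
import pandas as pd

df = pd.read_csv("data/sales_data_sample.csv", encoding="latin-1")

# Ventes totales par pays / total sales by country
sales_by_country = df.groupby("COUNTRY")["SALES"].sum().sort_values(ascending=False)
print(sales_by_country.head())
```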
Project Rules

| Rule | Description |
| --- | --- |
| 🟩 1 Commit per Day | Minimum one line of code daily to ensure consistency and discipline |
| 🌍 Bilingual Comments | Code and documentation in English and French |
| 📈 Visible Progress | Daily green squares = daily learning |

🧰 Tech Stack
Languages: Python
Libraries: Pandas, NumPy, Scikit-learn, Matplotlib, Seaborn
Tools: Jupyter Notebook, GitHub, Kaggle
Learning Outcomes
By the end of this challenge:
- Develop a stronger understanding of data preprocessing, modeling, and evaluation.
- Build consistent coding habits through daily practice.
- Apply ML techniques to real-world sales data scenarios.
This dataset contains 55,000 entries of synthetic customer transactions, generated using Python's Faker library. The goal behind creating this dataset was to provide a resource for learners like myself to explore, analyze, and apply various data analysis techniques in a context that closely mimics real-world data.
About the Dataset:
- CID (Customer ID): A unique identifier for each customer.
- TID (Transaction ID): A unique identifier for each transaction.
- Gender: The gender of the customer, categorized as Male or Female.
- Age Group: Age group of the customer, divided into several ranges.
- Purchase Date: The timestamp of when the transaction took place.
- Product Category: The category of the product purchased, such as Electronics, Apparel, etc.
- Discount Availed: Indicates whether the customer availed any discount (Yes/No).
- Discount Name: Name of the discount applied (e.g., FESTIVE50).
- Discount Amount (INR): The amount of discount availed by the customer.
- Gross Amount: The total amount before applying any discount.
- Net Amount: The final amount after applying the discount.
- Purchase Method: The payment method used (e.g., Credit Card, Debit Card, etc.).
- Location: The city where the purchase took place.
Use Cases:
1. Exploratory Data Analysis (EDA): This dataset is ideal for conducting EDA, allowing users to practice techniques such as summary statistics, visualizations, and identifying patterns within the data.
2. Data Preprocessing and Cleaning: Learners can work on handling missing data, encoding categorical variables, and normalizing numerical values to prepare the dataset for analysis.
3. Data Visualization: Use tools like Python’s Matplotlib, Seaborn, or Power BI to visualize purchasing trends, customer demographics, or the impact of discounts on purchase amounts.
4. Machine Learning Applications: After applying feature engineering, this dataset is suitable for supervised learning models, such as predicting whether a customer will avail a discount or forecasting purchase amounts based on the input features.
This dataset provides an excellent sandbox for honing skills in data analysis, machine learning, and visualization in a structured but flexible manner.
This is not a real dataset. This dataset was generated using Python's Faker library for the sole purpose of learning.
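As an illustration of the generation approach (not the dataset's actual code), a minimal Faker sketch producing similar rows might look like this; the locale, field values, and discount logic are all hypothetical:

```python
# Generate a few synthetic transaction rows in the spirit of the dataset.
import random
import pandas as pd
from faker import Faker

fake = Faker("en_IN")  # Indian locale, assumed to match the INR amounts
random.seed(0)

rows = []
for _ in range(5):
    gross = round(random.uniform(100.0, 5000.0), 2)
    discount = round(random.choice([0.0, 0.1, 0.5]) * gross, 2)
    rows.append({
        "CID": fake.uuid4(),
        "TID": fake.uuid4(),
        "Gender": random.choice(["Male", "Female"]),
        "Purchase Date": fake.date_time_between(start_date="-1y"),
        "Product Category": random.choice(["Electronics", "Apparel", "Groceries"]),
        "Discount Availed": "Yes" if discount else "No",
        "Discount Amount (INR)": discount,
        "Gross Amount": gross,
        "Net Amount": round(gross - discount, 2),
        "Location": fake.city(),
    })

print(pd.DataFrame(rows).head())
```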
Bats play crucial ecological roles and provide valuable ecosystem services, yet many populations face serious threats from various ecological disturbances. The North American Bat Monitoring Program (NABat) aims to assess status and trends of bat populations while developing innovative and community-driven conservation solutions using its unique data and technology infrastructure. To support scalability and transparency in the NABat acoustic data pipeline, we developed a fully-automated machine-learning algorithm. This dataset includes audio files of bat echolocation calls that were considered to develop V1.0 of the NABat machine-learning algorithm; however, the test set (i.e., holdout dataset) has been excluded from this release.

These recordings were collected by various bat monitoring partners across North America using ultrasonic acoustic recorders for stationary and mobile acoustic surveys. For more information on how these surveys may be conducted, see Chapters 4 and 5 of “A Plan for the North American Bat Monitoring Program” (https://doi.org/10.2737/SRS-GTR-208). These data were then post-processed by bat monitoring partners to remove noise files (those that do not contain recognizable bat calls) and apply a species label to each file. There is undoubtedly variation in the steps that monitoring partners take to apply a species label, but the steps documented in “A Guide to Processing Bat Acoustic Data for the North American Bat Monitoring Program” (https://doi.org/10.3133/ofr20181068) include first processing with an automated classifier and then manually reviewing to confirm or downgrade the suggested species label. Once a manual ID label was applied, audio files of bat acoustic recordings were submitted to the NABat database in Waveform Audio File format.

From these available files in the NABat database, we considered files from 35 classes (34 species and a noise class). Files for 4 species were excluded due to low sample size (Corynorhinus rafinesquii, N=3; Eumops floridanus, N=3; Lasiurus xanthinus, N=4; Nyctinomops femorosaccus, N=11). From this pool, files were randomly selected until files for each species/grid cell combination were exhausted or the number of recordings reached 1250. The dataset was then randomly split into training, validation, and test sets (i.e., holdout dataset). This data release includes all files considered for training and validation, including files that had been excluded from model development and testing due to low sample size for a given species or because the threshold for species/grid cell combinations had been met. The test set (i.e., holdout dataset) is not included. Audio files are grouped by species, as indicated by the four-letter species code in the name of each folder. Definitions for each four-letter code, including Family, Genus, Species, and Common name, are also included as a dataset in this release.
License: Attribution 4.0 (CC BY 4.0), https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
The SalmonScan dataset is a collection of images of salmon fish, including healthy fish and infected fish. The dataset consists of two classes of images:
- Fresh salmon 🐟
- Infected salmon 🐠
This dataset is ideal for various computer vision tasks in machine learning and deep learning applications. Whether you are a researcher, developer, or student, the SalmonScan dataset offers a rich and diverse data source to support your projects and experiments.
So, dive in and explore the fascinating world of salmon health and disease!
The SalmonScan dataset (raw) consists of 24 fresh fish and 91 infected fish. [Due to server cleaning in the past, some raw datasets have been deleted]
The SalmonScan dataset (augmented) consists of approximately 1,208 images of salmon fish, classified into two classes:
Each class contains a representative and diverse collection of images, capturing a range of different perspectives, scales, and lighting conditions. The images have been carefully curated to ensure that they are of high quality and suitable for use in a variety of computer vision tasks.
Data Preprocessing
The input images were preprocessed to enhance their quality and suitability for further analysis. The following steps were taken:
- Resizing 📏: All images were resized to a uniform size of 600 pixels in width and 250 pixels in height to ensure compatibility with the learning algorithm.
- Image Augmentation 📸: To overcome the small number of images, various image augmentation techniques were applied to the input images:
  - Horizontal Flip ↩️: The images were horizontally flipped to create additional samples.
  - Vertical Flip ⬆️: The images were vertically flipped to create additional samples.
  - Rotation 🔄: The images were rotated to create additional samples.
  - Cropping 🪓: A portion of each image was randomly cropped to create additional samples.
  - Gaussian Noise 🌌: Gaussian noise was added to the images to create additional samples.
  - Shearing 🌆: The images were sheared to create additional samples.
  - Contrast Adjustment (Gamma) ⚖️: Gamma correction was applied to the images to adjust their contrast.
  - Contrast Adjustment (Sigmoid) ⚖️: A sigmoid function was applied to the images to adjust their contrast.
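As a rough illustration, several of these augmentations can be expressed as a torchvision pipeline; the parameter values below are hypothetical (the dataset authors' settings are not given), and the cropping and contrast adjustments are omitted for brevity:

```python
# A sketch of the resize + flip/rotate/shear/noise augmentations above.
import torch
from torchvision import transforms

augment = transforms.Compose([
    transforms.Resize((250, 600)),                 # (height, width): 600 px wide, 250 px high
    transforms.RandomHorizontalFlip(p=0.5),        # horizontal flip
    transforms.RandomVerticalFlip(p=0.5),          # vertical flip
    transforms.RandomRotation(degrees=15),         # rotation
    transforms.RandomAffine(degrees=0, shear=10),  # shearing
    transforms.ToTensor(),
    # Gaussian noise has no built-in torchvision transform; add it with a Lambda.
    transforms.Lambda(lambda x: torch.clamp(x + 0.05 * torch.randn_like(x), 0.0, 1.0)),
])
# Apply to a PIL image: augmented = augment(image)
```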
Usage
To use the salmon scan dataset in your ML and DL projects, follow these steps:
The North America Machine Learning (ML) Market size was valued at USD 19.20 billion in 2023 and is projected to reach USD 172.15 billion by 2032, exhibiting a CAGR of 36.8% during the forecast period. The increase in demand for efficient data analytics solutions, the growth of cloud computing, and the proliferation of IoT devices are driving the market's growth.

Machine learning (ML) is a discipline of artificial intelligence that provides machines with the ability to automatically learn from data and past experiences while identifying patterns to make predictions with minimal human intervention. Machine learning methods enable computers to operate autonomously without explicit programming. ML applications are fed with new data, and they can independently learn, grow, develop, and adapt. Machine learning derives insightful information from large volumes of data by leveraging algorithms to identify patterns and learn in an iterative process. ML algorithms use computational methods to learn directly from data instead of relying on any predetermined equation that may serve as a model. Machine learning is used today for a wide range of commercial purposes, including suggesting products to consumers based on their past purchases, predicting stock market fluctuations, and translating text from one language to another.

The North America Machine Learning (ML) Market is primarily driven by the increasing adoption of essential services like security information and cloud applications. Key drivers for this market are: Growing Adoption of Mobile Commerce to Augment the Demand for Virtual Fitting Room Tools. Potential restraints include: Lack of Privacy and Privacy Violations in AI and ML Applications to Restrain Market Growth. Notable trends are: Growing Implementation of Touch-based and Voice-based Infotainment Systems to Increase Adoption of Intelligent Cars.
According to our latest research, the global Data Preparation Platform market size reached USD 4.6 billion in 2024, reflecting robust adoption across diverse industries. The market is expected to expand at a CAGR of 19.8% during the forecast period, with revenue projected to reach USD 17.1 billion by 2033. This accelerated growth is primarily driven by the rising demand for advanced analytics, artificial intelligence, and machine learning applications, which require clean, integrated, and high-quality data as a foundation for actionable insights.
The primary growth factor propelling the data preparation platform market is the increasing volume and complexity of data generated by organizations worldwide. With the proliferation of digital transformation initiatives, businesses are collecting vast amounts of structured and unstructured data from sources such as IoT devices, social media, enterprise applications, and customer interactions. This data deluge presents significant challenges in terms of integration, cleansing, and transformation, necessitating advanced data preparation solutions. As organizations strive to leverage big data analytics for strategic decision-making, the need for automated, scalable, and user-friendly data preparation tools has become paramount. These platforms enable data scientists, analysts, and business users to efficiently prepare and manage data, reducing the time-to-insight and enhancing overall productivity.
Another critical driver for the data preparation platform market is the growing emphasis on data quality and governance. In regulated industries such as BFSI, healthcare, and government, compliance with data privacy laws and industry standards is non-negotiable. Poor data quality can lead to erroneous analytics, flawed business strategies, and substantial financial penalties. Data preparation platforms address these challenges by providing robust features for data profiling, cleansing, enrichment, and validation, ensuring that only accurate and reliable data is used for analysis. Additionally, the integration of AI and machine learning capabilities within these platforms further automates the identification and correction of anomalies, outliers, and inconsistencies, supporting organizations in maintaining high standards of data integrity and compliance.
The rapid shift towards cloud-based solutions is also fueling the expansion of the data preparation platform market. Cloud deployment offers unparalleled scalability, flexibility, and cost-efficiency, making it an attractive choice for enterprises of all sizes. Cloud-native data preparation platforms facilitate seamless collaboration among geographically dispersed teams, enable real-time data processing, and support integration with modern data warehouses and analytics tools. As remote and hybrid work models become the norm and organizations pursue digital agility, the adoption of cloud-based data preparation solutions is expected to surge. This trend is particularly pronounced among small and medium enterprises (SMEs), which benefit from the reduced infrastructure costs and simplified deployment offered by cloud platforms.
From a regional perspective, North America continues to dominate the data preparation platform market, driven by the presence of leading technology vendors, early adoption of advanced analytics, and a strong focus on data-driven business strategies. However, the Asia Pacific region is emerging as the fastest-growing market, fueled by rapid digitalization, increasing investments in AI and big data, and the expansion of cloud infrastructure. Europe also holds a significant share, supported by stringent data protection regulations and a mature enterprise landscape. Latin America and the Middle East & Africa are witnessing steady growth, as organizations in these regions recognize the value of data-driven insights for operational efficiency and competitive advantage.
Data Wrangling, a crucial aspect of data preparation, involves the process of cleaning and unifying complex data sets for easy access and analysis. In the context of data preparation platforms, data wrangling is essential for transforming raw data into a structured format that can be readily used for analytics. This process includes tasks such as filtering, sorting, aggregating, and enriching data, which are ne
License: Attribution 4.0 (CC BY 4.0), https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Acute poisoning is a significant global health burden, and the causative agent is often unclear. The primary aim of this pilot study was to develop a deep learning algorithm that predicts the most probable agent a poisoned patient was exposed to from a pre-specified list of drugs. Data were queried from the National Poison Data System (NPDS) from 2014 through 2018 for eight single-agent poisonings (acetaminophen, diphenhydramine, aspirin, calcium channel blockers, sulfonylureas, benzodiazepines, bupropion, and lithium). Two deep neural networks (PyTorch and Keras) designed for multi-class classification tasks were applied. There were 201,031 single-agent poisonings included in the analysis. For distinguishing among the selected poisonings, the PyTorch model had a specificity of 97%, accuracy of 83%, precision of 83%, recall of 83%, and an F1-score of 82%. The Keras model had a specificity of 98%, accuracy of 83%, precision of 84%, recall of 83%, and an F1-score of 83%. The best performance was achieved in diagnosing poisoning by lithium, sulfonylureas, diphenhydramine, calcium channel blockers, and acetaminophen, in PyTorch (F1-score = 99%, 94%, 85%, 83%, and 82%, respectively) and Keras (F1-score = 99%, 94%, 86%, 82%, and 82%, respectively). Deep neural networks can potentially help in distinguishing the causative agent of acute poisoning. This study used a small list of drugs, with polysubstance ingestions excluded. Reproducible source code and results can be obtained at https://github.com/ashiskb/npds-workspace.git.
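The paper's exact network architectures are not described in this summary; as a generic illustration, a minimal PyTorch multi-class classifier over the eight agent classes might look like the following sketch (input size and layer widths are hypothetical):

```python
# A minimal multi-class classifier: one hidden layer, cross-entropy loss.
import torch
from torch import nn

n_features, n_classes = 30, 8  # hypothetical feature count; 8 poisoning agents
model = nn.Sequential(
    nn.Linear(n_features, 64), nn.ReLU(),
    nn.Linear(64, n_classes),          # raw logits, one per agent
)
loss_fn = nn.CrossEntropyLoss()        # standard loss for multi-class tasks

X = torch.randn(16, n_features)        # stand-in batch of encoded case features
y = torch.randint(0, n_classes, (16,)) # stand-in agent labels
loss = loss_fn(model(X), y)
loss.backward()                        # gradients ready for an optimizer step
print(float(loss))
```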
License: Attribution 4.0 (CC BY 4.0), https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
The final dataset utilised for the publication "Investigating Reinforcement Learning Approaches In Stock Market Trading" was processed by downloading and combining data from multiple reputable sources to suit the specific needs of this project. Raw data were retrieved using a Python finance API; Python and NumPy were then used to combine and normalise the data to create the final dataset. The raw data was sourced as follows:
- Stock prices of NVIDIA & AMD, financial indexes, and commodity prices: retrieved from Yahoo Finance.
- Economic indicators: collected from the US Federal Reserve.
The dataset was normalised to minute intervals, and the stock prices were adjusted to account for stock splits. This dataset was used for exploring the application of reinforcement learning in stock market trading. After creating the dataset, it was used in a reinforcement learning environment to train several reinforcement learning algorithms, including deep Q-learning, policy networks, policy networks with baselines, actor-critic methods, and time series incorporation. The performance of these algorithms was then compared based on profit made and other financial evaluation metrics, to investigate the application of reinforcement learning algorithms in stock market trading. The attached 'README.txt' contains methodological information and a glossary of all the variables in the .csv file.
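The publication does not name the specific finance API; yfinance is one commonly used Python wrapper for Yahoo Finance, so a hypothetical retrieval-and-alignment step might look like this sketch (auto_adjust handles split adjustment; minute bars are only available for recent history):

```python
# Fetch minute-interval, split-adjusted prices and align the two tickers.
import yfinance as yf

nvda = yf.Ticker("NVDA").history(interval="1m", period="5d", auto_adjust=True)
amd = yf.Ticker("AMD").history(interval="1m", period="5d", auto_adjust=True)

# Keep only timestamps present in both series, as a combination step might.
combined = nvda[["Close"]].join(amd[["Close"]], lsuffix="_NVDA", rsuffix="_AMD", how="inner")
print(combined.head())
```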
License: Attribution 4.0 (CC BY 4.0), https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
These four labeled data sets are targeted at ordinal quantification. The goal of quantification is not to predict the label of each individual instance, but the distribution of labels in unlabeled sets of data.
With the scripts provided, you can extract CSV files from the UCI machine learning repository and from OpenML. The ordinal class labels stem from a binning of a continuous regression label.
We complement this data set with the indices of data items that appear in each sample of our evaluation. Hence, you can precisely replicate our samples by drawing the specified data items. The indices stem from two evaluation protocols that are well suited for ordinal quantification. To this end, each row in the files app_val_indices.csv, app_tst_indices.csv, app-oq_val_indices.csv, and app-oq_tst_indices.csv represents one sample.
Our first protocol is the artificial prevalence protocol (APP), where all possible distributions of labels are drawn with an equal probability. The second protocol, APP-OQ, is a variant thereof, where only the smoothest 20% of all APP samples are considered. This variant is targeted at ordinal quantification tasks, where classes are ordered and a similarity of neighboring classes can be assumed.
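For illustration, drawing label distributions with equal probability can be implemented by sampling from a flat Dirichlet (uniform over the probability simplex); this numpy sketch is not the authors' implementation, which lives in the linked Julia scripts:

```python
# Draw one APP sample: a uniform random label distribution, then the
# per-class item counts for a sample of fixed size.
import numpy as np

rng = np.random.default_rng(0)
n_classes, sample_size = 5, 100

prevalences = rng.dirichlet(np.ones(n_classes))     # uniform over the simplex
counts = rng.multinomial(sample_size, prevalences)  # items to draw per class
print(prevalences.round(3), counts)
```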
Usage
You can extract four CSV files through the provided script extract-oq.jl, which is conveniently wrapped in a Makefile. The Project.toml and Manifest.toml specify the Julia package dependencies, similar to a requirements file in Python.
Preliminaries: You have to have a working Julia installation. We have used Julia v1.6.5 in our experiments.
Data Extraction: In your terminal, you can call either
make
(recommended), or
julia --project="." --eval "using Pkg; Pkg.instantiate()"
julia --project="." extract-oq.jl
Outcome: The first row in each CSV file is the header. The first column, named "class_label", is the ordinal class.
Further Reading
Implementation of our experiments: https://github.com/mirkobunse/regularized-oq
License: Attribution 4.0 (CC BY 4.0), https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Machine learning has become a powerful tool for systems biologists, from diagnosing cancer to optimizing kinetic models and predicting the state, growth dynamics, or type of a cell. Potential predictions from complex biological data sets obtained by “omics” experiments seem endless, but are often not the main objective of biological research. Often we want to understand the molecular mechanisms of a disease to develop new therapies, or we need to justify a crucial decision that is derived from a prediction. In order to gain such knowledge from data, machine learning models need to be extended. A recent trend to achieve this is to design “interpretable” models. However, the notions around interpretability are sometimes ambiguous, and a universal recipe for building well-interpretable models is missing. With this work, we want to familiarize systems biologists with the concept of model interpretability in machine learning. We consider data sets, data preparation, machine learning methods, and software tools relevant to omics research in systems biology. Finally, we try to answer the question: “What is interpretability?” We introduce views from the interpretable machine learning community and propose a scheme for categorizing studies on omics data. We then apply these tools to review and categorize recent studies where predictive machine learning models have been constructed from non-sequential omics data.
License: Attribution 4.0 (CC BY 4.0), https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Description:
The dataset ‘DMSP Particle Precipitation AI-ready Data’ accompanies the manuscript “Next generation particle precipitation: Mesoscale prediction through machine learning (a case study and framework for progress)” submitted to AGU Space Weather Journal and used to produce new machine learning models of particle precipitation from the magnetosphere to the ionosphere. Note that we have attempted to make these data ready to be used in artificial intelligence/machine learning explorations following a community definition of ‘AI-ready’ provided at https://github.com/rmcgranaghan/data_science_tools_and_resources/wiki/Curated-Reference%7CChallenge-Data-Sets
The purpose of publishing these data is two-fold:
To allow reuse of the data that led to the manuscript and extension, rather than reinvention, of the research produced there; and
To be an ‘AI-ready’ challenge data set to which the artificial intelligence/machine learning community can apply novel methods.
These data were compiled, curated, and explored by: Ryan McGranaghan, Enrico Camporeale, Kristina Lynch, Jack Ziegler, Téo Bloch, Mathew Owens, Jesper Gjerloev, Spencer Hatch, Binzheng Zhang, and Susan Skone
Pipeline for creation:
The steps to create the data were (Note that we do not provide intermediate datasets):
Access NASA-provided DMSP data at https://cdaweb.gsfc.nasa.gov/pub/data/dmsp/
Read CDF files for given satellite (e.g., F-16)
Collect the following variables at one-second cadence: SC_AACGM_LAT, SC_AACGM_LTIME, ELE_TOTAL_ENERGY_FLUX, ELE_TOTAL_ENERGY_FLUX_STD, ELE_AVG_ENERGY, ELE_AVG_ENERGY_STD, ID_SC
Sub-sample the variables to one-minute cadence and eliminate any rows for which ELE_TOTAL_ENERGY_FLUX is NaN
Combine all individual satellites into single yearly files
For each yearly file, use nasaomnireader to obtain solar wind and geomagnetic index data programmatically and timehist2 to calculate the time histories of each parameter. Collate with the DMSP observations and remove rows for which any solar wind or geomagnetic index data are missing.
For each row, calculate cyclical time variables (e.g., local time -> sin(LT) and cos(LT)); see the sketch after this list
Merge all years
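A minimal sketch of the cyclical encoding in step 7, assuming local time is measured in hours on a 0-24 scale: mapping onto a circle keeps 23:59 and 00:01 close together, which a raw hour value would not.

```python
# Encode local time as sin/cos so the feature wraps around midnight.
import numpy as np

local_time = np.array([0.0, 6.0, 12.0, 18.0, 23.9])  # hypothetical LT values in hours
sin_lt = np.sin(2 * np.pi * local_time / 24.0)
cos_lt = np.cos(2 * np.pi * local_time / 24.0)
print(np.column_stack([sin_lt, cos_lt]).round(3))
```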
How to use:
The Github repository https://github.com/rmcgranaghan/precipNet is provided to detail the use of these data and to provide Jupyter notebooks to facilitate getting started. The code is implemented in Python 3 and is licensed under the GNU General Public License v3.0
Citation:
For anyone using these data, please cite each of the following papers:
McGranaghan, R. M., Ziegler, J., Bloch, T., Hatch, S., Camporeale, E., Lynch, K., et al. (2021). Toward a next generation particle precipitation model: Mesoscale prediction through machine learning (a case study and framework for progress). Space Weather, 19, e2020SW002684. https://doi.org/10.1029/2020SW002684
McGranaghan, R. (2019), Eight lessons I learned leading a scientific “design sprint”, Eos, 100, https://doi.org/10.1029/2019EO136427. Published on 11 November 2019.
For questions or comments please contact Ryan McGranaghan (ryan.mcgranaghan@gmail.com)
Geothermal power plants typically show decreasing heat and power production rates over time. Mitigation strategies include optimizing the management of existing wells (increasing or decreasing the fluid flow rates across the wells) and drilling new wells at appropriate locations. The latter is expensive, time-consuming, and subject to many engineering constraints, but the former is a viable mechanism for periodic adjustment of the available fluid allocations.

Data and supporting literature from a study describing a new approach combining reservoir modeling and machine learning to produce models that enable strategies for the mitigation of decreased heat and power production rates over time for geothermal power plants. The computational approach used enables translation of sets of potential flow rates for the active wells into reservoir-wide estimates of produced energy and discovery of optimal flow allocations among the studied sets. In our computational experiments, we utilize collections of simulations for a specific reservoir (which capture subsurface characterization and realize history matching) along with machine learning models that predict temperature and pressure timeseries for production wells. We evaluate this approach using an "open-source" reservoir we have constructed that captures many of the characteristics of Brady Hot Springs, a commercially operational geothermal field in Nevada, USA. Selected results from a reservoir model of Brady Hot Springs itself are presented to show successful application to an existing system. In both cases, energy predictions prove to be highly accurate: all observed prediction errors do not exceed 3.68% for temperatures and 4.75% for pressures. In a cumulative energy estimation, we observe prediction errors that are less than 4.04%. A typical reservoir simulation for Brady Hot Springs completes in approximately 4 hours, whereas our machine learning models yield accurate 20-year predictions for temperatures, pressures, and produced energy in 0.9 seconds. This paper aims to demonstrate how the models and techniques from our study can be applied to achieve rapid exploration of controlled parameters and optimization of other geothermal reservoirs.

Includes a synthetic, yet realistic, model of a geothermal reservoir, referred to as the open-source reservoir (OSR). OSR is a 10-well (4 injection wells and 6 production wells) system that resembles Brady Hot Springs at a high level but has a number of sufficiently modified characteristics (which renders any possible similarity between specific characteristics like temperatures and pressures as purely random). We study OSR through CMG simulations with a wide range of flow allocation scenarios. Includes a dataset with 101 simulated scenarios that cover the period of time between 2020 and 2040 and a link to the published paper about this project, where we focus on the machine learning work for predicting OSR's energy production based on the simulation data, as well as a link to the GitHub repository where we have published the code we have developed (please refer to the repository's readme file to see instructions on how to run the code). Additional links are included to associated work led by the USGS to identify geologic factors associated with well productivity in geothermal fields.

Below are the high-level steps for applying the same modeling + ML process to other geothermal reservoirs:
1. Develop a geologic model of the geothermal field. The location of faults, upflow zones, aquifers, etc. need to be accounted for as accurately as possible.
2. The geologic model needs to be converted to a reservoir model that can be used in a reservoir simulator, such as, for instance, CMG STARS, TETRAD, or FALCON.
3. Using native state modeling, the initial temperature and pressure distributions are evaluated, and they become the initial conditions for dynamic reservoir simulations.
4. Using history matching with tracers and available production data, the model should be tuned to represent the subsurface reservoir as accurately as possible.
5. A large number of simulations is run using the history-matched reservoir model. Each simulation assumes a different wellbore flow rate allocation across the injection and production wells, where the individual selected flow rates do not violate the practical constraints for the corresponding wells.
6. ML models are trained using the simulation data. The code in our GitHub repository demonstrates how these models can be trained and evaluated (see the sketch after this list).
7. The trained ML models can be used to evaluate a large set of candidate flow allocations with the goal of selecting the most optimal allocations, i.e., producing the largest amounts of thermal energy over the modeled period of time. The referenced paper provides more details about this optimization process.
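The study's actual models are not reproduced here; as a generic illustration of steps 6 and 7, the sketch below fits a scikit-learn regressor on stand-in (flow allocation -> produced energy) pairs from 101 hypothetical scenarios, then ranks a large candidate set with it:

```python
# Fit a surrogate model on simulated scenarios, then search candidate
# flow allocations for the highest predicted energy output.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
n_wells = 10                                          # 4 injection + 6 production wells
X_sim = rng.uniform(0, 1, size=(101, n_wells))        # stand-in flow allocations per scenario
y_sim = X_sim.sum(axis=1) + rng.normal(0, 0.1, 101)   # stand-in produced energy

model = RandomForestRegressor(random_state=0).fit(X_sim, y_sim)

candidates = rng.uniform(0, 1, size=(10000, n_wells))  # large candidate allocation set
best = candidates[np.argmax(model.predict(candidates))]
print("best predicted allocation:", best.round(2))
```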
License: https://www.nist.gov/open/license
Round 1 Training Dataset

The data being generated and disseminated is the training data used to construct trojan detection software solutions. This data, generated at NIST, consists of human-level AIs trained to perform a variety of tasks (image classification, natural language processing, etc.). A known percentage of these trained AI models have been poisoned with a known trigger which induces incorrect behavior. This data will be used to develop software solutions for detecting which trained AI models have been poisoned via embedded triggers.

This dataset consists of 1000 trained, human-level, image classification AI models using the following architectures (Inception-v3, DenseNet-121, and ResNet50). The models were trained on synthetically created image data of non-real traffic signs superimposed on road background scenes. Half (50%) of the models have been poisoned with an embedded trigger which causes misclassification of the images when the trigger is present.

Errata: This dataset had a software bug in the trigger embedding code that caused 4 models trained for this dataset to have a ground truth value of 'poisoned' but which did not contain any triggers embedded. These models should not be used. Models without a trigger embedded:
- id-00000184
- id-00000599
- id-00000858
- id-00001088

Google Drive Mirror: https://drive.google.com/open?id=1uwVt3UCRL2fCX9Xvi2tLoz_z-DwbU6Ce