This dataset has been created to demonstrate the use of a simple linear regression model. It includes two variables: an independent variable and a dependent variable. The data can be used for training, testing, and validating a simple linear regression model, making it ideal for educational purposes, tutorials, and basic predictive analysis projects. The dataset consists of 100 observations with no missing values, and it follows a linear relationship.
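As a minimal sketch of how such a dataset might be used (the column names x and y and the synthetic stand-in data below are assumptions for illustration, not part of the dataset itself):

```python
import numpy as np
import pandas as pd
from sklearn.linear_model import LinearRegression
from sklearn.metrics import r2_score
from sklearn.model_selection import train_test_split

# Synthetic stand-in for the described data: 100 observations, linear relationship.
rng = np.random.default_rng(0)
x = rng.uniform(0, 10, 100)
y = 3.0 * x + 5.0 + rng.normal(0, 1.0, 100)
df = pd.DataFrame({"x": x, "y": y})

# Train/test split, fit, and evaluate a simple linear regression.
X_train, X_test, y_train, y_test = train_test_split(
    df[["x"]], df["y"], test_size=0.2, random_state=42
)
model = LinearRegression().fit(X_train, y_train)
print("slope:", model.coef_[0], "intercept:", model.intercept_)
print("test R^2:", r2_score(y_test, model.predict(X_test)))
```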
License: CC0 1.0 Universal (https://creativecommons.org/publicdomain/zero/1.0/)
This is a very simple multiple linear regression dataset for beginners. This dataset has only three columns and twenty rows. There are only two independent variables and one dependent variable. The independent variables are 'age' and 'experience'. The dependent variable is 'income'.
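A minimal sketch of fitting this dataset, assuming it is loaded from a CSV file (the file name below is a placeholder; the column names 'age', 'experience', and 'income' are as described):

```python
import pandas as pd
from sklearn.linear_model import LinearRegression

df = pd.read_csv("multiple_linear_regression.csv")  # placeholder file name

X = df[["age", "experience"]]
y = df["income"]

model = LinearRegression().fit(X, y)
print("coefficients:", dict(zip(X.columns, model.coef_)))
print("intercept:", model.intercept_)

# Predicted income for a hypothetical 30-year-old with 5 years of experience.
print(model.predict(pd.DataFrame({"age": [30], "experience": [5]}))[0])
```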
License: Attribution-ShareAlike 4.0 (CC BY-SA 4.0) (https://creativecommons.org/licenses/by-sa/4.0/)
License information was derived automatically
Four dataframes are provided for solving regression problems. Descriptions of the dataframe variables are provided in the corresponding .docx documents.
License: Apache License, v2.0 (https://www.apache.org/licenses/LICENSE-2.0)
License information was derived automatically
This dataset was created by Anurag Verma
Released under Apache 2.0
These datasets are used for the Centrale Lille Machine Learning course taught by Pascal Yim (image generated with ideogram.ai)
Simple examples for regression. For example, "datareg_cos_300.csv" is a set of 300 points following a noisy cosine, with two columns 'x' and 'y'
Estimation of the median home value (MEDV) by neighborhood from several variables:
- RM: number of rooms
- LSTAT: a measure of the poverty rate
- PTRATIO: pupil-teacher ratio in the local schools
Simplified version of the original UCI dataset
Source: https://www.kaggle.com/datasets/schirmerchad/bostonhoustingmlnd
House price prediction around Seattle (King County)
Source: https://www.kaggle.com/datasets/harlfoxem/housesalesprediction
House price prediction - Kaggle competition
The "Old Faithful" geyser is a cone geyser in Yellowstone National Park in the United States
Measured variables:
- duration: the duration of the eruption
- waiting: the time elapsed since the previous eruption
- kind: a 'short' or 'long' label for the type of eruption
Dataset for classifying iris species
The following information is available:
- sepal_length: sepal length (in cm)
- sepal_width: sepal width
- petal_length: petal length
- petal_width: petal width
- species: one of 3 iris species: 'setosa', 'versicolor' or 'virginica'
Source: UCI (http://archive.ics.uci.edu/)
A simplified version of the iris dataset, with only the petal measurements and 2 species: versicolor (0) and virginica (1)
Heart attack prediction (output) based on various parameters such as age, cholesterol level, ...
Source: https://www.kaggle.com/rashikrahmanpritom/heart-attack-analysis-prediction-dataset
The goal is to predict whether a tumor is malignant or not, based on measurements from a biopsy of the tumor
Source: https://www.kaggle.com/uciml/breast-cancer-wisconsin-data
A dataset comparable to the iris one. The goal is to predict the penguin species
Source: https://www.kaggle.com/ashkhagan/palmer-penguins-datasetalternative-iris-dataset
Star classification
Source: https://www.kaggle.com/datasets/deepu1109/star-dataset
Predict whether a mushroom is edible or not
Source: https://www.kaggle.com/uciml/mushroom-classification
A very classic dataset on the survivors of the Titanic
Source: https://www.kaggle.com/c/titanic
The "PIMA Indian Diabetes" dataset
Diabetes prediction for a population of women from the Pima tribe
Source: https://www.kaggle.com/datasets/uciml/pima-indians-diabetes-database
The goal is to predict which Orange Telecom customers will leave for a competitor (a 'churn' or 'attrition' problem)
The "churn-big.csv" version contains more data
Source: https://www.kaggle.com/datasets/mnassrib/telecom-churn-datasets
Stroke prediction
Source: https://www.kaggle.com/datasets/shashwatwork/cerebral-stroke-predictionimbalaced-dataset
Failure prediction (UCI)
Source: https://www.kaggle.com/datasets/shivamb/machine-predictive-maintenance-classification/code
License: CC0 1.0 Universal (https://creativecommons.org/publicdomain/zero/1.0/)
This dataset is a great way to practice single or multiple regression tasks. It is simple, clean, and pre-split into training and testing sets! Choose your input features and your target label(s) and start predicting! The beauty of this dataset is that you get to choose what to do with it. Like in the real world, we may have raw data, but don't have a clear path forward. Use this dataset to exercise that skill!
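One possible starting point, as a sketch: load the pre-split files, choose features and a target, fit a baseline model, and score it on the test split. The file and column names below are placeholders, since the description does not name them.

```python
import pandas as pd
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_absolute_error

train = pd.read_csv("train.csv")  # placeholder file name
test = pd.read_csv("test.csv")    # placeholder file name

features = ["feature_1", "feature_2"]  # choose your own input columns
target = "target"                      # choose your own label column

model = LinearRegression().fit(train[features], train[target])
preds = model.predict(test[features])
print("test MAE:", mean_absolute_error(test[target], preds))
```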
This dataset was created by Nitesh Addagatla
License: MIT License (https://opensource.org/licenses/MIT)
License information was derived automatically
This dataset aims to predict the number of books read per month based on the number of hours spent reading each week. It provides a practical dataset for linear regression tasks, where you can explore how reading habits impact the number of books completed.
The dataset is generated based on assumptions about the average weekly reading hours of people and the average time needed to read a book.
HoursSpentReading (Feature): The number of hours spent reading per week, ranging from 0 to 20 hours. This feature captures the amount of time an individual dedicates to reading each week.
BooksRead (Target): The number of books read per month, with values ranging from 0 to 10 books. This target variable represents the outcome influenced by the amount of weekly reading time.
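A minimal sketch of the linear regression task described, using the stated column names (the CSV file name is an assumption):

```python
import pandas as pd
from sklearn.linear_model import LinearRegression

df = pd.read_csv("reading_habits.csv")  # hypothetical file name

X = df[["HoursSpentReading"]]  # weekly reading hours (0-20)
y = df["BooksRead"]            # books completed per month (0-10)

model = LinearRegression().fit(X, y)
print("extra books per additional weekly hour:", model.coef_[0])
print("predicted books at 10 h/week:",
      model.predict(pd.DataFrame({"HoursSpentReading": [10]}))[0])
```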
License: CC0 1.0 Universal (https://creativecommons.org/publicdomain/zero/1.0/)
This dataset comprises 4 features and one target variable.
Features: Feature1, Feature2, Feature3, Feature4
Target: Target
We need to predict the value of Target based on the feature list
License: CC0 1.0 Universal (https://creativecommons.org/publicdomain/zero/1.0/)
housing.csv: this dataset consists of 506 points in 14 dimensions. Each point represents a house in the Boston area, and the 14 attributes, found in order across the columns, are the following:
* CRIM - per capita crime rate by town
* ZN - proportion of residential land zoned for lots over 25,000 sq.ft.
* INDUS - proportion of non-retail business acres per town.
* CHAS - Charles River dummy variable (1 if tract bounds river; 0 otherwise)
* NOX - nitric oxides concentration (parts per 10 million)
* RM - average number of rooms per dwelling
* AGE - proportion of owner-occupied units built prior to 1940
* DIS - weighted distances to five Boston employment centres
* RAD - index of accessibility to radial highways
* TAX - full-value property-tax rate per $10,000
* PTRATIO - pupil-teacher ratio by town
* B - 1000(Bk - 0.63)^2 where Bk is the proportion of blacks by town
* LSTAT - % lower status of the population
* MEDV - Median value of owner-occupied homes in $1000's
This dataset is normally associated with 2 regression tasks: predicting NOX (in which the nitric oxides concentration is to be predicted); and predicting MEDV (in which the median value of a home is to be predicted).
This dataset was also pre-processed and scaled.
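A sketch of the two regression tasks mentioned above, assuming housing.csv is loaded with a header row containing the column names listed:

```python
import pandas as pd
from sklearn.linear_model import LinearRegression
from sklearn.metrics import r2_score
from sklearn.model_selection import train_test_split

df = pd.read_csv("housing.csv")

for target in ["NOX", "MEDV"]:  # the two regression tasks mentioned above
    X = df.drop(columns=[target])
    y = df[target]
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
    model = LinearRegression().fit(X_tr, y_tr)
    print(target, "test R^2:", round(r2_score(y_te, model.predict(X_te)), 3))
```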
License: CC0 1.0 Universal (https://creativecommons.org/publicdomain/zero/1.0/)
Description: The table below gives the heights of fathers and their sons, based on a famous experiment by Karl Pearson around 1903. The number of cases is 1078. Random noise was added to the original data to produce heights to the nearest 0.1 inch.
Objective: Use this dataset to practice simple linear regression.
Columns: Father height, Son height
Source: Department of Statistics, University of California, Berkeley
Download TSV source file: Pearson.tsv
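A minimal sketch of the exercise, computing the least-squares line for son height on father height directly from its closed-form slope and intercept (the column names are taken from the description and may need adjusting to match the TSV header):

```python
import numpy as np
import pandas as pd

df = pd.read_csv("Pearson.tsv", sep="\t")
father = df["Father height"].to_numpy()
son = df["Son height"].to_numpy()

# Closed-form simple linear regression: slope = cov(x, y) / var(x),
# intercept = mean(y) - slope * mean(x).
slope = np.cov(father, son, ddof=1)[0, 1] / np.var(father, ddof=1)
intercept = son.mean() - slope * father.mean()
print(f"son height ≈ {intercept:.2f} + {slope:.3f} * father height")
```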
License: CC0 1.0 Universal (https://creativecommons.org/publicdomain/zero/1.0/)
The Ice Cream Selling dataset is a simple and well-suited dataset for beginners in machine learning who are looking to practice polynomial regression. It consists of two columns: temperature and the corresponding number of units of ice cream sold.
The dataset captures the relationship between temperature and ice cream sales. It serves as a practical example for understanding and implementing polynomial regression, a powerful technique for modeling nonlinear relationships in data.
The dataset is designed to be straightforward and easy to work with, making it ideal for beginners. The simplicity of the data allows beginners to focus on the fundamental concepts and steps involved in polynomial regression without overwhelming complexity.
By using this dataset, beginners can gain hands-on experience in preprocessing the data, splitting it into training and testing sets, selecting an appropriate degree for the polynomial regression model, training the model, and evaluating its performance. They can also explore techniques to address potential challenges such as overfitting.
With this dataset, beginners can practice making predictions of ice cream sales based on temperature inputs and visualize the polynomial regression curve that represents the relationship between temperature and ice cream sales.
Overall, the Ice Cream Selling dataset provides an accessible and practical learning resource for beginners to grasp the concepts and techniques of polynomial regression in the context of analyzing ice cream sales data.
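A sketch of that workflow: split the data, fit polynomial regressions of several degrees, and compare held-out performance to spot overfitting. The file and column names below are assumptions.

```python
import pandas as pd
from sklearn.linear_model import LinearRegression
from sklearn.metrics import r2_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures

df = pd.read_csv("ice_cream_selling.csv")  # hypothetical file name
X = df[["temperature"]]
y = df["units_sold"]                       # hypothetical column name

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=1)

# Compare a few polynomial degrees on held-out data to spot under/overfitting.
for degree in (1, 2, 3, 5):
    model = make_pipeline(PolynomialFeatures(degree), LinearRegression())
    model.fit(X_tr, y_tr)
    print(f"degree {degree}: test R^2 = {r2_score(y_te, model.predict(X_te)):.3f}")
```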
License: CC0 1.0 Universal (https://creativecommons.org/publicdomain/zero/1.0/)
Salary dataset in CSV format for simple linear regression. It has also been used in the ASPDC series "ML in one month".
There are two columns:
1. Experience in years
2. Salary
License: MIT License (https://opensource.org/licenses/MIT)
License information was derived automatically
This dataset is designed to help you practice linear regression, a fundamental concept in machine learning and statistical analysis. The dataset contains a simulated linear relationship between the number of hours a student studies and the marks they obtain. It is an ideal resource for beginners who want to understand how linear regression works, or for educators looking to provide a simple yet effective example to their students.
Simple data created for practicing regression problems. It consists of three columns: Price, Feature 1, and Feature 2. Try to predict Price using Feature 1 and Feature 2. The data is clean, so no data cleaning is required.
License: CC0 1.0 Universal (https://creativecommons.org/publicdomain/zero/1.0/)
This dataset is meant to be used along with my notebook Linear Regression Notes, which provides a guideline for applying correlation analysis and linear regression models from a statistical approach.
A fictional call center is interested in knowing the relationship between the number of personnel and some variables that measure their performance such as average answer time, average calls per hour, and average time per call. Data were simulated to represent 200 shifts.
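A sketch of that statistical approach with statsmodels: inspect the correlation matrix, then fit an OLS model for one of the performance measures against the number of personnel. The file and column names below are assumptions.

```python
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("call_center.csv")  # hypothetical file name

# Correlation between staffing and the performance measures.
print(df.corr(numeric_only=True))

# OLS regression of average answer time on the number of personnel
# ('avg_answer_time' and 'personnel' are hypothetical column names).
model = smf.ols("avg_answer_time ~ personnel", data=df).fit()
print(model.summary())
```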
This dataset is designed for beginners to practice regression problems, particularly in the context of predicting house prices. It contains 1000 rows, with each row representing a house and various attributes that influence its price. The dataset is well-suited for learning basic to intermediate-level regression modeling techniques.
Beginner Regression Projects: This dataset can be used to practice building regression models such as Linear Regression, Decision Trees, or Random Forests. The target variable (house price) is continuous, making this an ideal problem for supervised learning techniques.
Feature Engineering Practice: Learners can create new features by combining existing ones, such as the price per square foot or age of the house, providing an opportunity to experiment with feature transformations.
Exploratory Data Analysis (EDA): You can explore how different features (e.g., square footage, number of bedrooms) correlate with the target variable, making it a great dataset for learning about data visualization and summary statistics.
Model Evaluation: The dataset allows for various model evaluation techniques such as cross-validation, R-squared, and Mean Absolute Error (MAE). These metrics can be used to compare the effectiveness of different models.
The dataset is highly versatile for a range of machine learning tasks. You can apply simple linear models to predict house prices based on one or two features, or use more complex models like Random Forest or Gradient Boosting Machines to understand interactions between variables.
It can also be used for dimensionality reduction techniques like PCA or to practice handling categorical variables (e.g., neighborhood quality) through encoding techniques like one-hot encoding.
This dataset is ideal for anyone wanting to gain practical experience in building regression models while working with real-world features.
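A sketch covering several of the uses above: derive a simple engineered feature, one-hot encode a categorical column, and compare two models with cross-validated MAE. All file and column names below are assumptions.

```python
import pandas as pd
from sklearn.compose import make_column_transformer
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import OneHotEncoder

df = pd.read_csv("house_prices.csv")  # hypothetical file name

# Feature engineering example: derive the age of the house
# ('year_built' is a hypothetical column name).
df["house_age"] = 2024 - df["year_built"]

X = df[["square_footage", "bedrooms", "house_age", "neighborhood_quality"]]
y = df["price"]

# One-hot encode the categorical neighborhood quality column.
preprocess = make_column_transformer(
    (OneHotEncoder(handle_unknown="ignore"), ["neighborhood_quality"]),
    remainder="passthrough",
)

for name, estimator in [("linear regression", LinearRegression()),
                        ("random forest", RandomForestRegressor(random_state=0))]:
    model = make_pipeline(preprocess, estimator)
    mae = -cross_val_score(model, X, y, cv=5,
                           scoring="neg_mean_absolute_error").mean()
    print(f"{name}: 5-fold cross-validated MAE = {mae:,.0f}")
```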
License: CC0 1.0 Universal (https://creativecommons.org/publicdomain/zero/1.0/)
This dataset was built to help understand simple linear regression. Its contents are easy to understand.
Contains two columns:
- CGPA: aggregate CGPA received
- Package: total package (LPA)
If you like my work, please UPVOTE 🙏🙏
License: CC0 1.0 Universal (https://creativecommons.org/publicdomain/zero/1.0/)
This is a randomly generated image dataset containing 10,000 square images with a single regular polygon each. The procedure to create this dataset can be found in the dataset Code tab.
This dataset can be used for a number of Machine Learning problems, both classification and regression.
License: Open Database License (ODbL) v1.0 (https://www.opendatacommons.org/licenses/odbl/1.0/)
License information was derived automatically
Logistic regression is a statistical method used for binary classification tasks, where the goal is to predict one of two possible outcomes. It's widely used in machine learning for tasks like spam detection, disease diagnosis, and customer churn prediction.
In logistic regression, the dependent variable (the outcome) is categorical and typically takes on two values (often represented as 0 and 1). The model works by estimating the probability that a given input belongs to a certain class, based on one or more predictor variables (which can be continuous or categorical).
Key points:
Sigmoid function: Logistic regression uses the sigmoid (or logistic) function, which maps any real-valued number to a value between 0 and 1. This is how the model outputs a probability.
The sigmoid function is given by:
P(y = 1 | X) = 1 / (1 + e^(−z))
where z is a linear combination of the input features:
z = β0 + β1·x1 + β2·x2 + ⋯ + βn·xn
Here, β0, β1, …, βn are the coefficients, and x1, x2, …, xn are the features.
Prediction: Once the model is trained, it predicts a probability P(y = 1 | X). A threshold (often 0.5) is used to classify the observation as belonging to one class or the other. If the probability is greater than 0.5, it predicts class 1; otherwise, it predicts class 0.
Loss Function: Logistic regression typically uses a loss function called log loss (or binary cross-entropy), which measures the difference between the predicted probabilities and the actual class labels.
Interpretability: The coefficients in logistic regression can provide insights into the relationship between each feature and the probability of the outcome. For example, a positive coefficient indicates that an increase in the corresponding feature is associated with a higher probability of the outcome being class 1.
Logistic regression is relatively simple to implement and interpret, which makes it a popular choice for many real-world classification tasks!
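A small worked example of the formulas above, with made-up coefficients and features (purely illustrative numbers, not from any fitted model):

```python
import numpy as np

def sigmoid(z):
    """Map any real-valued number to a probability in (0, 1)."""
    return 1.0 / (1.0 + np.exp(-z))

beta0 = -1.0                   # intercept β0 (made up)
beta = np.array([0.8, -0.5])   # coefficients β1, β2 (made up)
x = np.array([2.0, 1.0])       # one observation's features x1, x2 (made up)

z = beta0 + beta @ x           # linear combination of the features
p = sigmoid(z)                 # P(y = 1 | X)
pred = int(p > 0.5)            # classify with the usual 0.5 threshold
print(f"z = {z:.2f}, P(y=1|X) = {p:.3f}, predicted class = {pred}")

# Log loss (binary cross-entropy) for this observation if the true label is 1.
y_true = 1
loss = -(y_true * np.log(p) + (1 - y_true) * np.log(1 - p))
print(f"log loss = {loss:.3f}")
```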