This dataset contains 55,000 entries of synthetic customer transactions, generated using Python's Faker library. The goal behind creating this dataset was to provide a resource for learners like myself to explore, analyze, and apply various data analysis techniques in a context that closely mimics real-world data.
About the Dataset:
- CID (Customer ID): A unique identifier for each customer.
- TID (Transaction ID): A unique identifier for each transaction.
- Gender: The gender of the customer, categorized as Male or Female.
- Age Group: Age group of the customer, divided into several ranges.
- Purchase Date: The timestamp of when the transaction took place.
- Product Category: The category of the product purchased, such as Electronics, Apparel, etc.
- Discount Availed: Indicates whether the customer availed any discount (Yes/No).
- Discount Name: Name of the discount applied (e.g., FESTIVE50).
- Discount Amount (INR): The amount of discount availed by the customer.
- Gross Amount: The total amount before applying any discount.
- Net Amount: The final amount after applying the discount.
- Purchase Method: The payment method used (e.g., Credit Card, Debit Card, etc.).
- Location: The city where the purchase took place.
Use Cases:
1. Exploratory Data Analysis (EDA): This dataset is ideal for conducting EDA, allowing users to practice techniques such as summary statistics, visualizations, and identifying patterns within the data.
2. Data Preprocessing and Cleaning: Learners can work on handling missing data, encoding categorical variables, and normalizing numerical values to prepare the dataset for analysis.
3. Data Visualization: Use tools like Python's Matplotlib, Seaborn, or Power BI to visualize purchasing trends, customer demographics, or the impact of discounts on purchase amounts.
4. Machine Learning Applications: After applying feature engineering, this dataset is suitable for supervised learning models, such as predicting whether a customer will avail a discount or forecasting purchase amounts based on the input features.
This dataset provides an excellent sandbox for honing skills in data analysis, machine learning, and visualization in a structured but flexible manner.
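A minimal sketch of loading the file and running a first pass of EDA with the documented columns (the file name is an assumption; adjust it to the actual download):

```python
import pandas as pd

# Assumed file name; adjust to the actual download path.
df = pd.read_csv("customer_transactions.csv")

print(df.shape)   # expect ~55,000 rows
print(df.dtypes)

# Example: how does discount usage relate to the final amount paid?
print(df.groupby("Discount Availed")["Net Amount"].mean())
print(df["Product Category"].value_counts())
```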
This is not a real dataset. It was generated using Python's Faker library for the sole purpose of learning.
License: CC0 1.0 (https://creativecommons.org/publicdomain/zero/1.0/)
Hosted by: Walsoft Computer Institute | Download dataset | Kaggle profile
Walsoft Computer Institute runs a Business Intelligence (BI) training program for students from diverse educational, geographical, and demographic backgrounds. The institute has collected detailed data on student attributes, entry exams, study effort, and final performance in two technical subjects: Python Programming and Database Systems.
As part of an internal review, the leadership team has hired you, a Data Science Consultant, to analyze this dataset and provide clear, evidence-based recommendations on how to improve the program.

Answer this central question:

"Using the BI program dataset, how can Walsoft strategically improve student success, optimize resources, and increase the effectiveness of its training program?"
You are required to analyze and provide actionable insights for the following three areas:
1. Should entry exams remain the primary admissions filter?
Your task is to evaluate the predictive power of entry exam scores compared to other features such as prior education, age, gender, and study hours.
Deliverables:
2. Are there at-risk student groups who need extra support?
Your task is to uncover whether certain backgrounds (e.g., prior education level, country, residence type) correlate with poor performance and recommend targeted interventions.
Deliverables:
3. How can we allocate resources for maximum student success?
Your task is to segment students by success profiles and suggest differentiated teaching/facility strategies.
Deliverables:
| Column | Description |
|---|---|
| fNAME, lNAME | Student first and last name |
| Age | Student age (21-71 years) |
| gender | Gender (standardized as "Male"/"Female") |
| country | Student's country of origin |
| residence | Student housing/residence type |
| entryEXAM | Entry test score (28-98) |
| prevEducation | Prior education (High School, Diploma, etc.) |
| studyHOURS | Total study hours logged |
| Python | Final Python exam score |
| DB | Final Database exam score |
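For question 1 (the predictive power of entry exams), a minimal sketch of comparing feature importances, assuming the cleaned file (cleaned_bi.csv, described below) keeps the raw column names listed above:

```python
import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

df = pd.read_csv("cleaned_bi.csv")  # assumes the cleaned file described below

# Compare how strongly each feature relates to final Python performance.
X = pd.get_dummies(
    df[["entryEXAM", "studyHOURS", "Age", "gender", "prevEducation"]],
    drop_first=True,
)
y = df["Python"]

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)
model = RandomForestRegressor(n_estimators=200, random_state=42)
model.fit(X_train, y_train)

print("R^2 on held-out students:", model.score(X_test, y_test))
print(pd.Series(model.feature_importances_, index=X.columns)
        .sort_values(ascending=False))
```

If entryEXAM dominates the importance ranking, it supports keeping the entry exam as an admissions filter; if studyHOURS or prior education rank higher, that argues for broader criteria.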
You are provided with a real-world messy dataset that reflects the types of issues data scientists face every day, from inconsistent formatting to missing values.
Download: bi.csv
This dataset includes common data quality challenges:
Country name inconsistencies
e.g. Norge → Norway, RSA → South Africa, UK → United Kingdom
Residence type variations
e.g. BI-Residence, BIResidence, BI_Residence → unify to BI Residence
Education level typos and casing issues
e.g. Barrrchelors → Bachelor, DIPLOMA, Diplomaaa → Diploma
Gender value noise
e.g. M, F, female → standardize to Male / Female
Missing scores in Python subject
Fill NaN values using column mean or suitable imputation strategy
Participants using this dataset are expected to apply data cleaning techniques such as:
- String standardization
- Null value imputation
- Type correction (e.g., scores as float)
- Validation and visual verification
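A sketch of these cleaning steps in Pandas, using the example mappings listed above (the replacement dictionaries would need to be extended to cover every variant that actually appears in the file):

```python
import pandas as pd

df = pd.read_csv("bi.csv")

# Unify country spellings (examples from the list above).
df["country"] = df["country"].replace(
    {"Norge": "Norway", "RSA": "South Africa", "UK": "United Kingdom"}
)

# Collapse residence variants to a single label.
df["residence"] = df["residence"].replace(
    {"BI-Residence": "BI Residence",
     "BIResidence": "BI Residence",
     "BI_Residence": "BI Residence"}
)

# Fix education typos and casing.
df["prevEducation"] = df["prevEducation"].str.strip().replace(
    {"Barrrchelors": "Bachelor", "DIPLOMA": "Diploma", "Diplomaaa": "Diploma"}
)

# Standardize gender values.
df["gender"] = df["gender"].str.strip().str.capitalize().replace(
    {"M": "Male", "F": "Female"}
)

# Impute missing Python scores with the column mean, stored as float.
df["Python"] = df["Python"].astype(float)
df["Python"] = df["Python"].fillna(df["Python"].mean())
```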
Bonus: Submissions that use and clean this dataset will earn additional Technical Competency points.
Download: cleaned_bi.csv
This version has been fully standardized and preprocessed:
- All fields cleaned and renamed consistently
- Missing Python scores filled with the column mean
Tailor-made data for applying machine learning models, where newcomers can easily perform their EDA.
The data consists of all the features of the four-wheelers available in the market in 1985. The task is to predict the **price of the car** using Linear Regression, PCA, SVM regression, etc.
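A minimal Linear Regression sketch for this task. The file and column names below are assumptions (they follow the UCI-style 1985 automobile schema); adapt them to the actual CSV:

```python
import pandas as pd
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split

df = pd.read_csv("automobile.csv")  # assumed file name

# Assumed column names; adjust to the file's actual schema.
cols = ["horsepower", "engine-size", "curb-weight", "highway-mpg", "price"]
df = df[cols].apply(pd.to_numeric, errors="coerce").dropna()

X, y = df.drop(columns="price"), df["price"]
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = LinearRegression().fit(X_train, y_train)
print("R^2 on held-out cars:", model.score(X_test, y_test))
```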
License: CC0 1.0 (https://creativecommons.org/publicdomain/zero/1.0/)
Context
Online e-commerce is rapidly growing in Pakistan. Sellers list thousands of products across multiple categories, each with different prices, ratings, and sales numbers. Understanding the patterns of product sales, pricing, and customer feedback is crucial for businesses and data scientists alike.
This dataset simulates a realistic snapshot of online product sales in Pakistan, including diverse categories like Electronics, Clothing, Home & Kitchen, Books, Beauty, and Sports.
Source
Generated synthetically using Python and NumPy for learning and practice purposes.
No real personal or private data is included.
Designed specifically for Kaggle competitions, notebooks, and ML/EDA exercises.
About the File
File name: Pakistan_Online_Product_Sales.csv
Rows: 1000+
Columns: 6
Purpose:
Train Machine Learning models (regression/classification)
Explore data through EDA and visualizations
Practice feature engineering and data preprocessing
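A quick-start sketch for loading and summarizing the file; the column names in the groupby are hypothetical, since the six-column schema isn't enumerated here:

```python
import pandas as pd

df = pd.read_csv("Pakistan_Online_Product_Sales.csv")

print(df.head())
print(df.describe(include="all"))

# Hypothetical column names; replace with the actual schema.
print(df.groupby("Category").agg({"Price": "mean", "UnitsSold": "sum"}))
```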
As described in the README.md file, the GitHub repository PRTR_transfers contains Python scripts written to run a data-centric and chemical-centric framework for tracking EoL chemical flow transfers, identifying potential EoL exposure scenarios, and performing Chemical Flow Analysis (CFA). The Extract, Transform, and Load (ETL) pipeline it implements leverages publicly accessible Pollutant Release and Transfer Register (PRTR) systems belonging to Organization for Economic Cooperation and Development (OECD) member countries. The Life Cycle Inventory (LCI) data obtained by the ETL is stored in a Structured Query Language (SQL) database called PRTR_transfers that can be connected to Machine Learning Operations (MLOps) in production environments, making the framework scalable for real-world applications. The data ingestion pipeline can supply data at an annual rate, ensuring labeled data can be ingested into data-driven models if retraining is needed, especially to address problems like data and concept drift that could drastically affect the performance of data-driven models. The README also describes the Python libraries required for running the code, how to use it, the output files obtained after running the Python scripts, and how to obtain all manuscript figures (file Manuscript Figures-EDA.ipynb) and results. This dataset is associated with the following publication: Hernandez-Betancur, J.D., G.J. Ruiz-Mercado, and M. Martín. Tracking end-of-life stage of chemicals: A scalable data-centric and chemical-centric approach. Resources, Conservation and Recycling. Elsevier Science BV, Amsterdam, NETHERLANDS, 196: 107031, (2023).
License: CC0 1.0 (https://creativecommons.org/publicdomain/zero/1.0/)
Demographic Analysis of Shopping Behavior: Insights and Recommendations
Dataset Information: The Shopping Mall Customer Segmentation Dataset comprises 15,079 unique entries, featuring Customer ID, age, gender, annual income, and spending score. This dataset assists in understanding customer behavior for strategic marketing planning.
Cleaned Data Details: Data cleaned and standardized, 15,079 unique entries with attributes including - Customer ID, age, gender, annual income, and spending score. Can be used by marketing analysts to produce a better strategy for mall specific marketing.
Challenges Faced:
1. Data Cleaning: Overcoming inconsistencies and missing values required meticulous attention.
2. Statistical Analysis: Interpreting demographic data accurately demanded collaborative effort.
3. Visualization: Crafting informative visuals to convey insights effectively posed design challenges.

Research Topics:
1. Consumer Behavior Analysis: Exploring psychological factors driving purchasing decisions.
2. Market Segmentation Strategies: Investigating effective targeting based on demographic characteristics.

Suggestions for Project Expansion:
1. Incorporate External Data: Integrate social media analytics or geographic data to enrich customer insights.
2. Advanced Analytics Techniques: Explore advanced statistical methods and machine learning algorithms for deeper analysis.
3. Real-Time Monitoring: Develop tools for agile decision-making through continuous customer behavior tracking.

This summary outlines the demographic analysis of shopping behavior, highlighting key insights, dataset characteristics, team contributions, challenges, research topics, and suggestions for project expansion. Leveraging these insights can enhance marketing strategies and drive business growth in the retail sector.
References:
- OpenAI. (2022). ChatGPT [Computer software]. https://openai.com/chatgpt
- Mustafa, Z. (2022). Shopping Mall Customer Segmentation Data [Data set]. Kaggle. https://www.kaggle.com/datasets/zubairmustafa/shopping-mall-customer-segmentation-data
- Donkeys. (n.d.). Kaggle Python API [Jupyter Notebook]. Kaggle. https://www.kaggle.com/code/donkeys/kaggle-python-api/notebook
- pandas-datareader. (n.d.). https://pypi.org/project/pandas-datareader/
License: CC0 1.0 (https://creativecommons.org/publicdomain/zero/1.0/)
This is the cleaned version of a real-world medical dataset that was originally noisy, incomplete, and contained various inconsistencies. The dataset was cleaned through a structured and well-documented data preprocessing pipeline using Python and Pandas.
The purpose of cleaning this dataset was to prepare it for further exploratory data analysis (EDA), data visualization, and machine learning modeling.
This cleaned dataset is now ready for training predictive models, generating visual insights, or conducting healthcare-related research. It provides a high-quality foundation for anyone interested in medical analytics or data science practice.
License: MIT (https://opensource.org/licenses/MIT)
License information was derived automatically
Motivation:
Phishing attacks are one of the most significant cyber threats in today's digital era, tricking users into divulging sensitive information like passwords, credit card numbers, and personal details. This dataset aims to support research and development of machine learning models that can classify URLs as phishing or benign.
Applications:
- Building robust phishing detection systems.
- Enhancing security measures in email filtering and web browsing.
- Training cybersecurity practitioners in identifying malicious URLs.
The dataset contains diverse features extracted from URL structures, HTML content, and website metadata, enabling deep insights into phishing behavior patterns.
This dataset comprises two types of URLs:
1. Phishing URLs: Malicious URLs designed to deceive users.
2. Benign URLs: Legitimate URLs posing no harm to users.
Key Features:
- URL-based features: Domain, protocol type (HTTP/HTTPS), and IP-based links.
- Content-based features: Link density, iframe presence, external/internal links, and metadata.
- Certificate-based features: SSL/TLS details like validity period and organization.
- WHOIS data: Registration details like creation and expiration dates.
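As an illustration of URL-based feature extraction with the standard library (the feature names here are hypothetical, not the dataset's actual columns):

```python
import re
from urllib.parse import urlparse

def url_features(url: str) -> dict:
    """Illustrative URL-based features; the names are hypothetical."""
    parsed = urlparse(url)
    host = parsed.netloc.split(":")[0]  # strip any port
    return {
        "domain": parsed.netloc,
        "uses_https": parsed.scheme == "https",
        # Crude check for an IP address used in place of a hostname.
        "ip_based": bool(re.fullmatch(r"\d{1,3}(\.\d{1,3}){3}", host)),
        "url_length": len(url),
    }

print(url_features("http://192.168.0.1/login"))
print(url_features("https://example.com/account"))
```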
Statistics:
- Total Samples: 800 (400 phishing, 400 benign).
- Features: 22 including URL, domain, link density, and SSL attributes.
To ensure statistical reliability, a power analysis was conducted to determine the minimum sample size required for binary classification with 22 features. Using a medium effect size (0.15), alpha = 0.05, and power = 0.80, the analysis indicated a minimum sample size of ~325 per class. Our dataset exceeds this requirement with 400 examples per class, ensuring robust model training.
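The description doesn't state which tool produced the ~325-per-class figure. As a rough sketch of a comparable computation, here is one way to find a minimum sample size for an F-test with 22 predictors using SciPy and Cohen's noncentral-F formulation; the settings are assumptions, so the result may not match the authors' number:

```python
from scipy.stats import f as f_dist, ncf

f2, alpha, k, target_power = 0.15, 0.05, 22, 0.80

# Find the smallest n whose F-test power (k predictors,
# effect size f^2) reaches the target power.
n = k + 2
while True:
    df1, df2 = k, n - k - 1
    crit = f_dist.ppf(1 - alpha, df1, df2)
    power = 1 - ncf.cdf(crit, df1, df2, f2 * n)  # noncentrality = f2 * n
    if power >= target_power:
        break
    n += 1

print(f"Minimum total sample size: {n} (power = {power:.3f})")
```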
Insights from EDA:
- Distribution Plots: Histograms and density plots for numerical features like link density, URL length, and iframe counts.
- Bar Plots: Class distribution and protocol usage trends.
- Correlation Heatmap: Highlights relationships between numerical features to identify multicollinearity or strong patterns.
- Box Plots: For SSL certificate validity and URL lengths, comparing phishing versus benign URLs.
EDA visualizations are provided in the repository.
The repository contains the Python code used to extract features, conduct EDA, and build the dataset.
Phishing detection datasets must balance the need for security research with the risk of misuse. This dataset:
1. Protects User Privacy: No personally identifiable information is included.
2. Promotes Ethical Use: Intended solely for academic and research purposes.
3. Avoids Reinforcement of Bias: Balanced class distribution ensures fairness in training models.
Risks:
- Misuse of the dataset for creating more deceptive phishing attacks.
- Over-reliance on outdated features as phishing tactics evolve.
Researchers are encouraged to pair this dataset with continuous updates and contextual studies of real-world phishing.
This dataset is shared under the MIT License, allowing free use, modification, and distribution for academic and non-commercial purposes. License details: https://opensource.org/licenses/MIT.
License: CC0 1.0 (https://creativecommons.org/publicdomain/zero/1.0/)
Explore a comprehensive dataset of all known characters from the popular anime and manga series Kimetsu no Yaiba (Demon Slayer). This dataset includes detailed information on main, supporting, and side characters, capturing their attributes, affiliations, abilities, and story roles.
This dataset is ideal for anime enthusiasts, data scientists, and researchers who want to analyze character traits, relationships, and progression across the Demon Slayer universe. All data is structured and EDA-ready, making it easy to integrate into Python, R, or any data analysis tool.
Note: All characters included are from both the anime and manga series, ensuring comprehensive coverage.
License: Apache 2.0 (https://www.apache.org/licenses/LICENSE-2.0)
License information was derived automatically
This dataset contains 10,000 synthetic records simulating the migratory behavior of various bird species across global regions. Each entry represents a single bird tagged with a tracking device and includes detailed information such as flight distance, speed, altitude, weather conditions, tagging information, and migration outcomes.
The data was entirely synthetically generated using randomized yet realistic values based on known ranges from ornithological studies. It is ideal for practicing data analysis and visualization techniques without privacy concerns or real-world data access restrictions. Because it's artificial, the dataset can be freely used in education, portfolio projects, demo dashboards, machine learning pipelines, or business intelligence training.
With over 40 columns, this dataset supports a wide array of analysis types. Analysts can explore questions like "Do certain species migrate in larger flocks?", "How does weather impact nesting success?", or "What conditions lead to migration interruptions?". Users can also perform geospatial mapping of start and end locations, cluster birds by behavior, or build time series models based on migration months and environmental factors.
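A sketch of how the first two questions might be approached in Pandas; the file name and column names are guesses at the schema, since the 40+ columns aren't enumerated here:

```python
import pandas as pd

df = pd.read_csv("bird_migration.csv")  # hypothetical file name

# Hypothetical column names below; adjust to the actual schema.
# "Do certain species migrate in larger flocks?"
print(df.groupby("Species")["Flock_Size"].mean().sort_values(ascending=False))

# "How does weather impact nesting success?"
print(pd.crosstab(df["Weather_Condition"], df["Nesting_Success"],
                  normalize="index"))
```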
For data visualization, tools like Power BI, Python (Matplotlib/Seaborn/Plotly), or Excel can be used to create insightful dashboards and interactive charts.
Join the Fabric Community DataViz Contest | May 2025: https://community.fabric.microsoft.com/t5/Power-BI-Community-Blog/%EF%B8%8F-Fabric-Community-DataViz-Contest-May-2025/ba-p/4668560
License: MIT (https://opensource.org/licenses/MIT)
License information was derived automatically
If you find this dataset useful, a quick upvote would be greatly appreciated. It helps more learners discover it!
Explore how students at different academic levels use AI tools like ChatGPT for tasks such as coding, writing, studying, and brainstorming. Designed for learning, EDA, and ML experimentation.
This dataset simulates 10,000 sessions of students interacting with an AI assistant (like ChatGPT or similar tools) for various academic tasks. Each row represents a single session, capturing the student's level, discipline, type of task, session length, AI effectiveness, satisfaction rating, and whether they reused the AI tool later.
As AI tools become mainstream in education, there's a need to analyze and model how students interact with them. However, no public datasets exist for this behavior. This dataset fills that gap by providing a safe, fully synthetic yet realistic simulation.
It's ideal for students, data science learners, and researchers who want real-world use cases without privacy or copyright constraints.
| Column | Description |
|---|---|
| SessionID | Unique session identifier |
| StudentLevel | Academic level: High School, Undergraduate, Graduate |
| Discipline | Student's field of study (e.g., CS, Psychology, etc.) |
| SessionDate | Date of the session |
| SessionLengthMin | Length of AI interaction in minutes |
| TotalPrompts | Number of prompts/messages used |
| TaskType | Nature of the task (e.g., Coding, Writing, Research) |
| AI_AssistanceLevel | 1-5 scale on how helpful the AI was perceived to be |
| FinalOutcome | What the student achieved: Assignment Completed, Idea Drafted, etc. |
| UsedAgain | Whether the student returned to use the assistant again |
| SatisfactionRating | 1-5 rating of overall satisfaction with the session |
All data is synthetically generated using controlled distributions, real-world logic, and behavioral modeling to reflect realistic usage patterns.
This dataset is rich with potential for:
- Classification (e.g., predicting UsedAgain or the final outcome), as sketched below
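For example, a minimal classification sketch predicting UsedAgain from the documented columns (the file name is assumed):

```python
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

df = pd.read_csv("ai_assistant_usage.csv")  # assumed file name

# Predict whether a student returns (UsedAgain) from session features.
X = pd.get_dummies(
    df[["StudentLevel", "Discipline", "TaskType",
        "SessionLengthMin", "TotalPrompts",
        "AI_AssistanceLevel", "SatisfactionRating"]],
    drop_first=True,
)
y = df["UsedAgain"].astype("category").cat.codes  # handles Yes/No or bool

X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=1)
clf = GradientBoostingClassifier().fit(X_tr, y_tr)
print("Held-out accuracy:", clf.score(X_te, y_te))
```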
License: CC0 1.0 (https://creativecommons.org/publicdomain/zero/1.0/)
Overview

Welcome to Kaggle's second annual Machine Learning and Data Science Survey, and our first-ever survey data challenge.
This year, as last year, we set out to conduct an industry-wide survey that presents a truly comprehensive view of the state of data science and machine learning. The survey was live for one week in October, and after cleaning the data we finished with 23,859 responses, a 49% increase over last year!
There's a lot to explore here. The results include raw numbers about who is working with data, what's happening with machine learning in different industries, and the best ways for new data scientists to break into the field. We've published the data in as raw a format as possible without compromising anonymization, which makes it an unusual example of a survey dataset.
Challenge

This year Kaggle is launching the first Data Science Survey Challenge, where we will be awarding a prize pool of $28,000 to kernel authors who tell a rich story about a subset of the data science and machine learning community.
In our second year running this survey, we were once again awed by the global, diverse, and dynamic nature of the data science and machine learning industry. This survey data EDA provides an overview of the industry on an aggregate scale, but it also leaves us wanting to know more about the many specific communities represented within the survey. For that reason, we're inviting the Kaggle community to dive deep into the survey datasets and help us tell the diverse stories of data scientists from around the world.
The challenge objective: tell a data story about a subset of the data science community represented in this survey, through a combination of both narrative text and data exploration. A "story" could be defined in any number of ways, and that's deliberate. The challenge is to deeply explore (through data) the impact, priorities, or concerns of a specific group of data science and machine learning practitioners. That group can be defined in the macro (for example: anyone who does most of their coding in Python) or the micro (for example: female data science students studying machine learning in masters programs). This is an opportunity to be creative and tell the story of a community you identify with or are passionate about!
Submissions will be evaluated on the following:
- Composition: Is there a clear narrative thread to the story that's articulated and supported by data? The subject should be well defined, well researched, and well supported through the use of data and visualizations.
- Originality: Does the reader learn something new through this submission? Or is the reader challenged to think about something in a new way? A great entry will be informative, thought provoking, and fresh all at the same time.
- Documentation: Are your code, kernel, and any additional data sources well documented so a reader can understand what you did? Are your sources clearly cited? A high-quality analysis should be concise and clear at each step so the rationale is easy to follow and the process is reproducible.

To be valid, a submission must be contained in one kernel, made public on or before the submission deadline. Participants are free to use any datasets in addition to the Kaggle Data Science survey, but those datasets must also be publicly available on Kaggle by the deadline for a submission to be valid.
While the challenge is running, Kaggle will also give a Weekly Kernel Award of $1,500 to recognize excellent kernels that are public analyses of the survey. Weekly Kernel Awards will be announced every Friday between 11/9 and 11/30.
How to Participate

To make a submission, complete the submission form. Only one submission will be judged per participant, so if you make multiple submissions we will review the last (most recent) entry.
No submission is necessary for the Weekly Kernels Awards. To be eligible, a kernel must be public and use the 2018 Data Science Survey as a data source.
Timeline

All dates are 11:59 PM UTC.
Submission deadline: December 3rd
Winners announced: December 10th
Weekly Kernels Award prize winners announcements: November 9th, 16th, 23rd, and 30th
All kernels are evaluated after the deadline.
Rules

To be eligible to win a prize in either of the above prize tracks, you must be:
- a registered account holder at Kaggle.com;
- the older of 18 years old or the age of majority in your jurisdiction of residence; and
- not a resident of Crimea, Cuba, Iran, Syria, North Korea, or Sudan.

Your kernels will only be eligible to win if they have been made public on kaggle.com by the above deadline. All prizes are awarded at the discretion of Kaggle. Kaggle reserves the right to cancel or modify prize criteria.
Unfortunately employees, interns, contractors, officers and directors of Kaggle Inc., and their parent companies, are not eligible to win any prizes.
Survey Methodology ...
License: CC0 1.0 (https://creativecommons.org/publicdomain/zero/1.0/)
A fictional dataset for exploratory data analysis (EDA) and to test simple prediction models.
This toy dataset features 150,000 rows and 6 columns.
Note: All data is fictional. The data has been generated so that their distributions are convenient for statistical analysis.
Number: A simple index number for each row
City: The location of a person (Dallas, New York City, Los Angeles, Mountain View, Boston, Washington D.C., San Diego and Austin)
Gender: Gender of a person (Male or Female)
Age: The age of a person (Ranging from 25 to 65 years)
Income: Annual income of a person (Ranging from -674 to 177175)
Illness: Is the person Ill? (Yes or No)
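A simple prediction sketch using the columns above (the file name is an assumption):

```python
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

df = pd.read_csv("toy_dataset.csv")  # assumed file name

# Simple prediction example: do City/Gender/Age/Income relate to Illness?
X = pd.get_dummies(df[["City", "Gender", "Age", "Income"]], drop_first=True)
y = (df["Illness"] == "Yes").astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)
clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
print("Held-out accuracy:", clf.score(X_te, y_te))
```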
Stock photo by Mika Baumeister on Unsplash.
License: MIT (https://opensource.org/licenses/MIT)
License information was derived automatically
Preventive Maintenance for Marine Engines: Data-Driven Insights
Introduction:
Marine engine failures can lead to costly downtime, safety risks and operational inefficiencies. This project leverages machine learning to predict maintenance needs, helping ship operators prevent unexpected breakdowns. Using a simulated dataset, we analyze key engine parameters and develop predictive models to classify maintenance status into three categories: Normal, Requires Maintenance, and Critical.
Overview

This project explores preventive maintenance strategies for marine engines by analyzing operational data and applying machine learning techniques.
Key steps include:
1. Data Simulation: Creating a realistic dataset with engine performance metrics.
2. Exploratory Data Analysis (EDA): Understanding trends and patterns in engine behavior.
3. Model Training & Evaluation: Comparing machine learning models (Decision Tree, Random Forest, XGBoost) to predict maintenance needs.
4. Hyperparameter Tuning: Using GridSearchCV to optimize model performance.

Tools Used:
1. Python: Data processing, analysis, and modeling
2. Pandas & NumPy: Data manipulation
3. Scikit-Learn & XGBoost: Machine learning model training
4. Matplotlib & Seaborn: Data visualization

Skills Demonstrated:
- Data Simulation & Preprocessing
- Exploratory Data Analysis (EDA)
- Feature Engineering & Encoding
- Supervised Machine Learning (Classification)
- Model Evaluation & Hyperparameter Tuning
Key Insights & Findings:
- Engine Temperature & Vibration Level: Strong indicators of potential failures.
- Random Forest vs. XGBoost: After hyperparameter tuning, both models achieved comparable performance, with Random Forest performing slightly better.
- Maintenance Status Distribution: Balanced dataset ensures unbiased model training.
- Failure Modes: The most common issues were Mechanical Wear & Oil Leakage, aligning with real-world engine failure trends.

Challenges Faced:
- Simulating Realistic Data: Ensuring the dataset reflects real-world marine engine behavior was a key challenge.
- Model Performance: The accuracy was limited (~35%) due to the complexity of failure prediction.
- Feature Selection: Identifying the most impactful features required extensive analysis.
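A sketch of the model-training-plus-GridSearchCV step described above; the file name and the feature/target column names are hypothetical stand-ins for the simulated dataset's actual schema:

```python
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV, train_test_split

df = pd.read_csv("marine_engine_data.csv")  # hypothetical file name

# Hypothetical feature/target names matching the description above.
X = df[["engine_temp", "vibration_level", "oil_pressure", "rpm"]]
y = df["maintenance_status"]  # Normal / Requires Maintenance / Critical

X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=42)

grid = GridSearchCV(
    RandomForestClassifier(random_state=42),
    param_grid={"n_estimators": [100, 300], "max_depth": [None, 5, 10]},
    cv=5,
    scoring="accuracy",
)
grid.fit(X_tr, y_tr)
print("Best params:", grid.best_params_)
print("Test accuracy:", grid.score(X_te, y_te))
```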
Call to Action:
- Explore the Dataset & Notebook: Try running different models and tweaking hyperparameters.
- Extend the Analysis: Incorporate additional sensor data or alternative machine learning techniques.
- Real-World Application: This approach can be adapted for industrial machinery, aircraft engines, and power plants.
License: CC0 1.0 (https://creativecommons.org/publicdomain/zero/1.0/)
This dataset is a synthetic yet realistic simulation of newborn baby health monitoring.
It is designed for healthcare analytics, machine learning, and app development, especially for early detection of newborn health risks.
The dataset mimics daily health records of newborn babies, including vital signs, growth parameters, feeding patterns, and risk classification labels.
Newborn health is one of the most sensitive areas of healthcare.
Monitoring newborns can help detect jaundice, infections, dehydration, and respiratory issues early.
Since real newborn data is private and hard to access, this dataset provides a safe and realistic alternative for researchers, students, and developers to build and test:
- Exploratory Data Analysis (EDA)
- Machine Learning classification models
- Healthcare monitoring apps (Streamlit, Flask, Django, etc.)
- Predictive healthcare systems
The dataset uses unique baby IDs (e.g., B001) and was generated in Python using:
- numpy and pandas for data simulation.
- faker for generating baby names and dates.
- Medically realistic rules for vitals, growth, jaundice progression, and risk classification.
Created by Arif Miah. I am passionate about AI, Healthcare Analytics, and App Development.
This is a synthetic dataset created for educational and research purposes only.
It should NOT be used for actual medical diagnosis or treatment decisions.
License: CC0 1.0 (https://creativecommons.org/publicdomain/zero/1.0/)
Titanic Dataset (JSON Format)

Overview
This is the classic Titanic: Machine Learning from Disaster dataset, converted into JSON format for easier use in APIs, data pipelines, and Python projects. It contains the same passenger details as the original CSV version, but stored as JSON for convenience.
Dataset Contents
File: titanic.json
Columns: PassengerId, Survived, Pclass, Name, Sex, Age, SibSp, Parch, Ticket, Fare, Cabin, Embarked
Use Cases: Exploratory Data Analysis (EDA), feature engineering, machine learning model training, web app backends, JSON parsing practice.
How to Use

1. Load with kagglehub:

```python
import kagglehub

path = kagglehub.dataset_download("engrbasit62/titanic-json-format")
print("Path to dataset files:", path)
```

2. Load into Pandas:

```python
import pandas as pd

df = pd.read_json(f"{path}/titanic.json")
print(df.head())
```
Notes

Preview truncation: Kaggle may show only part of the JSON in the preview panel because of its size. Don't worry: the full dataset is available when loaded via code.
Benefits of JSON format: Ideal for web apps, APIs, or projects that work with structured data. Easily convertible back to CSV if needed.
License: CC BY-SA 4.0 (https://creativecommons.org/licenses/by-sa/4.0/)
License information was derived automatically
This dataset has been cleaned and preprocessed from its raw form using Python in a Jupyter Notebook. The following steps were taken during cleaning:
1. Removed missing or null values
2. Standardized column names and data formats
3. Filtered out outliers or irrelevant rows
4. Converted categorical variables where needed
This file is ready for further exploratory data analysis (EDA), visualization, or machine learning tasks.
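The four steps above roughly correspond to Pandas operations like the following; the column names are placeholders, since the dataset's schema isn't described here:

```python
import pandas as pd

df = pd.read_csv("raw_data.csv")  # placeholder file name

# 1. Remove missing or null values.
df = df.dropna()

# 2. Standardize column names.
df.columns = df.columns.str.strip().str.lower().str.replace(" ", "_")

# 3. Filter outliers on a numeric column (IQR rule; column is illustrative).
q1, q3 = df["value"].quantile([0.25, 0.75])
iqr = q3 - q1
df = df[df["value"].between(q1 - 1.5 * iqr, q3 + 1.5 * iqr)]

# 4. Convert categorical variables.
df["category"] = df["category"].astype("category")
```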
License: CC0 1.0 (https://creativecommons.org/publicdomain/zero/1.0/)
Context: The data was created for building machine learning projects and for analysis. It is useful for performing exploratory data analysis (EDA).
Source: The CSV data was scraped from https://www.vegrecipesofindia.com/recipes/beverages/ and the dataset was created using the BeautifulSoup Python library.
Inspiration: You can use the dataset for analysis.
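A minimal sketch of such a scrape with requests and BeautifulSoup; the CSS selectors are illustrative, since the page's actual markup isn't documented here:

```python
import requests
from bs4 import BeautifulSoup

url = "https://www.vegrecipesofindia.com/recipes/beverages/"
resp = requests.get(url, headers={"User-Agent": "Mozilla/5.0"}, timeout=30)
soup = BeautifulSoup(resp.text, "html.parser")

# Illustrative selectors; inspect the page to find the real ones.
titles = [a.get_text(strip=True) for a in soup.select("h2 a, h3 a")]
print(titles[:10])
```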
License: CC0 1.0 (https://creativecommons.org/publicdomain/zero/1.0/)
Rapido Ride Data - July 2025

Overview
This dataset contains simulated Rapido ride data for July 2025, designed for data analysis, business intelligence, and machine learning use cases. It represents daily ride operations including customer bookings, driver performance, revenue generation, and service quality insights.
Purpose
The goal of this dataset is to help analysts and learners explore real-world mobility analytics. You can use it to:
Build interactive dashboards (Power BI, Tableau, Excel)
Perform exploratory data analysis (EDA)
Create KPI reports and trend visualizations
Train models for demand forecasting or cancellation prediction
Dataset Details
The dataset includes realistic, time-based entries covering one month of operations.
| Column Name | Description |
|---|---|
| ride_id | Unique ID for each ride |
| ride_date | Date of the ride (July 2025) |
| pickup_time | Ride start time |
| drop_time | Ride end time |
| ride_duration | Duration of the ride (minutes) |
| distance_km | Distance travelled (in kilometers) |
| fare_amount | Fare charged to customer |
| payment_mode | Type of payment (Cash, UPI, Card) |
| driver_id | Unique driver identifier |
| customer_id | Unique customer identifier |
| driver_rating | Rating given by customer |
| customer_rating | Rating given by driver |
| ride_status | Completed, Cancelled by Driver, Cancelled by Customer |
| city | City where ride took place |
| ride_type | Bike, Auto, or Cab |
| waiting_time | Waiting time before ride started |
| promo_used | Yes/No for discount applied |
| cancellation_reason | Reason if ride cancelled |
| revenue | Net revenue earned per ride |

Key Insights You Can Explore
- Ride demand patterns by day & hour
- Cancellations by weekday/weekend
- Driver performance & customer satisfaction
- Revenue trends and top-performing drivers
- City-wise ride distribution
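Several of these questions translate directly into Pandas one-liners using the columns from the table above (the file name is an assumption):

```python
import pandas as pd

df = pd.read_csv("rapido_rides_july_2025.csv",  # assumed file name
                 parse_dates=["ride_date"])

# Demand by day of week.
df["weekday"] = df["ride_date"].dt.day_name()
print(df.groupby("weekday")["ride_id"].count())

# Cancellation rate by ride type.
cancelled = df["ride_status"].str.startswith("Cancelled")
print(df.assign(cancelled=cancelled).groupby("ride_type")["cancelled"].mean())

# Revenue by city, highest first.
print(df.groupby("city")["revenue"].sum().sort_values(ascending=False))
```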
Suitable For
Data cleaning & transformation practice
Power BI / Excel dashboard building
SQL analysis & reporting
Predictive modeling (e.g., cancellation prediction, fare forecasting)
Tools You Can Use

- Power BI: for KPI dashboards & visuals
- Excel: for pivot tables & charts
- Python / Pandas: for EDA and ML
- SQL: for query-based insights
Acknowledgment
This dataset is synthetically generated for educational and analytical purposes. It does not represent actual Rapido data.
License: CC BY 4.0 (https://creativecommons.org/licenses/by/4.0/)
License information was derived automatically
This dataset contains cleaned and structured information about popular movies. It was processed using Python and Pandas to remove null values, fix inconsistent formats, and convert date columns to proper datetime types.
The dataset includes attributes such as:
- Movie title
- Average rating
- Release date (converted to datetime)
- Country of origin
- Spoken languages
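A small sketch of the kind of cleaning described above; the raw file name and column names are assumptions:

```python
import pandas as pd

df = pd.read_csv("movies_raw.csv")  # placeholder for the raw file

# Steps mirroring the cleaning described above; column names are assumed.
df = df.dropna(subset=["title", "release_date"])
df["release_date"] = pd.to_datetime(df["release_date"], errors="coerce")
df["vote_average"] = pd.to_numeric(df["vote_average"], errors="coerce")

print(df.dtypes)
```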
This cleaned dataset can be used for:
Exploratory Data Analysis (EDA)
Visualization practice
Machine Learning experiments
Data cleaning and preprocessing tutorials
Source: IMDb Top Movies (via API, for educational purposes)
Last Updated: November 2025