This dataset was created by Martin Kanju
Released under Other (specified in description)
CC0 1.0 Universal Public Domain Dedication: https://creativecommons.org/publicdomain/zero/1.0/
License information was derived automatically
Sample data for exercises in Further Adventures in Data Cleaning.
https://creativecommons.org/publicdomain/zero/1.0/
This dataset was created by Hassane Skikri
Released under CC0: Public Domain
This dataset was created by Mohanad Hazem Qabil
CC0 1.0 Universal Public Domain Dedication: https://creativecommons.org/publicdomain/zero/1.0/
License information was derived automatically
Restaurant Menu Dataset

With approximately 45,000 menus dating from the 1840s to the present, The New York Public Library's restaurant menu collection is one of the largest in the world. The menu data has been transcribed, dish by dish, into this dataset. For more information, please see http://menus.nypl.org/about. This dataset is not clean and contains many missing values, making it perfect for practicing data cleaning tools and techniques.

Dataset Variables:
- id: identifier for menu
- name:
- sponsor: who sponsored the meal (organizations, people, name of restaurant)
- event: category
- venue: type of place (commercial, social, professional)
- place: where the meal took place (often a geographic location)
- physical_description: dimensions and material description of the menu
- occasion: occasion of the meal (holidays, anniversaries, daily)
- notes: notes by librarians about the original material
- call_number: call number of the menu
- keywords:
- language:
- date: date of the menu
- location: organization or business that produced the menu
- location_type:
- currency: system of money the menu uses (dollars, etc.)
- currency_symbol: symbol for the currency ($, etc.)
- status: completeness of the menu transcription (transcribed, under review, etc.)
- page_count: how many pages the menu has
- dish_count: how many dishes the menu has
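As a quick starting point, here is a minimal pandas sketch for profiling the missing values before cleaning; the file name Menu.csv is an assumption about how the export is named, so adjust it to the actual download.

```python
# Minimal sketch: profile missingness in the menu data.
# Assumes a CSV export named Menu.csv with the columns listed above.
import pandas as pd

menus = pd.read_csv("Menu.csv")

# Fraction of missing values per column, worst first, to plan the cleaning work
missing = menus.isna().mean().sort_values(ascending=False)
print(missing)

# Example fix: normalize casing and whitespace in the status column
menus["status"] = menus["status"].str.strip().str.lower()
```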
https://creativecommons.org/publicdomain/zero/1.0/
This dataset is designed specifically for beginners and intermediate learners to practice data cleaning techniques using Python and Pandas.
It includes 500 rows of simulated employee data with intentional errors such as:
Missing values in Age and Salary
Typos in email addresses (e.g., @gamil.com instead of @gmail.com)
Inconsistent city name casing (e.g., lahore, Karachi)
Extra spaces in department names (e.g., " HR ")
✅ Skills You Can Practice:
Detecting and handling missing data
String cleaning and formatting
Removing duplicates
Validating email formats
Standardizing categorical data
You can use this dataset to build your own data cleaning notebook, or use it in interviews, assessments, and tutorials.
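For example, a minimal pandas sketch covering these steps; the file name and column names are assumptions, since they are not specified above, so adjust them to the actual dataset.

```python
# A minimal sketch of the practice steps above. Assumes a file named
# employees.csv with columns Age, Salary, Email, City, Department
# (all names are assumptions; adjust to the actual dataset).
import pandas as pd

df = pd.read_csv("employees.csv")

# Missing data: fill numeric gaps with the column median
df["Age"] = df["Age"].fillna(df["Age"].median())
df["Salary"] = df["Salary"].fillna(df["Salary"].median())

# String cleaning: fix the known @gamil.com typo, normalize casing and spaces
df["Email"] = df["Email"].str.replace("@gamil.com", "@gmail.com", regex=False)
df["City"] = df["City"].str.strip().str.title()
df["Department"] = df["Department"].str.strip()

# Email validation: keep only rows whose address matches a basic pattern
valid = df["Email"].str.match(r"^[\w.+-]+@[\w-]+\.[\w.-]+$", na=False)
df = df[valid]

# Duplicates: drop exact duplicate rows
df = df.drop_duplicates()
```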
https://www.verifiedmarketresearch.com/privacy-policy/
Data Cleansing Tools Market size was valued at USD 4.02 Billion in 2024 and is projected to reach USD 9.20 Billion by 2032, growing at a CAGR of 10.89% during the forecast period 2026-2032.

Demand for Accurate Data Analytics: Demand for accurate datasets is rising, and the use of data cleansing techniques is expected to expand to enable trustworthy reporting and decision-making.

Adoption of Cloud Platforms: Enterprise workloads are moving to the cloud, and cloud-compatible data cleansing solutions are expected to be adopted to boost scalability and flexibility.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
All data are prone to error and require data cleaning prior to analysis. An important example is longitudinal growth data, for which there are no universally agreed standard methods for identifying and removing implausible values, and many existing methods have limitations that restrict their usage across different domains. A decision-making algorithm that modified or deleted growth measurements based on a combination of pre-defined cut-offs and logic rules was designed. Five data cleaning methods for growth were tested with and without the addition of the algorithm and applied to five different longitudinal growth datasets: four uncleaned canine weight or height datasets and one pre-cleaned human weight dataset with randomly simulated errors. Prior to the addition of the algorithm, data cleaning based on non-linear mixed effects models was the most effective in all datasets and had on average a minimum of 26.00% higher sensitivity and 0.12% higher specificity than other methods. Data cleaning methods using the algorithm had improved data preservation and were capable of correcting simulated errors according to the gold standard: returning a value to its original state prior to error simulation. The algorithm improved the performance of all data cleaning methods and increased the average sensitivity and specificity of the non-linear mixed effects model method by 7.68% and 0.42%, respectively. Using non-linear mixed effects models combined with the algorithm to clean data allows individual growth trajectories to vary from the population by using repeated longitudinal measurements, identifies consecutive errors or those within the first data entry, avoids the requirement for a minimum number of data entries, preserves data where possible by correcting errors rather than deleting them, and removes duplications intelligently. This algorithm is broadly applicable to cleaning anthropometric data in different mammalian species and could be adapted for use in a range of other domains.
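The published rules are not reproduced in this abstract, but a toy illustration of the cut-off-plus-logic-rule idea might look like the following sketch; every threshold, column name, and rule here is invented for illustration and is not the authors' algorithm.

```python
# Purely illustrative sketch of a cut-off + logic-rule cleaner for
# longitudinal weights. Thresholds, column names, and rules are invented;
# this is NOT the published algorithm.
import pandas as pd

def clean_weights(records: pd.DataFrame) -> pd.DataFrame:
    """records has columns: subject_id, age_days, weight_kg (assumed)."""
    df = records.sort_values(["subject_id", "age_days"]).copy()

    # Rule 1: hard cut-offs on physically implausible values
    df = df[(df["weight_kg"] > 0) & (df["weight_kg"] < 120)]

    # Rule 2: remove exact duplicated measurements for a subject
    df = df.drop_duplicates(subset=["subject_id", "age_days", "weight_kg"])

    # Rule 3: flag jumps inconsistent with the previous measurement
    # (a possible decimal-point entry error) and correct rather than delete
    prev = df.groupby("subject_id")["weight_kg"].shift()
    ratio = df["weight_kg"] / prev
    decimal_shift = ratio.between(9, 11)       # e.g. 5.2 entered as 52
    df.loc[decimal_shift, "weight_kg"] /= 10   # preserve data by correcting
    return df
```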
Attribution-ShareAlike 4.0 (CC BY-SA 4.0): https://creativecommons.org/licenses/by-sa/4.0/
License information was derived automatically
The Dirty Cafe Sales dataset contains 10,000 rows of synthetic data representing sales transactions in a cafe. This dataset is intentionally "dirty," with missing values, inconsistent data, and errors introduced to provide a realistic scenario for data cleaning and exploratory data analysis (EDA). It can be used to practice cleaning techniques, data wrangling, and feature engineering.
dirty_cafe_sales.csv

| Column Name | Description | Example Values |
|---|---|---|
| Transaction ID | A unique identifier for each transaction. Always present and unique. | TXN_1234567 |
| Item | The name of the item purchased. May contain missing or invalid values (e.g., "ERROR"). | Coffee, Sandwich |
| Quantity | The quantity of the item purchased. May contain missing or invalid values. | 1, 3, UNKNOWN |
| Price Per Unit | The price of a single unit of the item. May contain missing or invalid values. | 2.00, 4.00 |
| Total Spent | The total amount spent on the transaction. Calculated as Quantity * Price Per Unit. | 8.00, 12.00 |
| Payment Method | The method of payment used. May contain missing or invalid values (e.g., None, "UNKNOWN"). | Cash, Credit Card |
| Location | The location where the transaction occurred. May contain missing or invalid values. | In-store, Takeaway |
| Transaction Date | The date of the transaction. May contain missing or incorrect values. | 2023-01-01 |
Missing Values:
Some columns (e.g., Item, Payment Method, Location) may contain missing values represented as None or empty cells.

Invalid Values:
Some columns may contain invalid entries such as "ERROR" or "UNKNOWN" to simulate real-world data issues.

Price Consistency:
The dataset includes the following menu items with their respective price ranges:
| Item | Price ($) |
|---|---|
| Coffee | 2 |
| Tea | 1.5 |
| Sandwich | 4 |
| Salad | 5 |
| Cake | 3 |
| Cookie | 1 |
| Smoothie | 4 |
| Juice | 3 |
This dataset is suitable for:
- Practicing data cleaning techniques such as handling missing values, removing duplicates, and correcting invalid entries.
- Exploring EDA techniques like visualizations and summary statistics.
- Performing feature engineering for machine learning workflows.
To clean this dataset, consider the following steps (a pandas sketch follows below):
1. Handle Missing Values:
   - Fill missing numeric values with the median or mean.
   - Replace missing categorical values with the mode or "Unknown."
2. Handle Invalid Values:
   - Replace "ERROR" and "UNKNOWN" with NaN or appropriate values.
3. Date Consistency:
   - Parse Transaction Date and handle missing or incorrect dates where possible.
4. Feature Engineering:
   - Create new features, such as Day of the Week or Transaction Month, for further analysis.

This dataset is released under the CC BY-SA 4.0 License. You are free to use, share, and adapt it, provided you give appropriate credit.
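A minimal pandas sketch of these cleaning steps; the column names are taken from the table above and the file name from the dataset listing, but treat it as a starting point rather than a definitive recipe.

```python
# Minimal sketch of the suggested cleaning steps for dirty_cafe_sales.csv,
# using the column names from the table above.
import pandas as pd
import numpy as np

df = pd.read_csv("dirty_cafe_sales.csv")

# Step 2 first: treat the sentinel strings as missing values
df = df.replace(["ERROR", "UNKNOWN"], np.nan)

# Coerce numeric columns; any remaining invalid strings become NaN
for col in ["Quantity", "Price Per Unit", "Total Spent"]:
    df[col] = pd.to_numeric(df[col], errors="coerce")

# Step 1: fill missing numerics with the median, categoricals with "Unknown"
df["Quantity"] = df["Quantity"].fillna(df["Quantity"].median())
for col in ["Item", "Payment Method", "Location"]:
    df[col] = df[col].fillna("Unknown")

# Step 3: parse dates; invalid or missing entries become NaT
df["Transaction Date"] = pd.to_datetime(df["Transaction Date"], errors="coerce")

# Step 4: feature engineering from the transaction date
df["Day of the Week"] = df["Transaction Date"].dt.day_name()
df["Transaction Month"] = df["Transaction Date"].dt.month
```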
If you have any questions or feedback, feel free to reach out through the dataset's discussion board on Kaggle.
Access and clean an open source herbarium dataset using Excel or RStudio.
https://dataintelo.com/privacy-and-policy
According to our latest research, the global Yield Data Cleaning Software market size in 2024 stands at USD 1.14 billion, with a robust compound annual growth rate (CAGR) of 13.2% expected from 2025 to 2033. By the end of 2033, the market is forecasted to reach USD 3.42 billion. This remarkable market expansion is being driven by the increasing adoption of precision agriculture technologies, the proliferation of big data analytics in farming, and the rising need for accurate, real-time agricultural data to optimize yields and resource efficiency.
One of the primary growth factors fueling the Yield Data Cleaning Software market is the rapid digital transformation within the agriculture sector. The integration of advanced sensors, IoT devices, and GPS-enabled machinery has led to an exponential increase in the volume of raw agricultural data generated on farms. However, this data often contains inconsistencies, errors, and redundancies due to equipment malfunctions, environmental factors, and human error. Yield Data Cleaning Software plays a critical role by automating the cleansing, validation, and normalization of such datasets, ensuring that only high-quality, actionable information is used for decision-making. As a result, farmers and agribusinesses can make more informed choices, leading to improved crop yields, efficient resource allocation, and reduced operational costs.
Another significant driver is the growing emphasis on sustainable agriculture and environmental stewardship. Governments and regulatory bodies across the globe are increasingly mandating the adoption of data-driven practices to minimize the environmental impact of farming activities. Yield Data Cleaning Software enables stakeholders to monitor and analyze field performance accurately, track input usage, and comply with sustainability standards. Moreover, the software’s ability to integrate seamlessly with farm management platforms and analytics tools enhances its value proposition. This trend is further bolstered by the rising demand for traceability and transparency in the food supply chain, compelling agribusinesses to invest in robust data management solutions.
The market is also witnessing substantial investments from technology providers, venture capitalists, and agricultural equipment manufacturers. Strategic partnerships and collaborations are becoming commonplace, with companies seeking to enhance their product offerings and expand their geographical footprint. The increasing awareness among farmers about the benefits of data accuracy and the availability of user-friendly, customizable software solutions are further accelerating market growth. Additionally, ongoing advancements in artificial intelligence (AI) and machine learning (ML) are enabling more sophisticated data cleaning algorithms, which can handle larger datasets and deliver deeper insights, thereby expanding the market’s potential applications.
Regionally, North America continues to dominate the Yield Data Cleaning Software market, supported by its advanced agricultural infrastructure, high rate of technology adoption, and significant investments in agri-tech startups. Europe follows closely, driven by stringent environmental regulations and a strong focus on sustainable farming practices. The Asia Pacific region is emerging as a high-growth market, fueled by the rapid modernization of agriculture, government initiatives to boost food security, and increasing awareness among farmers about the benefits of digital solutions. Latin America and the Middle East & Africa are also showing promising growth trajectories, albeit from a smaller base, as they gradually embrace precision agriculture technologies.
The Yield Data Cleaning Software market is bifurcated by component into Software and Services. The software segment currently accounts for the largest share of the market, underpinned by the increasing adoption of integrated farm management solutions and the demand for user-friendly platforms that can seamlessly process vast amounts of agricultural data. Modern yield data cleaning software solutions are equipped with advanced algorithms capable of detecting and rectifying data anomalies, thus ensuring the integrity and reliability of yield datasets. As the complexity of agricultural operations grows, the need for scalable, customizable software that can adapt to
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
The mean, standard deviation, preservation of data (PD), sensitivity and specificity of five data cleaning approaches with and without an algorithm (A) compared to uncleaned longitudinal growth measurements in CLOSER data with and without simulated duplications and 1% errors.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
The percentage of gold standard corrections of errors induced into CLOSER data with simulated duplications and 1% errors using the algorithmic data cleaning methods.
Apache License, v2.0: https://www.apache.org/licenses/LICENSE-2.0
License information was derived automatically
This dataset was created by Ahmed Elsayed taha
Released under Apache 2.0
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
The percentage of alterations made to Dogslife, SAVSNET, Banfield and CLOSER data with simulated duplications and 1% simulated errors using the NLME-A data cleaning method.
https://www.archivemarketresearch.com/privacy-policy
The Data Quality Management (DQM) market is experiencing robust growth, driven by the increasing volume and velocity of data generated across various industries. Businesses are increasingly recognizing the critical need for accurate, reliable, and consistent data to support critical decision-making, improve operational efficiency, and comply with stringent data regulations. The market is estimated to be valued at $15 billion in 2025, exhibiting a Compound Annual Growth Rate (CAGR) of 12% from 2025 to 2033. This growth is fueled by several key factors, including the rising adoption of cloud-based DQM solutions, the expanding use of advanced analytics and AI in data quality processes, and the growing demand for data governance and compliance solutions. The market is segmented by deployment (cloud, on-premises), organization size (small, medium, large enterprises), and industry vertical (BFSI, healthcare, retail, etc.), with the cloud segment exhibiting the fastest growth.

Major players in the DQM market include Informatica, Talend, IBM, Microsoft, Oracle, SAP, SAS Institute, Pitney Bowes, Syncsort, and Experian, each offering a range of solutions catering to diverse business needs. These companies are constantly innovating to provide more sophisticated and integrated DQM solutions incorporating machine learning, automation, and self-service capabilities. However, the market also faces some challenges, including the complexity of implementing DQM solutions, the lack of skilled professionals, and the high cost associated with some advanced technologies. Despite these restraints, the long-term outlook for the DQM market remains positive, with continued expansion driven by the expanding digital transformation initiatives across industries and the growing awareness of the significant return on investment associated with improved data quality.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Description of the data entries, individuals, data entries per individual, mean and standard deviation of the longitudinal height or weight measurements in Dogslife, SAVSNET, Banfield and CLOSER data with and without simulated duplications and 1% errors before and after removal of duplicated measurement records.
Description: The NoCORA dataset represents a significant effort to compile and clean a comprehensive set of daily rainfall data for Northern Cameroon (North and Extreme North regions). This dataset, covering more than 1 million observations across 418 rainfall stations over a temporal range from 1927 to 2022, is instrumental for researchers, meteorologists, and policymakers working in climate research, agricultural planning, and water resource management in the region. It integrates data from diverse sources, including Sodecoton rain funnels, the archive of Robert Morel (IRD), Centrale de Lagdo, the GHCN daily service, and the TAHMO network. The construction of NoCORA involved meticulous processes, including manual assembly of data, extensive data cleaning, and standardization of station names and coordinates, making it a robust and reliable resource for understanding climatic dynamics in Northern Cameroon.

Data Sources: The dataset comprises eight primary rainfall data sources and a comprehensive coordinates dataset. The rainfall data sources include extensive historical and contemporary measurements, while the coordinates dataset was developed using reference data and an inference strategy for variant station names or missing coordinates.

Dataset Preparation Methods: The preparation involved manual compilation, integration of machine-readable files, data cleaning with OpenRefine, and finalization using Python/Jupyter Notebook. This process was designed to ensure the accuracy and consistency of the dataset.

Discussion: NoCORA, with its extensive data compilation, presents an invaluable resource for climate-related studies in Northern Cameroon. However, users must navigate its complexities, including missing data interpretations, potential biases, and data inconsistencies. The dataset's comprehensive nature and historical span require careful handling and validation in research applications.

Access to Dataset: The NoCORA dataset, while a comprehensive resource for climatological and meteorological research in Northern Cameroon, is subject to specific access conditions due to its compilation from various partner sources. The original data sources vary in their openness and accessibility, and not all partners have confirmed the open-access status of their data. As such, to ensure compliance with these varying conditions, access to the NoCORA dataset is granted on a request basis. Interested researchers and users are encouraged to contact us for permission to access the dataset. This process allows us to uphold the data sharing agreements with our partners while facilitating research and analysis within the scientific community.

Authors Contributions:
Data treatment: Victor Hugo Nenwala, Carmel Foulna Tcheobe, Jérémy Lavarenne.
Documentation: Jérémy Lavarenne.
Funding: This project was funded by the DESIRA INNOVACC project.

Changelog:
- v1.0.2: corrected swapped column names in the coordinates dataset
- v1.0.1: dataset specification file updated with complementary information regarding station locations
- v1.0.0: initial submission
This dataset was created by Shiva Vashishtha
According to our latest research, the global Vendor Master Data Management (VMDM) market size is valued at USD 2.75 billion in 2024, reflecting a robust demand for efficient data governance and supplier relationship management across industries. The market is expected to register a compound annual growth rate (CAGR) of 13.2% during the forecast period, reaching a projected value of USD 7.77 billion by 2033. This significant expansion is primarily driven by the increasing need for centralized vendor data, compliance with regulatory frameworks, and the growing adoption of digital transformation initiatives in procurement and supply chain operations worldwide.
One of the primary growth factors propelling the Vendor Master Data Management market is the rising complexity of global supply chains and the need for organizations to manage vast volumes of vendor information efficiently. As enterprises expand their supplier networks and operate across multiple geographies, maintaining accurate, consistent, and up-to-date vendor data becomes crucial for operational efficiency and risk mitigation. The proliferation of regulatory requirements, such as Know Your Supplier (KYS) and anti-bribery laws, further necessitates robust VMDM solutions to ensure compliance and transparency. Companies are increasingly investing in advanced VMDM platforms that offer comprehensive data governance, automated workflows, and seamless integration with existing enterprise resource planning (ERP) systems to streamline vendor management processes.
Another key driver is the rapid digital transformation across various industry verticals, including BFSI, healthcare, manufacturing, and retail. Organizations are leveraging Vendor Master Data Management solutions to enhance procurement agility, improve supplier collaboration, and gain actionable insights from unified vendor data. The integration of artificial intelligence (AI), machine learning (ML), and analytics into VMDM platforms enables real-time data validation, anomaly detection, and predictive analytics, empowering businesses to make informed decisions and proactively manage supplier risks. Furthermore, the shift towards cloud-based deployment models is accelerating the adoption of VMDM solutions among small and medium enterprises (SMEs), offering scalability, cost-effectiveness, and ease of implementation without significant IT infrastructure investments.
The growing focus on data quality and governance is also contributing to market growth. As organizations recognize the strategic value of vendor data in driving competitive advantage, there is an increasing emphasis on establishing standardized data management practices and ensuring data accuracy across the vendor lifecycle. VMDM solutions facilitate centralized data repositories, automated data cleansing, and standardized workflows, minimizing data redundancies and inconsistencies. This not only enhances operational efficiency but also supports better compliance reporting, supplier performance evaluation, and strategic sourcing initiatives. The ongoing trend of mergers and acquisitions, as well as the emergence of new regulatory mandates, further underscore the importance of robust vendor data management capabilities.
Data Cleansing for Warehouse Master Data is an essential component in ensuring the accuracy and reliability of vendor information. As organizations manage vast amounts of data across multiple systems, maintaining data quality becomes a critical task. Effective data cleansing processes help eliminate duplicates, correct inaccuracies, and standardize data formats, thereby enhancing the overall integrity of the master data. This is particularly important in warehouse operations where precise data is crucial for inventory management, order fulfillment, and supply chain efficiency. By implementing robust data cleansing strategies, companies can improve decision-making, reduce operational risks, and enhance compliance with industry regulations. The integration of automated data cleansing tools within Vendor Master Data Management platforms further streamlines this process, enabling real-time updates and continuous data quality improvement.
From a regional perspective, North America continues to dominate the Vendor Master Data Management market, accounting for the largest share in 2