27 datasets found
  1. Enriched NYTimes COVID19 U.S. County Dataset

    • kaggle.com
    zip
    Updated Jun 14, 2020
    Cite
    ringhilterra17 (2020). Enriched NYTimes COVID19 U.S. County Dataset [Dataset]. https://www.kaggle.com/ringhilterra17/enrichednytimescovid19
    Explore at:
    zip(11291611 bytes)Available download formats
    Dataset updated
    Jun 14, 2020
    Authors
    ringhilterra17
    License

    Attribution-ShareAlike 3.0 (CC BY-SA 3.0)https://creativecommons.org/licenses/by-sa/3.0/
    License information was derived automatically

    Area covered
    United States
    Description

    Overview and Inspiration

I wanted to make some geospatial visualizations to convey the current severity of COVID19 in different parts of the U.S.

I liked the NYTimes COVID dataset, but it lacked county boundary shape data, per-county population, new cases/deaths per day, per capita calculations, and county demographics.

    After a lot of work tracking down the different data sources I wanted and doing all of the data wrangling and joins in python, I wanted to open-source the final enriched data set in order to give others a head start in their COVID-19 related analytic, modeling, and visualization efforts.

This dataset is enriched with county shapes, county center point coordinates, 2019 census population estimates, county population densities, cases and deaths per capita, and calculated per-day cases/deaths metrics. It contains daily data per county back to January, allowing for analyzing changes over time.

    UPDATE: I have also included demographic information per county, including ages, races, and gender breakdown. This could help determine which counties are most susceptible to an outbreak.

    How this data can be used

Geospatial analysis and visualization:
• Which counties are currently getting hit the hardest (per capita and in totals)?
• What patterns are there in the spread of the virus across counties (e.g., network-based spread simulations using county center lat/lons)?
• Do county population densities play a role in how quickly the virus spreads?
• How do a specific county's or state's cases and deaths compare to other counties/states?
• Join with other county-level datasets easily (via the fips code column); a minimal pandas sketch follows below.
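For illustration, a hedged pandas sketch of joining this dataset to another county-level table on the fips column (the file and column names are assumptions):

```
import pandas as pd

# Hypothetical file names; adjust to the CSVs you actually download.
covid = pd.read_csv("enriched-nytimes-covid19-us-county.csv", dtype={"fips": str})
unemployment = pd.read_csv("county_unemployment.csv", dtype={"fips": str})

# Join on the shared FIPS county code; keep every COVID row.
merged = covid.merge(unemployment, on="fips", how="left")
print(merged.head())
```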

    Content Details

    See the column descriptions for more details on the dataset

    Visualizations and Analysis Examples

    COVID-19 U.S. Time-lapse: Confirmed Cases per County (per capita)

https://github.com/ringhilterra/enriched-covid19-data/blob/master/example_viz/covid-cases-final-04-06.gif?raw=true

    Other Data Notes

    • Please review nytimes README for detailed notes on Covid-19 data - https://github.com/nytimes/covid-19-data/
• The only update I made in regards to 'Geographic Exceptions' is that I took the 'New York City' county provided in the Covid-19 data, which holds all cases for the five boroughs of New York City (New York, Kings, Queens, Bronx and Richmond counties), and replaced the missing FIPS for those rows with the 'New York County' fips code 36061. That way I could join to a geometry, and I then used the sum of those five boroughs' population estimates as the 'New York City' estimate, which allowed me to calculate 'per capita' metrics for the 'New York City' entries in the Covid-19 dataset. A minimal sketch of this fix is shown below.
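Purely as an illustration of that patch (NYTimes column names are as published; the population file is an assumption), a hedged pandas sketch:

```
import pandas as pd

nyt = pd.read_csv("us-counties.csv", dtype={"fips": str})              # NYTimes county file
pop = pd.read_csv("county_population_2019.csv", dtype={"fips": str})   # hypothetical population table

# The NYTimes data reports the five boroughs as one 'New York City' county with no FIPS;
# assign the New York County code so those rows can join to a geometry.
nyc_rows = nyt["county"].eq("New York City") & nyt["fips"].isna()
nyt.loc[nyc_rows, "fips"] = "36061"

# Use the summed population of the five borough counties as the NYC estimate.
boroughs = ["36005", "36047", "36061", "36081", "36085"]
nyc_pop = pop.loc[pop["fips"].isin(boroughs), "population"].sum()
pop.loc[pop["fips"] == "36061", "population"] = nyc_pop

per_capita = nyt.merge(pop, on="fips", how="left")
per_capita["cases_per_capita"] = per_capita["cases"] / per_capita["population"]
```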

    Acknowledgements

  2. Data Wrangling Market Analysis North America, Europe, APAC, Middle East and...

    • technavio.com
    pdf
    Updated Oct 4, 2024
    Cite
    Technavio (2024). Data Wrangling Market Analysis North America, Europe, APAC, Middle East and Africa, South America - US, UK, Germany, China, Japan - Size and Forecast 2024-2028 [Dataset]. https://www.technavio.com/report/data-wrangling-market-industry-analysis
    Explore at:
    pdfAvailable download formats
    Dataset updated
    Oct 4, 2024
    Dataset provided by
    TechNavio
    Authors
    Technavio
    License

https://www.technavio.com/content/privacy-notice

    Time period covered
    2024 - 2028
    Area covered
    United Kingdom, United States
    Description

    Snapshot img

    Data Wrangling Market Size 2024-2028

The data wrangling market size is forecast to increase by USD 1.4 billion at a CAGR of 14.8% between 2023 and 2028. The market is experiencing significant growth due to the numerous benefits provided by data wrangling solutions, including data cleaning, transformation, and enrichment. One major trend driving market growth is the rising need for technologies such as competitive intelligence and artificial intelligence in the healthcare sector, where data wrangling is essential for managing and analyzing patient data to improve patient outcomes and reduce costs. However, a challenge facing the market is the lack of awareness of data wrangling tools among small and medium-sized enterprises (SMEs), which limits their ability to effectively manage and utilize their data. Despite this, the market is expected to continue growing as more organizations recognize the value of data wrangling in driving business insights and decision-making.

    What will be the Size of the Market During the Forecast Period?


The market is experiencing significant growth due to the increasing demand for data management and analysis in various industries, and due to the increasing volume, variety, and velocity of data being generated from sources such as IoT devices, financial services, and smart cities. Artificial intelligence and machine learning technologies are being increasingly used for data preparation, data cleaning, and data unification. Data wrangling, also known as data munging, is the process of cleaning, transforming, and enriching raw data to make it usable for analysis. This process is crucial for businesses aiming to gain valuable insights from their data and make informed decisions. Data analytics is a primary driver for the market, as organizations seek to extract meaningful insights from their data. Cloud solutions are increasingly popular for data wrangling due to their flexibility, scalability, and cost-effectiveness.

    Furthermore, both on-premises and cloud-based solutions are being adopted by businesses to meet their specific data management requirements. Multi-cloud strategies are also gaining traction in the market, as organizations seek to leverage the benefits of multiple cloud providers. This approach allows businesses to distribute their data across multiple clouds, ensuring business continuity and disaster recovery capabilities. Data quality is another critical factor driving the market. Ensuring data accuracy, completeness, and consistency is essential for businesses to make reliable decisions. The market is expected to grow further as organizations continue to invest in big data initiatives and implement advanced technologies such as AI and ML to gain a competitive edge. Data cleaning and data unification are key processes in data wrangling that help improve data quality. The finance and insurance industries are major contributors to the market, as they generate vast amounts of data daily.

    In addition, real-time analysis is becoming increasingly important in these industries, as businesses seek to gain insights from their data in near real-time to make informed decisions. The Internet of Things (IoT) is also driving the market, as businesses seek to collect and analyze data from IoT devices to gain insights into their operations and customer behavior. Edge computing is becoming increasingly popular for processing IoT data, as it allows for faster analysis and decision-making. Self-service data preparation is another trend in the market, as businesses seek to empower their business users to prepare their data for analysis without relying on IT departments.

    Moreover, this approach allows businesses to be more agile and responsive to changing business requirements. Big data is another significant trend in the market, as businesses seek to manage and analyze large volumes of data to gain insights into their operations and customer behavior. Data wrangling is a critical process in managing big data, as it ensures that the data is clean, transformed, and enriched to make it usable for analysis. In conclusion, the market in North America is experiencing significant growth due to the increasing demand for data management and analysis in various industries. Cloud solutions, multi-cloud strategies, data quality, finance and insurance, IoT, real-time analysis, self-service data preparation, and big data are some of the key trends driving the market. Businesses that invest in data wrangling solutions can gain a competitive edge by gaining valuable insights from their data and making informed decisions.

    Market Segmentation

    The market research report provides comprehensive data (region-wise segment analysis), with forecasts and estimates in 'USD billion' for the period 2024-2028, as well as historical data from 2018-2022 for the following segments.

    Sector
    
  3. Cafe Sales - Dirty Data for Cleaning Training

    • kaggle.com
    zip
    Updated Jan 17, 2025
    Cite
    Ahmed Mohamed (2025). Cafe Sales - Dirty Data for Cleaning Training [Dataset]. https://www.kaggle.com/datasets/ahmedmohamed2003/cafe-sales-dirty-data-for-cleaning-training
    Explore at:
    zip(113510 bytes)Available download formats
    Dataset updated
    Jan 17, 2025
    Authors
    Ahmed Mohamed
    License

    Attribution-ShareAlike 4.0 (CC BY-SA 4.0)https://creativecommons.org/licenses/by-sa/4.0/
    License information was derived automatically

    Description

    Dirty Cafe Sales Dataset

    Overview

    The Dirty Cafe Sales dataset contains 10,000 rows of synthetic data representing sales transactions in a cafe. This dataset is intentionally "dirty," with missing values, inconsistent data, and errors introduced to provide a realistic scenario for data cleaning and exploratory data analysis (EDA). It can be used to practice cleaning techniques, data wrangling, and feature engineering.

    File Information

    • File Name: dirty_cafe_sales.csv
    • Number of Rows: 10,000
    • Number of Columns: 8

    Columns Description

Column Name | Description | Example Values
Transaction ID | A unique identifier for each transaction. Always present and unique. | TXN_1234567
Item | The name of the item purchased. May contain missing or invalid values (e.g., "ERROR"). | Coffee, Sandwich
Quantity | The quantity of the item purchased. May contain missing or invalid values. | 1, 3, UNKNOWN
Price Per Unit | The price of a single unit of the item. May contain missing or invalid values. | 2.00, 4.00
Total Spent | The total amount spent on the transaction. Calculated as Quantity * Price Per Unit. | 8.00, 12.00
Payment Method | The method of payment used. May contain missing or invalid values (e.g., None, "UNKNOWN"). | Cash, Credit Card
Location | The location where the transaction occurred. May contain missing or invalid values. | In-store, Takeaway
Transaction Date | The date of the transaction. May contain missing or incorrect values. | 2023-01-01

    Data Characteristics

    1. Missing Values:

      • Some columns (e.g., Item, Payment Method, Location) may contain missing values represented as None or empty cells.
    2. Invalid Values:

      • Some rows contain invalid entries like "ERROR" or "UNKNOWN" to simulate real-world data issues.
    3. Price Consistency:

      • Prices for menu items are consistent but may have missing or incorrect values introduced.

    Menu Items

    The dataset includes the following menu items with their respective price ranges:

Item | Price ($)
Coffee | 2
Tea | 1.5
Sandwich | 4
Salad | 5
Cake | 3
Cookie | 1
Smoothie | 4
Juice | 3

    Use Cases

This dataset is suitable for:
• Practicing data cleaning techniques such as handling missing values, removing duplicates, and correcting invalid entries.
• Exploring EDA techniques like visualizations and summary statistics.
• Performing feature engineering for machine learning workflows.

    Cleaning Steps Suggestions

To clean this dataset, consider the following steps (a minimal pandas sketch follows the list):

1. Handle Missing Values:

  • Fill missing numeric values with the median or mean.
  • Replace missing categorical values with the mode or "Unknown."
2. Handle Invalid Values:

  • Replace invalid entries like "ERROR" and "UNKNOWN" with NaN or appropriate values.
3. Date Consistency:

  • Ensure all dates are in a consistent format.
  • Fill missing dates with plausible values based on nearby records.
4. Feature Engineering:

  • Create new columns, such as Day of the Week or Transaction Month, for further analysis.
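A hedged pandas sketch of those steps, assuming the file name dirty_cafe_sales.csv and the column names listed above:

```
import pandas as pd

df = pd.read_csv("dirty_cafe_sales.csv")

# Treat placeholder strings as missing values.
df = df.replace(["ERROR", "UNKNOWN", ""], pd.NA)

# Numeric columns: coerce to numbers, then fill with the median.
for col in ["Quantity", "Price Per Unit", "Total Spent"]:
    df[col] = pd.to_numeric(df[col], errors="coerce")
    df[col] = df[col].fillna(df[col].median())

# Categorical columns: fill with the mode.
for col in ["Item", "Payment Method", "Location"]:
    df[col] = df[col].fillna(df[col].mode().iloc[0])

# Dates: parse to a consistent format.
df["Transaction Date"] = pd.to_datetime(df["Transaction Date"], errors="coerce")

# Feature engineering: day of week and transaction month.
df["Day of the Week"] = df["Transaction Date"].dt.day_name()
df["Transaction Month"] = df["Transaction Date"].dt.to_period("M")
```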

    License

    This dataset is released under the CC BY-SA 4.0 License. You are free to use, share, and adapt it, provided you give appropriate credit.

    Feedback

    If you have any questions or feedback, feel free to reach out through the dataset's discussion board on Kaggle.

4. Drought Machine Learning Data Example

    • hydroshare.org
    • beta.hydroshare.org
    • +1more
    zip
    Updated Aug 22, 2023
    Cite
    Bryce Pulver (2023). Drought Machine Learning Data Example [Dataset]. https://www.hydroshare.org/resource/9024db8a67fd4afdab2358d1b75e7e85
    Explore at:
    zip(518.5 MB)Available download formats
    Dataset updated
    Aug 22, 2023
    Dataset provided by
    HydroShare
    Authors
    Bryce Pulver
    License

    Attribution 4.0 (CC BY 4.0)https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Time period covered
    Jan 1, 1980 - Dec 1, 2020
    Area covered
    Description

This repository showcases some examples of data wrangling and visualization using output from a USGS drought prediction model for the Colorado River Basin, along with example ecology site data.

  5. Data Carpentry Genomics Curriculum Example Data

    • figshare.com
    • datasetcatalog.nlm.nih.gov
    application/gzip
    Updated May 31, 2023
    Cite
    Olivier Tenaillon; Jeffrey E Barrick; Noah Ribeck; Daniel E. Deatherage; Jeffrey L. Blanchard; Aurko Dasgupta; Gabriel C. Wu; Sébastien Wielgoss; Stéphane Cruvellier; Claudine Medigue; Dominique Schneider; Richard E. Lenski; Taylor Reiter; Jessica Mizzi; Fotis Psomopoulos; Ryan Peek; Jason Williams (2023). Data Carpentry Genomics Curriculum Example Data [Dataset]. http://doi.org/10.6084/m9.figshare.7726454.v3
    Explore at:
    application/gzipAvailable download formats
    Dataset updated
    May 31, 2023
    Dataset provided by
    Figsharehttp://figshare.com/
    Authors
    Olivier Tenaillon; Jeffrey E Barrick; Noah Ribeck; Daniel E. Deatherage; Jeffrey L. Blanchard; Aurko Dasgupta; Gabriel C. Wu; Sébastien Wielgoss; Stéphane Cruvellier; Claudine Medigue; Dominique Schneider; Richard E. Lenski; Taylor Reiter; Jessica Mizzi; Fotis Psomopoulos; Ryan Peek; Jason Williams
    License

    CC0 1.0 Universal Public Domain Dedicationhttps://creativecommons.org/publicdomain/zero/1.0/
    License information was derived automatically

    Description

These files are intended for use with the Data Carpentry Genomics curriculum (https://datacarpentry.org/genomics-workshop/). Files will be useful for instructors teaching this curriculum in a workshop setting, as well as individuals working through these materials on their own.

    This curriculum is normally taught using Amazon Web Services (AWS). Data Carpentry maintains an AWS image that includes all of the data files needed to use these lesson materials. For information on how to set up an AWS instance from that image, see https://datacarpentry.org/genomics-workshop/setup.html. Learners and instructors who would prefer to teach on a different remote computing system can access all required files from this FigShare dataset.

    This curriculum uses data from a long term evolution experiment published in 2016: Tempo and mode of genome evolution in a 50,000-generation experiment (https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4988878/) by Tenaillon O, Barrick JE, Ribeck N, Deatherage DE, Blanchard JL, Dasgupta A, Wu GC, Wielgoss S, Cruveiller S, Médigue C, Schneider D, and Lenski RE. (doi: 10.1038/nature18959). All sequencing data sets are available in the NCBI BioProject database under accession number PRJNA294072 (https://www.ncbi.nlm.nih.gov/bioproject/?term=PRJNA294072).

backup.tar.gz: contains original fastq files, reference genome, and subsampled fastq files. Directions for obtaining these files from public databases are given during the lesson (https://datacarpentry.org/wrangling-genomics/02-quality-control/index.html). On the AWS image, these files are stored in the ~/.backup directory. 1.3Gb in size.

    Ecoli_metadata.xlsx: an example Excel file to be loaded during the R lesson.

    shell_data.tar.gz: contains the files used as input to the Introduction to the Command Line for Genomics lesson (https://datacarpentry.org/shell-genomics/).

    sub.tar.gz: contains subsampled fastq files that are used as input to the Data Wrangling and Processing for Genomics lesson (https://datacarpentry.org/wrangling-genomics/). 109Mb in size.

    solutions: contains the output files of the Shell Genomics and Wrangling Genomics lessons, including fastqc output, sam, bam, bcf, and vcf files.

vcf_clean_script.R: converts the vcf output in .solutions/wrangling_solutions/variant_calling_auto to a single tidy data frame.

    combined_tidy_vcf.csv: output of vcf_clean_script.R

  6. Superstore Sales: The Data Quality Challenge

    • kaggle.com
    zip
    Updated Oct 25, 2025
    Cite
    Data Obsession (2025). Superstore Sales: The Data Quality Challenge [Dataset]. https://www.kaggle.com/datasets/dataobsession/superstore-sales-the-data-quality-challenge
    Explore at:
    zip(1512911 bytes)Available download formats
    Dataset updated
    Oct 25, 2025
    Authors
    Data Obsession
    License

    Attribution-ShareAlike 4.0 (CC BY-SA 4.0)https://creativecommons.org/licenses/by-sa/4.0/
    License information was derived automatically

    Description

    Superstore Sales - The Data Quality Challenge Edition (25K Records)

    This dataset is an expanded version of the popular "Sample - Superstore Sales" dataset, commonly used for introductory data analysis and visualization. It contains detailed transactional data for a US-based retail company, covering orders, products, and customer information.

    This version is specifically designed for practicing Data Quality (DQ) and Data Wrangling skills, featuring a unique set of real-world "dirty data" problems (like those encountered in tools like SPSS Modeler, Tableau Prep, or Alteryx) that must be cleaned before any analysis or machine learning can begin.

This dataset combines the original Superstore data with 15,000 plausibly generated synthetic records, totaling 25,000 rows of transactional data. It includes 21 columns detailing:
• Order Information: Order ID, Order Date, Ship Date, Ship Mode.
• Customer Information: Customer ID, Customer Name, Segment.
• Geographic Information: Country, City, State, Postal Code, Region.
• Product Information: Product ID, Category, Sub-Category, Product Name.
• Financial Metrics: Sales, Quantity, Discount, and Profit.

    🚨 Introduced Data Quality Challenges (The Dirty Data)

This dataset is intentionally corrupted to provide a robust practice environment for data cleaning (a cleaning sketch follows this list). Challenges include:

• Missing/Inconsistent Values: Deliberate gaps in Profit and Discount, and multiple inconsistent entries (-- or blank) in the Region column.

    • Data Type Mismatches: Order Date and Ship Date are stored as text strings, and the Profit column is polluted with comma-formatted strings (e.g., "1,234.56"), forcing the entire column to be read as an object (string) type.

    • Categorical Inconsistencies: The Category field contains variations and typos like "Tech", "technologies", "Furni", and "OfficeSupply" that require standardization.

    • Outliers and Invalid Data: Extreme outliers have been added to the Sales and Profit fields, alongside a subset of transactions with an invalid Sales value of 0.

    • Duplicate Records: Over 200 rows are duplicated (with slight financial variations) to test your deduplication logic.
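A hedged pandas sketch of tackling a few of these issues, assuming a file name of superstore_dirty.csv and the column names listed above:

```
import pandas as pd

df = pd.read_csv("superstore_dirty.csv")

# Profit arrives as comma-formatted text (e.g. "1,234.56"); strip commas and coerce to numbers.
df["Profit"] = pd.to_numeric(df["Profit"].astype(str).str.replace(",", ""), errors="coerce")

# Dates are stored as text; parse them into proper datetimes.
for col in ["Order Date", "Ship Date"]:
    df[col] = pd.to_datetime(df[col], errors="coerce")

# Region: treat '--' and blanks as missing.
df["Region"] = df["Region"].replace(["--", ""], pd.NA)

# Standardize Category typos such as "Tech", "technologies", "Furni", "OfficeSupply".
category_map = {"Tech": "Technology", "technologies": "Technology",
                "Furni": "Furniture", "OfficeSupply": "Office Supplies"}
df["Category"] = df["Category"].replace(category_map)

# Drop invalid zero-sales rows and deduplicate on order/product identity.
df = df[df["Sales"] != 0]
df = df.drop_duplicates(subset=["Order ID", "Product ID", "Order Date"])
```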

    ❓ Suggested Analysis and Modeling Tasks

    This dataset is ideal for:

    Data Wrangling/Cleaning (Primary Focus): Fix all the intentional data quality issues before proceeding.

    Exploratory Data Analysis (EDA): Analyze sales distribution by region, segment, and category.

    Regression: Predict the Profit based on Sales, Discount, and product features.

Classification: Build an RFM model (Recency, Frequency, Monetary) and create a target variable (HighValueCustomer = 1 if total sales are > $1,000) to be predicted by logistic regression or decision trees.

    Time Series Analysis: Aggregate sales by month/year to perform forecasting.

    Acknowledgements

    This dataset is an expanded and corrupted derivative of the original Sample Superstore dataset, credited to Tableau and widely shared for educational purposes. All synthetic records were generated to follow the plausible distribution of the original data.

  7. Wikipedia SQLITE Portable DB, Huge 5M+ Rows

    • kaggle.com
    zip
    Updated Jun 29, 2024
    Cite
    christernyc (2024). Wikipedia SQLITE Portable DB, Huge 5M+ Rows [Dataset]. https://www.kaggle.com/datasets/christernyc/wikipedia-sqlite-portable-db-huge-5m-rows/code
    Explore at:
    zip(6064169983 bytes)Available download formats
    Dataset updated
    Jun 29, 2024
    Authors
    christernyc
    License

https://creativecommons.org/publicdomain/zero/1.0/

    Description

    The "Wikipedia SQLite Portable DB" is a compact and efficient database derived from the Kensho Derived Wikimedia Dataset (KDWD). This dataset provides a condensed subset of raw Wikimedia data in a format optimized for natural language processing (NLP) research and applications.

I am not affiliated or partnered with Kensho in any way; I just really like this dataset because it gives my agents something easy to query.

    Key Features:

• Contains over 5 million rows of data from English Wikipedia and Wikidata
• Stored in a portable SQLite database format for easy integration and querying
• Includes a link-annotated corpus of English Wikipedia pages and a compact sample of the Wikidata knowledge base
• Ideal for NLP tasks, machine learning, data analysis, and research projects

    The database consists of four main tables:

    • items: Contains information about Wikipedia items, including labels and descriptions
    • properties: Stores details about Wikidata properties, such as labels and descriptions
    • pages: Provides metadata for Wikipedia pages, including page IDs, item IDs, titles, and view counts
    • link_annotated_text: Contains the link-annotated text of Wikipedia pages, divided into sections

This dataset is derived from the Kensho Derived Wikimedia Dataset (KDWD), which is built from the English Wikipedia snapshot from December 1, 2019, and the Wikidata snapshot from December 2, 2019. The KDWD is a condensed subset of the raw Wikimedia data in a form that is helpful for NLP work, and it is released under the CC BY-SA 3.0 license.

Credits: The "Wikipedia SQLite Portable DB" is derived from the Kensho Derived Wikimedia Dataset (KDWD), created by the Kensho R&D group. The KDWD is based on data from Wikipedia and Wikidata, which are crowd-sourced projects supported by the Wikimedia Foundation. We would like to acknowledge and thank the Kensho R&D group for their efforts in creating the KDWD and making it available for research and development purposes.

By providing this portable SQLite database, we aim to make Wikipedia data more accessible and easier to use for researchers, data scientists, and developers working on NLP tasks, machine learning projects, and other data-driven applications. We hope that this dataset will contribute to the advancement of NLP research and the development of innovative applications utilizing Wikipedia data.

    https://www.kaggle.com/datasets/kenshoresearch/kensho-derived-wikimedia-data/data

    Tags: encyclopedia, wikipedia, sqlite, database, reference, knowledge-base, articles, information-retrieval, natural-language-processing, nlp, text-data, large-dataset, multi-table, data-science, machine-learning, research, data-analysis, data-mining, content-analysis, information-extraction, text-mining, text-classification, topic-modeling, language-modeling, question-answering, fact-checking, entity-recognition, named-entity-recognition, link-prediction, graph-analysis, network-analysis, knowledge-graph, ontology, semantic-web, structured-data, unstructured-data, data-integration, data-processing, data-cleaning, data-wrangling, data-visualization, exploratory-data-analysis, eda, corpus, document-collection, open-source, crowdsourced, collaborative, online-encyclopedia, web-data, hyperlinks, categories, page-views, page-links, embeddings

Usage with LIKE queries:

```
import asyncio

import aiosqlite


class KenshoDatasetQuery:
    def __init__(self, db_file):
        self.db_file = db_file

    async def __aenter__(self):
        # Open the SQLite connection when entering the async context.
        self.conn = await aiosqlite.connect(self.db_file)
        return self

    async def __aexit__(self, exc_type, exc_val, exc_tb):
        await self.conn.close()

    async def search_pages_by_title(self, title):
        # Join pages to their Wikidata items and link-annotated text,
        # matching the title against a LIKE pattern.
        query = """
        SELECT pages.page_id, pages.item_id, pages.title, pages.views,
               items.labels AS item_labels, items.description AS item_description,
               link_annotated_text.sections
        FROM pages
        JOIN items ON pages.item_id = items.id
        JOIN link_annotated_text ON pages.page_id = link_annotated_text.page_id
        WHERE pages.title LIKE ?
        """
        async with self.conn.execute(query, (f"%{title}%",)) as cursor:
            return await cursor.fetchall()

    async def search_items_by_label_or_description(self, keyword):
        query = """
        SELECT id, labels, description
        FROM items
        WHERE labels LIKE ? OR description LIKE ?
        """
        async with self.conn.execute(query, (f"%{keyword}%", f"%{keyword}%")) as cursor:
            return await cursor.fetchall()

    async def search_items_by_label(self, label):
        query = """
        SELECT id, labels, description
        FROM items
        WHERE labels LIKE ?
        """
        async with self.conn.execute(query, (f"%{label}%",)) as cursor:
            return await cursor.fetchall()

    # async def search_properties_by_label_or_desc... (truncated in the source listing)
```
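A short usage sketch (the database file name is an assumption; point it at wherever you unzip the SQLite file):

```
import asyncio

async def main():
    # Hypothetical path to the unzipped SQLite database.
    async with KenshoDatasetQuery("wikipedia_kensho.sqlite") as db:
        rows = await db.search_pages_by_title("Python (programming language)")
        for page_id, item_id, title, views, labels, description, sections in rows[:5]:
            print(page_id, title, views)

asyncio.run(main())
```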
    
  8. Additional file 2 of Genomic data integration and user-defined sample-set...

    • springernature.figshare.com
    xlsx
    Updated Jun 4, 2023
    Cite
    Tommaso Alfonsi; Anna Bernasconi; Arif Canakoglu; Marco Masseroli (2023). Additional file 2 of Genomic data integration and user-defined sample-set extraction for population variant analysis [Dataset]. http://doi.org/10.6084/m9.figshare.21251615.v1
    Explore at:
    xlsxAvailable download formats
    Dataset updated
    Jun 4, 2023
    Dataset provided by
    figshare
    Figsharehttp://figshare.com/
    Authors
    Tommaso Alfonsi; Anna Bernasconi; Arif Canakoglu; Marco Masseroli
    License

    Attribution 4.0 (CC BY 4.0)https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Additional file 2. Example of transformed metadata: In this .xlsx (MS Excel) file, we list all the output metadata categories generated for each sample from the transformation of the 1KGP input datasets. The output metadata include information collected from all the four 1KGP metadata files considered. Some categories are not reported in the source metadata files—they are identified by the label manually_curated_...—and were added by the developed pipeline to store technical details (e.g., download date, the md5 hash of the source file, file size, etc.) and information derived from the knowledge of the source, such as the species, the processing pipeline used in the source and the health status. For every information category, the table reports a possible value. The third column (cardinality > 1) tells whether the same key can appear multiple times in the output GDM metadata file. This is used to represent multi-valued metadata categories; for example, in a GDM metadata file, the key manually_curated_chromosome appears once for every chromosome mutated by the variants of the sample.

9. Data from: Generalizable EHR-R-REDCap pipeline for a national...

    • data.niaid.nih.gov
    • datadryad.org
    zip
    Updated Jan 9, 2022
    Cite
    Sophia Shalhout; Farees Saqlain; Kayla Wright; Oladayo Akinyemi; David Miller (2022). Generalizable EHR-R-REDCap pipeline for a national multi-institutional rare tumor patient registry [Dataset]. http://doi.org/10.5061/dryad.rjdfn2zcm
    Explore at:
    zipAvailable download formats
    Dataset updated
    Jan 9, 2022
    Dataset provided by
    Harvard Medical School
    Massachusetts General Hospital
    Authors
    Sophia Shalhout; Farees Saqlain; Kayla Wright; Oladayo Akinyemi; David Miller
    License

https://spdx.org/licenses/CC0-1.0.html

    Description

    Objective: To develop a clinical informatics pipeline designed to capture large-scale structured EHR data for a national patient registry.

    Materials and Methods: The EHR-R-REDCap pipeline is implemented using R-statistical software to remap and import structured EHR data into the REDCap-based multi-institutional Merkel Cell Carcinoma (MCC) Patient Registry using an adaptable data dictionary.

    Results: Clinical laboratory data were extracted from EPIC Clarity across several participating institutions. Labs were transformed, remapped and imported into the MCC registry using the EHR labs abstraction (eLAB) pipeline. Forty-nine clinical tests encompassing 482,450 results were imported into the registry for 1,109 enrolled MCC patients. Data-quality assessment revealed highly accurate, valid labs. Univariate modeling was performed for labs at baseline on overall survival (N=176) using this clinical informatics pipeline.

Conclusion: We demonstrate feasibility of the facile eLAB workflow. EHR data is successfully transformed and bulk-loaded/imported into a REDCap-based national registry, enabling real-world data analysis and interoperability.

Methods

eLAB Development and Source Code (R statistical software):

    eLAB is written in R (version 4.0.3), and utilizes the following packages for processing: DescTools, REDCapR, reshape2, splitstackshape, readxl, survival, survminer, and tidyverse. Source code for eLAB can be downloaded directly (https://github.com/TheMillerLab/eLAB).

    eLAB reformats EHR data abstracted for an identified population of patients (e.g. medical record numbers (MRN)/name list) under an Institutional Review Board (IRB)-approved protocol. The MCCPR does not host MRNs/names and eLAB converts these to MCCPR assigned record identification numbers (record_id) before import for de-identification.

Functions were written to remap EHR bulk lab data pulls/queries from several sources, including Clarity/Crystal reports or institutional EDWs such as the Research Patient Data Registry (RPDR) at MGB. The input, a csv/delimited file of labs for user-defined patients, may vary. Thus, users may need to adapt the initial data wrangling script based on the data input format. However, the downstream transformation, code-lab lookup tables, outcomes analysis, and LOINC remapping are standard for use with the provided REDCap Data Dictionary, DataDictionary_eLAB.csv. The available R-markdown (https://github.com/TheMillerLab/eLAB) provides suggestions and instructions on where or when upfront script modifications may be necessary to accommodate input variability.

    The eLAB pipeline takes several inputs. For example, the input for use with the ‘ehr_format(dt)’ single-line command is non-tabular data assigned as R object ‘dt’ with 4 columns: 1) Patient Name (MRN), 2) Collection Date, 3) Collection Time, and 4) Lab Results wherein several lab panels are in one data frame cell. A mock dataset in this ‘untidy-format’ is provided for demonstration purposes (https://github.com/TheMillerLab/eLAB).

    Bulk lab data pulls often result in subtypes of the same lab. For example, potassium labs are reported as “Potassium,” “Potassium-External,” “Potassium(POC),” “Potassium,whole-bld,” “Potassium-Level-External,” “Potassium,venous,” and “Potassium-whole-bld/plasma.” eLAB utilizes a key-value lookup table with ~300 lab subtypes for remapping labs to the Data Dictionary (DD) code. eLAB reformats/accepts only those lab units pre-defined by the registry DD. The lab lookup table is provided for direct use or may be re-configured/updated to meet end-user specifications. eLAB is designed to remap, transform, and filter/adjust value units of semi-structured/structured bulk laboratory values data pulls from the EHR to align with the pre-defined code of the DD.
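The published pipeline is written in R; purely as an illustration of the lookup-table remapping idea described above, here is a hedged pandas sketch with hypothetical subtype names, values, and DD codes:

```
import pandas as pd

# Hypothetical key-value lookup table: EHR lab subtype -> registry DD code.
lab_lookup = {
    "Potassium": "potassium",
    "Potassium-External": "potassium",
    "Potassium(POC)": "potassium",
    "Potassium,whole-bld": "potassium",
    "Potassium-Level-External": "potassium",
    "Potassium,venous": "potassium",
    "Potassium-whole-bld/plasma": "potassium",
}

labs = pd.DataFrame({
    "lab_name": ["Potassium(POC)", "Potassium-External", "Sodium"],
    "value": [4.1, 3.9, 140],
    "unit": ["mmol/L", "mmol/L", "mmol/L"],
})

# Remap subtypes to the DD code and keep only labs defined in the lookup table,
# mirroring how a pipeline like eLAB filters to registry-defined labs/units.
labs["dd_code"] = labs["lab_name"].map(lab_lookup)
labs = labs.dropna(subset=["dd_code"])
print(labs)
```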

    Data Dictionary (DD)

    EHR clinical laboratory data is captured in REDCap using the ‘Labs’ repeating instrument (Supplemental Figures 1-2). The DD is provided for use by researchers at REDCap-participating institutions and is optimized to accommodate the same lab-type captured more than once on the same day for the same patient. The instrument captures 35 clinical lab types. The DD serves several major purposes in the eLAB pipeline. First, it defines every lab type of interest and associated lab unit of interest with a set field/variable name. It also restricts/defines the type of data allowed for entry for each data field, such as a string or numerics. The DD is uploaded into REDCap by every participating site/collaborator and ensures each site collects and codes the data the same way. Automation pipelines, such as eLAB, are designed to remap/clean and reformat data/units utilizing key-value look-up tables that filter and select only the labs/units of interest. eLAB ensures the data pulled from the EHR contains the correct unit and format pre-configured by the DD. The use of the same DD at every participating site ensures that the data field code, format, and relationships in the database are uniform across each site to allow for the simple aggregation of the multi-site data. For example, since every site in the MCCPR uses the same DD, aggregation is efficient and different site csv files are simply combined.

    Study Cohort

    This study was approved by the MGB IRB. Search of the EHR was performed to identify patients diagnosed with MCC between 1975-2021 (N=1,109) for inclusion in the MCCPR. Subjects diagnosed with primary cutaneous MCC between 2016-2019 (N= 176) were included in the test cohort for exploratory studies of lab result associations with overall survival (OS) using eLAB.

    Statistical Analysis

    OS is defined as the time from date of MCC diagnosis to date of death. Data was censored at the date of the last follow-up visit if no death event occurred. Univariable Cox proportional hazard modeling was performed among all lab predictors. Due to the hypothesis-generating nature of the work, p-values were exploratory and Bonferroni corrections were not applied.

  10. Data Visualization Tools Market Analysis, Size, and Forecast 2025-2029:...

    • technavio.com
    pdf
    Updated Feb 6, 2025
    Cite
    Technavio (2025). Data Visualization Tools Market Analysis, Size, and Forecast 2025-2029: North America (Mexico), Europe (France, Germany, and UK), Middle East and Africa (UAE), APAC (Australia, China, India, Japan, and South Korea), South America (Brazil), and Rest of World (ROW) [Dataset]. https://www.technavio.com/report/data-visualization-tools-market-industry-analysis
    Explore at:
    pdfAvailable download formats
    Dataset updated
    Feb 6, 2025
    Dataset provided by
    TechNavio
    Authors
    Technavio
    License

https://www.technavio.com/content/privacy-notice

    Time period covered
    2025 - 2029
    Description

    Snapshot img

    Data Visualization Tools Market Size 2025-2029

    The data visualization tools market size is forecast to increase by USD 7.95 billion at a CAGR of 11.2% between 2024 and 2029.

    The market is experiencing significant growth due to the increasing demand for business intelligence and AI-powered insights. Companies are recognizing the value of transforming complex data into easily digestible visual representations to inform strategic decision-making. However, this market faces challenges as data complexity and massive data volumes continue to escalate. Organizations must invest in advanced data visualization tools to effectively manage and analyze their data to gain a competitive edge. The ability to automate data visualization processes and integrate AI capabilities will be crucial for companies to overcome the challenges posed by data complexity and volume. By doing so, they can streamline their business operations, enhance data-driven insights, and ultimately drive growth in their respective industries.

    What will be the Size of the Data Visualization Tools Market during the forecast period?

In today's data-driven business landscape, the market continues to evolve, integrating advanced capabilities to support various sectors in making informed decisions. Data storytelling and preparation are crucial elements, enabling organizations to effectively communicate complex data insights. Real-time data visualization ensures agility, while data security safeguards sensitive information. Data dashboards facilitate data exploration and discovery, offering data-driven finance, strategy, and customer experience. Big data visualization tackles complex datasets, enabling data-driven decision making and innovation. Data blending and filtering streamline data integration and analysis. Data visualization software supports data transformation, cleaning, and aggregation, enhancing data-driven operations and healthcare. On-premises and cloud-based solutions cater to diverse business needs. Data governance, ethics, and literacy are integral components, ensuring data-driven product development, government, and education adhere to best practices. Natural language processing, machine learning, and visual analytics further enrich data-driven insights, enabling interactive charts and data reporting. Data connectivity and data-driven sales fuel business intelligence and marketing, while data discovery and data wrangling simplify data exploration and preparation. The market's continuous dynamism underscores the importance of data culture, data-driven innovation, and data-driven HR, as organizations strive to leverage data to gain a competitive edge.

    How is this Data Visualization Tools Industry segmented?

The data visualization tools industry research report provides comprehensive data (region-wise segment analysis), with forecasts and estimates in 'USD million' for the period 2025-2029, as well as historical data from 2019-2023 for the following segments.

Deployment: On-premises, Cloud
Customer Type: Large enterprises, SMEs
Component: Software, Services
Application: Human resources, Finance, Others
End-user: BFSI, IT and telecommunication, Healthcare, Retail, Others
Geography: North America (US, Mexico), Europe (France, Germany, UK), Middle East and Africa (UAE), APAC (Australia, China, India, Japan, South Korea), South America (Brazil), Rest of World (ROW)

    By Deployment Insights

    The on-premises segment is estimated to witness significant growth during the forecast period.The market has experienced notable expansion as businesses across diverse sectors acknowledge the significance of data analysis and representation to uncover valuable insights and inform strategic decisions. Data visualization plays a pivotal role in this domain. On-premises deployment, which involves implementing data visualization tools within an organization's physical infrastructure or dedicated data centers, is a popular choice. This approach offers organizations greater control over their data, ensuring data security, privacy, and adherence to data governance policies. It caters to industries dealing with sensitive data, subject to regulatory requirements, or having stringent security protocols that prohibit cloud-based solutions. Data storytelling, data preparation, data-driven product development, data-driven government, real-time data visualization, data security, data dashboards, data-driven finance, data-driven strategy, big data visualization, data-driven decision making, data blending, data filtering, data visualization software, data exploration, data-driven insights, data-driven customer experience, data mapping, data culture, data cleaning, data-driven operations, data aggregation, data transformation, data-driven healthcare, on-premises data visualization, data governance, data ethics, data discovery, natural language processing, data reporting, data visualization platforms, data-driven innovation, data wrangling, data-driven sales, data connectivit

  11. 🔥 Most Viewed Hindi Music Videos on YouTube 🎵

    • kaggle.com
    zip
    Updated Jan 4, 2024
    Cite
    Kanchana1990 (2024). 🔥 Most Viewed Hindi Music Videos on YouTube 🎵 [Dataset]. https://www.kaggle.com/datasets/kanchana1990/most-viewed-hindi-music-videos-on-youtube/code
    Explore at:
    zip(95539 bytes)Available download formats
    Dataset updated
    Jan 4, 2024
    Authors
    Kanchana1990
    License

    Open Data Commons Attribution License (ODC-By) v1.0https://www.opendatacommons.org/licenses/by/1.0/
    License information was derived automatically

    Area covered
    YouTube
    Description

    Title: "🔥 Most Viewed Hindi Music Videos on YouTube 🎵"

    Description:

    Dive into a rich dataset capturing the essence of India's vibrant music industry, featuring some of the most viewed Hindi music videos on YouTube. This unique collection amalgamates data from the three paramount channels of Indian music: T-Series, Zee Music, and Tips Official. Each entry is a testament to the songs and artists that have left an indelible mark on audiences nationwide and globally.

    Dataset Compilation: Compiled from the extensive libraries of T-Series, Zee Music, and Tips Official, this dataset offers a diverse range of videos, each resonating with millions of fans. As you navigate through the data, keep an eye out for duplicates or near duplicates, especially for popular videos featured across multiple channels. We encourage users to employ their data wrangling skills to identify and manage these instances for a more streamlined analysis.
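A hedged pandas sketch of one way to flag and drop near-duplicate titles across channels (the CSV name and normalization rule are assumptions):

```
import pandas as pd

videos = pd.read_csv("most_viewed_hindi_music_videos.csv")  # hypothetical file name

# Exact duplicates: the same title published on more than one channel.
exact_dupes = videos[videos.duplicated(subset=["Title"], keep=False)]

# Near duplicates: normalize titles (lowercase, strip punctuation) before comparing.
videos["title_key"] = (videos["Title"].str.lower()
                                      .str.replace(r"[^a-z0-9 ]", "", regex=True)
                                      .str.strip())
near_dupes = videos[videos.duplicated(subset=["title_key"], keep=False)]

# Keep the most-viewed copy of each normalized title.
deduped = (videos.sort_values("View_Count", ascending=False)
                 .drop_duplicates(subset=["title_key"], keep="first"))
print(len(videos), len(deduped))
```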

    Column Descriptors:

    • Channel: The originating YouTube channel of the video.
    • Title: The title of the video as it appears on YouTube.
    • Published_At: The date and time the video was made available on YouTube.
    • Duration: The total length of the video.
    • View_Count: The number of times the video has been viewed.
    • Like_Count: The number of likes the video has received.
    • Comment_Count: The number of comments on the video.

    Potential Uses:

    • Trend Analysis: Discover the evolution of music tastes over time by analyzing view counts, likes, and comments.
    • Popularity Metrics: Dive into what makes a video successful by exploring the correlation between different engagement metrics.
    • Comparative Study: Compare the content strategy and audience engagement across the three channels to understand the competitive landscape.

    Ethical Consideration and Acknowledgements:

    We obtained this data respectfully, using Google's YouTube Data API with strict adherence to rate limits and quota management, ensuring ethical data mining practices. Our heartfelt thanks go to T-Series, Zee Music, and Tips Official for creating content that resonates across borders, and to YouTube for providing a platform that bridges creators and audiences. This dataset stands as a testament to their collective impact on the music industry.

    Closing Note:

    As you delve into the dataset, remember that each data point represents a story - a song that might have been someone's first dance, a comfort in tough times, or a tune that brings back a flood of memories. Happy analyzing, and may your insights be as profound as the music itself!

  12. Data Analytics Market Analysis, Size, and Forecast 2025-2029: North America...

    • technavio.com
    pdf
    Updated Jan 11, 2025
    Cite
    Technavio (2025). Data Analytics Market Analysis, Size, and Forecast 2025-2029: North America (US and Canada), Europe (France, Germany, and UK), Middle East and Africa (UAE), APAC (China, India, Japan, and South Korea), South America (Brazil), and Rest of World (ROW) [Dataset]. https://www.technavio.com/report/data-analytics-market-industry-analysis
    Explore at:
    pdfAvailable download formats
    Dataset updated
    Jan 11, 2025
    Dataset provided by
    TechNavio
    Authors
    Technavio
    License

https://www.technavio.com/content/privacy-notice

    Time period covered
    2025 - 2029
    Description

    Snapshot img

    Data Analytics Market Size 2025-2029

    The data analytics market size is forecast to increase by USD 288.7 billion, at a CAGR of 14.7% between 2024 and 2029.

    The market is driven by the extensive use of modern technology in company operations, enabling businesses to extract valuable insights from their data. The prevalence of the Internet and the increased use of linked and integrated technologies have facilitated the collection and analysis of vast amounts of data from various sources. This trend is expected to continue as companies seek to gain a competitive edge by making data-driven decisions. However, the integration of data from different sources poses significant challenges. Ensuring data accuracy, consistency, and security is crucial as companies deal with large volumes of data from various internal and external sources. Additionally, the complexity of data analytics tools and the need for specialized skills can hinder adoption, particularly for smaller organizations with limited resources. Companies must address these challenges by investing in robust data management systems, implementing rigorous data validation processes, and providing training and development opportunities for their employees. By doing so, they can effectively harness the power of data analytics to drive growth and improve operational efficiency.

    What will be the Size of the Data Analytics Market during the forecast period?

    Explore in-depth regional segment analysis with market size data - historical 2019-2023 and forecasts 2025-2029 - in the full report.
In the dynamic and ever-evolving market, entities such as explainable AI, time series analysis, data integration, data lakes, algorithm selection, feature engineering, marketing analytics, computer vision, data visualization, financial modeling, real-time analytics, data mining tools, and KPI dashboards continue to unfold and intertwine, shaping the industry's landscape. The application of these technologies spans various sectors, from risk management and fraud detection to conversion rate optimization and social media analytics. ETL processes, data warehousing, statistical software, data wrangling, and data storytelling are integral components of the data analytics ecosystem, enabling organizations to extract insights from their data. Cloud computing, deep learning, and data visualization tools further enhance the capabilities of data analytics platforms, allowing for advanced data-driven decision making and real-time analysis. Marketing analytics, clustering algorithms, and customer segmentation are essential for businesses seeking to optimize their marketing strategies and gain a competitive edge. Regression analysis, data visualization tools, and machine learning algorithms are instrumental in uncovering hidden patterns and trends, while predictive modeling and causal inference help organizations anticipate future outcomes and make informed decisions. Data governance, data quality, and bias detection are crucial aspects of the data analytics process, ensuring the accuracy, security, and ethical use of data. Supply chain analytics, healthcare analytics, and financial modeling are just a few examples of the diverse applications of data analytics, demonstrating the industry's far-reaching impact. Data pipelines, data mining, and model monitoring are essential for maintaining the continuous flow of data and ensuring the accuracy and reliability of analytics models. The integration of various data analytics tools and techniques continues to evolve, as the industry adapts to the ever-changing needs of businesses and consumers alike.

    How is this Data Analytics Industry segmented?

The data analytics industry research report provides comprehensive data (region-wise segment analysis), with forecasts and estimates in 'USD billion' for the period 2025-2029, as well as historical data from 2019-2023 for the following segments.

Component: Services, Software, Hardware
Deployment: Cloud, On-premises
Type: Prescriptive Analytics, Predictive Analytics, Customer Analytics, Descriptive Analytics, Others
Application: Supply Chain Management, Enterprise Resource Planning, Database Management, Human Resource Management, Others
Geography: North America (US, Canada), Europe (France, Germany, UK), Middle East and Africa (UAE), APAC (China, India, Japan, South Korea), South America (Brazil), Rest of World (ROW)

    By Component Insights

    The services segment is estimated to witness significant growth during the forecast period.The market is experiencing significant growth as businesses increasingly rely on advanced technologies to gain insights from their data. Natural language processing is a key component of this trend, enabling more sophisticated analysis of unstructured data. Fraud detection and data security solutions are also in high demand, as companies seek to protect against threats and maintain customer trust. Data analytics platforms, including cloud-based offerings, are driving innovatio

  13. iNeuron Projectathon Oct-Nov'21

    • kaggle.com
    zip
    Updated Oct 22, 2021
    Cite
    Aman Anand (2021). iNeuron Projectathon Oct-Nov'21 [Dataset]. https://www.kaggle.com/yekahaaagayeham/ineuron-projectathon-octnov21
    Explore at:
    zip(3335989 bytes)Available download formats
    Dataset updated
    Oct 22, 2021
    Authors
    Aman Anand
    Description

    iNeuron-Projectathon-Oct-Nov-21

    Problem Statement:

Design a web portal to automate the various operations performed in machine learning projects in order to solve specific problems related to supervised or unsupervised use cases. The web portal must have the capabilities to perform the tasks below:

1. Extract Transform Load:
   a. Extract: The portal should provide the capability to configure any data source, e.g. cloud storage (AWS, Azure, GCP), databases (RDBMS, NoSQL), and real-time streaming data, to extract data into the portal. (Allow users to write a custom script if required to connect to any data source and extract data.)
   b. Transform: The portal should provide various inbuilt functions/components to apply a rich set of transformations and convert the extracted data into the desired format.
   c. Load: The portal should be able to save data into any of the cloud storage services after the extracted data has been transformed into the desired format.
   d. Allow the user to write a custom script in Python if some functionality is not present in the portal.
2. Exploratory Data Analysis: The portal should allow users to perform exploratory data analysis.
3. Data Preparation: Data wrangling, feature extraction, and feature selection should be automated with minimal user intervention.
4. The application must suggest the machine learning algorithm best suited to the use case and perform a best-model search to automate model development.
5. The application should provide a feature to deploy the model in any of the clouds and should create a prediction API to predict new instances.
6. The application should log every detail so that each activity can be audited in the future to investigate any event.
7. A detailed report should be generated for ETL, EDA, data preparation, and model development and deployment.
8. Create a dashboard to monitor model performance and create various alert mechanisms to notify the appropriate user to take necessary precautions.
9. Create functionality to retrain an existing model if necessary.
10. The portal must be designed so that it can be used by multiple organizations/users, where each organization/user is isolated from the others.
11. The portal should provide functionality to manage users, similar to the RBAC concept used in the cloud. (It is not necessary to build many roles, but design it so that roles can be added in the future and newly created roles can be applied to users.) An organization/user can have multiple users, and each user will have a specific role.
12. The portal should have a scheduler to schedule training or prediction tasks, and alerts regarding scheduled jobs should be sent to the subscribed/configured email id.
13. Implement watcher functionality to perform prediction as soon as a file arrives at the input location.

    Approach:

1. Follow standard guidelines to write a quality solution for the web portal.
2. Follow OOP principles to design the solution.
3. Implement REST APIs wherever possible.
4. Implement a CI/CD pipeline with automated testing and dockerization. (Use containers or Kubernetes to deploy your dockerized application.)
5. The CI/CD pipeline should have different environments, e.g. ST, SST, Production. Note: Feel free to use any technology to design your solution.

    Results:

    You have to build a solution that should summarize the various news articles from different reading categories.

    Project Evaluation metrics:

Code:
• You are supposed to write code in a modular fashion.
• Safe: It can be used without causing harm.
• Testable: It can be tested at the code level.
• Maintainable: It can be maintained, even as your codebase grows.
• Portable: It works the same in every environment (operating system).
• You have to maintain your code on GitHub.
• You have to keep your GitHub repo public so that anyone can check your code.
• You have to maintain a proper readme file for any project development.
• You should include the basic workflow and execution of the entire project in the readme file on GitHub.
• Follow the coding standards: https://www.python.org/dev/peps/pep-0008/

Database: Based on development requirements, feel free to choose any database (SQL, NoSQL) or use multiple databases.

Cloud:

• You can use any cloud platform to host this entire solution, such as AWS, Azure, or GCP.

    API Details or User Interface:

    1. The web portal should be designed like any cloud platform.
    2. A model developed using the web portal should expose an API for testing predictions (see the sketch below).
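    One possible shape for such a prediction endpoint, purely as a sketch: it assumes Flask, a hypothetical `load_model()` helper, and a placeholder request payload format.

```python
# Illustrative sketch of a prediction API for a deployed model.
# Assumes Flask and a hypothetical load_model() helper; the payload
# field names are placeholders, not a defined contract.
from flask import Flask, request, jsonify

app = Flask(__name__)
model = None  # loaded lazily so the app can start without the artifact


def load_model():
    # Placeholder: load the trained model artifact (e.g. from cloud storage).
    import joblib
    return joblib.load("model.pkl")


@app.route("/predict", methods=["POST"])
def predict():
    global model
    if model is None:
        model = load_model()
    payload = request.get_json(force=True)
    features = payload["features"]  # e.g. a list of feature values
    prediction = model.predict([features])[0]
    return jsonify({"prediction": str(prediction)})


if __name__ == "__main__":
    app.run(host="0.0.0.0", port=8080)
```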

    Logging:

    • Logging is a must for every action performed by your code; use the Python logging library for this (a minimal example follows).
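    A minimal sketch of the standard-library setup this implies is shown below; the logger name, log file path, and the "ETL" example step are placeholders.

```python
# Minimal logging setup using the standard-library logging module.
# Logger name, log file path, and the example ETL step are placeholders.
import logging

logging.basicConfig(
    filename="portal.log",
    level=logging.INFO,
    format="%(asctime)s | %(name)s | %(levelname)s | %(message)s",
)
logger = logging.getLogger("portal.etl")

logger.info("ETL job started")
try:
    rows_loaded = 42  # placeholder for the real ETL step
    logger.info("Loaded %d rows", rows_loaded)
except Exception:
    # logger.exception records the full traceback for later auditing.
    logger.exception("ETL job failed")
    raise
```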

    DevOps Pipeline:

    Use a source version control tool and implement the CI/CD pipeline with a service such as Azure DevOps, GitHub Actions, or CircleCI.

    Deployment:

    • You can host your application on a cloud platform using the automated CI/CD pipeline.

    Solutions Design:

    • You have to submit the complete solution design strate...

  14. Wrangling Phosphoproteomic Data to Elucidate Cancer Signaling Pathways

    • plos.figshare.com
    pdf
    Updated May 31, 2023
    Mark L. Grimes; Wan-Jui Lee; Laurens van der Maaten; Paul Shannon (2023). Wrangling Phosphoproteomic Data to Elucidate Cancer Signaling Pathways [Dataset]. http://doi.org/10.1371/journal.pone.0052884
    Explore at:
    pdfAvailable download formats
    Dataset updated
    May 31, 2023
    Dataset provided by
    PLOS (http://plos.org/)
    Authors
    Mark L. Grimes; Wan-Jui Lee; Laurens van der Maaten; Paul Shannon
    License

    Attribution 4.0 (CC BY 4.0) https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    The interpretation of biological data sets is essential for generating hypotheses that guide research, yet modern methods of global analysis challenge our ability to discern meaningful patterns and then convey results in a way that can be easily appreciated. Proteomic data is especially challenging because mass spectrometry detectors often miss peptides in complex samples, resulting in sparsely populated data sets. Using the R programming language and techniques from the field of pattern recognition, we have devised methods to resolve and evaluate clusters of proteins related by their pattern of expression in different samples in proteomic data sets. We examined tyrosine phosphoproteomic data from lung cancer samples. We calculated dissimilarities between the proteins based on Pearson or Spearman correlations and on Euclidean distances, whilst dealing with large amounts of missing data. The dissimilarities were then used as feature vectors in clustering and visualization algorithms. The quality of the clusterings and visualizations were evaluated internally based on the primary data and externally based on gene ontology and protein interaction networks. The results show that t-distributed stochastic neighbor embedding (t-SNE) followed by minimum spanning tree methods groups sparse proteomic data into meaningful clusters more effectively than other methods such as k-means and classical multidimensional scaling. Furthermore, our results show that using a combination of Spearman correlation and Euclidean distance as a dissimilarity representation increases the resolution of clusters. Our analyses show that many clusters contain one or more tyrosine kinases and include known effectors as well as proteins with no known interactions. Visualizing these clusters as networks elucidated previously unknown tyrosine kinase signal transduction pathways that drive cancer. Our approach can be applied to other data types, and can be easily adopted because open source software packages are employed.
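    The paper's analysis was carried out in R; purely to illustrate the general approach described above (a Spearman-correlation-based dissimilarity fed into t-SNE), a rough Python equivalent might look like the following. The input file name, its layout (proteins as rows, samples as columns), and the perplexity value are assumptions for the sketch.

```python
# Rough Python illustration of the approach described above: build a
# Spearman-based dissimilarity between proteins, then embed with t-SNE.
# The input file and column layout are hypothetical; the paper used R.
import pandas as pd
from sklearn.manifold import TSNE

# Rows = proteins, columns = samples (phosphopeptide intensities),
# with missing values where peptides were not detected.
data = pd.read_csv("phospho_intensities.csv", index_col=0)

# Pairwise Spearman correlation between proteins; pandas computes it
# pairwise over the non-missing values, one simple way to cope with
# sparse data.
corr = data.T.corr(method="spearman")
dissimilarity = (1 - corr).fillna(1.0).values

embedding = TSNE(
    n_components=2,
    metric="precomputed",  # feed the dissimilarity matrix directly
    init="random",
    perplexity=30,
).fit_transform(dissimilarity)

print(embedding[:5])
```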

  15. MOESM3 of Wrangling environmental exposure data: guidance for getting the best information from your laboratory measurements

    • datasetcatalog.nlm.nih.gov
    • springernature.figshare.com
    Updated Nov 22, 2019
    Rudel, Ruthann; Udesky, Julia; Perovich, Laura; Dodson, Robin (2019). MOESM3 of Wrangling environmental exposure data: guidance for getting the best information from your laboratory measurements [Dataset]. https://datasetcatalog.nlm.nih.gov/dataset?q=0000171611
    Explore at:
    Dataset updated
    Nov 22, 2019
    Authors
    Rudel, Ruthann; Udesky, Julia; Perovich, Laura; Dodson, Robin
    Description

    Additional file 3: Example of report formatting request to send to the lab.

  16. Owl Observations and Photoperiod

    • kaggle.com
    zip
    Updated Dec 10, 2024
    Rebecca Ware (2024). Owl Observations and Photoperiod [Dataset]. https://www.kaggle.com/datasets/rebeccaware/dw-final-project-owls-and-sun
    Explore at:
    zip(1143982 bytes)Available download formats
    Dataset updated
    Dec 10, 2024
    Authors
    Rebecca Ware
    Description

    As a final project for Data Wrangling this fall (2024), we were tasked with using our new skills in collecting and importing data via web scraping, online API queries, and file import to create a relational data set of three tables, two of which are related. We also had to use our tidying skills to clean and transform the imported data to prepare it for visualization and analysis, focusing on column types and names, categorical variables, etc.

    An example notebook is provided with five examples of analysis using mutating joins, tidying, and/or ggplot.

  17. US stocks short volumes (FINRA)

    • kaggle.com
    zip
    Updated Mar 16, 2021
    DenzilG (2021). US stocks short volumes (FINRA) [Dataset]. https://www.kaggle.com/denzilg/finra-short-volumes-us-equities
    Explore at:
    zip(46837645 bytes)Available download formats
    Dataset updated
    Mar 16, 2021
    Authors
    DenzilG
    License

    https://www.usa.gov/government-works/

    Description

    Inspiration

    Originally, I was planning to use the Python Quandl API to get the data from here because it is already conveniently in time-series format. However, the data is split by reporting agency, which makes it difficult to get an accurate picture of the true short ratio because of missing data and difficulty in aggregation. So, I clicked on the source link, which turned out to be a gold mine because of their consolidated data. The only downside was that it was all in .txt format, so I had to use regex to parse it and scrape the data from the website, but that was a good refresher 😄.

    For better understanding of what the values in the text file mean, you can read this pdf from FINRA: https://www.finra.org/sites/default/files/2020-12/short-sale-volume-user-guide.pdf

    Functionality

    I condensed all the individual text files into a single .txt file so that it is much faster and less complex to write code against, compared to iterating through each individual .txt file. I created several functions for this dataset, so please check out my workbook "FINRA Short Ratio functions", where I describe step by step how I gathered and formatted the data so that you can understand and modify it to fit your needs. Note that the data only covers 1st April 2020 onwards (20200401 to 20210312 as of gathering the data), and the contents are separated by | delimiters, so I used \D (non-digit) in regex to avoid confusion with the (a|b) pattern syntax; a rough parsing sketch follows.
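    As a rough illustration of handling that kind of pipe-delimited text (not the author's exact workbook code), pandas can read it directly; the file name is a placeholder and the column names follow FINRA's published short sale volume file layout, so check them against the actual files.

```python
# Rough illustration of parsing pipe-delimited FINRA short volume data
# (not the author's exact workbook code). The file name is a placeholder
# and the column names follow FINRA's documented layout; verify against
# the actual files before relying on them.
import pandas as pd

df = pd.read_csv(
    "consolidated_short_volumes.txt",  # placeholder file name
    sep="|",
    dtype={"Date": str},
)

# Drop any trailing footer/record-count rows, which leave other columns empty.
df = df[df["Symbol"].notna()]

df["Date"] = pd.to_datetime(df["Date"], format="%Y%m%d")
df["short_ratio"] = df["ShortVolume"] / df["TotalVolume"]

# Example: average short ratio per ticker over the whole period.
print(df.groupby("Symbol")["short_ratio"].mean().sort_values(ascending=False).head())
```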

    If you need historical data before April 2020, you can use the Quandl database, but it has non-consolidated information and you have to make a reference call for each individual stock for each agency, so you would need to manually input tickers or get a list of all tickers through regex of the txt files or something like that 😅.

    Thoughts

    An excellent task to combine regular expressions (regex), web scraping, plotting, and data wrangling... see my notebook for an example with annotated workflow. Please comment and feel free to fork and modify my workbook to change the functionality. Possibly the short volumes can be combined with p/b ratios or price data to see the correlation --> can use seaborn pairgrid to visualise this for multiple stocks?

  18. Census Income dataset

    • kaggle.com
    zip
    Updated Oct 28, 2023
    tawfik elmetwally (2023). Census Income dataset [Dataset]. https://www.kaggle.com/datasets/tawfikelmetwally/census-income-dataset
    Explore at:
    zip(707150 bytes)Available download formats
    Dataset updated
    Oct 28, 2023
    Authors
    tawfik elmetwally
    Description

    This intermediate-level data set was extracted from the Census Bureau database. There are 48,842 instances, a mix of continuous and discrete attributes (train = 32,561, test = 16,281).

    The data set has 15 attributes, which include age, sex, education level, and other relevant details of a person. It will help you improve your skills in Exploratory Data Analysis, Data Wrangling, Data Visualization, and Classification Models.

    Feel free to explore the data set with multiple supervised and unsupervised learning techniques; a minimal classification sketch follows the column descriptions below. The following description gives more details on this data set:

    • age: the age of an individual.
    • workclass: The type of work or employment of an individual. It can have the following categories:
      • Private: Working in the private sector.
      • Self-emp-not-inc: Self-employed individuals who are not incorporated.
      • Self-emp-inc: Self-employed individuals who are incorporated.
      • Federal-gov: Working for the federal government.
      • Local-gov: Working for the local government.
      • State-gov: Working for the state government.
      • Without-pay: Not working and without pay.
      • Never-worked: Never worked before.
    • Final Weight: The weights on the CPS files are controlled to independent estimates of the civilian noninstitutional population of the US. These are prepared monthly for us by Population Division here at the Census Bureau. We use 3 sets of controls.

    These are: 1. A single cell estimate of the population 16+ for each state. 2. Controls for Hispanic Origin by age and sex. 3. Controls by Race, age and sex.

    We use all three sets of controls in our weighting program and "rake" through them 6 times so that by the end we come back to all the controls we used.

    People with similar demographic characteristics should have similar weights. There is one important caveat to remember about this statement. That is that since the CPS sample is actually a collection of 51 state samples, each with its own probability of selection, the statement only applies within state.

    • education: The highest level of education completed.
    • education-num: The number of years of education completed.
    • marital-status: The marital status.
    • occupation: Type of work performed by an individual.
    • relationship: The relationship status.
    • race: The race of an individual.
    • sex: The gender of an individual.
    • capital-gain: The amount of capital gain (financial profit).
    • capital-loss: The amount of capital loss an individual has incurred.
    • hours-per-week: The number of hours worked per week.
    • native-country: The country of origin or the native country.
    • income: The income level of an individual and serves as the target variable. It indicates whether the income is greater than $50,000 or less than or equal to $50,000, denoted as (>50K, <=50K).
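    Purely as an illustrative starting point (the file name and exact column labels are assumptions that should be checked against the actual CSV), a minimal classification baseline on this data might look like this:

```python
# Minimal illustrative baseline on the census income data. The file name
# and column labels are assumptions; adjust them to match the actual CSV.
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score

df = pd.read_csv("adult.csv")  # placeholder file name

# Simple wrangling: one-hot encode categorical columns, keep income as target.
X = pd.get_dummies(df.drop(columns=["income"]))
y = df["income"]

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42, stratify=y
)

clf = RandomForestClassifier(n_estimators=200, random_state=42)
clf.fit(X_train, y_train)
print("accuracy:", accuracy_score(y_test, clf.predict(X_test)))
```
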
  19. NBA data from 1996-2021

    • kaggle.com
    zip
    Updated Mar 2, 2022
    patrick (2022). NBA data from 1996-2021 [Dataset]. https://www.kaggle.com/patrickhallila1994/nba-data-from-basketball-reference
    Explore at:
    zip(178270973 bytes)Available download formats
    Dataset updated
    Mar 2, 2022
    Authors
    patrick
    License

    https://creativecommons.org/publicdomain/zero/1.0/

    Description

    Context

    These datasets are scraped from basketball-reference.com and include all NBA games between seasons 1996-97 to 2020-21. My goal in this project is to create datasets that can be used by beginners to practice basic data science skills, such as data wrangling and cleaning, using a setting in which it is easy to go to the raw data to understand surprising results. For example, outliers can be difficult to understand when working with a taxi dataset, whereas NBA has a large community of reporters, experts and game videos that may help you understand what is going on with the data.

    The web scrapers used to collect the data can be found at: https://github.com/PatrickH1994/nba_webscrapes

    Content

    The dataset will include all the information available at basketball-reference.com once the project is done.

    Current files:
    • Games
    • Play-by-play
    • Player stats
    • Salary data


  20. NiftyOptionChainDataset

    • kaggle.com
    zip
    Updated May 13, 2021
    Nikhil Ulahannan (2021). NiftyOptionChainDataset [Dataset]. https://www.kaggle.com/nikhilulahannan/niftyoptionchaindataset
    Explore at:
    zip(31131 bytes)Available download formats
    Dataset updated
    May 13, 2021
    Authors
    Nikhil Ulahannan
    Description

    Context

    Option chain data is a product of complex calculations yet is unorganised because of its inherently non-uniform structure, which makes it harder to use for data analytics.

    Content

    The dataset contains raw option chain data (calls, puts, IV, etc.) for three adjacent weeks in May 2021. An additional, minimally modified data file (clean-sample) is included for better use of the data explorer features.

    Acknowledgements

    National Stock Exchange (NSE) website.

    Inspiration

    Develop a code framework for data cleaning, wrangling, and visualization of option chain data. Perform exploratory data analysis (EDA). Analyse the evolution of option premiums, IV, etc., and their impact over a month. Derive insights for better straddles and strangles (option strategies).
