100+ datasets found
  1. Demo dataset for: SPACEc, a streamlined, interactive Python workflow for multiplexed image processing and analysis

    • data.niaid.nih.gov
    • datadryad.org
    zip
    Updated Jul 8, 2024
    Cite
    Yuqi Tan; Tim Kempchen (2024). Demo dataset for: SPACEc, a streamlined, interactive Python workflow for multiplexed image processing and analysis [Dataset]. http://doi.org/10.5061/dryad.brv15dvj1
    Explore at:
    zip (available download formats)
    Dataset updated
    Jul 8, 2024
    Dataset provided by
    Stanford University School of Medicine
    Authors
    Yuqi Tan; Tim Kempchen
    License

    https://spdx.org/licenses/CC0-1.0.html

    Description

    Multiplexed imaging technologies provide insights into complex tissue architectures. However, challenges arise due to software fragmentation with cumbersome data handoffs, inefficiencies in processing large images (8 to 40 gigabytes per image), and limited spatial analysis capabilities. To efficiently analyze multiplexed imaging data, we developed SPACEc, a scalable end-to-end Python solution that handles image extraction, cell segmentation, and data preprocessing, and incorporates machine-learning-enabled, multi-scale spatial analysis operated through a user-friendly and interactive interface. The demonstration dataset was derived from a previous analysis and contains TMA cores from a human tonsil and a tonsillitis sample acquired with the Akoya PhenoCycler-Fusion platform. The dataset can be used to test the workflow and establish it on a user’s system, or to familiarize oneself with the pipeline.

    Methods

    Tissue samples: Tonsil cores were extracted from a larger multi-tumor tissue microarray (TMA), which included a total of 66 unique tissues (51 malignant and semi-malignant tissues, as well as 15 non-malignant tissues). Representative tissue regions were annotated on corresponding hematoxylin and eosin (H&E)-stained sections by a board-certified surgical pathologist (S.Z.). The annotations were used to generate the 66 cores, each 1 mm in diameter. FFPE tissue blocks were retrieved from the tissue archives of the Institute of Pathology and the Department of Dermatology, University Medical Center Mainz, Germany. The multi-tumor TMA block was sectioned at 3 µm thickness onto SuperFrost Plus microscopy slides before being processed for CODEX multiplexed imaging as previously described.

    CODEX multiplexed imaging and processing: To run the CODEX machine, the slide was taken from the storage buffer and placed in PBS for 10 minutes to equilibrate. After drying the PBS with a tissue, a flow cell was sealed onto the tissue slide. The assembled slide and flow cell were then placed in a PhenoCycler Buffer made from 10X PhenoCycler Buffer & Additive for at least 10 minutes before starting the experiment. A 96-well reporter plate was prepared with each reporter corresponding to the correct barcoded antibody for each cycle, with up to 3 reporters per cycle per well. The fluorescence reporters were mixed with 1X PhenoCycler Buffer, Additive, nuclear-staining reagent, and assay reagent according to the manufacturer's instructions. With the reporter plate and the assembled slide and flow cell placed into the CODEX machine, the automated multiplexed imaging experiment was initiated. Each imaging cycle included steps for reporter binding, imaging of three fluorescent channels, and reporter stripping to prepare for the next cycle and set of markers. This was repeated until all markers were imaged. After the experiment, a .qptiff image file containing the individual antibody channels and the DAPI channel was obtained.

    Data preprocessing: Image stitching, drift compensation, deconvolution, and cycle concatenation are performed within the Akoya PhenoCycler software. The raw imaging output (377.442 nm per pixel for 20x CODEX) is first examined with QuPath software (https://qupath.github.io/) to inspect staining quality; markers with unexpected patterns or low signal-to-noise ratios were excluded from further analysis. A custom CODEX analysis pipeline was used to process all acquired CODEX data (scripts available upon request). The qptiff files were then converted into tiff files for tissue detection (watershed algorithm), cell segmentation, and input into SPACEc.
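    For orientation, a minimal, hypothetical sketch of the qptiff-to-tiff conversion step described above, assuming the pyramid is readable with the tifffile library; SPACEc's own converter and expected axis order or resolution level may differ.

```python
# Hypothetical conversion of a PhenoCycler .qptiff into a plain multi-channel
# .tiff for input into SPACEc. Assumes tifffile can read the qptiff pyramid.
import tifffile

def qptiff_to_tiff(src: str, dst: str, level: int = 0) -> None:
    with tifffile.TiffFile(src) as f:
        # series[0] is the full-resolution image; .levels holds the pyramid
        data = f.series[0].levels[level].asarray()
    tifffile.imwrite(dst, data, bigtiff=True)

qptiff_to_tiff("tonsil_core.qptiff", "tonsil_core.tiff")  # file names are examples
```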

  2. Table_1_Overview of data preprocessing for machine learning applications in human microbiome research

    • datasetcatalog.nlm.nih.gov
    • figshare.com
    • +1 more
    Updated Oct 5, 2023
    Cite
    Lopes, Marta B.; Marcos-Zambrano, Laura Judith; Simeon, Andrea; Berland, Magali; Hron, Karel; Stres, Blaž; Ibrahimi, Eliana; Dhamo, Xhilda; D’Elia, Domenica; Shigdel, Rajesh (2023). Table_1_Overview of data preprocessing for machine learning applications in human microbiome research.XLSX [Dataset]. https://datasetcatalog.nlm.nih.gov/dataset?q=0001030486
    Explore at:
    Dataset updated
    Oct 5, 2023
    Authors
    Lopes, Marta B.; Marcos-Zambrano, Laura Judith; Simeon, Andrea; Berland, Magali; Hron, Karel; Stres, Blaž; Ibrahimi, Eliana; Dhamo, Xhilda; D’Elia, Domenica; Shigdel, Rajesh
    Description

    Although metagenomic sequencing is now the preferred technique to study microbiome-host interactions, analyzing and interpreting microbiome sequencing data presents challenges primarily attributed to the statistical specificities of the data (e.g., sparsity, over-dispersion, compositionality, inter-variable dependency). This mini review explores preprocessing and transformation methods applied in recent human microbiome studies to address microbiome data analysis challenges. Our results indicate a limited adoption of transformation methods targeting the statistical characteristics of microbiome sequencing data. Instead, there is a prevalent usage of relative and normalization-based transformations that do not account for the specific attributes of microbiome data. The information on preprocessing and transformations applied to the data before analysis was incomplete or missing in many publications, leading to reproducibility concerns, comparability issues, and questionable results. We hope this mini review will provide researchers and newcomers to the field of human microbiome research with an up-to-date point of reference for various data transformation tools and assist them in choosing the most suitable transformation method based on their research questions, objectives, and data characteristics.
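    As one concrete example of a transformation that does target compositional structure, a centered log-ratio (CLR) transform can be sketched as follows; the pseudocount choice here is illustrative, not a recommendation from the review.

```python
# Illustrative centered log-ratio (CLR) transform for a samples x taxa count
# matrix; the pseudocount is a common workaround for log(0) on sparse data.
import numpy as np

def clr(counts: np.ndarray, pseudocount: float = 0.5) -> np.ndarray:
    x = counts + pseudocount
    log_x = np.log(x)
    # subtracting the row mean of logs divides by the geometric mean
    return log_x - log_x.mean(axis=1, keepdims=True)

counts = np.array([[10, 0, 5], [3, 7, 0]], dtype=float)
print(clr(counts))
```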

  3. BudgetWise Personal Finance Dataset

    • kaggle.com
    zip
    Updated Sep 29, 2025
    Cite
    Mohammed Arfath R (2025). BudgetWise Personal Finance Dataset [Dataset]. https://www.kaggle.com/datasets/mohammedarfathr/budgetwise-personal-finance-dataset
    Explore at:
    zip (589253 bytes; available download formats)
    Dataset updated
    Sep 29, 2025
    Authors
    Mohammed Arfath R
    License

    https://creativecommons.org/publicdomain/zero/1.0/

    Description

    🎯 Dataset Overview

    An intentionally messy synthetic personal finance dataset designed for practicing real-world data preprocessing challenges before building AI-based expense forecasting models.

    💡 Context & Inspiration

    Created for BudgetWise - an AI expense forecasting tool. This dataset simulates real-world financial transaction data with all the messiness data scientists encounter in production: inconsistent formats, typos, duplicates, outliers, and missing values.

    🔍 What Makes This Dataset Special?

    • Realistic Data Quality Issues: ~30% of data contains intentional errors
    • Class Imbalance: 85% expenses vs 15% income (perfect for SMOTE practice)
    • Multi-format Dates: 4 different date formats mixed throughout
    • Currency Chaos: Mixed symbols (₹, $, Rs.) in amounts
    • Text Inconsistencies: Typos, case variations, and duplicates

    📊 Key Statistics

    • 15,000+ transactions
    • 150 unique users
    • 4-year period (2021-2024)
    • 9 feature columns
    • ~6% duplicate rows
    • ~5% missing values per column

    🎓 Learning Opportunities

    Perfect for practicing (see the sketch below):

    • Data cleaning & normalization
    • Handling missing values
    • Date parsing & time-series analysis
    • Currency extraction & conversion
    • Outlier detection
    • Feature engineering
    • Class balancing (SMOTE)
    • Text standardization
    • Duplicate detection
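    A hedged pandas sketch of the first few cleaning steps; the file and column names are assumptions, and `format="mixed"` requires pandas 2.0 or later.

```python
import pandas as pd

df = pd.read_csv("budgetwise_transactions.csv")  # hypothetical file name

# Multi-format dates: parse per element, coercing unparseable entries to NaT
df["purchase_date"] = pd.to_datetime(df["purchase_date"], format="mixed", errors="coerce")

# Currency chaos: strip mixed symbols (₹, $, Rs.) and separators before casting
df["amount"] = (
    df["amount"].astype(str)
      .str.replace(r"Rs\.?|[₹$,]", "", regex=True)
      .str.strip()
)
df["amount"] = pd.to_numeric(df["amount"], errors="coerce")

# Duplicates and missing values
df = df.drop_duplicates()
df["amount"] = df["amount"].fillna(df["amount"].median())
```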

  4. Sales and workload in retail industry

    • kaggle.com
    zip
    Updated Dec 11, 2019
    Cite
    Dennis Gluesenkamp (2019). Sales and workload in retail industry [Dataset]. https://www.kaggle.com/dgluesen/sales-and-workload-data-from-retail-industry
    Explore at:
    zip (454426 bytes; available download formats)
    Dataset updated
    Dec 11, 2019
    Authors
    Dennis Gluesenkamp
    Description

    Context

    Raw data for real analytical use cases in many industries and companies is frequently provided in Excel-based form. These files usually cannot be processed directly in machine learning models but must first be cleaned and preprocessed, a procedure with many potential pitfalls. This makes data preprocessing a significant time factor in the daily work of a data scientist.

    The Excel spreadsheet presented here is closely modeled on a real case but contains only simulated figures to protect business data and results. The form and structure of the file correspond to a real case that a data scientist might encounter in a company, for example as a download from a financial controlling system such as SAP.

    Content

    The data includes information about sold goods or product units, the associated turnover, and hours worked. This information is grouped by month, store, and department of the retailer. Moreover, information about the sales area of each department as well as the opening hours of the store is provided.

    Possible objectives

    The following goals of data cleansing might be addressed:

    • Import the Excel-file
    • Inspect the dataset
    • Check data types and do meaningful modifications
    • Handle missing values/data gaps
    • Find and solve data inconsistencies
    • Rename columns for improved usage
    • Join tables to a single one

    Furthermore, the data can be investigated with regard to correlations between different features and/or a regression model.
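    A minimal sketch of these cleansing goals in pandas, assuming two sheets named "sales" and "workload" that share a (month, store, department) key; the actual sheet and column layout of the file may differ.

```python
import pandas as pd

sales = pd.read_excel("retail.xlsx", sheet_name="sales")        # import the Excel file
workload = pd.read_excel("retail.xlsx", sheet_name="workload")

sales.info()                                  # inspect dtypes and missing values
sales = sales.rename(columns=str.lower)       # rename columns for improved usage
workload = workload.rename(columns=str.lower)

# Join tables to a single one on the shared key, then fix types and gaps
merged = sales.merge(workload, on=["month", "store", "department"], how="inner")
merged = merged.convert_dtypes()              # meaningful dtype modifications
merged = merged.dropna(subset=["turnover"])   # or impute, depending on the gap
```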

    License

    GNU General Public License v3.0 - https://www.gnu.org/licenses/gpl-3.0.en.html

  5. Data from: Enriching time series datasets using Nonparametric kernel regression to improve forecasting accuracy

    • figshare.com
    pdf
    Updated May 31, 2023
    Cite
    Mohamad Ivan Fanany (2023). Enriching time series datasets using Nonparametric kernel regression to improve forecasting accuracy [Dataset]. http://doi.org/10.6084/m9.figshare.1609661.v1
    Explore at:
    pdf (available download formats)
    Dataset updated
    May 31, 2023
    Dataset provided by
    Figshare (http://figshare.com/)
    Authors
    Mohamad Ivan Fanany
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Improving the accuracy of predictions of future values based on past and current observations has been pursued by enhancing prediction methods, combining those methods, or performing data pre-processing. In this paper, another approach is taken, namely increasing the number of inputs in the dataset. This approach is useful especially for shorter time series. By filling in the in-between values of the time series, the size of the training set can be increased, thus increasing the generalization capability of the predictor. The algorithm used to make predictions is a neural network, as it is widely used in the literature for time series tasks. For comparison, support vector regression is also employed. The dataset used in the experiment is the frequency of USPTO patents and PubMed scientific publications in the field of health, namely on apnea, arrhythmia, and sleep stages. Another time series dataset designated for the NN3 Competition in the field of transportation is also used for benchmarking. The experimental results show that prediction performance can be significantly increased by filling in in-between data in the time series. Furthermore, detrending and deseasonalization, which separate the data into trend, seasonal, and stationary time series, also improve prediction performance on both the original and filled datasets. The optimal enlargement in this experiment is about five times the length of the original dataset.
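    The core idea, densifying a short series by estimating in-between values, can be sketched with a Nadaraya-Watson kernel regression estimator; the bandwidth, the toy series, and the 5x densification factor below are illustrative only, not the paper's exact setup.

```python
# Toy Nadaraya-Watson smoother used to fill in-between values of a short series.
import numpy as np

def nadaraya_watson(t, y, t_new, h):
    # Gaussian kernel weights between query points and observed time stamps
    w = np.exp(-0.5 * ((t_new[:, None] - t[None, :]) / h) ** 2)
    return (w @ y) / w.sum(axis=1)

t = np.arange(24, dtype=float)             # two years of monthly observations
y = np.sin(2 * np.pi * t / 12) + 0.1 * np.random.randn(24)
t_dense = np.linspace(0.0, 23.0, 24 * 5)   # about five times more points
y_dense = nadaraya_watson(t, y, t_dense, h=0.75)
```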

  6. Data from: A Python-based pipeline for preprocessing LC-MS data for untargeted metabolomics workflows

    • data.niaid.nih.gov
    xml
    Updated Nov 21, 2020
    Cite
    NICOLAS ZABALEGUI (2020). A Python-based pipeline for preprocessing LC-MS data for untargeted metabolomics workflows [Dataset]. https://data.niaid.nih.gov/resources?id=mtbls1919
    Explore at:
    xml (available download formats)
    Dataset updated
    Nov 21, 2020
    Dataset provided by
    CIBION-CONICET
    Authors
    NICOLAS ZABALEGUI
    Variables measured
    Metabolomics
    Description

    Preprocessing data in a reproducible and robust way is one of the current challenges in untargeted metabolomics workflows. Data curation in liquid chromatography-mass spectrometry (LC-MS) involves the removal of unwanted features (retention time, m/z pairs) to retain only high-quality data for subsequent analysis and interpretation. The present work introduces a package for the Python programming language for preprocessing LC-MS data for quality control procedures in untargeted metabolomics workflows. It is a versatile strategy that can be customized or fit for purpose according to the specific metabolomics application. It allows performing quality control procedures to ensure accuracy and reliability in LC-MS measurements, and it allows preprocessing metabolomics data to obtain cleaned matrices for subsequent statistical analysis. The capabilities of the package are showcased with pipelines for an LC-MS system suitability check, system conditioning, signal drift evaluation, and data curation. These applications were implemented to preprocess data corresponding to a new suite of candidate plasma reference materials developed by the National Institute of Standards and Technology (NIST; hypertriglyceridemic, diabetic, and African-American plasma pools) to be used in untargeted metabolomics studies, in addition to NIST SRM 1950 – Metabolites in Frozen Human Plasma. The package offers a rapid and reproducible workflow that can be used in an automated or semi-automated fashion, and it is an open and free tool available to all users.
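    The abstract does not show the package's API, so the following is only an illustration of one curation rule of the kind described: dropping features whose relative standard deviation across QC injections exceeds a threshold. Function and column semantics are assumptions.

```python
# Illustrative QC-based curation rule (not the package's actual API).
import pandas as pd

def curate_features(matrix: pd.DataFrame, qc_rows: list, max_rsd: float = 0.2) -> pd.DataFrame:
    """matrix: samples x features intensity table; qc_rows: QC sample labels."""
    qc = matrix.loc[qc_rows]
    rsd = qc.std(axis=0) / qc.mean(axis=0)   # per-feature variability in QC runs
    return matrix.loc[:, rsd <= max_rsd]     # keep only stable features
```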

  7. RNA data preprocessing toolkits

    • kaggle.com
    zip
    Updated Jan 18, 2022
    Cite
    Neuron Engineer (2022). RNA data preprocessing toolkits [Dataset]. https://www.kaggle.com/datasets/ratthachat/rna-data-preprocessing-toolkits
    Explore at:
    zip (82487484 bytes; available download formats)
    Dataset updated
    Jan 18, 2022
    Authors
    Neuron Engineer
    Description

    Usage

    This is a collection of tools that can be used to extract features from an RNA string for deep learning models. Please see https://www.kaggle.com/ratthachat/preprocessing-deep-learning-input-from-rna-string for full details.
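    As a taste of what such feature extraction involves, here is a hedged sketch of the most basic encoding, a one-hot matrix over A/C/G/U; the toolkit itself bundles richer extractors than this.

```python
# Basic one-hot encoding of an RNA string; unknown symbols become all-zero rows.
import numpy as np

BASES = "ACGU"

def one_hot_rna(seq: str) -> np.ndarray:
    idx = {b: i for i, b in enumerate(BASES)}
    mat = np.zeros((len(seq), len(BASES)), dtype=np.float32)
    for pos, base in enumerate(seq.upper()):
        if base in idx:
            mat[pos, idx[base]] = 1.0
    return mat

print(one_hot_rna("GGAAACUC").shape)  # (8, 4)
```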

  8. Data Analysis for the Systematic Literature Review of DL4SE

    • data.niaid.nih.gov
    • data-staging.niaid.nih.gov
    Updated Jul 19, 2024
    Cite
    Cody Watson; Nathan Cooper; David Nader; Kevin Moran; Denys Poshyvanyk (2024). Data Analysis for the Systematic Literature Review of DL4SE [Dataset]. https://data.niaid.nih.gov/resources?id=zenodo_4768586
    Explore at:
    Dataset updated
    Jul 19, 2024
    Dataset provided by
    Washington and Lee University
    College of William and Mary
    Authors
    Cody Watson; Nathan Cooper; David Nader; Kevin Moran; Denys Poshyvanyk
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Data Analysis is the process that supports decision-making and informs arguments in empirical studies. Descriptive statistics, Exploratory Data Analysis (EDA), and Confirmatory Data Analysis (CDA) are the approaches that compose Data Analysis (Xia & Gong, 2014). An EDA comprises a set of statistical and data mining procedures to describe data. We ran an EDA to provide statistical facts and inform conclusions; the mined facts support arguments that shaped the Systematic Literature Review of DL4SE.

    The Systematic Literature Review of DL4SE requires formal statistical modeling to refine the answers for the proposed research questions and formulate new hypotheses to be addressed in the future. Hence, we introduce DL4SE-DA, a set of statistical processes and data mining pipelines that uncover hidden relationships among Deep Learning reported literature in Software Engineering. Such hidden relationships are collected and analyzed to illustrate the state-of-the-art of DL techniques employed in the software engineering context.

    Our DL4SE-DA is a simplified version of classical Knowledge Discovery in Databases, or KDD (Fayyad et al., 1996). The KDD process extracts knowledge from a DL4SE structured database. This structured database was the product of multiple iterations of data gathering and collection from the inspected literature. The KDD process involves five stages:

    Selection. This stage was led by the taxonomy process explained in section xx of the paper. After collecting all the papers and creating the taxonomies, we organized the data into the 35 features or attributes found in the repository; in effect, we manually engineered features from the DL4SE papers. Some of the features are venue, year published, type of paper, metrics, data-scale, type of tuning, learning algorithm, SE data, and so on.

    Preprocessing. The preprocessing applied was transforming the features into the correct type (nominal), removing outliers (papers that do not belong to the DL4SE), and re-inspecting the papers to extract missing information produced by the normalization process. For instance, we normalize the feature “metrics” into “MRR”, “ROC or AUC”, “BLEU Score”, “Accuracy”, “Precision”, “Recall”, “F1 Measure”, and “Other Metrics”. “Other Metrics” refers to unconventional metrics found during the extraction. Similarly, the same normalization was applied to other features like “SE Data” and “Reproducibility Types”. This separation into more detailed classes contributes to a better understanding and classification of the paper by the data mining tasks or methods.

    Transformation. In this stage, we did not apply any data transformation method except for the clustering analysis. We performed a Principal Component Analysis (PCA) to reduce the 35 features to 2 components for visualization purposes. Furthermore, PCA also allowed us to identify the number of clusters that exhibits the maximum reduction in variance; in other words, it helped us to identify the number of clusters to be used when tuning the explainable models.

    Data Mining. In this stage, we used three distinct data mining tasks: Correlation Analysis, Association Rule Learning, and Clustering. We decided that the goal of the KDD process should be oriented to uncover hidden relationships among the extracted features (correlations and association rules) and to categorize the DL4SE papers for a better segmentation of the state-of-the-art (clustering). A clear explanation is provided in the subsection “Data Mining Tasks for the SLR of DL4SE”.

    Interpretation/Evaluation. We used the discovered knowledge to automatically find patterns in our papers that resemble “actionable knowledge”. This actionable knowledge was generated by conducting a reasoning process on the data mining outcomes, which produced an argument support analysis.

    We used RapidMiner as our software tool to conduct the data analysis. The procedures and pipelines were published in our repository.

    Overview of the most meaningful Association Rules. Rectangles are both Premises and Conclusions. An arrow connecting a Premise with a Conclusion implies that given some premise, the conclusion is associated. E.g., Given that an author used Supervised Learning, we can conclude that their approach is irreproducible with a certain Support and Confidence.

    Support = (number of occurrences in which the statement is true) / (total number of statements)
    Confidence = (support of the statement) / (number of occurrences of the premise)
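    These definitions translate directly into code; the toy table of per-paper boolean attributes below is illustrative, not the study's actual data.

```python
# Toy computation of support and confidence for the example rule
# "Supervised Learning => irreproducible".
import pandas as pd

papers = pd.DataFrame({
    "supervised_learning": [True, True, True, False, True],
    "irreproducible":      [True, True, False, False, True],
})

both = papers["supervised_learning"] & papers["irreproducible"]
support = both.mean()                                          # P(premise and conclusion)
confidence = both.sum() / papers["supervised_learning"].sum()  # P(conclusion | premise)
print(f"support={support:.2f}, confidence={confidence:.2f}")   # 0.60, 0.75
```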

  9. Global Data Prep Market By Platform (Self-Service Data Prep, Data Integration), By Tools (Data Curation, Data Cataloging, Data Quality, Data Ingestion, Data Governance), By Geographic Scope and Forecast

    • verifiedmarketresearch.com
    Updated Sep 29, 2024
    Cite
    VERIFIED MARKET RESEARCH (2024). Global Data Prep Market By Platform (Self-Service Data Prep, Data Integration), By Tools (Data Curation, Data Cataloging, Data Quality, Data Ingestion, Data Governance), By Geographic Scope and Forecast [Dataset]. https://www.verifiedmarketresearch.com/product/data-prep-market/
    Explore at:
    Dataset updated
    Sep 29, 2024
    Dataset provided by
    Verified Market Research (https://www.verifiedmarketresearch.com/)
    Authors
    VERIFIED MARKET RESEARCH
    License

    https://www.verifiedmarketresearch.com/privacy-policy/

    Time period covered
    2024 - 2031
    Area covered
    Global
    Description

    Data Prep Market size was valued at USD 4.02 Billion in 2024 and is projected to reach USD 16.12 Billion by 2031, growing at a CAGR of 19% from 2024 to 2031.

    Global Data Prep Market Drivers

    • Increasing Demand for Data Analytics: Businesses across all industries increasingly rely on data-driven decision-making, which requires clean, reliable, and useful information. This rising reliance on data increases the demand for better data preparation technologies to transform raw data into meaningful insights.
    • Growing Volume and Complexity of Data: Data generation continues unabated, with information streaming in from a variety of sources. This data frequently lacks consistency or organization, so effective data preparation is critical for accurate analysis; powerful technologies are required to ensure quality and coherence across such a large and complicated data landscape.
    • Increased Use of Self-Service Data Preparation Tools: User-friendly, self-service data preparation solutions are gaining popularity because they enable non-technical users to access, clean, and prepare data independently. This democratizes data access, decreases reliance on IT departments, and speeds up the data analysis process, making data-driven insights more available to all business units.
    • Integration of AI and ML: Advanced data preparation technologies progressively use AI and machine learning capabilities to improve their effectiveness. These technologies automate repetitive activities, detect data quality issues, and recommend data transformations, increasing productivity and accuracy and making the data preparation process faster and more reliable.
    • Regulatory Compliance Requirements: Many businesses are subject to strict regulations governing data security and privacy. Data preparation technologies play an important role in ensuring that data meets these compliance requirements; by providing functions that help manage and protect sensitive information, they help firms navigate complex regulatory climates.
    • Cloud-based Data Management: The transition to cloud-based data storage and analytics platforms requires data preparation solutions that work smoothly with cloud-based data sources and integrate with a variety of cloud settings, supporting effective data administration and preparation on modern data infrastructure.

  10. Ecommerce Dataset for Data Analysis

    • kaggle.com
    zip
    Updated Sep 19, 2024
    Cite
    Shrishti Manja (2024). Ecommerce Dataset for Data Analysis [Dataset]. https://www.kaggle.com/datasets/shrishtimanja/ecommerce-dataset-for-data-analysis/code
    Explore at:
    zip (2028853 bytes; available download formats)
    Dataset updated
    Sep 19, 2024
    Authors
    Shrishti Manja
    Description

    This dataset contains 55,000 entries of synthetic customer transactions, generated using Python's Faker library. The goal behind creating this dataset was to provide a resource for learners like myself to explore, analyze, and apply various data analysis techniques in a context that closely mimics real-world data.

    About the Dataset:

    • CID (Customer ID): A unique identifier for each customer.
    • TID (Transaction ID): A unique identifier for each transaction.
    • Gender: The gender of the customer, categorized as Male or Female.
    • Age Group: Age group of the customer, divided into several ranges.
    • Purchase Date: The timestamp of when the transaction took place.
    • Product Category: The category of the product purchased, such as Electronics, Apparel, etc.
    • Discount Availed: Indicates whether the customer availed any discount (Yes/No).
    • Discount Name: Name of the discount applied (e.g., FESTIVE50).
    • Discount Amount (INR): The amount of discount availed by the customer.
    • Gross Amount: The total amount before applying any discount.
    • Net Amount: The final amount after applying the discount.
    • Purchase Method: The payment method used (e.g., Credit Card, Debit Card, etc.).
    • Location: The city where the purchase took place.

    Use Cases:

    1. Exploratory Data Analysis (EDA): This dataset is ideal for conducting EDA, allowing users to practice techniques such as summary statistics, visualizations, and identifying patterns within the data (see the sketch below).
    2. Data Preprocessing and Cleaning: Learners can work on handling missing data, encoding categorical variables, and normalizing numerical values to prepare the dataset for analysis.
    3. Data Visualization: Use tools like Python’s Matplotlib, Seaborn, or Power BI to visualize purchasing trends, customer demographics, or the impact of discounts on purchase amounts.
    4. Machine Learning Applications: After applying feature engineering, this dataset is suitable for supervised learning models, such as predicting whether a customer will avail a discount or forecasting purchase amounts based on the input features.
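    A quick-start sketch for use cases 1 and 2, using the column names from "About the Dataset"; the file name is an assumption.

```python
import pandas as pd

df = pd.read_csv("ecommerce_transactions.csv")  # hypothetical file name

df["Purchase Date"] = pd.to_datetime(df["Purchase Date"], errors="coerce")
df["Discount Availed"] = df["Discount Availed"].map({"Yes": True, "No": False})

# Summary statistics and a simple pattern check
print(df[["Gross Amount", "Discount Amount (INR)", "Net Amount"]].describe())
print(df.groupby("Product Category")["Net Amount"].mean().sort_values())
```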

    This dataset provides an excellent sandbox for honing skills in data analysis, machine learning, and visualization in a structured but flexible manner.

    This is not a real dataset. It was generated using Python's Faker library for the sole purpose of learning.

  11. Method-level statistics of the preprocessed ELFF datasets.

    • plos.figshare.com
    • figshare.com
    xls
    Updated Jun 1, 2023
    Cite
    Ebubeogu Amarachukwu Felix; Sai Peck Lee (2023). Method-level statistics of the preprocessed ELFF datasets. [Dataset]. http://doi.org/10.1371/journal.pone.0229131.t001
    Explore at:
    xls (available download formats)
    Dataset updated
    Jun 1, 2023
    Dataset provided by
    PLOS (http://plos.org/)
    Authors
    Ebubeogu Amarachukwu Felix; Sai Peck Lee
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Method-level statistics of the preprocessed ELFF datasets.

  12. Car Highway Dataset

    • universe.roboflow.com
    zip
    Updated Sep 13, 2023
    Cite
    Sallar (2023). Car Highway Dataset [Dataset]. https://universe.roboflow.com/sallar/car-highway/dataset/1
    Explore at:
    zip (available download formats)
    Dataset updated
    Sep 13, 2023
    Dataset authored and provided by
    Sallar
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Variables measured
    Vehicles Bounding Boxes
    Description

    Car-Highway Data Annotation Project

    Introduction

    In this project, we aim to annotate car images captured on highways. The annotated data will be used to train machine learning models for various computer vision tasks, such as object detection and classification.

    Project Goals

    • Collect a diverse dataset of car images from highway scenes.
    • Annotate the dataset to identify and label cars within each image.
    • Organize and format the annotated data for machine learning model training.

    Tools and Technologies

    For this project, we will be using Roboflow, a powerful platform for data annotation and preprocessing. Roboflow simplifies the annotation process and provides tools for data augmentation and transformation.

    Annotation Process

    1. Upload the raw car images to the Roboflow platform.
    2. Use the annotation tools in Roboflow to draw bounding boxes around each car in the images.
    3. Label each bounding box with the corresponding class (e.g., car).
    4. Review and validate the annotations for accuracy.

    Data Augmentation

    Roboflow offers data augmentation capabilities, such as rotation, flipping, and resizing. These augmentations can help improve the model's robustness.

    Data Export

    Once the data is annotated and augmented, Roboflow allows us to export the dataset in various formats suitable for training machine learning models, such as YOLO, COCO, or TensorFlow Record.
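    For reference, the YOLO format mentioned above stores one normalized "class x_center y_center width height" line per box; a minimal reader might look like the sketch below (the label path is hypothetical).

```python
# Minimal reader for YOLO-format label files.
from pathlib import Path

def read_yolo_labels(path: str) -> list:
    boxes = []
    for line in Path(path).read_text().splitlines():
        cls, xc, yc, w, h = line.split()
        boxes.append({"class": int(cls), "x": float(xc), "y": float(yc),
                      "w": float(w), "h": float(h)})
    return boxes

for box in read_yolo_labels("labels/frame_0001.txt"):  # hypothetical path
    print(box)
```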

    Milestones

    1. Data Collection and Preprocessing
    2. Annotation of Car Images
    3. Data Augmentation
    4. Data Export
    5. Model Training

    Conclusion

    By completing this project, we will have a well-annotated dataset ready for training machine learning models. This dataset can be used for a wide range of applications in computer vision, including car detection and tracking on highways.

  13. Data Balance Optimization AI Market Research Report 2033

    • dataintelo.com
    csv, pdf, pptx
    Updated Sep 30, 2025
    Cite
    Dataintelo (2025). Data Balance Optimization AI Market Research Report 2033 [Dataset]. https://dataintelo.com/report/data-balance-optimization-ai-market
    Explore at:
    pptx, csv, pdf (available download formats)
    Dataset updated
    Sep 30, 2025
    Dataset authored and provided by
    Dataintelo
    License

    https://dataintelo.com/privacy-and-policy

    Time period covered
    2024 - 2032
    Area covered
    Global
    Description

    Data Balance Optimization AI Market Outlook

    According to our latest research, the global Data Balance Optimization AI market size in 2024 stands at USD 2.18 billion, with a robust compound annual growth rate (CAGR) of 23.7% projected from 2025 to 2033. By the end of 2033, the market is forecasted to reach an impressive USD 17.3 billion. This substantial growth is driven by the surging demand for AI-powered analytics and increasing adoption of data-intensive applications across industries, establishing Data Balance Optimization AI as a critical enabler for enterprise digital transformation.

    One of the primary growth factors fueling the Data Balance Optimization AI market is the exponential surge in data generation across various sectors. Organizations are increasingly leveraging digital technologies, IoT devices, and cloud platforms, resulting in vast, complex, and often imbalanced datasets. The need for advanced AI solutions that can optimize, balance, and manage these datasets has become paramount to ensure high-quality analytics, accurate machine learning models, and improved business decision-making. Enterprises recognize that imbalanced data can severely skew AI outcomes, leading to biases and reduced operational efficiency. Consequently, the demand for Data Balance Optimization AI tools is accelerating as businesses strive to extract actionable insights from diverse and voluminous data sources.

    Another critical driver is the rapid evolution of AI and machine learning algorithms, which require balanced and high-integrity datasets for optimal performance. As industries such as healthcare, finance, and retail increasingly rely on predictive analytics and automation, the integrity of underlying data becomes a focal point. Data Balance Optimization AI technologies are being integrated into data pipelines to automatically detect and correct imbalances, ensuring that AI models are trained on representative and unbiased data. This not only enhances model accuracy but also helps organizations comply with stringent regulatory requirements related to data fairness and transparency, further reinforcing the market’s upward trajectory.

    The proliferation of cloud computing and the shift toward hybrid IT infrastructures are also significant contributors to market growth. Cloud-based Data Balance Optimization AI solutions offer scalability, flexibility, and cost-effectiveness, making them attractive to both large enterprises and small and medium-sized businesses. These solutions facilitate seamless integration with existing data management systems, enabling real-time optimization and balancing of data across distributed environments. Furthermore, the rise of data-centric business models in sectors such as e-commerce, telecommunications, and manufacturing is amplifying the need for robust data optimization frameworks, propelling further adoption of Data Balance Optimization AI technologies globally.

    From a regional perspective, North America currently dominates the Data Balance Optimization AI market, accounting for the largest share due to its advanced technological infrastructure, high investment in AI research, and the presence of leading technology firms. However, the Asia Pacific region is poised to experience the fastest growth during the forecast period, driven by rapid digitalization, expanding IT ecosystems, and increasing adoption of AI-powered solutions in emerging economies such as China, India, and Southeast Asia. Europe also presents significant opportunities, particularly in regulated industries such as finance and healthcare, where data integrity and compliance are paramount. Collectively, these regional trends underscore the global momentum behind Data Balance Optimization AI adoption.

    Component Analysis

    The Data Balance Optimization AI market by component is segmented into software, hardware, and services, each playing a pivotal role in the overall ecosystem. The software segment commands the largest market share, driven by the continuous evolution of AI algorithms, data preprocessing tools, and machine learning frameworks designed to address data imbalance challenges. Organizations are increasingly investing in advanced software solutions that automate data balancing, cleansing, and augmentation processes, ensuring the reliability of AI-driven analytics. These software platforms often integrate seamlessly with existing data management systems, providing us

  14. Average classifier performance on the ELFF datasets before data preprocessing

    • plos.figshare.com
    xls
    Updated Jun 1, 2023
    Cite
    Ebubeogu Amarachukwu Felix; Sai Peck Lee (2023). Average classifier performance on the ELFF datasets before data preprocessing. [Dataset]. http://doi.org/10.1371/journal.pone.0229131.t005
    Explore at:
    xls (available download formats)
    Dataset updated
    Jun 1, 2023
    Dataset provided by
    PLOS (http://plos.org/)
    Authors
    Ebubeogu Amarachukwu Felix; Sai Peck Lee
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Average classifier performance on the ELFF datasets before data preprocessing.

  15. Data Hp_preprocessing Train+val Dataset

    • universe.roboflow.com
    zip
    Updated May 7, 2022
    Cite
    Silvyaniza Briliananda (2022). Data Hp_preprocessing Train+val Dataset [Dataset]. https://universe.roboflow.com/silvyaniza-briliananda/data-hp_preprocessing-train-val/dataset/1
    Explore at:
    zip (available download formats)
    Dataset updated
    May 7, 2022
    Dataset authored and provided by
    Silvyaniza Briliananda
    License

    CC0 1.0 Universal Public Domain Dedication: https://creativecommons.org/publicdomain/zero/1.0/
    License information was derived automatically

    Variables measured
    Tool Wear
    Description

    Data HP_Preprocessing Train+Val

    ## Overview
    
    Data HP_Preprocessing Train+Val is a dataset for classification tasks - it contains Tool Wear annotations for 223 images.
    
    ## Getting Started
    
    You can download this dataset for use within your own projects, or fork it into a workspace on Roboflow to create your own model.
    
      ## License
    
      This dataset is available under the [CC0 1.0 Public Domain license](https://creativecommons.org/publicdomain/zero/1.0/).
    
  16. Pump sensor data (preprocess missing values)

    • kaggle.com
    zip
    Updated May 20, 2022
    Cite
    spin fit (2022). Pump sensor data (preprocess missing values) [Dataset]. https://www.kaggle.com/kimalpha/pump-sensor-data-preprocess-missing-values
    Explore at:
    zip (69063256 bytes; available download formats)
    Dataset updated
    May 20, 2022
    Authors
    spin fit
    Description

    This is a version of the pump sensor dataset with missing values already processed; see the original source and its pre-processing kernel for details.
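    The page does not spell out which imputation method was used, so the following is just one standard recipe for sensor streams: time-ordered interpolation with forward/backward fill for edge gaps. The file and column names are assumptions.

```python
import pandas as pd

df = pd.read_csv("sensor.csv", parse_dates=["timestamp"])  # assumed columns
df = df.set_index("timestamp").sort_index()

# Interpolate along the time axis, then fill leading/trailing gaps
sensor_cols = [c for c in df.columns if c.startswith("sensor_")]
df[sensor_cols] = df[sensor_cols].interpolate(method="time").ffill().bfill()
```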

  17. US Deep Learning Market Analysis, Size, and Forecast 2025-2029

    • technavio.com
    pdf
    Updated Jul 8, 2025
    Cite
    Technavio (2025). US Deep Learning Market Analysis, Size, and Forecast 2025-2029 [Dataset]. https://www.technavio.com/report/us-deep-learning-market-industry-analysis
    Explore at:
    pdf (available download formats)
    Dataset updated
    Jul 8, 2025
    Dataset provided by
    TechNavio
    Authors
    Technavio
    License

    https://www.technavio.com/content/privacy-notice

    Time period covered
    2025 - 2029
    Description


    US Deep Learning Market Size 2025-2029

    The deep learning market size in the US is forecast to increase by USD 5.02 billion at a CAGR of 30.1% between 2024 and 2029.

    The deep learning market is experiencing robust growth, driven by the increasing adoption of artificial intelligence (AI) across industries for advanced solutions. This trend is fueled by the availability of vast amounts of data, a key requirement for deep learning algorithms to function effectively. Industry-specific solutions are gaining traction as businesses seek to leverage deep learning for use cases such as image and speech recognition, fraud detection, and predictive maintenance. At the same time, intuitive data visualization tools are simplifying complex neural network outputs, helping stakeholders understand and validate insights.
    
    
    However, challenges remain, including the need for powerful computing resources, data privacy concerns, and the high cost of implementing and maintaining deep learning systems. Despite these hurdles, the market's potential for innovation and disruption is immense, making it an exciting space for businesses to explore further. Semi-supervised learning, data labeling, and data cleaning facilitate efficient training of deep learning models. Cloud analytics is another significant trend, as companies seek to leverage cloud computing for cost savings and scalability. 
    

    What will be the Size of the market During the Forecast Period?


    Deep learning, a subset of machine learning, continues to shape industries by enabling advanced applications such as image and speech recognition, text generation, and pattern recognition. Reinforcement learning, a type of deep learning, gains traction, with deep reinforcement learning leading the charge. Anomaly detection, a crucial application of unsupervised learning, safeguards systems against security vulnerabilities. Ethical implications and fairness considerations are increasingly important in deep learning, with emphasis on explainable AI and model interpretability. Graph neural networks and attention mechanisms enhance data preprocessing for sequential data modeling and object detection. Time series forecasting and dataset creation further expand deep learning's reach, while privacy preservation and bias mitigation ensure responsible use.

    In summary, deep learning's market dynamics reflect a constant pursuit of innovation, efficiency, and ethical considerations. The Deep Learning Market in the US is flourishing as organizations embrace intelligent systems powered by supervised learning and emerging self-supervised learning techniques. These methods refine predictive capabilities and reduce reliance on labeled data, boosting scalability. BFSI firms utilize AI image recognition for various applications, including personalizing customer communication, maintaining a competitive edge, and automating repetitive tasks to boost productivity. Sophisticated feature extraction algorithms now enable models to isolate patterns with high precision, particularly in applications such as image classification for healthcare, security, and retail.

    How is this market segmented and which is the largest segment?

    The market research report provides comprehensive data (region-wise segment analysis), with forecasts and estimates in 'USD million' for the period 2025-2029, as well as historical data from 2019-2023 for the following segments.

    Application
    • Image recognition
    • Voice recognition
    • Video surveillance and diagnostics
    • Data mining

    Type
    • Software
    • Services
    • Hardware

    End-user
    • Security
    • Automotive
    • Healthcare
    • Retail and commerce
    • Others

    Geography
    • North America (US)
    

    By Application Insights

    The Image recognition segment is estimated to witness significant growth during the forecast period. In the realm of artificial intelligence (AI) and machine learning, image recognition, a subset of computer vision, is gaining significant traction. This technology utilizes neural networks, deep learning models, and various machine learning algorithms to decipher visual data from images and videos. Image recognition is instrumental in numerous applications, including visual search, product recommendations, and inventory management. Consumers can take photographs of products to discover similar items, enhancing the online shopping experience. In the automotive sector, image recognition is indispensable for advanced driver assistance systems (ADAS) and autonomous vehicles, enabling the identification of pedestrians, other vehicles, road signs, and lane markings.

    Furthermore, image recognition plays a pivotal role in augmented reality (AR) and virtual reality (VR) applications, where it tracks physical objects and overlays digital content onto real-world scenarios. The model training process involves the backpropagation algorithm, which calculates the loss fu

  18. Data from: Machine Learning for Software Engineering: A Tertiary Study

    • data-staging.niaid.nih.gov
    • data.niaid.nih.gov
    Updated Sep 16, 2022
    Cite
    Kotti, Zoe; Galanopoulou, Rafaila; Spinellis, Diomidis (2022). Machine Learning for Software Engineering: A Tertiary Study [Dataset]. https://data-staging.niaid.nih.gov/resources?id=zenodo_5715474
    Explore at:
    Dataset updated
    Sep 16, 2022
    Dataset provided by
    Athens University of Economics and Business
    Authors
    Kotti, Zoe; Galanopoulou, Rafaila; Spinellis, Diomidis
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Dataset of the research paper: Machine Learning for Software Engineering: A Tertiary Study

    Machine learning (ML) techniques increase the effectiveness of software engineering (SE) lifecycle activities. We systematically collected, quality-assessed, summarized, and categorized 83 reviews in ML for SE published between 2009–2022, covering 6,117 primary studies. The SE areas most tackled with ML are software quality and testing, while human-centered areas appear more challenging for ML. We propose a number of ML for SE research challenges and actions including: conducting further empirical validation and industrial studies on ML; reconsidering deficient SE methods; documenting and automating data collection and pipeline processes; reexamining how industrial practitioners distribute their proprietary data; and implementing incremental ML approaches.

    The following data and source files are included.

    review-protocol.md: The protocol employed in this tertiary study

    data/
      dl-search/
        input/
          acm_comput_surveys_overviews.bib: Surveys of the ACM Computing Surveys journal
          acm_comput_surveys_overviews_titles.txt: Titles of surveys
          acm_comput_ml_surveys.bib: Machine learning (ML)-related surveys of the ACM Computing Surveys journal
          acm_comput_ml_surveys_titles.txt: Titles of ML-related surveys
          dl_search_queries.txt: Search queries applied to IEEE Xplore, ACM Digital Library, and Elsevier Scopus
          ml_keywords.txt: ML-related keywords extracted from ML-related survey titles and used in the search queries
          se_keywords.txt: Software engineering (SE)-related keywords derived from the 15 SWEBOK Knowledge Areas (KAs; except for Computing Foundations, Mathematical Foundations, and Engineering Foundations) and used in the search queries
          secondary_studies_keywords.txt: Survey-related keywords composed of the 15 keywords introduced in the tertiary study on SLRs in SE by Kitchenham et al. (2010), plus the survey titles, and used in the search queries
        output/
          acm/acm{1–9}.bib: Search results from ACM Digital Library
          ieee.csv: Search results from IEEE Xplore
          scopus_analyze_year.csv: Yearly distribution of ML and SE documents extracted from Scopus's Analyze search results page
          scopus.csv: Search results from Scopus
      study-selection/
        backward_snowballing.csv: Additional secondary studies found through the backward snowballing process
        backward_snowballing_references.csv: References of quality-accepted secondary studies
        cohen_kappa_agreement.csv: Inter-rater reliability of reviewers in study selection
        dl_search_results.csv: Aggregated search results of all three digital libraries
        forward_snowballing_reviewer_{1,2}.csv: Divided forward snowballing citations of quality-accepted studies assessed by reviewers 1 and 2, correspondingly, based on IC/EC
        study_selection_reviewer_{1,2}.csv: Divided search results assessed by reviewers 1 and 2, correspondingly, based on IC/EC
      quality-assessment/
        dare_assessment.csv: Quality assessment (QA) of selected secondary studies based on the Database of Abstracts of Reviews of Effects (DARE) criteria by York University, Centre for Reviews and Dissemination
        quality_accepted_studies.csv: Details of quality-accepted studies
        studies_for_review.bib: Bibliography details and QA scores of selected secondary studies
      data-extraction/
        further_research.csv: Recommendations for further research of quality-accepted studies
        further_research_general.csv: The complete list of associated studies for each general recommendation
        knowledge_areas.csv: Classification of quality-accepted studies using the SWEBOK KAs and subareas
        ml_techniques.csv: Classification of the quality-accepted studies based on a four-axis ML classification scheme, along with extracted ML techniques employed in the studies
        primary_studies.csv: Details of primary studies reviewed by the quality-accepted secondary studies
        research_methods.csv: Citations of the research methods employed by the quality-accepted studies
        research_types_methods.csv: Research types and methods employed by the quality-accepted studies

    src/
      data-analysis.ipynb: Analysis of data extraction results (data preprocessing, top authors and institutions, study types, yearly distribution of publishers, QA scores, and SWEBOK KAs) and creation of all figures included in the study
      scopus-year-analysis.ipynb: Yearly distribution of ML and SE publications retrieved from Elsevier Scopus
      study-selection-preprocessing.ipynb: Processing of digital library search results to conduct the inter-rater reliability estimation and study selection process

  19. CsvReader

    • dataverse.harvard.edu
    • search.dataone.org
    Updated Jan 30, 2025
    Cite
    Tao HU (2025). CsvReader [Dataset]. http://doi.org/10.7910/DVN/XT2MWH
    Explore at:
    Croissant (a format for machine-learning datasets; learn more at mlcommons.org/croissant)
    Dataset updated
    Jan 30, 2025
    Dataset provided by
    Harvard Dataverse
    Authors
    Tao HU
    License

    CC0 1.0 Universal Public Domain Dedication: https://creativecommons.org/publicdomain/zero/1.0/
    License information was derived automatically

    Description

    The CsvReader is a component designed to read and process CSV (Comma-Separated Values) files, which are widely used for storing tabular data. This component can be used to load CSV files, perform operations like filtering and aggregation, and then output the results. It is a valuable tool for data preprocessing in various workflows, including data analysis and machine learning pipelines.
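    The CsvReader's own API is not documented here; this pandas sketch merely mirrors the load -> filter -> aggregate -> output flow the description attributes to it. The column names "value" and "category" are assumptions.

```python
import pandas as pd

df = pd.read_csv("input.csv")                    # load the CSV file
subset = df[df["value"] > 0]                     # filter (column name assumed)
summary = subset.groupby("category")["value"].agg(["count", "mean", "sum"])
summary.to_csv("output.csv")                     # output the results
```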

  20. Single-Cell Data Analysis Software Market Research Report 2033

    • dataintelo.com
    csv, pdf, pptx
    Updated Sep 30, 2025
    Cite
    Dataintelo (2025). Single-Cell Data Analysis Software Market Research Report 2033 [Dataset]. https://dataintelo.com/report/single-cell-data-analysis-software-market
    Explore at:
    pptx, pdf, csv (available download formats)
    Dataset updated
    Sep 30, 2025
    Dataset authored and provided by
    Dataintelo
    License

    https://dataintelo.com/privacy-and-policy

    Time period covered
    2024 - 2032
    Area covered
    Global
    Description

    Single-Cell Data Analysis Software Market Outlook

    According to our latest research, the global Single-Cell Data Analysis Software market size reached USD 498.6 million in 2024, driven by increasing demand for high-resolution cellular analysis in life sciences and healthcare. The market is experiencing robust expansion with a CAGR of 15.2% from 2025 to 2033, and is projected to reach USD 1,522.9 million by 2033. This impressive growth trajectory is primarily attributed to advancements in single-cell sequencing technologies, the proliferation of precision medicine, and the rising adoption of artificial intelligence and machine learning in bioinformatics.

    The growth of the Single-Cell Data Analysis Software market is significantly propelled by the rapid evolution of next-generation sequencing (NGS) technologies and the increasing need for comprehensive single-cell analysis in both research and clinical settings. As researchers strive to unravel cellular heterogeneity and gain deeper insights into complex biological systems, the demand for robust data analysis tools has surged. Single-cell data analysis software enables scientists to process, visualize, and interpret large-scale datasets, facilitating the identification of rare cell populations, novel biomarkers, and disease mechanisms. The integration of advanced algorithms and user-friendly interfaces has further enhanced the accessibility and adoption of these solutions across various end-user segments, including academic and research institutes, biotechnology and pharmaceutical companies, and hospitals and clinics.

    Another key driver for market growth is the expanding application of single-cell analysis in precision medicine and drug discovery. The ability to analyze gene expression, protein levels, and epigenetic modifications at the single-cell level has revolutionized the understanding of disease pathogenesis and therapeutic response. This has led to a surge in demand for specialized software capable of managing complex, multi-omics datasets and generating actionable insights for personalized treatment strategies. Furthermore, the ongoing trend of integrating artificial intelligence and machine learning in single-cell data analysis is enabling more accurate predictions and faster data processing, thus accelerating the pace of biomedical research and clinical diagnostics.

    The increasing collaboration between academia, industry, and government agencies is also contributing to market expansion. Public and private investments in single-cell genomics research are fostering innovation in data analysis software, while strategic partnerships and acquisitions are facilitating the development of comprehensive, end-to-end solutions. Additionally, the growing awareness of the potential of single-cell analysis in oncology, immunology, and regenerative medicine is encouraging the adoption of advanced software platforms worldwide. However, challenges such as data privacy concerns, high implementation costs, and the need for skilled personnel may pose restraints to market growth, particularly in low-resource settings.

    From a regional perspective, North America continues to dominate the Single-Cell Data Analysis Software market, owing to its well-established healthcare infrastructure, strong presence of leading biotechnology and pharmaceutical companies, and substantial investments in genomics research. Europe follows closely, supported by robust government funding and a thriving life sciences sector. The Asia Pacific region is emerging as a lucrative market, driven by rising healthcare expenditure, expanding research capabilities, and increasing adoption of advanced technologies in countries such as China, Japan, and India. Latin America and the Middle East & Africa are also witnessing gradual growth, albeit at a slower pace, due to improving healthcare infrastructure and growing awareness of single-cell analysis applications.

    Component Analysis

    The Single-Cell Data Analysis Software market by component is broadly segmented into software and services, each playing a pivotal role in the overall ecosystem. Software solutions form the backbone of this market, offering a wide array of functionalities such as data preprocessing, quality control, clustering, visualization, and integration of multi-omics data. The increasing complexity and volume of single-cell datasets have driven the development of sophisticated software platforms equipped with advanced analytics, machine learning algorithms, and intuitive user interfaces. These platfo
