100+ datasets found
  1. Data from: Software-Alternatives

    • kaggle.com
    Updated Mar 7, 2023
    Cite
    Alexander Gusev (2023). Software-Alternatives [Dataset]. https://www.kaggle.com/alexandrgusev/software-alternatives/discussion
    Explore at:
    Croissant is a format for machine-learning datasets. Learn more about this at mlcommons.org/croissant.
    Dataset updated
    Mar 7, 2023
    Dataset provided by
    Kaggle (http://kaggle.com/)
    Authors
    Alexander Gusev
    Description

    Dataset

    This dataset was created by Alexander Gusev

    Contents

  2. Deribit BTC options information

    • kaggle.com
    Updated Jun 6, 2024
    Cite
    HsergeyFrolov (2024). Deribit BTC options information [Dataset]. https://www.kaggle.com/datasets/hsergeyfrolov/deribit-btc-options-information
    Explore at:
    Croissant is a format for machine-learning datasets. Learn more about this at mlcommons.org/croissant.
    Dataset updated
    Jun 6, 2024
    Dataset provided by
    Kaggle
    Authors
    HsergeyFrolov
    License

    MIT License (https://opensource.org/licenses/MIT)
    License information was derived automatically

    Description

    The dataset consists of an options-information snapshot for all BTC series. It may be helpful to those who want to explore options: you can build a volatility smile, calculate Greeks, or look for mispricing opportunities.
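    As a quick illustration, here is a minimal sketch for plotting a volatility smile and computing Black-Scholes deltas from such a snapshot. The file name and the column names (strike, expiry, mark_iv, underlying_price, time_to_expiry_years, option_type) are assumptions and must be checked against the actual files.

    ```python
    import matplotlib.pyplot as plt
    import numpy as np
    import pandas as pd
    from scipy.stats import norm

    # Hypothetical file and column names -- verify them against the real snapshot.
    df = pd.read_csv("btc_options_snapshot.csv")

    def bs_delta(S, K, T, sigma, r=0.0, is_call=True):
        """Black-Scholes delta, element-wise over NumPy/pandas inputs."""
        d1 = (np.log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * np.sqrt(T))
        return np.where(is_call, norm.cdf(d1), norm.cdf(d1) - 1.0)

    # Volatility smile for the nearest expiry: implied volatility vs. strike
    expiry = df["expiry"].min()
    smile = df[df["expiry"] == expiry].sort_values("strike")
    smile.plot(x="strike", y="mark_iv", title=f"BTC volatility smile, expiry {expiry}")
    plt.show()

    # Delta for every quote in the snapshot
    df["delta"] = bs_delta(
        S=df["underlying_price"],
        K=df["strike"],
        T=df["time_to_expiry_years"],
        sigma=df["mark_iv"] / 100.0,            # assuming IV is quoted in percent
        is_call=df["option_type"].eq("call"),
    )
    ```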

  3. Retail Transactions Dataset

    • kaggle.com
    Updated May 18, 2024
    Cite
    Prasad Patil (2024). Retail Transactions Dataset [Dataset]. https://www.kaggle.com/datasets/prasad22/retail-transactions-dataset
    Explore at:
    Croissant is a format for machine-learning datasets. Learn more about this at mlcommons.org/croissant.
    Dataset updated
    May 18, 2024
    Dataset provided by
    Kaggle (http://kaggle.com/)
    Authors
    Prasad Patil
    License

    https://creativecommons.org/publicdomain/zero/1.0/

    Description

    This dataset was created to simulate a market basket dataset, providing insights into customer purchasing behavior and store operations. The dataset facilitates market basket analysis, customer segmentation, and other retail analytics tasks. Here's more information about the context and inspiration behind this dataset:

    Context:

    Retail businesses, from supermarkets to convenience stores, are constantly seeking ways to better understand their customers and improve their operations. Market basket analysis, a technique used in retail analytics, explores customer purchase patterns to uncover associations between products, identify trends, and optimize pricing and promotions. Customer segmentation allows businesses to tailor their offerings to specific groups, enhancing the customer experience.

    Inspiration:

    The inspiration for this dataset comes from the need for accessible and customizable market basket datasets. While real-world retail data is sensitive and often restricted, synthetic datasets offer a safe and versatile alternative. Researchers, data scientists, and analysts can use this dataset to develop and test algorithms, models, and analytical tools.

    Dataset Information:

    The columns provide information about the transactions, customers, products, and purchasing behavior, making the dataset suitable for various analyses, including market basket analysis and customer segmentation. Here's a brief explanation of each column in the Dataset:

    • Transaction_ID: A unique identifier for each transaction, represented as a 10-digit number. This column is used to uniquely identify each purchase.
    • Date: The date and time when the transaction occurred. It records the timestamp of each purchase.
    • Customer_Name: The name of the customer who made the purchase. It provides information about the customer's identity.
    • Product: A list of products purchased in the transaction. It includes the names of the products bought.
    • Total_Items: The total number of items purchased in the transaction. It represents the quantity of products bought.
    • Total_Cost: The total cost of the purchase, in currency. It represents the financial value of the transaction.
    • Payment_Method: The method used for payment in the transaction, such as credit card, debit card, cash, or mobile payment.
    • City: The city where the purchase took place. It indicates the location of the transaction.
    • Store_Type: The type of store where the purchase was made, such as a supermarket, convenience store, department store, etc.
    • Discount_Applied: A binary indicator (True/False) representing whether a discount was applied to the transaction.
    • Customer_Category: A category representing the customer's background or age group.
    • Season: The season in which the purchase occurred, such as spring, summer, fall, or winter.
    • Promotion: The type of promotion applied to the transaction, such as "None," "BOGO (Buy One Get One)," or "Discount on Selected Items."

    Use Cases:

    • Market Basket Analysis: Discover associations between products and uncover buying patterns.
    • Customer Segmentation: Group customers based on purchasing behavior.
    • Pricing Optimization: Optimize pricing strategies and identify opportunities for discounts and promotions.
    • Retail Analytics: Analyze store performance and customer trends.

    Note: This dataset is entirely synthetic and was generated using the Python Faker library, which means it doesn't contain real customer data. It's designed for educational and research purposes.
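    As an example of the market basket analysis this dataset supports, here is a minimal sketch using mlxtend. The file name and the assumption that the Product column stores a Python-style list of item names are hypothetical and should be verified against the actual file.

    ```python
    import ast
    import pandas as pd
    from mlxtend.preprocessing import TransactionEncoder
    from mlxtend.frequent_patterns import apriori, association_rules

    df = pd.read_csv("Retail_Transactions_Dataset.csv")  # hypothetical file name

    # Parse the Product column from its string form into Python lists of item names
    baskets = df["Product"].apply(ast.literal_eval).tolist()

    # One-hot encode the transactions and mine frequent itemsets
    te = TransactionEncoder()
    onehot = pd.DataFrame(te.fit(baskets).transform(baskets), columns=te.columns_)
    itemsets = apriori(onehot, min_support=0.01, use_colnames=True)

    # Derive association rules and keep the strongest ones by lift
    rules = association_rules(itemsets, metric="lift", min_threshold=1.2)
    print(rules.sort_values("lift", ascending=False).head())
    ```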

  4. TacTip alternative morph D

    • kaggle.com
    Updated Aug 1, 2025
    Cite
    Dexter Shepherd (2025). TacTip alternative morph D [Dataset]. https://www.kaggle.com/datasets/dextershepherd/tactip-alternative-morph-d
    Explore at:
    Croissant is a format for machine-learning datasets. Learn more about this at mlcommons.org/croissant.
    Dataset updated
    Aug 1, 2025
    Dataset provided by
    Kaggle (http://kaggle.com/)
    Authors
    Dexter Shepherd
    License

    https://creativecommons.org/publicdomain/zero/1.0/

    Description

    Dataset

    This dataset was created by Dexter Shepherd

    Released under CC0: Public Domain

    Contents

  5. SPY options 2010-2023 EOD

    • kaggle.com
    Updated Jan 20, 2025
    Cite
    No name (2025). SPY options 2010-2023 EOD [Dataset]. https://www.kaggle.com/datasets/benjaminbtang/spy-2023-options
    Explore at:
    Croissant is a format for machine-learning datasets. Learn more about this at mlcommons.org/croissant.
    Dataset updated
    Jan 20, 2025
    Dataset provided by
    Kaggle (http://kaggle.com/)
    Authors
    No name
    Description

    Data from: https://www.optionsdx.com

    2010-2023 Option data for SPY EOD

    If you want the data combined into one CSV file without NaNs, go here:

    https://www.kaggle.com/datasets/benjaminbtang/spy-options-2010-2023-eod/data
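    If you would rather build the combined file yourself, a minimal sketch (assuming the yearly files are plain CSVs collected in one folder; adjust the paths to the actual layout):

    ```python
    import glob
    import pandas as pd

    # Read every file in the folder and stack them into one frame
    files = sorted(glob.glob("spy_options/*.csv"))  # hypothetical folder layout
    combined = pd.concat((pd.read_csv(f) for f in files), ignore_index=True)

    # Drop rows with missing values and write a single clean CSV
    combined.dropna().to_csv("spy_options_2010_2023_eod.csv", index=False)
    ```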

  6. Alternative Metal Bands

    • kaggle.com
    Updated Jun 9, 2025
    Cite
    Alberto Lawant (2025). Alternative Metal Bands [Dataset]. https://www.kaggle.com/datasets/aslawant/alternative-metal-bands/discussion
    Explore at:
    Croissant is a format for machine-learning datasets. Learn more about this at mlcommons.org/croissant.
    Dataset updated
    Jun 9, 2025
    Dataset provided by
    Kaggle
    Authors
    Alberto Lawant
    License

    MIT License (https://opensource.org/licenses/MIT)
    License information was derived automatically

    Description

    Explore the world of alternative metal with this curated dataset featuring 200+ bands from around the globe. Each entry includes the band’s name, country of origin, current activity status, and a breakdown of their genre influences, from nu metal and industrial to post-grunge and progressive metal.

    Whether you're analyzing musical trends, building a genre classification model, or just exploring the heavy side of music data, this dataset is your backstage pass to the alternative metal scene.

    Columns:

    Band: The name of the alternative metal band.

    Origin: The country where the band originated.

    Active: Indicates whether the band is currently active (Yes or No).

    Genres: A comma-separated list of genres associated with the band (e.g., Nu Metal, Industrial Metal, Post-Grunge).
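    Because Genres is stored as a comma-separated string, a small pandas sketch like the one below (the file name is a placeholder) turns it into one row per band-genre pair, which is a convenient shape for counting influences or building a classifier:

    ```python
    import pandas as pd

    bands = pd.read_csv("alternative_metal_bands.csv")  # placeholder file name

    # Split the comma-separated genre string into one row per (band, genre) pair
    exploded = (
        bands.assign(Genre=bands["Genres"].str.split(","))
             .explode("Genre")
             .assign(Genre=lambda d: d["Genre"].str.strip())
    )

    # Most common genre influences across the 200+ bands
    print(exploded["Genre"].value_counts().head(10))
    ```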

  7. Diets, Recipes And Their Nutrients

    • kaggle.com
    Updated Oct 18, 2022
    Cite
    The Devastator (2022). Diets, Recipes And Their Nutrients [Dataset]. https://www.kaggle.com/datasets/thedevastator/healthy-diet-recipes-a-comprehensive-dataset
    Explore at:
    Croissant is a format for machine-learning datasets. Learn more about this at mlcommons.org/croissant.
    Dataset updated
    Oct 18, 2022
    Dataset provided by
    Kaggle
    Authors
    The Devastator
    Description

    Recipes And Nutrients Per Diet

    A dataset of diets and recipes

    About this dataset

    Do you want to nourish your body in the best and healthiest way possible? If so, then this dataset is for you! It consists of recipes from different diets and cuisines, all of which are aimed at providing healthy and nutritious meal options. The dataset includes information on the macronutrients of each recipe, as well as the extraction day and time. This makes it an incredibly valuable resource for those interested in following a healthy diet, as well as for researchers studying the relationship between diet and health. So what are you waiting for? Start exploring today!

    How to use the dataset

    This dataset can be used to find healthy and nutritious recipes from different diets and cuisines. The macronutrient information can be used to make sure that the recipes fit into a healthy diet plan. The extraction day and time can be used to find recipes that were extracted recently or that were extracted on a particular day.
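    For example, a short pandas sketch (using the column names documented in the Columns section below) that compares diets and keeps only high-protein, lower-carb recipes:

    ```python
    import pandas as pd

    recipes = pd.read_csv("All_Diets.csv")

    # Average macronutrients per diet type, as a quick comparison
    print(recipes.groupby("Diet_type")[["Protein(g)", "Carbs(g)", "Fat(g)"]].mean())

    # Keep recipes with at least 30 g of protein and at most 20 g of carbs
    high_protein = recipes[(recipes["Protein(g)"] >= 30) & (recipes["Carbs(g)"] <= 20)]
    print(high_protein[["Recipe_name", "Diet_type", "Protein(g)", "Carbs(g)"]].head())
    ```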

    Research Ideas

    • This dataset can be used to create a healthy meal plan for those interested in following a nutritious diet.
    • This dataset can be used to study the relationship between diet and health.
    • This dataset can be used to create healthy recipes that are suitable for different diets and cuisines

    Acknowledgements

    We would like to thank the following people for their contributions to this dataset:

    • The anonymous recipe creators who have shared their healthy and nutritious recipes with us
    • The researchers who have studied the relationship between diet and health, and have helped to inform our choices of recipes

    License

    See the dataset description for more information.

    Columns

    File: All_Diets.csv

    | Column name    | Description                                  |
    |:---------------|:---------------------------------------------|
    | Diet_type      | The type of diet the recipe is for. (String) |
    | Recipe_name    | The name of the recipe. (String)             |
    | Cuisine_type   | The cuisine the recipe is from. (String)     |
    | Protein(g)     | The amount of protein in grams. (Float)      |
    | Carbs(g)       | The amount of carbs in grams. (Float)        |
    | Fat(g)         | The amount of fat in grams. (Float)          |
    | Extraction_day | The day the recipe was extracted. (String)   |

    File: dash.csv

    | Column name    | Description                                  |
    |:---------------|:---------------------------------------------|
    | Diet_type      | The type of diet the recipe is for. (String) |
    | Recipe_name    | The name of the recipe. (String)             |
    | Cuisine_type   | The cuisine the recipe is from. (String)     |
    | Protein(g)     | The amount of protein in grams. (Float)      |
    | Carbs(g)       | The amount of carbs in grams. (Float)        |
    | Fat(g)         | The amount of fat in grams. (Float)          |
    | Extraction_day | The day the recipe was extracted. (String)   |

    File: keto.csv

    | Column name    | Description                                  |
    |:---------------|:---------------------------------------------|
    | Diet_type      | The type of diet the recipe is for. (String) |
    | Recipe_name    | The name of the recipe. (String)             |
    | Cuisine_type   | The cuisine the recipe is from. (String)     |
    | Protein(g)     | The amount of protein in grams. (Float)      |
    | Carbs(g)       | The amount of carbs in grams. (Float)        |
    | Fat(g)         | The amount of fat in grams. (Float)          |
    | Extraction_day | The day the recipe was extracted. (String)   |

    File: mediterranean.csv

    | Column name    | Description                                  |
    |:---------------|:---------------------------------------------|
    | Diet_type      | The type of diet the recipe is for. (String) |
    | Recipe_name    | The name of the recipe. (String)             |
    | Cuisine_type   | The cuisine the recipe is from. (String)     |
    | Protein(g)     | The amount of protein in grams. (Float)      |
    | Carbs(g)       | The amount of carbs in grams. (Float)        |
    | Fat(g)         | The amount of fat in grams. (Float)          |
    | Extraction_day | The day the recipe was extracted. (String)   |

    File: paleo.csv

    | Column name    | Description                                  |
    |:---------------|:---------------------------------------------|
    | Diet_type      | The type of diet the recipe is for. (String) |
    | Recipe_name    | The name of the recipe. (String)             |
    | Cuisine_type   | The cuisine the recipe is from. (String)     |
    | Protein(g)     | The amount of protein in grams. (Float)      |
    | Carbs(g)       | The amount of carb...                        |

  8. AltLetterRecognition

    • kaggle.com
    zip
    Updated Aug 26, 2020
    Cite
    Raoul (2020). AltLetterRecognition [Dataset]. https://www.kaggle.com/datasniffer/altletterrecognition
    Explore at:
    zip (1979085 bytes). Available download formats
    Dataset updated
    Aug 26, 2020
    Authors
    Raoul
    License

    https://creativecommons.org/publicdomain/zero/1.0/

    Description

    Dataset

    This dataset was created by Raoul

    Released under CC0: Public Domain

    Contents

    It contains the following files:

  9. Backloggd (Games Dataset)

    • kaggle.com
    Updated Oct 28, 2024
    Cite
    Simon Garanin (2024). Backloggd (Games Dataset) [Dataset]. https://www.kaggle.com/datasets/gsimonx37/backloggd
    Explore at:
    Croissant is a format for machine-learning datasets. Learn more about this at mlcommons.org/croissant.
    Dataset updated
    Oct 28, 2024
    Dataset provided by
    Kaggle (http://kaggle.com/)
    Authors
    Simon Garanin
    License

    https://www.gnu.org/licenses/gpl-3.0.html

    Description


    Data obtained using a program from the site backloggd.com.


    About backloggd.com

    "Backloggd is a place to keep your personal video game collection. Every game from every platform is here for you to log into your journal. Follow friends along the way to share your reviews and compare ratings. Then use filters to sort through your collection and see what matters to you. Keep a backlog of what you are currently playing and what you want to play, see the numbers change as you continue to log your playthroughs. There's Goodreads for books, Letterboxd for movies, and now Backloggd for games." - from the site backloggd.com.


    "All game related metadata comes from the community driven database IGDB. This includes all game, company and platform data you see on the site." - from the site backloggd.com


    What can you do with the data set?

    If you are new to data analytics, try answering the following questions:

    • in what year did the active growth in the number of video games produced begin? Which year was the most successful from this point of view?
    • on what day and month were the largest number of video games released? What could be the reason for this pattern?
    • is there a dependence of the rating of a video game on the number of reviews left or the total number of players?
    • which game genres, platforms and developers are the most common (have the most video games released of all time)?
    • which game genres, platforms and developers have the highest total number of players ever?
    • which game genres, platforms and developers have the highest average video game ratings?

    If you have enough experience, try solving a multi-label classification problem. Train a model that can classify a video game description into one or more genres (a minimal baseline sketch follows below):

    • which models are best suited for this, and which should not be used?
    • what is the best way to convert text to features? How will lemmatization of the text affect the predictive ability of the model?
    • which metric should be chosen to evaluate the model?
    • is the model calibrated enough after training to trust its probabilistic forecasts?
    • can adding new data improve the predictive ability of the model?
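    One possible baseline is TF-IDF features with one binary classifier per genre, as sketched below. It assumes the tables ship as games.csv and genres.csv with the fields listed under "Field descriptions".

    ```python
    import pandas as pd
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import f1_score
    from sklearn.model_selection import train_test_split
    from sklearn.multiclass import OneVsRestClassifier
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import MultiLabelBinarizer

    games = pd.read_csv("games.csv").dropna(subset=["description"])
    genres = pd.read_csv("genres.csv")

    # Collect the genre labels of every game into a list per id
    labels = genres.groupby("id")["genre"].apply(list)
    data = games.join(labels, on="id").dropna(subset=["genre"])

    mlb = MultiLabelBinarizer()
    y = mlb.fit_transform(data["genre"])
    X_train, X_test, y_train, y_test = train_test_split(
        data["description"], y, test_size=0.2, random_state=0
    )

    # TF-IDF features plus one binary logistic regression per genre
    model = make_pipeline(
        TfidfVectorizer(max_features=20_000, stop_words="english"),
        OneVsRestClassifier(LogisticRegression(max_iter=1000)),
    )
    model.fit(X_train, y_train)
    print("micro-F1:", f1_score(y_test, model.predict(X_test), average="micro"))
    ```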

    Field descriptions:

    The data contains the following fields:

    1. games - basic data:
       • id - video game identifier (primary key);
       • name - name of the video game;
       • date - release date of the video game;
       • rating - average rating of the video game;
       • reviews - number of reviews;
       • plays - total number of players;
       • playing - number of players currently playing;
       • backlogs - the number of additions of a video game to the backlog;
       • wishlists - the number of times a video game has been added to “favorites”;
       • description - description of the video game.
    2. developers - developers (publishers):
       • id - video game identifier (foreign key);
       • developer - developer (publisher) of a video game.
    3. platforms - gaming platforms:
       • id - video game identifier (foreign key);
       • platform - gaming platform.
    4. genres - game genres:
       • id - video game identifier (foreign key);
       • genre - video game genre.
    5. scores - user ratings:
       • id - video game identifier (foreign key);
       • score - score (from 0.5 to 5 in increments of 0.5);
       • amount - number of users.
    6. Video game posters.
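    For instance, the beginner question about ratings per genre reduces to a single join over these tables (again assuming they ship as games.csv and genres.csv):

    ```python
    import pandas as pd

    games = pd.read_csv("games.csv")    # id, name, date, rating, reviews, plays, ...
    genres = pd.read_csv("genres.csv")  # id (foreign key), genre

    # Join ratings onto genres and average per genre
    per_genre = (
        genres.merge(games[["id", "rating", "reviews"]], on="id")
              .groupby("genre")[["rating", "reviews"]]
              .mean()
              .sort_values("rating", ascending=False)
    )
    print(per_genre.head(10))
    ```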

    Data update

    The website backloggd.com contains a detailed roadmap of changes that may be implemented on the website over time, among them:

    • additional information about the game: DLC status, all companies, alternative names and other extensive information about the game;
    • categorization of games: which games are DLC, demo versions, canceled, beta versions, etc.;
    • personalized game covers: IGDB now supports localized covers;
    • release dates: games with one date are too easy; in this case, multiple release dates will be shown for different stages/regions.

    !...

  10. Job Offers Web Scraping Search

    • kaggle.com
    Updated Feb 11, 2023
    Cite
    The Devastator (2023). Job Offers Web Scraping Search [Dataset]. https://www.kaggle.com/datasets/thedevastator/job-offers-web-scraping-search
    Explore at:
    Croissant is a format for machine-learning datasets. Learn more about this at mlcommons.org/croissant.
    Dataset updated
    Feb 11, 2023
    Dataset provided by
    Kaggle (http://kaggle.com/)
    Authors
    The Devastator
    License

    https://creativecommons.org/publicdomain/zero/1.0/

    Description

    Job Offers Web Scraping Search

    Targeted Results to Find the Optimal Work Solution

    By [source]

    About this dataset

    This dataset collects job offers from web scraping which are filtered according to specific keywords, locations and times. This data gives users rich and precise search capabilities to uncover the best working solution for them. With the information collected, users can explore options that match with their personal situation, skillset and preferences in terms of location and schedule. The columns provide detailed information around job titles, employer names, locations, time frames as well as other necessary parameters so you can make a smart choice for your next career opportunity

    More Datasets

    For more datasets, click here.

    Featured Notebooks

    • 🚨 Your notebook can be here! 🚨!

    How to use the dataset

    This dataset is a great resource for those looking to find an optimal work solution based on keywords, location and time parameters. With this information, users can quickly and easily search through job offers that best fit their needs. Here are some tips on how to use this dataset to its fullest potential:

    • Start by identifying what type of job offer you want to find. The keyword column will help you narrow down your search by allowing you to search for job postings that contain the word or phrase you are looking for.

    • Next, consider where the job is located – the Location column tells you where in the world each posting is from so make sure it’s somewhere that suits your needs!

    • Finally, consider when the position is available – the Time frame column gives an indication of when each posting was made and whether it is a full-time, part-time or casual/temporary position, so make sure it meets your requirements before applying!

    • Additionally, if details such as hours per week or further schedule information are important criteria, there is also information provided in the Horari and Temps_Oferta columns. Once all three criteria – keywords, location and time frame – have been ticked off, take a look at the Empresa (company name) and Nom_Oferta (post name) columns to get an idea of who will be employing you should you land the gig!

      All these pieces of data together should give any motivated individual what they need to seek out an optimal work solution; a short filtering sketch follows below. Keep hunting, and good luck!
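    In code, these three filters map onto a few pandas operations over the columns documented below; a minimal sketch, with the keyword and city as placeholder search terms:

    ```python
    import pandas as pd

    offers = pd.read_csv("web_scraping_information_offers.csv")

    # Placeholder search terms: keyword in the post name and a city match
    keyword, city = "data", "Barcelona"
    mask = (
        offers["Nom_Oferta"].str.contains(keyword, case=False, na=False)
        & offers["Ubicació"].str.contains(city, case=False, na=False)
    )

    # Keep the columns needed to judge schedule and employer
    cols = ["Nom_Oferta", "Empresa", "Ubicació", "Temps_Oferta", "Horari"]
    print(offers.loc[mask, cols].head())
    ```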

    Research Ideas

    • Machine learning can be used to group job offers in order to facilitate the identification of similarities and differences between them. This could allow users to specifically target their search for a work solution.
    • The data can be used to compare job offerings across different areas or types of jobs, enabling users to make better informed decisions in terms of their career options and goals.
    • It may also provide an insight into the local job market, enabling companies and employers to identify where there is potential for new opportunities or possible trends that simply may have previously gone unnoticed

    Acknowledgements

    If you use this dataset in your research, please credit the original authors. Data Source

    License

    License: CC0 1.0 Universal (CC0 1.0) - Public Domain Dedication No Copyright - You can copy, modify, distribute and perform the work, even for commercial purposes, all without asking permission. See Other Information.

    Columns

    File: web_scraping_information_offers.csv

    | Column name  | Description                          |
    |:-------------|:-------------------------------------|
    | Nom_Oferta   | Name of the job offer. (String)      |
    | Empresa      | Company offering the job. (String)   |
    | Ubicació     | Location of the job offer. (String)  |
    | Temps_Oferta | Time of the job offer. (String)      |
    | Horari       | Schedule of the job offer. (String)  |

    Acknowledgements

    If you use this dataset in your research, please credit the original authors.

  11. Top Remittance Options for Sending Money

    • kaggle.com
    zip
    Updated Mar 26, 2024
    Cite
    rumi rayan (2024). Top Remittance Options for Sending Money [Dataset]. https://www.kaggle.com/datasets/rumirayan/top-remittance-options-for-sending-money
    Explore at:
    zip (0 bytes). Available download formats
    Dataset updated
    Mar 26, 2024
    Authors
    rumi rayan
    License

    Apache License, v2.0 (https://www.apache.org/licenses/LICENSE-2.0)
    License information was derived automatically

    Description

    Dataset

    This dataset was created by rumi rayan

    Released under Apache 2.0

    Contents

  12. Indian Nifty and Banknifty Options Data 2020-2024

    • kaggle.com
    Updated Nov 23, 2024
    Cite
    Ayush Gupta (2024). Indian Nifty and Banknifty Options Data 2020-2024 [Dataset]. https://www.kaggle.com/datasets/ayushsacri/indian-nifty-and-banknifty-options-data-2020-2024
    Explore at:
    Croissant is a format for machine-learning datasets. Learn more about this at mlcommons.org/croissant.
    Dataset updated
    Nov 23, 2024
    Dataset provided by
    Kaggle (http://kaggle.com/)
    Authors
    Ayush Gupta
    Description

    The dataset contains spot, futures and options data for Nifty and Banknifty from January 2020 to October 2024.

  13. Mental Health in Tech Survey

    • kaggle.com
    Updated Jan 20, 2023
    Cite
    The Devastator (2023). Mental Health in Tech Survey [Dataset]. https://www.kaggle.com/datasets/thedevastator/mental-health-in-tech-survey
    Explore at:
    Croissant is a format for machine-learning datasets. Learn more about this at mlcommons.org/croissant.
    Dataset updated
    Jan 20, 2023
    Dataset provided by
    Kaggle (http://kaggle.com/)
    Authors
    The Devastator
    Description

    Mental Health in Tech Survey

    Understanding Employee Mental Health Needs in the Tech Industry

    By Stephen Myers [source]

    About this dataset

    This dataset contains survey responses from individuals in the tech industry about their mental health, including questions about treatment, workplace resources, and attitudes towards discussing mental health in the workplace. Mental health is an issue that affects people of all ages, genders and walks of life, and the tech industry – one that places hard demands on those who work in it – is no exception. By analyzing this dataset, we can better understand how prevalent mental health issues are among those who work in the tech sector, and what kinds of resources they rely upon to find help, so that more can be done to create a healthier working environment for all.

    This dataset tracks key measures such as age, gender and country to determine overall prevalence, along with responses surrounding employee access to care options; whether mental health is taken as seriously as physical illness by employers; whether or not anonymity is protected with regard to seeking help; and how coworkers may perceive those struggling with mental illness issues such as depression or anxiety. With the landscape evolving faster than ever as new technology advances, these statistics have never been more important to analyze if we hope to remain true promoters of a healthy world inside and outside our office walls.

    More Datasets

    For more datasets, click here.

    Featured Notebooks

    • 🚨 Your notebook can be here! 🚨!

    How to use the dataset

    In this dataset you will find data on the age, gender, country, and state of survey respondents, in addition to numerous questions that assess an individual's mental state, including: self-employment status; family history of mental illness; treatment status and access or lack thereof; how their mental health condition affects their work; the number of employees at the company they work for; remote work status; tech company status; benefit information from employers, such as mental health benefits and wellness program availability; anonymity protection when seeking treatment resources for substance abuse or mental health issues; ease (or difficulty) of taking medical leave for a mental health condition; and whether discussing physical or medical matters with employers has negative consequences. You will also find comments from survey participants.

    To use this dataset effectively:
    • Clean the data by removing invalid responses, duplicates and missing values – you can do this with basic Pandas commands like .dropna(), .drop_duplicates() and .replace().
    • Use descriptive statistics such as the mean and median to draw general conclusions about patterns of responses – Pandas tools such as .groupby() and .describe() help here.
    • Run various types of analyses, such as mean comparisons across different kinds of variables (age vs. gender) or correlations between features, using appropriate statistical methods – for example statsmodels formula models (smf), z-scores or hypothesis tests, depending on what analysis is needed. Make sure you are aware of any underlying assumptions your analysis requires beforehand!
    • Visualize your results with plotting libraries like Matplotlib/Seaborn to easily interpret your findings – use boxplots, histograms or heatmaps where appropriate, depending on your question.
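    A compact sketch of that workflow is shown below. The column names (Age, Gender, treatment, no_employees) are assumptions based on the survey description and must be checked against the actual file.

    ```python
    import pandas as pd
    import statsmodels.formula.api as smf

    survey = pd.read_csv("survey.csv")  # hypothetical file name

    # Basic cleaning: drop duplicates and rows missing the fields we analyse,
    # and remove implausible ages
    survey = survey.drop_duplicates().dropna(subset=["Age", "Gender", "treatment"])
    survey = survey[survey["Age"].between(18, 75)]

    # Descriptive statistics: treatment rate by gender
    print(survey.groupby("Gender")["treatment"].value_counts(normalize=True))

    # Simple logistic regression: does company size relate to seeking treatment?
    survey["sought_treatment"] = (survey["treatment"] == "Yes").astype(int)
    model = smf.logit("sought_treatment ~ Age + C(no_employees)", data=survey).fit()
    print(model.summary())
    ```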

    Research Ideas

    • Using the results of this survey, you could develop targeted outreach campaigns directed at underrepresented groups that answer “No” to questions about their employers providing resources for mental health or discussing it as part of wellness programs.
    • Analyzing the employee characteristics (e.g., age and gender) of those who reported negative consequences from discussing their mental health in the workplace could inform employer policies to support individuals with mental health conditions and reduce stigma and discrimination in the workplace.
    • Correlating responses to questions about remote work, leave policies, and anonymity with whether or not individuals have sought treatment for a mental health condition may provide insight into which types of workplace resources are most beneficial for supporting employees dealing with these issues

    Acknowledgements

    If you use this dataset in your research, please credit the original authors. Data Source

    License

    License: Dataset copyright by authors - You are free to: - Share - copy and redi...

  14. Synthetic Passports Dataset

    • kaggle.com
    Updated May 26, 2025
    + more versions
    Cite
    Unidata (2025). Synthetic Passports Dataset [Dataset]. https://www.kaggle.com/datasets/unidpro/synthetic-passports-dataset
    Explore at:
    Croissant is a format for machine-learning datasets. Learn more about this at mlcommons.org/croissant.
    Dataset updated
    May 26, 2025
    Dataset provided by
    Kaggle (http://kaggle.com/)
    Authors
    Unidata
    License

    Attribution-NonCommercial-NoDerivs 4.0 (CC BY-NC-ND 4.0): https://creativecommons.org/licenses/by-nc-nd/4.0/
    License information was derived automatically

    Description

    Passport Photos Dataset

    This dataset contains over 100,000 passport photos from 100+ countries, making it a valuable resource for researchers and developers working on computer vision tasks related to passport verification, biometric identification, and document analysis. This dataset allows researchers and developers to train and evaluate their models without the ethical and legal concerns associated with using real passport data.

    By leveraging this dataset, developers can build robust and efficient document processing algorithms, contributing significantly to advancements in computer vision and identity verification.

    The dataset is solely for informational or educational purposes and should not be used for any fraudulent or deceptive activities.

    Images in the dataset

    The dataset includes a wide variety of passport photos, showcasing various background colors. It is designed to help developers and researchers build and train machine learning models that can accurately detect and analyze passport photos.

    💵 Buy the Dataset: This is a limited preview of the data. To access the full dataset, please contact us at https://unidata.pro to discuss your requirements and pricing options.

    Photos in the dataset: 1. With background 2. Without background


    The dataset can facilitate the development of applications aimed at improving security measures in border control and immigration processes. By utilizing advanced algorithms trained on diverse passport images, authorities can enhance the accuracy and speed of identity verification, reducing the risk of fraudulent activities.

    🌐 UniData provides high-quality datasets, content moderation, data collection and annotation for your AI/ML projects

  15. B2B Technographic Data in the US Techsalerator

    • kaggle.com
    Updated Sep 8, 2024
    Cite
    Techsalerator (2024). B2B Technographic Data in the US Techsalerator [Dataset]. https://www.kaggle.com/datasets/techsalerator/technographic-data-in-the-united-states
    Explore at:
    Croissant is a format for machine-learning datasets. Learn more about this at mlcommons.org/croissant.
    Dataset updated
    Sep 8, 2024
    Dataset provided by
    Kaggle (http://kaggle.com/)
    Authors
    Techsalerator
    License

    Apache License, v2.0 (https://www.apache.org/licenses/LICENSE-2.0)
    License information was derived automatically

    Description

    Techsalerator’s Business Technographic Data for the United States provides a thorough and insightful collection of information essential for businesses, market analysts, and technology vendors. This dataset offers a deep dive into the technological landscape of companies operating in the United States, capturing and categorizing data related to their technology stacks, digital tools, and IT infrastructure.

    Please reach out to us at info@techsalerator.com or https://www.techsalerator.com/contact-us

    Top 5 Most Utilized Data Fields

    • Company Name: This field lists the name of the company being analyzed. Understanding the companies helps technology vendors target their solutions and enables market analysts to evaluate technology adoption trends within specific businesses.
    • Technology Stack: This field details the technologies and software solutions a company utilizes, such as CRM systems, ERP software, and cloud services. Knowledge of a company’s technology stack is vital for understanding its operational capabilities and technology needs.
    • Deployment Status: This field indicates whether the technology is currently in use, planned for deployment, or under evaluation. This status helps vendors gauge the level of interest and current adoption among businesses.
    • Industry Sector: This field identifies the industry sector in which the company operates, such as finance, manufacturing, or retail. Segmenting by industry sector helps vendors tailor their offerings to specific market needs and trends.
    • Geographic Location: This field provides the geographic location of the company's headquarters or primary operations within the United States. This information is useful for regional market analysis and understanding local technology adoption patterns.

    Top 5 Technology Trends in the United States

    • Artificial Intelligence and Machine Learning: AI and ML continue to drive innovation across various sectors, from autonomous vehicles and healthcare to finance and customer service. Key advancements include natural language processing, computer vision, and reinforcement learning.
    • Cloud Computing and Edge Computing: The shift towards cloud computing remains strong, with major providers like AWS, Azure, and Google Cloud leading the way. Edge computing is also gaining traction, enabling faster processing and data analysis closer to the source, which is crucial for IoT applications.
    • 5G Technology: The rollout of 5G networks is transforming connectivity, enabling faster data speeds, lower latency, and new applications in IoT, smart cities, and augmented reality (AR). Major telecom companies and technology providers are heavily invested in this technology.
    • Cybersecurity and Privacy: As digital threats become more sophisticated, there is an increased focus on cybersecurity solutions, including threat detection, data encryption, and privacy protection. Innovations in this space aim to combat ransomware, data breaches, and other cyber risks.
    • Blockchain and Decentralized Finance (DeFi): Blockchain technology is expanding beyond cryptocurrencies, with applications in supply chain management, digital identity, and smart contracts. DeFi is a growing sector within blockchain, offering decentralized financial services and products.

    Top 5 Companies with Notable Technographic Data in the United States

    • Microsoft: A leading technology company known for its software, cloud computing services (Azure), and AI research. Microsoft's diverse portfolio includes operating systems, enterprise solutions, and gaming (Xbox).
    • Google (Alphabet Inc.): A major player in search engines, cloud computing, AI, and consumer electronics. Google is at the forefront of innovations in machine learning, autonomous driving (Waymo), and digital advertising.
    • Amazon: Known for its e-commerce platform, Amazon is also a significant force in cloud computing (AWS), AI, and logistics. AWS is a leading cloud service provider, and Amazon's technology initiatives span various industries.
    • Apple Inc.: Renowned for its consumer electronics, including iPhones, iPads, and Macs. Apple is also investing in emerging technologies such as AR, wearable technology (Apple Watch), and health tech.
    • IBM: A historic leader in technology and consulting services, IBM focuses on enterprise solutions, cloud computing, AI (IBM Watson), and quantum computing. The company is known for its research and development in cutting-edge technologies.

    Accessing Techsalerator’s Business Technographic Data

    If you’re interested in obtaining Techsalerator’s Business Technographic Data for the United States, please contact info@techsalerator.com with your specific requirements. Techsalerator will provide a customized quote based on the number of data fields and records you need, with the dataset available for delivery within 24 hours. Ongoing access options can also be discussed as needed.

    Included Data Fields
    • Company Name
    • Technology Stack
    • Depl...

  16. Data from: Time Travel Paradox Dataset

    • kaggle.com
    Updated Jul 12, 2023
    Cite
    ANMOL BAJPAI (2023). Time Travel Paradox Dataset [Dataset]. https://www.kaggle.com/datasets/anmolbajpai/time-travel-story-dataset
    Explore at:
    Croissant is a format for machine-learning datasets. Learn more about this at mlcommons.org/croissant.
    Dataset updated
    Jul 12, 2023
    Dataset provided by
    Kaggle (http://kaggle.com/)
    Authors
    ANMOL BAJPAI
    Description

    Welcome to the enigmatic realm of time travel paradoxes, where reality twists, causality bends, and chronological order becomes an intricate puzzle. This dataset delves into the captivating realm of time travel, offering a unique collection of stories that will tickle your imagination and leave you pondering the mysteries of temporal anomalies.

    Imagine a quirky protagonist named Max, an eccentric inventor who inadvertently constructs a malfunctioning time machine. On an eventful day, as Max activates the contraption, a blinding flash engulfs the room, transporting him to a bizarre reality where time itself dances to its whims. Max finds himself entangled in a web of mind-bending paradoxes and conflicting temporal events, a captivating journey that will keep you on the edge of your seat.

    Within this dataset, you'll uncover a diverse assortment of stories, each brimming with paradoxical twists and perplexing temporal conflicts. The tales feature a range of characters and scenarios, showcasing instances where the past and future collide, causality becomes convoluted, and timelines intertwine in unexpected ways.

    Each story is composed of several key elements. The "premise" sets the stage, providing the context for the narrative. The "initial" event kickstarts the adventure, while the "counterfactual" event introduces a tantalizing paradox or temporal contradiction. The "original_ending" describes the consequences of the story, considering the paradoxical elements, and finally, the "edited_endings" present alternative outcomes that defy the laws of time and possibility.

    This dataset offers a treasure trove for researchers, storytellers, and those with a penchant for unraveling the enigmas of time travel. From sentiment analysis to paradox detection, this collection invites you to explore the myriad of ML and NLP applications that lie within. Delve into the depths of these stories, extract temporal relationships, predict endings, and embark on an intellectual journey into the perplexing world of time travel paradoxes.

    Embark on this extraordinary adventure, where time bends, paradoxes abound, and the limits of our comprehension are tested. Will you dare to unravel the secrets hidden within the folds of time?

    Each example in the dataset includes the following fields:

    • "story_id": A unique identifier for each story or example.
    • "premise": A statement that sets the context or situation for the story.
    • "initial": Describes an initial event that takes place in the story.
    • "counterfactual": Represents a counterfactual event, which introduces a paradoxical element or a conflicting temporal scenario.
    • "original_ending": Describes the consequences or outcome of the story in the original version, considering the paradoxical elements.
    • "edited_ending": This field contains a list of alternative endings, where each element represents a modified version of the original ending.

    Note: This dataset is purely fictional, designed to inspire creativity, and is not indicative of real-world time travel phenomena.

    Counterfactual Story Reasoning and Generation. Lianhui Qin, Antoine Bosselut, Ari Holtzman, Chandra Bhagavatula, Elizabeth Clark, and Yejin Choi. EMNLP 2019.

    Paper link: https://arxiv.org/abs/1909.04076

    We would also like to acknowledge and credit the original creators of this captivating dataset. Lianhui Qin, Antoine Bosselut, Ari Holtzman, Chandra Bhagavatula, Elizabeth Clark, and Yejin Choi presented this extraordinary work at EMNLP 2019. Their research, titled "Counterfactual Story Reasoning and Generation," forms the foundation of this dataset and explores the fascinating realms of counterfactual narratives and temporal reasoning.

    Their paper and associated GitHub repository provide invaluable insights into the generation and analysis of counterfactual stories, offering a deeper understanding of the underlying principles and methodologies employed. We express our gratitude to these talented researchers for their contributions, which have inspired further exploration into the quirks and complexities of time travel paradoxes.

    Please refer to the paper and GitHub repository for additional details on the original research and methodologies employed in creating this dataset.

  17. AI medical chatbot

    • kaggle.com
    Updated Aug 15, 2024
    Cite
    Yousef Saeedian (2024). AI medical chatbot [Dataset]. https://www.kaggle.com/datasets/yousefsaeedian/ai-medical-chatbot
    Explore at:
    Croissant is a format for machine-learning datasets. Learn more about this at mlcommons.org/croissant.
    Dataset updated
    Aug 15, 2024
    Dataset provided by
    Kaggle (http://kaggle.com/)
    Authors
    Yousef Saeedian
    License

    Apache License, v2.0 (https://www.apache.org/licenses/LICENSE-2.0)
    License information was derived automatically

    Description

    Dataset Description:

    This dataset comprises transcriptions of conversations between doctors and patients, providing valuable insights into the dynamics of medical consultations. It includes a wide range of interactions, covering various medical conditions, patient concerns, and treatment discussions. The data is structured to capture both the questions and concerns raised by patients, as well as the medical advice, diagnoses, and explanations provided by doctors.

    Key Features:

    • Doctor and Patient Roles: Each conversation is annotated with the role of the speaker (doctor or patient), making it easy to analyze communication patterns.
    • Medical Context: The dataset includes diverse scenarios, from routine check-ups to more complex medical discussions, offering a broad spectrum of healthcare dialogues.
    • Natural Language: The conversations are presented in natural language, allowing for the development and testing of NLP models focused on healthcare communication.
    • Applications: This dataset can be used for various applications, such as building dialogue systems, analyzing communication efficacy, developing medical NLP models, and enhancing patient care through better understanding of doctor-patient interactions.

    Potential Use Cases:

    • NLP Model Training: Train models to understand and generate medical dialogues.
    • Healthcare Communication Studies: Analyze communication strategies between doctors and patients to improve healthcare delivery.
    • Medical Chatbots: Develop intelligent medical chatbots that can simulate doctor-patient conversations.
    • Patient Experience Enhancement: Identify common patient concerns and doctor responses to enhance patient care strategies.

    This dataset is a valuable resource for researchers, data scientists, and healthcare professionals interested in the intersection of technology and medicine, aiming to improve healthcare communication through data-driven approaches.

  18. Alpha Insights: US Funds

    • kaggle.com
    Updated Feb 12, 2024
    Cite
    willian oliveira gibin (2024). Alpha Insights: US Funds [Dataset]. http://doi.org/10.34740/kaggle/dsv/7614015
    Explore at:
    Croissant is a format for machine-learning datasets. Learn more about this at mlcommons.org/croissant.
    Dataset updated
    Feb 12, 2024
    Dataset provided by
    Kaggle
    Authors
    willian oliveira gibin
    License

    https://creativecommons.org/publicdomain/zero/1.0/

    Description


    US Funds Dataset: Unlocking Insights for Informed Investment Decisions

    Exchange-Traded Funds (ETFs) have gained significant popularity in recent years as a low-cost alternative to Mutual Funds. This dataset, compiled from Yahoo Finance, offers a comprehensive overview of the US funds market, encompassing 23,783 Mutual Funds and 2,310 ETFs.

    Data

    The dataset provides a wealth of information on each fund, including:

    • General fund aspects: total net assets, fund family, inception date, expense ratios, and more.
    • Portfolio indicators: cash allocation, sector weightings, holdings diversification, and other key metrics.
    • Historical returns: year-to-date, 1-year, 3-year, and other performance data for different time periods.
    • Financial ratios: price/earnings ratio, Treynor and Sharpe ratios, alpha, beta, and ESG scores.

    Applications

    This dataset can be leveraged by investors, researchers, and financial professionals for a variety of purposes, including:

    • Investment analysis: comparing the performance and characteristics of Mutual Funds and ETFs to make informed investment decisions.
    • Portfolio construction: using the data to build diversified portfolios that align with investment goals and risk tolerance.
    • Research and analysis: studying market trends, fund behavior, and other factors to gain insights into the US funds market.

    Inspiration and Updates

    The dataset was inspired by the surge of interest in ETFs in 2017 and the subsequent shift away from Mutual Funds. The data is sourced from Yahoo Finance, a publicly available website, ensuring transparency and accessibility. Updates are planned every 1-2 semesters to keep the data current and relevant.

    Conclusion

    This comprehensive dataset offers a valuable resource for anyone seeking to gain a deeper understanding of the US funds market. By providing detailed information on a wide range of funds, the dataset empowers investors to make informed decisions and build successful investment portfolios.

    Access the dataset and unlock the insights it offers to make informed investment decisions.

  19. The Simpsons character classification, an example

    • kaggle.com
    zip
    Updated Apr 27, 2020
    Cite
    Juan (2020). The Simpsons character classification, an example [Dataset]. https://www.kaggle.com/jfgm2018/the-simpsons-dataset-compilation-49-characters
    Explore at:
    zip (1531502158 bytes). Available download formats
    Dataset updated
    Apr 27, 2020
    Authors
    Juan
    License

    https://creativecommons.org/publicdomain/zero/1.0/

    Description

    Context

    After reading the inspiring work from Alexandre Attia (https://medium.com/alex-attia-blog/the-simpsons-character-recognition-using-keras-d8e1796eae36), and with the goal to learn how to benefit from the machine learning potential, I made my own approach to "The Simpsons" character recognition and detection problem.

    Most of the work collecting and labeling the images was already done by Alexandre (https://www.kaggle.com/alexattia/the-simpsons-characters-dataset). Also, his approach was clearly explained and is, I would say, in most senses the best I found. My approach is far simpler, and also less efficient, as I was taking a rough approach to the problem from my point of view as a complete beginner.

    At the time I began this project (2019), the set from Alexandre was not as complete as it is nowadays. For this reason, I completed some of the characters from the original 22 available when I first downloaded the character set, extending it up to 49 characters.

    This approach is based on Haar Cascades for detection and a CNN for classification.

    Content

    The CNN that recognizes the characters needs the help of a Haar Cascade. Haar Cascades are not the best approach to this problem, but as they are easy to train and the same dataset can be used for both the CNN and the Haar Cascade, they were adopted for this version. In order to make the Haar Cascades more efficient, the datasets were processed so that only the faces of the characters were used for training. That also means that our CNN will recognize the characters only by the face and not by the shape of their bodies.

    Haar Cascades were trained in the usual way, using OpenCV and the procedure stated elsewhere (there are several works about Haar Cascades). To give a brief review of the process:

    1. Create the set of images that will be used as "positive" images, i.e., those that contain the face of the character. These are usually stored in a folder called "info".
    2. Create a file ("info.lst") with a list of all the images that will be used for the Haar Cascade training. In this step, the data were processed so that only the face was in the image. The "info.lst" file contains the name of each file and the coordinates of the region of interest, which here means that the whole image is our region of interest. As we have enough images in this case, generating positive images from negative images (those where the face is not present) has not been done.
    3. Create a folder with the set of all negative images (usually called "neg").
    4. Run the command from OpenCV with the desired options (all these options are explained at https://docs.opencv.org/master/dc/d88/tutorial_traincascade.html). For example:

    opencv_traincascade -data data -vec positives.vec -bg bg.txt -numPos 6000 -numNeg 3000 -numStages 50 -w 64 -h 64 -precalcValBufSize 16384 -precalcIdxBufSize 16384

    Once trained, the Haar Cascade produces a "cascade.xml" file as a result. In order to test the CNN, we use the trained CNN together with the trained cascade through a Python program. The Haar Cascade takes the image and recognizes that there is a face. The face is "cut" from the image and sent to the CNN model, which recognizes the character.
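    A minimal inference sketch of that pipeline is shown below. The model file, the 64x64 input size and the folder of per-character training images are placeholders, and the cascade parameters will need tuning.

    ```python
    import os
    import cv2
    import numpy as np
    from tensorflow.keras.models import load_model

    # Trained artifacts: the Haar Cascade from opencv_traincascade and the CNN
    cascade = cv2.CascadeClassifier("cascade.xml")
    model = load_model("simpsons_cnn.h5")            # placeholder model file
    class_names = sorted(os.listdir("characters/"))  # placeholder: one folder per character,
                                                     # in the same order used for training

    image = cv2.imread("frame.jpg")
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)

    # Detect candidate faces, crop them, and let the CNN name the character
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5, minSize=(64, 64))
    for (x, y, w, h) in faces:
        face = cv2.resize(image[y:y + h, x:x + w], (64, 64)) / 255.0  # assuming a 64x64 RGB input
        probs = model.predict(face[np.newaxis, ...], verbose=0)[0]
        print(class_names[int(np.argmax(probs))], float(probs.max()))
    ```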

    Acknowledgements

    I would like to acknowledge the really good approach and motivation from Alexandre Attia and the help of Javier Sotres, which made the training for this test possible.

    Inspiration

    The main inspiration for this test was to make a quick approach to the problem of feature recognition, albeit using a more entertaining dataset. The best conclusion we can draw from this effort is that machine learning could easily be implemented for quick routines in a laboratory, where images are one of the most important sources of data.

  20. Risk of Palm Oil

    • kaggle.com
    Updated Jun 9, 2024
    Cite
    willian oliveira gibin (2024). Risk of Palm Oil [Dataset]. http://doi.org/10.34740/kaggle/dsv/8650059
    Explore at:
    Croissant is a format for machine-learning datasets. Learn more about this at mlcommons.org/croissant.
    Dataset updated
    Jun 9, 2024
    Dataset provided by
    Kaggle
    Authors
    willian oliveira gibin
    License

    https://creativecommons.org/publicdomain/zero/1.0/

    Description

    Graphs for this dataset were created in Looker Studio, Tableau and Power BI.

    In a large-scale consumer survey across the UK population on the perceptions of vegetable oils, palm oil was deemed to be the least environmentally friendly.1

    It wasn’t even close. 41% of people thought palm oil was ‘environmentally unfriendly,’ compared to 15% for soybean oil, 9% for rapeseed, 5% for sunflower, and 2% for olive oil. 43% also answered ‘Don’t know,’ meaning that almost no one thought it was environmentally friendly.

    Retailers know that this is becoming an important driver of consumer choices. From shampoos to detergents and from chocolate to cookies, companies are trying to eliminate palm oil from their products. There are now long lists of companies that have done so [Google ‘palm oil free’ and you will find an endless supply]. Many online grocery stores now offer the option to apply a ‘palm-oil free’ filter when browsing their products.2

    Why are consumers turning their back on palm oil? And is this reputation justified?

    In this article, I address some key questions about palm oil production: how has it changed, where is it grown, and how has this affected deforestation and biodiversity? The story of palm oil is more complex than it is often portrayed.

    Global demand for vegetable oils has increased rapidly over the last 50 years. As palm oil is the most productive oil crop, it has taken up a lot of this production. This has had a negative impact on the environment, particularly in Indonesia and Malaysia. But it’s not clear that the alternatives would have fared any better. In fact, because we can produce up to 20 times as much oil per hectare from palm versus the alternatives, it has probably spared a lot of environmental impacts from elsewhere.
