86 datasets found
1. Data from: Mining Distance-Based Outliers in Near Linear Time

    • catalog.data.gov
    • datasets.ai
    Updated Apr 11, 2025
    + more versions
    Cite
    Dashlink (2025). Mining Distance-Based Outliers in Near Linear Time [Dataset]. https://catalog.data.gov/dataset/mining-distance-based-outliers-in-near-linear-time
    Dataset updated
    Apr 11, 2025
    Dataset provided by
    Dashlink
    Description

Full title: Mining Distance-Based Outliers in Near Linear Time with Randomization and a Simple Pruning Rule

Abstract: Defining outliers by their distance to neighboring examples is a popular approach to finding unusual examples in a data set. Recently, much work has been conducted with the goal of finding fast algorithms for this task. We show that a simple nested loop algorithm that in the worst case is quadratic can give near linear time performance when the data is in random order and a simple pruning rule is used. We test our algorithm on real high-dimensional data sets with millions of examples and show that the near linear scaling holds over several orders of magnitude. Our average case analysis suggests that much of the efficiency is because the time to process non-outliers, which are the majority of examples, does not depend on the size of the data set.
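A minimal sketch of the pruning idea from the abstract (using the fixed-radius variant of distance-based outliers rather than the authors' exact top-n algorithm): scan in random order and drop a candidate as soon as any neighbor lies within the cutoff, since it can then no longer qualify as an outlier.

```
import math
import random

def distance_outliers(points, cutoff):
    data = list(points)
    random.shuffle(data)  # random order is what makes pruning effective early
    outliers = []
    for i, x in enumerate(data):
        # The inner scan stops (prunes) at the first sufficiently close neighbor.
        if not any(i != j and math.dist(x, y) <= cutoff
                   for j, y in enumerate(data)):
            outliers.append(x)
    return outliers

points = [(0.0, 0.0), (0.1, 0.0), (0.0, 0.1), (5.0, 5.0)]
print(distance_outliers(points, cutoff=1.0))  # [(5.0, 5.0)]
```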

2. Data Mining Tools Market Research Report 2033

    • growthmarketreports.com
    csv, pdf, pptx
    Updated Aug 4, 2025
    Cite
    Growth Market Reports (2025). Data Mining Tools Market Research Report 2033 [Dataset]. https://growthmarketreports.com/report/data-mining-tools-market
Available download formats: pdf, csv, pptx
    Dataset updated
    Aug 4, 2025
    Dataset authored and provided by
    Growth Market Reports
    Time period covered
    2024 - 2032
    Area covered
    Global
    Description

    Data Mining Tools Market Outlook




    According to our latest research, the global Data Mining Tools market size reached USD 1.93 billion in 2024, reflecting robust industry momentum. The market is expected to grow at a CAGR of 12.7% from 2025 to 2033, reaching a projected value of USD 5.69 billion by 2033. This growth is primarily driven by the increasing adoption of advanced analytics across diverse industries, rapid digital transformation, and the necessity for actionable insights from massive data volumes.
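A quick check of the arithmetic behind these figures, using the report's own numbers:

```
# USD 1.93B in 2024 compounded at a 12.7% CAGR over the nine years 2025-2033.
base_usd_bn = 1.93
cagr = 0.127
years = 9

projected = base_usd_bn * (1 + cagr) ** years
print(f"{projected:.2f} billion USD")  # ~5.66, in line with the reported ~5.69 (rounded inputs)
```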




    One of the pivotal growth factors propelling the Data Mining Tools market is the exponential rise in data generation, particularly through digital channels, IoT devices, and enterprise applications. Organizations across sectors are leveraging data mining tools to extract meaningful patterns, trends, and correlations from structured and unstructured data. The need for improved decision-making, operational efficiency, and competitive advantage has made data mining an essential component of modern business strategies. Furthermore, advancements in artificial intelligence and machine learning are enhancing the capabilities of these tools, enabling predictive analytics, anomaly detection, and automation of complex analytical tasks, which further fuels market expansion.




    Another significant driver is the growing demand for customer-centric solutions in industries such as retail, BFSI, and healthcare. Data mining tools are increasingly being used for customer relationship management, targeted marketing, fraud detection, and risk management. By analyzing customer behavior and preferences, organizations can personalize their offerings, optimize marketing campaigns, and mitigate risks. The integration of data mining tools with cloud platforms and big data technologies has also simplified deployment and scalability, making these solutions accessible to small and medium-sized enterprises (SMEs) as well as large organizations. This democratization of advanced analytics is creating new growth avenues for vendors and service providers.




    The regulatory landscape and the increasing emphasis on data privacy and security are also shaping the development and adoption of Data Mining Tools. Compliance with frameworks such as GDPR, HIPAA, and CCPA necessitates robust data governance and transparent analytics processes. Vendors are responding by incorporating features like data masking, encryption, and audit trails into their solutions, thereby enhancing trust and adoption among regulated industries. Additionally, the emergence of industry-specific data mining applications, such as fraud detection in BFSI and predictive diagnostics in healthcare, is expanding the addressable market and fostering innovation.




    From a regional perspective, North America currently dominates the Data Mining Tools market owing to the early adoption of advanced analytics, strong presence of leading technology vendors, and high investments in digital transformation. However, the Asia Pacific region is emerging as a lucrative market, driven by rapid industrialization, expansion of IT infrastructure, and growing awareness of data-driven decision-making in countries like China, India, and Japan. Europe, with its focus on data privacy and digital innovation, also represents a significant market share, while Latin America and the Middle East & Africa are witnessing steady growth as organizations in these regions modernize their operations and adopt cloud-based analytics solutions.





    Component Analysis




The Component segment of the Data Mining Tools market is bifurcated into Software and Services. Software remains the dominant segment, accounting for the majority of the market share in 2024. This dominance is attributed to the continuous evolution of data mining algorithms, the proliferation of user-friendly graphical interfaces, and the integration of advanced analytics capabilities such as machine learning, artificial intelligence, and natural language processing…

3. Educational Attainment in North Carolina Public Schools: Use of statistical...

    • data.mendeley.com
    Updated Nov 14, 2018
    Cite
    Scott Herford (2018). Educational Attainment in North Carolina Public Schools: Use of statistical modeling, data mining techniques, and machine learning algorithms to explore 2014-2017 North Carolina Public School datasets. [Dataset]. http://doi.org/10.17632/6cm9wyd5g5.1
    Dataset updated
    Nov 14, 2018
    Authors
    Scott Herford
    License

Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

The purpose of data mining analysis is always to find patterns in the data using techniques such as classification or regression. It is not always feasible to apply classification algorithms directly to a dataset; before doing any work on the data, it has to be pre-processed, which normally involves feature selection and dimensionality reduction. We tried to use clustering as a way to reduce the dimension of the data and to create new features. In our project, using clustering prior to classification did not improve performance much; a likely reason is that the features we selected for clustering were not well suited to it. Because of the nature of the data, classification tasks provide more information to work with in terms of improving knowledge and overall performance metrics.

From the dimensionality reduction perspective: clustering differs from Principal Component Analysis, which guarantees finding the best linear transformation that reduces the number of dimensions with a minimum loss of information. Using clusters to reduce the data dimension can lose a lot of information, since clustering techniques are based on a metric of 'distance', and at high dimensions Euclidean distance loses pretty much all meaning. Therefore, "reducing" dimensionality by mapping data points to cluster numbers is not always good, since you may lose almost all the information.

From the creating-new-features perspective: clustering analysis creates labels based on the patterns in the data, which brings uncertainty into the data. When clustering precedes classification, the choice of the number of clusters strongly affects clustering performance and, in turn, classification performance. If the subset of features we cluster on is well suited to it, the overall classification performance may improve; for example, if we run k-means on a small number of numerical features, the overall classification performance may be better. We did not lock in the clustering outputs with a random_state, in order to see whether they were stable. Our assumption was that if the results vary highly from run to run, which they definitely did, the data may simply not cluster well with the selected methods. The upshot was that our results were not much better than random when applying clustering in the data preprocessing.

Finally, it is important to ensure a feedback loop is in place to continuously collect the same data, in the same format, from which the models were created. This feedback loop can be used to measure the models' real-world effectiveness and to revise the models from time to time as things change.
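A minimal sketch of the pipeline discussed above, using scikit-learn on synthetic stand-in data: k-means assignments are appended as an extra feature so the effect on classification can be compared directly.

```
import numpy as np
from sklearn.cluster import KMeans
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

# Synthetic stand-in for the school dataset (the real features are not bundled here).
X, y = make_classification(n_samples=1000, n_features=20, random_state=0)

# Cluster labels as an extra engineered feature. No random_state on KMeans,
# mirroring the stability check described above: labels may vary run to run.
labels = KMeans(n_clusters=8, n_init=10).fit_predict(X)
X_aug = np.column_stack([X, labels])

print("baseline:", cross_val_score(RandomForestClassifier(random_state=0), X, y).mean())
print("with cluster feature:", cross_val_score(RandomForestClassifier(random_state=0), X_aug, y).mean())
```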

4. Data Mining Software Market Research Report 2033

    • growthmarketreports.com
    csv, pdf, pptx
    Updated Aug 22, 2025
    Cite
    Growth Market Reports (2025). Data Mining Software Market Research Report 2033 [Dataset]. https://growthmarketreports.com/report/data-mining-software-market
Available download formats: csv, pdf, pptx
    Dataset updated
    Aug 22, 2025
    Dataset authored and provided by
    Growth Market Reports
    Time period covered
    2024 - 2032
    Area covered
    Global
    Description

    Data Mining Software Market Outlook



    According to our latest research, the global Data Mining Software market size in 2024 stands at USD 12.7 billion. This market is experiencing robust expansion, driven by the growing demand for actionable insights across industries, and is expected to reach USD 38.1 billion by 2033, registering a remarkable CAGR of 13.1% during the forecast period. The proliferation of big data, increasing adoption of artificial intelligence, and the need for advanced analytics are the primary growth factors propelling the market forward.




    The accelerating digitization across sectors is a key factor fueling the growth of the Data Mining Software market. Organizations are generating and collecting vast amounts of data at unprecedented rates, requiring sophisticated tools to extract meaningful patterns and actionable intelligence. The rise of Internet of Things (IoT) devices, social media platforms, and connected infrastructure has further intensified the need for robust data mining solutions. Businesses are leveraging data mining software to enhance decision-making, optimize operations, and gain a competitive edge. The integration of machine learning and artificial intelligence algorithms into data mining tools is enabling organizations to automate complex analytical tasks, uncover hidden trends, and predict future outcomes with greater accuracy. As enterprises continue to recognize the value of data-driven strategies, the demand for advanced data mining software is poised for sustained growth.




    Another significant factor contributing to the market’s expansion is the increasing regulatory pressure on data management and security. Regulatory frameworks such as GDPR, HIPAA, and CCPA are compelling organizations to adopt comprehensive data management practices, which include advanced data mining software for compliance monitoring and risk assessment. These regulations are driving investments in software that can efficiently process, analyze, and secure large data sets while ensuring transparency and accountability. Additionally, the surge in cyber threats and data breaches has heightened the importance of robust analytics solutions for anomaly detection, fraud prevention, and real-time threat intelligence. As a result, sectors such as BFSI, healthcare, and government are prioritizing the deployment of data mining solutions to safeguard sensitive information and maintain regulatory compliance.




    The growing emphasis on customer-centric strategies is also playing a pivotal role in the expansion of the Data Mining Software market. Organizations across retail, telecommunications, and financial services are utilizing data mining tools to personalize customer experiences, enhance marketing campaigns, and improve customer retention rates. By analyzing customer behavior, preferences, and feedback, businesses can tailor their offerings and communication strategies to meet evolving consumer demands. The ability to derive granular insights from vast customer data sets enables companies to innovate rapidly and stay ahead of market trends. Furthermore, the integration of data mining with customer relationship management (CRM) and enterprise resource planning (ERP) systems is streamlining business processes and fostering a culture of data-driven decision-making.




    From a regional perspective, North America currently dominates the Data Mining Software market, supported by a mature technological infrastructure, high adoption of cloud-based analytics, and a strong presence of leading software vendors. Europe follows closely, driven by stringent data privacy regulations and increasing investments in digital transformation initiatives. The Asia Pacific region is emerging as a high-growth market, fueled by rapid industrialization, expanding IT sectors, and the proliferation of digital services across economies such as China, India, and Japan. Latin America and the Middle East & Africa are also witnessing increasing adoption, particularly in sectors like banking, telecommunications, and government, as organizations seek to harness the power of data for strategic growth.






  5. Make Data Count Dataset - MinerU Extraction

    • kaggle.com
    zip
    Updated Aug 26, 2025
    Cite
    Omid Erfanmanesh (2025). Make Data Count Dataset - MinerU Extraction [Dataset]. https://www.kaggle.com/datasets/omiderfanmanesh/make-data-count-dataset-mineru-extraction
Available download formats: zip (4272989320 bytes)
    Dataset updated
    Aug 26, 2025
    Authors
    Omid Erfanmanesh
    License

Apache License, v2.0: https://www.apache.org/licenses/LICENSE-2.0
    License information was derived automatically

    Description

    Dataset Description

    This dataset contains PDF-to-text conversions of scientific research articles, prepared for the task of data citation mining. The goal is to identify references to research datasets within full-text scientific papers and classify them as Primary (data generated in the study) or Secondary (data reused from external sources).

    The PDF articles were processed using MinerU, which converts scientific PDFs into structured machine-readable formats (JSON, Markdown, images). This ensures participants can access both the raw text and layout information needed for fine-grained information extraction.

    Files and Structure

    Each paper directory contains the following files:

    • *_origin.pdf The original PDF file of the scientific article.

    • *_content_list.json Structured extraction of the PDF content, where each object represents a text or figure element with metadata. Example entry:

      {
       "type": "text",
       "text": "10.1002/2017JC013030",
       "text_level": 1,
       "page_idx": 0
      }
      
    • full.md The complete article content in Markdown format (linearized for easier reading).

    • images/ Folder containing figures and extracted images from the article.

    • layout.json Page layout metadata, including positions of text blocks and images.

    Data Mining Task

    The aim is to detect dataset references in the article text and classify them:

    Each dataset mention must be labeled as:

    • Primary: Data generated by the paper (new experiments, field observations, sequencing runs, etc.).
    • Secondary: Data reused from external repositories or prior studies.

    Training and Test Splits

    • train/ → Articles with gold-standard labels (train_labels.csv).
    • test/ → Articles without labels, used for evaluation.
    • train_labels.csv → Ground truth with:

      • article_id: Research paper DOI.
      • dataset_id: Extracted dataset identifier.
      • type: Citation type (Primary / Secondary).
    • sample_submission.csv → Example submission format.

    Example

Paper: https://doi.org/10.1098/rspb.2016.1151
Data: https://doi.org/10.5061/dryad.6m3n9
In-text span:

    "The data we used in this publication can be accessed from Dryad at doi:10.5061/dryad.6m3n9." Citation type: Primary

    This dataset enables participants to develop and test NLP systems for:

    • Information extraction (locating dataset mentions).
    • Identifier normalization (mapping mentions to persistent IDs).
    • Citation classification (distinguishing Primary vs Secondary data usage).
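A minimal rule-based starting point for all three subtasks, assuming a paper's full.md has been read into a string; the DOI pattern and the Primary keyword hints are illustrative only, not part of the dataset:

```
import re

# A DOI-shaped pattern; real systems need broader identifier coverage
# (repository accessions, URLs, etc.).
DOI_RE = re.compile(r"10\.\d{4,9}/[^\s\"'<>]+")

# Hypothetical keyword hints for the Primary label, for illustration only.
PRIMARY_HINTS = ("data we used in this publication", "we deposited", "generated in this study")

def find_dataset_mentions(full_md):
    """Return (identifier, predicted_type) pairs for DOI-like mentions."""
    mentions = []
    for m in DOI_RE.finditer(full_md):
        context = full_md[max(0, m.start() - 200):m.end() + 200].lower()
        label = "Primary" if any(h in context for h in PRIMARY_HINTS) else "Secondary"
        mentions.append((m.group(0).rstrip(".,;)"), label))  # trim trailing punctuation
    return mentions

text = "The data we used in this publication can be accessed from Dryad at doi:10.5061/dryad.6m3n9."
print(find_dataset_mentions(text))  # [('10.5061/dryad.6m3n9', 'Primary')]
```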
  6. Mining Distance-Based Outliers in Near Linear Time - Dataset - NASA Open...

    • data.nasa.gov
    Updated Mar 31, 2025
    Cite
    nasa.gov (2025). Mining Distance-Based Outliers in Near Linear Time - Dataset - NASA Open Data Portal [Dataset]. https://data.nasa.gov/dataset/mining-distance-based-outliers-in-near-linear-time
    Dataset updated
    Mar 31, 2025
    Dataset provided by
NASA (http://nasa.gov/)
    Description

Full title: Mining Distance-Based Outliers in Near Linear Time with Randomization and a Simple Pruning Rule

Abstract: Defining outliers by their distance to neighboring examples is a popular approach to finding unusual examples in a data set. Recently, much work has been conducted with the goal of finding fast algorithms for this task. We show that a simple nested loop algorithm that in the worst case is quadratic can give near linear time performance when the data is in random order and a simple pruning rule is used. We test our algorithm on real high-dimensional data sets with millions of examples and show that the near linear scaling holds over several orders of magnitude. Our average case analysis suggests that much of the efficiency is because the time to process non-outliers, which are the majority of examples, does not depend on the size of the data set.

7. Data from: Comprehensive Evaluation of Association Measures for Fault...

    • researchdata.smu.edu.sg
    rar
    Updated May 31, 2023
    Cite
    LUCIA Lucia; David LO; Lingxiao JIANG; Aditya Budi (2023). Data from: Comprehensive Evaluation of Association Measures for Fault Localization [Dataset]. http://doi.org/10.25440/smu.12062796.v1
Available download formats: rar
    Dataset updated
    May 31, 2023
    Dataset provided by
    SMU Research Data Repository (RDR)
    Authors
    LUCIA Lucia; David LO; Lingxiao JIANG; Aditya Budi
    License

Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

This record contains the underlying research data for the publication "Comprehensive Evaluation of Association Measures for Fault Localization"; the full text is available from https://ink.library.smu.edu.sg/sis_research/1330.

In the statistics and data mining communities, many measures have been proposed to gauge the strength of association between two variables of interest, such as odds ratio, confidence, Yule's Y, Yule's Q, Kappa, and the Gini index. These association measures have been used in various domains, for example to evaluate whether a particular medical practice is positively associated with curing a disease, or whether a particular marketing strategy is positively associated with an increase in revenue. This paper models the problem of locating faults as an association between the execution or non-execution of particular program elements and failures. Special measures, termed suspiciousness measures, have been proposed for this task; two state-of-the-art measures are Tarantula and Ochiai, which differ from many other statistical measures. To the best of our knowledge, no study has comprehensively investigated the effectiveness of various association measures in localizing faults. This paper fills the gap by evaluating 20 well-known association measures and comparing their effectiveness in fault localization tasks with Tarantula and Ochiai. Evaluation on the Siemens programs shows that a number of association measures perform statistically comparably to Tarantula and Ochiai.
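For concreteness, a small sketch of the two suspiciousness measures named above, where ef/ep are the numbers of failing/passing runs that execute a program element and tf/tp are the totals:

```
import math

def tarantula(ef, ep, tf, tp):
    fail_ratio = ef / tf
    pass_ratio = ep / tp
    total = fail_ratio + pass_ratio
    return fail_ratio / total if total else 0.0

def ochiai(ef, ep, tf, tp):
    denom = math.sqrt(tf * (ef + ep))
    return ef / denom if denom else 0.0

# An element executed mostly by failing runs scores high under both measures.
print(tarantula(ef=3, ep=1, tf=4, tp=6))  # ~0.82
print(ochiai(ef=3, ep=1, tf=4, tp=6))     # 0.75
```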

  8. road sign recognition

    • kaggle.com
    zip
    Updated May 2, 2021
    Cite
    Said Azizov (2021). road sign recognition [Dataset]. https://www.kaggle.com/michaelcripman/road-sign-recognition
Available download formats: zip (3523596349 bytes)
    Dataset updated
    May 2, 2021
    Authors
    Said Azizov
    License

http://opendatacommons.org/licenses/dbcl/1.0/

    Description

Data

The input data will be given as an archive with the task data:

• train.csv - training image annotations;
• train_images/ - folder with training images;
• 5_15_2_vocab.json - decoding of the attributes of the 5_15_2 sign.

The annotations contain the fields:

• filename - path to the sign image;
• label - the class label for the sign in the image.

Note! The signs 3_24, 3_25, 5_15_2, 5_31 and 6_2 have separate attributes. These attributes in the label field are separated by a "+" character, for example, 3_24 + 100. For sign 5_15_2 the attribute is the direction of the arrow; for the remaining signs the attribute is the numbers on the sign.
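A small helper (not part of the dataset) for splitting composite labels of the form described above:

```
def parse_label(label):
    """Split a label like '3_24 + 100' into (sign_class, attribute)."""
    if "+" in label:
        sign, attribute = (part.strip() for part in label.split("+", 1))
        return sign, attribute
    return label.strip(), None

print(parse_label("3_24 + 100"))   # ('3_24', '100')
print(parse_label("1_1"))          # ('1_1', None) - sign without an attribute
```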

9. Data from: Distributed Anomaly Detection using 1-class SVM for Vertically...

    • catalog.data.gov
    • s.cnmilf.com
    Updated Apr 11, 2025
    Cite
    Dashlink (2025). Distributed Anomaly Detection using 1-class SVM for Vertically Partitioned Data [Dataset]. https://catalog.data.gov/dataset/distributed-anomaly-detection-using-1-class-svm-for-vertically-partitioned-data
    Dataset updated
    Apr 11, 2025
    Dataset provided by
    Dashlink
    Description

There has been a tremendous increase in the volume of sensor data collected over the last decade for different monitoring tasks. For example, petabytes of earth science data are collected from modern satellites, in-situ sensors and different climate models. Similarly, huge amounts of flight operational data are downloaded for different commercial airlines. These different types of datasets need to be analyzed for finding outliers. Information extraction from such rich data sources using advanced data mining methodologies is a challenging task, not only because of the massive volume of data, but also because these datasets are physically stored at different geographical locations with only a subset of features available at any location. Moving these petabytes of data to a single location may waste a lot of bandwidth. To solve this problem, in this paper, we present a novel algorithm which can identify outliers in the entire data without moving all the data to a single location. The method we propose only centralizes a very small sample from the different data subsets at different locations. We analytically prove and experimentally verify that the algorithm offers high accuracy compared to complete centralization with only a fraction of the communication cost. We show that our algorithm is highly relevant to both earth sciences and aeronautics by describing applications in these domains. The performance of the algorithm is demonstrated on two large publicly available datasets: (1) the NASA MODIS satellite images and (2) a simulated aviation dataset generated by the 'Commercial Modular Aero-Propulsion System Simulation' (CMAPSS).
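A loose, single-site illustration (not the paper's algorithm) of the core idea that a small centralized sample can stand in for the full data when fitting a one-class model:

```
import numpy as np
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(0)
local_data = rng.normal(size=(100_000, 5))  # stand-in for one site's share of the data

# Centralize only a small random sample and fit the one-class model on it.
idx = rng.choice(len(local_data), size=2_000, replace=False)
model = OneClassSVM(nu=0.01).fit(local_data[idx])

# Score the full data where it lives; -1 marks candidate outliers.
flags = model.predict(local_data)
print("candidate outliers:", int((flags == -1).sum()))
```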

10. LSC (Leicester Scientific Corpus)

    • figshare.le.ac.uk
    Updated Apr 15, 2020
    + more versions
    Cite
    Neslihan Suzen (2020). LSC (Leicester Scientific Corpus) [Dataset]. http://doi.org/10.25392/leicester.data.9449639.v2
    Dataset updated
    Apr 15, 2020
    Dataset provided by
    University of Leicester
    Authors
    Neslihan Suzen
    License

Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Area covered
    Leicester
    Description

    The LSC (Leicester Scientific Corpus)

April 2020, by Neslihan Suzen, PhD student at the University of Leicester (ns433@leicester.ac.uk). Supervised by Prof Alexander Gorban and Dr Evgeny Mirkes.

The data are extracted from the Web of Science [1]. You may not copy or distribute these data in whole or in part without the written consent of Clarivate Analytics.

[Version 2] A further cleaning is applied in Data Processing for LSC Abstracts in Version 1*. Details of the cleaning procedure are explained in Step 6.

* Suzen, Neslihan (2019): LSC (Leicester Scientific Corpus). figshare. Dataset. https://doi.org/10.25392/leicester.data.9449639.v1

Getting Started

This text provides information on the LSC (Leicester Scientific Corpus) and the pre-processing steps applied to abstracts, and describes the structure of the files that organise the corpus. The corpus was created for future work on the quantification of the meaning of research texts, and is made available for use in Natural Language Processing projects.

LSC is a collection of abstracts of articles and proceedings papers published in 2014 and indexed by the Web of Science (WoS) database [1]. The corpus contains only documents in English. Each document in the corpus contains the following parts:

1. Authors: the list of authors of the paper
2. Title: the title of the paper
3. Abstract: the abstract of the paper
4. Categories: one or more categories from the list of categories [2]; the full list is presented in the file 'List_of_Categories.txt'
5. Research Areas: one or more research areas from the list of research areas [3]; the full list is presented in the file 'List_of_Research_Areas.txt'
6. Total Times Cited: the number of times the paper was cited by other items from all databases within the Web of Science platform [4]
7. Times Cited in Core Collection: the total number of times the paper was cited by other papers within the WoS Core Collection [4]

The corpus was collected online in July 2018 and contains the number of citations from publication date to July 2018. We describe a document as the collection of information (about a paper) listed above. The total number of documents in LSC is 1,673,350.

Data Processing

Step 1: Downloading the Data Online. The dataset was collected manually by exporting documents as tab-delimited files online. All documents are available online.

Step 2: Importing the Dataset to R. The LSC was collected as TXT files. All documents are extracted to R.

Step 3: Cleaning the Data from Documents with an Empty Abstract or without a Category. As our research is based on the analysis of abstracts and categories, all documents with empty abstracts or without categories are removed.

Step 4: Identification and Correction of Concatenated Words in Abstracts. Medicine-related publications in particular use 'structured abstracts', which are divided into sections with distinct headings such as introduction, aim, objective, method, result, conclusion, etc. The tool used for extracting abstracts concatenates these section headings with the first word of the section; for instance, we observe words such as ConclusionHigher and ConclusionsRT. Such words were detected and identified by sampling medicine-related publications with human intervention, and each detected concatenated word was split into two words (for instance, 'ConclusionHigher' is split into 'Conclusion' and 'Higher'). The section headings in such abstracts are listed below:

Background, Method(s), Design, Theoretical, Measurement(s), Location, Aim(s), Methodology, Process, Abstract, Population, Approach, Objective(s), Purpose(s), Subject(s), Introduction, Implication(s), Patient(s), Procedure(s), Hypothesis, Measure(s), Setting(s), Limitation(s), Discussion, Conclusion(s), Result(s), Finding(s), Material(s), Rationale(s), Implications for health and nursing policy

Step 5: Extracting (Sub-setting) the Data Based on Lengths of Abstracts. After correction, the lengths of the abstracts are calculated. 'Length' indicates the total number of words in the text, calculated by the same rule as Microsoft Word's 'word count' [5]. According to the APA style manual [6], an abstract should contain between 150 and 250 words. In LSC, we decided to limit the length of abstracts to between 30 and 500 words, in order to study documents with abstracts of typical length and to avoid the effect of length on the analysis.

Step 6: [Version 2] Cleaning Copyright Notices, Permission Policies, Journal Names and Conference Names from LSC Abstracts in Version 1. Conferences and journals can add a footer below the abstract text containing a copyright notice, permission policy, journal name, licence, authors' rights or conference name. The tool used for extracting and processing abstracts from the WoS database attaches such footers to the text; for example, casual observation shows that copyright notices such as 'Published by Elsevier Ltd.' appear in many texts. To avoid abnormal appearances of words in further analysis (such as bias in frequency calculation), we performed a cleaning procedure on such sentences and phrases in the abstracts of LSC version 1, removing copyright notices, conference names, journal names, authors' rights, licences and permission policies identified by sampling abstracts.

Step 7: [Version 2] Re-extracting (Sub-setting) the Data Based on Lengths of Abstracts. The cleaning procedure described in the previous step left some abstracts below our minimum length criterion (30 words); 474 texts were removed.

Step 8: Saving the Dataset into CSV Format. Documents are saved into 34 CSV files. In the CSV files, the information is organised with one record on each line, and the abstract, title, list of authors, list of categories, list of research areas, and times cited are recorded in fields.

To access the LSC for research purposes, please email ns433@le.ac.uk.

References
[1] Web of Science. Available: https://apps.webofknowledge.com/
[2] WoS Subject Categories. Available: https://images.webofknowledge.com/WOKRS56B5/help/WOS/hp_subject_category_terms_tasca.html
[3] Research Areas in WoS. Available: https://images.webofknowledge.com/images/help/WOS/hp_research_areas_easca.html
[4] Times Cited in WoS Core Collection. Available: https://support.clarivate.com/ScientificandAcademicResearch/s/article/Web-of-Science-Times-Cited-accessibility-and-variation?language=en_US
[5] Word Count. Available: https://support.office.com/en-us/article/show-word-count-3c9e6a11-a04d-43b4-977c-563a0e0d5da3
[6] American Psychological Association, Publication Manual. Washington, DC: American Psychological Association, 1983.
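A minimal sketch of the Step 4 repair described above, splitting a section heading off the capitalized word glued to it (heading list abbreviated):

```
import re

# Abbreviated subset of the section headings listed in Step 4; plural forms
# come first so "Conclusions" is not split as "Conclusion" + "s...".
HEADINGS = ("Background", "Conclusions", "Conclusion", "Methods", "Method",
            "Results", "Result", "Objective", "Introduction", "Discussion")

def split_concatenated_headings(text):
    for h in HEADINGS:
        # Insert a space when the heading is glued to a capitalized word,
        # e.g. "ConclusionHigher" -> "Conclusion Higher".
        text = re.sub(rf"\b{h}(?=[A-Z])", f"{h} ", text)
    return text

print(split_concatenated_headings("ConclusionHigher scores were observed."))
```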

11. Data from: A Local Scalable Distributed Expectation Maximization Algorithm...

    • catalog.data.gov
    • datasets.ai
    • +2more
    Updated Apr 11, 2025
    + more versions
    Cite
    Dashlink (2025). A Local Scalable Distributed Expectation Maximization Algorithm for Large Peer-to-Peer Networks [Dataset]. https://catalog.data.gov/dataset/a-local-scalable-distributed-expectation-maximization-algorithm-for-large-peer-to-peer-net
    Dataset updated
    Apr 11, 2025
    Dataset provided by
    Dashlink
    Description

This paper describes a local and distributed expectation maximization algorithm for learning the parameters of Gaussian mixture models (GMMs) in large peer-to-peer (P2P) environments. The algorithm can be used for a variety of well-known data mining tasks in distributed environments, such as clustering, anomaly detection, target tracking, and density estimation, which are necessary for many emerging P2P applications in bioinformatics, web mining and sensor networks. Centralizing all or some of the data to build global models is impractical in such P2P environments because of the large number of data sources, the asynchronous nature of P2P networks, and the dynamic nature of the data and network. The proposed algorithm takes a two-step approach. In the monitoring phase, the algorithm checks whether the model 'quality' is acceptable by using an efficient local algorithm. This is then used as a feedback loop to sample data from the network and rebuild the GMM when it is outdated. We present thorough experimental results to verify our theoretical claims.
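For reference, a minimal centralized GMM fit by EM (via scikit-learn); the paper's contribution is avoiding exactly this kind of centralization, which this sketch does not attempt to reproduce:

```
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
# Two synthetic clusters standing in for data spread across many peers.
data = np.vstack([rng.normal(0, 1, (500, 2)), rng.normal(5, 1, (500, 2))])

gmm = GaussianMixture(n_components=2, random_state=0).fit(data)  # EM under the hood
print(gmm.means_)      # recovered component means
print(gmm.converged_)  # EM convergence flag
```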

  12. Market Basket Analysis

    • kaggle.com
    zip
    Updated Dec 9, 2021
    Cite
    Aslan Ahmedov (2021). Market Basket Analysis [Dataset]. https://www.kaggle.com/datasets/aslanahmedov/market-basket-analysis
Available download formats: zip (23875170 bytes)
    Dataset updated
    Dec 9, 2021
    Authors
    Aslan Ahmedov
    Description

    Market Basket Analysis

    Market basket analysis with Apriori algorithm

The retailer wants to target customers with suggestions for the itemsets they are most likely to purchase. I was given a retailer's dataset; the transaction data covers all of the transactions that occurred over a period of time. The retailer will use the results to grow the business and provide customers with itemset suggestions, so that we can increase customer engagement, improve the customer experience, and identify customer behavior. I will solve this problem using Association Rules, an unsupervised learning technique that checks for the dependency of one data item on another data item.

    Introduction

Association rule mining is most often used when you want to find associations between different objects in a set, and it works well for finding frequent patterns in a transaction database. It can tell you which items customers frequently buy together, allowing the retailer to identify relationships between items.

    An Example of Association Rules

Assume there are 100 customers; 10 of them bought a computer mouse, 9 bought a mat for the mouse, and 8 bought both. For the rule bought computer mouse => bought mat for mouse:

• support = P(mouse & mat) = 8/100 = 0.08
• confidence = support / P(computer mouse) = 0.08 / 0.10 = 0.8
• lift = confidence / P(mat for mouse) = 0.8 / 0.09 ≈ 8.9

This is just a simple example. In practice, a rule needs the support of several hundred transactions before it can be considered statistically significant, and datasets often contain thousands or millions of transactions. The same numbers are computed directly in the sketch below.
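```
# Support, confidence, and lift for the toy example above.
n = 100       # customers
mouse = 10    # bought a computer mouse
mat = 9       # bought a mouse mat
both = 8      # bought both

support = both / n                  # P(mouse and mat) = 0.08
confidence = support / (mouse / n)  # 0.08 / 0.10 = 0.8
lift = confidence / (mat / n)       # 0.8 / 0.09 ~ 8.9

print(support, confidence, round(lift, 2))
```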

    Strategy

    • Data Import
    • Data Understanding and Exploration
• Transformation of the data, so that it is ready to be consumed by the association rules algorithm
    • Running association rules
    • Exploring the rules generated
    • Filtering the generated rules
    • Visualization of Rule

    Dataset Description

    • File name: Assignment-1_Data
    • List name: retaildata
• File format: .xlsx
• Number of Rows: 522065
    • Number of Attributes: 7

      • BillNo: 6-digit number assigned to each transaction. Nominal.
      • Itemname: Product name. Nominal.
      • Quantity: The quantities of each product per transaction. Numeric.
      • Date: The day and time when each transaction was generated. Numeric.
      • Price: Product price. Numeric.
      • CustomerID: 5-digit number assigned to each customer. Nominal.
      • Country: Name of the country where each customer resides. Nominal.

Image: https://user-images.githubusercontent.com/91852182/145270162-fc53e5a3-4ad1-4d06-b0e0-228aabcf6b70.png

    Libraries in R

First, we need to load the required libraries. Each library is briefly described below.

• arules - Provides the infrastructure for representing, manipulating and analyzing transaction data and patterns (frequent itemsets and association rules).
• arulesViz - Extends package 'arules' with various visualization techniques for association rules and itemsets, including several interactive visualizations for rule exploration.
• tidyverse - An opinionated collection of R packages designed for data science, installable and loadable in a single step.
• readxl - Read Excel files in R.
• plyr - Tools for splitting, applying and combining data.
• ggplot2 - A system for 'declaratively' creating graphics, based on "The Grammar of Graphics". You provide the data and tell 'ggplot2' how to map variables to aesthetics and what graphical primitives to use, and it takes care of the details.
• knitr - Dynamic report generation in R.
• magrittr - Provides a mechanism for chaining commands with a forward-pipe operator, %>%, which forwards a value, or the result of an expression, into the next function call/expression, with flexible support for the type of right-hand side expressions.
• dplyr - A fast, consistent tool for working with data-frame-like objects, both in memory and out of memory.

Image: https://user-images.githubusercontent.com/91852182/145270210-49c8e1aa-9753-431b-a8d5-99601bc76cb5.png

    Data Pre-processing

Next, we need to upload Assignment-1_Data.xlsx to R to read the dataset. Now we can see our data in R.

Image: https://user-images.githubusercontent.com/91852182/145270229-514f0983-3bbb-4cd3-be64-980e92656a02.png
Image: https://user-images.githubusercontent.com/91852182/145270251-6f6f6472-8817-435c-a995-9bc4bfef10d1.png

Next, we clean the data frame by removing missing values.

Image: https://user-images.githubusercontent.com/91852182/145270286-05854e1a-2b6c-490e-ab30-9e99e731eacb.png

To apply association rule mining, we need to convert the data frame into transaction data, so that all items bought together in one invoice will be in ...

  13. Data Mining for IVHM using Sparse Binary Ensembles, Phase I

    • data.nasa.gov
    application/rdfxml +5
    Updated Jun 26, 2018
    Cite
    (2018). Data Mining for IVHM using Sparse Binary Ensembles, Phase I [Dataset]. https://data.nasa.gov/dataset/Data-Mining-for-IVHM-using-Sparse-Binary-Ensembles/qfus-evzq
Available download formats: xml, tsv, csv, application/rssxml, application/rdfxml, json
    Dataset updated
    Jun 26, 2018
    License

U.S. Government Works: https://www.usa.gov/government-works
    License information was derived automatically

    Description

    In response to NASA SBIR topic A1.05, "Data Mining for Integrated Vehicle Health Management", Michigan Aerospace Corporation (MAC) asserts that our unique SPADE (Sparse Processing Applied to Data Exploitation) technology meets a significant fraction of the stated criteria and has functionality that enables it to handle many applications within the aircraft lifecycle. SPADE distills input data into highly quantized features and uses MAC's novel techniques for constructing Ensembles of Decision Trees to develop extremely accurate diagnostic/prognostic models for classification, regression, clustering, anomaly detection and semi-supervised learning tasks. These techniques are currently being employed to do Threat Assessment for satellites in conjunction with researchers at the Air Force Research Lab. Significant advantages to this approach include: 1) completely data driven; 2) training and evaluation are faster than conventional methods; 3) operates effectively on huge datasets (> billion samples X > million features), 4) proven to be as accurate as state-of-the-art techniques in many significant real-world applications. The specific goals for Phase 1 will be to work with domain experts at NASA and with our partners Boeing, SpaceX and GMV Space Systems to delineate a subset of problems that are particularly well-suited to this approach and to determine requirements for deploying algorithms on platforms of opportunity.

  14. DrivenData: Pump it Up

    • kaggle.com
    zip
    Updated Jan 21, 2021
    + more versions
    Cite
    Abid Ali Awan (2021). DrivenData: Pump it Up [Dataset]. https://www.kaggle.com/kingabzpro/drivendata-pump-it-up
Available download formats: zip (10914484 bytes)
    Dataset updated
    Jan 21, 2021
    Authors
    Abid Ali Awan
    Description

    Context

    Can you predict which water pumps are faulty?

    Using data from Taarifa and the Tanzanian Ministry of Water, can you predict which pumps are functional, which need some repairs, and which don't work at all? This is an intermediate-level practice competition. Predict one of these three classes based on a number of variables about what kind of pump is operating, when it was installed, and how it is managed. A smart understanding of which waterpoints will fail can improve maintenance operations and ensure that clean, potable water is available to communities across Tanzania.

    Content

Problem description

This is where you'll find all of the documentation about this dataset and the problem we are trying to solve. For this competition, there are three subsections to the problem description:

• Features: list of features, example of features
• Labels: list of labels
• Submission format: format example

    The features in this dataset

    Your goal is to predict the operating condition of a waterpoint for each record in the dataset. You are provided the following set of information about the waterpoints:

• amount_tsh - Total static head (amount water available to waterpoint)
• date_recorded - The date the row was entered
• funder - Who funded the well
• gps_height - Altitude of the well
• installer - Organization that installed the well
• longitude - GPS coordinate
• latitude - GPS coordinate
• wpt_name - Name of the waterpoint if there is one
• num_private -
• basin - Geographic water basin
• subvillage - Geographic location
• region - Geographic location
• region_code - Geographic location (coded)
• district_code - Geographic location (coded)
• lga - Geographic location
• ward - Geographic location
• population - Population around the well
• public_meeting - True/False
• recorded_by - Group entering this row of data
• scheme_management - Who operates the waterpoint
• scheme_name - Who operates the waterpoint
• permit - If the waterpoint is permitted
• construction_year - Year the waterpoint was constructed
• extraction_type - The kind of extraction the waterpoint uses
• extraction_type_group - The kind of extraction the waterpoint uses
• extraction_type_class - The kind of extraction the waterpoint uses
• management - How the waterpoint is managed
• management_group - How the waterpoint is managed
• payment - What the water costs
• payment_type - What the water costs
• water_quality - The quality of the water
• quality_group - The quality of the water
• quantity - The quantity of water
• quantity_group - The quantity of water
• source - The source of the water
• source_type - The source of the water
• source_class - The source of the water
• waterpoint_type - The kind of waterpoint
• waterpoint_type_group - The kind of waterpoint

Feature data example

For example, a single row in the dataset might have these values:

amount_tsh: 300.0
date_recorded: 2013-02-26
funder: Germany Republi
gps_height: 1335
installer: CES
longitude: 37.2029845
latitude: -3.22870286
wpt_name: Kwaa Hassan Ismail
num_private: 0
basin: Pangani
subvillage: Bwani
region: Kilimanjaro
region_code: 3
district_code: 5
lga: Hai
ward: Machame Uroki
population: 25
public_meeting: True
recorded_by: GeoData Consultants Ltd
scheme_management: Water Board
scheme_name: Uroki-Bomang'ombe water sup
permit: True
construction_year: 1995
extraction_type: gravity
extraction_type_group: gravity
extraction_type_class: gravity
management: water board
management_group: user-group
payment: other
payment_type: other
water_quality: soft
quality_group: good
quantity: enough
quantity_group: enough
source: spring
source_type: spring
source_class: groundwater
waterpoint_type: communal standpipe
waterpoint_type_group: communal standpipe

The labels in this dataset

Distribution of Labels (image). The labels in this dataset are simple. There are three possible values:

• functional - the waterpoint is operational and there are no repairs needed
• functional needs repair - the waterpoint is operational, but needs repairs
• non functional - the waterpoint is not operational

Submission format

The format for the submission file is simply the row id and the predicted label (for an example, see SubmissionFormat.csv on the data download page).

For example, if you just predicted that all the waterpoints were functional, your submitted .csv file would look like:

id,status_group
50785,functional
51630,functional
17168,functional
45559,functional
49871,functional
...
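A minimal all-functional baseline in pandas (a sketch; it assumes the ids can be taken from SubmissionFormat.csv as described above):

```
import pandas as pd

# Take the ids from the provided format file and predict "functional" everywhere.
sub = pd.read_csv("SubmissionFormat.csv")
sub["status_group"] = "functional"
sub.to_csv("baseline_submission.csv", index=False)
```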

    Acknowledgements

    All rights reserved with DataDriven.

  15. Wikipedia SQLITE Portable DB, Huge 5M+ Rows

    • kaggle.com
    zip
    Updated Jun 29, 2024
    Cite
    christernyc (2024). Wikipedia SQLITE Portable DB, Huge 5M+ Rows [Dataset]. https://www.kaggle.com/datasets/christernyc/wikipedia-sqlite-portable-db-huge-5m-rows/code
Available download formats: zip (6064169983 bytes)
    Dataset updated
    Jun 29, 2024
    Authors
    christernyc
    License

https://creativecommons.org/publicdomain/zero/1.0/

    Description

    The "Wikipedia SQLite Portable DB" is a compact and efficient database derived from the Kensho Derived Wikimedia Dataset (KDWD). This dataset provides a condensed subset of raw Wikimedia data in a format optimized for natural language processing (NLP) research and applications.

I am not affiliated or partnered with Kensho in any way; I just really like this dataset because it is easy for my agents to query.

Key Features:

• Contains over 5 million rows of data from English Wikipedia and Wikidata
• Stored in a portable SQLite database format for easy integration and querying
• Includes a link-annotated corpus of English Wikipedia pages and a compact sample of the Wikidata knowledge base
• Ideal for NLP tasks, machine learning, data analysis, and research projects

    The database consists of four main tables:

    • items: Contains information about Wikipedia items, including labels and descriptions
    • properties: Stores details about Wikidata properties, such as labels and descriptions
    • pages: Provides metadata for Wikipedia pages, including page IDs, item IDs, titles, and view counts
    • link_annotated_text: Contains the link-annotated text of Wikipedia pages, divided into sections

    This dataset is derived from the Kensho Derived Wikimedia Dataset (KDWD), which is built from the English Wikipedia snapshot from December 1, 2019, and the Wikidata snapshot from December 2, 2019. The KDWD is a condensed subset of the raw Wikimedia data in a form that is helpful for NLP work, and it is released under the CC BY-SA 3.0 license. Credits: The "Wikipedia SQLite Portable DB" is derived from the Kensho Derived Wikimedia Dataset (KDWD), created by the Kensho R&D group. The KDWD is based on data from Wikipedia and Wikidata, which are crowd-sourced projects supported by the Wikimedia Foundation. We would like to acknowledge and thank the Kensho R&D group for their efforts in creating the KDWD and making it available for research and development purposes. By providing this portable SQLite database, we aim to make Wikipedia data more accessible and easier to use for researchers, data scientists, and developers working on NLP tasks, machine learning projects, and other data-driven applications. We hope that this dataset will contribute to the advancement of NLP research and the development of innovative applications utilizing Wikipedia data.

    https://www.kaggle.com/datasets/kenshoresearch/kensho-derived-wikimedia-data/data

    Tags: encyclopedia, wikipedia, sqlite, database, reference, knowledge-base, articles, information-retrieval, natural-language-processing, nlp, text-data, large-dataset, multi-table, data-science, machine-learning, research, data-analysis, data-mining, content-analysis, information-extraction, text-mining, text-classification, topic-modeling, language-modeling, question-answering, fact-checking, entity-recognition, named-entity-recognition, link-prediction, graph-analysis, network-analysis, knowledge-graph, ontology, semantic-web, structured-data, unstructured-data, data-integration, data-processing, data-cleaning, data-wrangling, data-visualization, exploratory-data-analysis, eda, corpus, document-collection, open-source, crowdsourced, collaborative, online-encyclopedia, web-data, hyperlinks, categories, page-views, page-links, embeddings

Usage with LIKE queries:

```
import asyncio

import aiosqlite


class KenshoDatasetQuery:
    def __init__(self, db_file):
        self.db_file = db_file

    async def __aenter__(self):
        self.conn = await aiosqlite.connect(self.db_file)
        return self

    async def __aexit__(self, exc_type, exc_val, exc_tb):
        await self.conn.close()

    async def search_pages_by_title(self, title):
        query = """
        SELECT pages.page_id, pages.item_id, pages.title, pages.views,
               items.labels AS item_labels, items.description AS item_description,
               link_annotated_text.sections
        FROM pages
        JOIN items ON pages.item_id = items.id
        JOIN link_annotated_text ON pages.page_id = link_annotated_text.page_id
        WHERE pages.title LIKE ?
        """
        async with self.conn.execute(query, (f"%{title}%",)) as cursor:
            return await cursor.fetchall()

    async def search_items_by_label_or_description(self, keyword):
        query = """
        SELECT id, labels, description
        FROM items
        WHERE labels LIKE ? OR description LIKE ?
        """
        async with self.conn.execute(query, (f"%{keyword}%", f"%{keyword}%")) as cursor:
            return await cursor.fetchall()

    async def search_items_by_label(self, label):
        query = """
        SELECT id, labels, description
        FROM items
        WHERE labels LIKE ?
        """
        async with self.conn.execute(query, (f"%{label}%",)) as cursor:
            return await cursor.fetchall()

    # async def search_properties_by_label_or_desc...
```
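A short usage sketch for the class above; the database filename is a placeholder for wherever the unzipped SQLite file lives:

```
import asyncio

async def main():
    # "wikipedia.db" is a placeholder path for the unzipped SQLite database.
    async with KenshoDatasetQuery("wikipedia.db") as db:
        rows = await db.search_pages_by_title("data mining")
        for page_id, item_id, title, views, labels, desc, sections in rows[:5]:
            print(page_id, title, views)

asyncio.run(main())
```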
    
16. Mining Automation Systems Market Report | Global Forecast From 2025 To 2033

    • dataintelo.com
    csv, pdf, pptx
    Updated Jan 7, 2025
    + more versions
    Cite
    Dataintelo (2025). Mining Automation Systems Market Report | Global Forecast From 2025 To 2033 [Dataset]. https://dataintelo.com/report/global-mining-automation-systems-market
Available download formats: pptx, csv, pdf
    Dataset updated
    Jan 7, 2025
    Dataset authored and provided by
    Dataintelo
    License

https://dataintelo.com/privacy-and-policy

    Time period covered
    2024 - 2032
    Area covered
    Global
    Description

    Mining Automation Systems Market Outlook



    The global mining automation systems market size was valued at approximately $3.1 billion in 2023 and is projected to grow at a compound annual growth rate (CAGR) of 10.5% from 2024 to 2032, reaching an estimated $7.8 billion by 2032. One of the primary growth factors for this market is the increasing focus on safety and efficiency within mining operations, driving the demand for automated systems.



    One of the most significant growth factors contributing to the mining automation systems market is the emphasis on enhancing operational safety. Mining environments are inherently hazardous, and automation systems can significantly reduce the risk of accidents by minimizing human involvement in dangerous tasks. Automated machinery, drones for site inspections, and autonomous vehicles are examples of technologies that help mitigate risks, thereby fostering a safer working environment. This shift towards safety not only protects the workforce but also reduces the liability and operational downtime associated with accidents and injuries.



    Another critical driver of market growth is the need for increased efficiency and productivity within the mining sector. Automation systems enable continuous operations with minimal human intervention, leading to more consistent and higher output. These systems can operate around the clock without fatigue, thus maximizing the extraction rates and optimizing the use of resources. Furthermore, the integration of advanced data analytics and IoT (Internet of Things) within these systems allows for real-time monitoring and decision-making, which enhances overall operational efficiency.



    Technological advancement plays a vital role in the expansion of the mining automation systems market. Innovations in robotics, artificial intelligence, and machine learning have made automation systems more sophisticated and capable of handling complex mining tasks. These technological advancements not only improve the functionality and reliability of the systems but also reduce the costs associated with their implementation and maintenance. The growing investments in R&D by key industry players also contribute to the development of more advanced and cost-effective solutions, driving market growth.



    Digitalization in Mining is transforming the industry by integrating advanced technologies that enhance operational efficiency and safety. With the advent of digital tools, mining companies can now leverage data analytics, IoT, and cloud computing to optimize their operations. These technologies enable real-time monitoring and predictive maintenance, reducing downtime and increasing productivity. Moreover, digitalization facilitates better resource management and environmental compliance, aligning with the industry's growing emphasis on sustainability. By adopting digital solutions, mining operations can achieve greater precision and control, ultimately leading to improved profitability and reduced environmental impact.



    Regionally, the market dynamics of mining automation systems are influenced by various factors such as regulatory policies, the level of technological adoption, and the presence of key market players. North America and Europe are expected to witness substantial growth due to the high adoption rate of advanced technologies and stringent safety regulations. On the other hand, the Asia Pacific region is anticipated to experience significant market growth, driven by the booming mining activities in countries like China and India and the increasing investments in automation technologies.



    Component Analysis



    The mining automation systems market can be segmented by components into hardware, software, and services. Each of these components plays a crucial role in the overall functionality and efficiency of mining automation systems. Hardware components include automated drilling rigs, robotic trucks, and drones, which perform the physical tasks of mining operations. The hardware segment is essential as it forms the backbone of the automation process, providing the necessary tools and machinery to carry out mining activities with minimal human intervention. The advancements in hardware technology, such as the development of more robust and capable robotic systems, are driving the growth of this segment.



    Software components are equally important in mining automation systems, as they control and manage the hardware. These include various applications...

  17. Employee Performance & Salary (Synthetic Dataset)

    • kaggle.com
    zip
    Updated Oct 10, 2025
    Cite
    Mamun Hasan (2025). Employee Performance & Salary (Synthetic Dataset) [Dataset]. https://www.kaggle.com/datasets/mamunhasan2cs/employee-performance-and-salary-synthetic-dataset
    Explore at:
    zip (13,002 bytes). Available download formats
    Dataset updated
    Oct 10, 2025
    Authors
    Mamun Hasan
    License

    https://creativecommons.org/publicdomain/zero/1.0/

    Description

    🧑‍💼 Employee Performance and Salary Dataset

    This synthetic dataset simulates employee information in a medium-sized organization, designed specifically for data preprocessing and exploratory data analysis (EDA) tasks in Data Mining and Machine Learning labs.

    It includes over 1,000 employee records with realistic variations in age, gender, department, experience, performance score, and salary — along with missing values, duplicates, and outliers to mimic real-world data quality issues.

    📊 Columns Description

    | Column Name | Description |
    |:---|:---|
    | Employee_ID | Unique employee identifier (E0001, E0002, …) |
    | Age | Employee age (22–60 years) |
    | Gender | Gender of the employee (Male/Female) |
    | Department | Department where the employee works (HR, Finance, IT, Marketing, Sales, Operations) |
    | Experience_Years | Total years of work experience (contains missing values) |
    | Performance_Score | Employee performance score (0–100, contains missing values) |
    | Salary | Annual salary in USD (contains outliers) |

    🧠 Example Lab Tasks

    • Identify and impute missing values using mean or median.
    • Detect and remove duplicate employee records.
    • Detect outliers in Salary using IQR or Z-score.
    • Normalize Salary and Performance_Score using Min-Max scaling.
    • Encode categorical columns (Gender, Department) for model training.
    • Ideal for regression exercises (see the preprocessing sketch below).
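    A minimal sketch of these preprocessing steps, assuming the column names in the table above and a local CSV export named employees.csv (the file name is a placeholder, not part of the dataset):

    ```python
    import pandas as pd
    from sklearn.preprocessing import MinMaxScaler

    # Load the dataset (file name is an assumption; point it at your local copy).
    df = pd.read_csv("employees.csv")

    # Impute missing values with the median (robust to the salary outliers).
    for col in ["Experience_Years", "Performance_Score"]:
        df[col] = df[col].fillna(df[col].median())

    # Remove duplicate employee records.
    df = df.drop_duplicates(subset="Employee_ID")

    # Drop salary outliers using the 1.5 * IQR rule.
    q1, q3 = df["Salary"].quantile([0.25, 0.75])
    iqr = q3 - q1
    df = df[df["Salary"].between(q1 - 1.5 * iqr, q3 + 1.5 * iqr)]

    # Min-Max scale Salary and Performance_Score into [0, 1].
    scaler = MinMaxScaler()
    df[["Salary", "Performance_Score"]] = scaler.fit_transform(
        df[["Salary", "Performance_Score"]]
    )

    # One-hot encode the categorical columns for model training.
    df = pd.get_dummies(df, columns=["Gender", "Department"], drop_first=True)
    ```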

    🎯 Possible Regression Targets (Dependent Variables)

    • Salary → Predict salary based on experience, performance, department, and age.
    • Performance_Score → Predict employee performance based on age, experience, and department.

    🧩 Example Regression Problem

    Predict the employee's salary based on their experience, performance score, and department.

    🧠 Sample Features:

    X = ['Age', 'Experience_Years', 'Performance_Score', 'Department', 'Gender']
    y = ['Salary']

    You can apply:

    • Linear Regression
    • Ridge/Lasso Regression
    • Random Forest Regressor
    • XGBoost Regressor
    • SVR (Support Vector Regression)
    and evaluate with metrics such as R², MAE, MSE, and RMSE, along with residual plots.
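    For example, a compact end-to-end sketch with linear regression (the other estimators above are drop-in replacements; df is assumed to be the preprocessed frame from the sketch earlier in this entry):

    ```python
    from sklearn.linear_model import LinearRegression
    from sklearn.metrics import mean_absolute_error, mean_squared_error, r2_score
    from sklearn.model_selection import train_test_split

    # df: preprocessed frame (imputed, deduplicated, scaled, one-hot encoded).
    X = df.drop(columns=["Employee_ID", "Salary"])
    y = df["Salary"]

    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.2, random_state=42
    )

    model = LinearRegression().fit(X_train, y_train)
    pred = model.predict(X_test)

    print("R^2 :", r2_score(y_test, pred))
    print("MAE :", mean_absolute_error(y_test, pred))
    print("MSE :", mean_squared_error(y_test, pred))
    print("RMSE:", mean_squared_error(y_test, pred) ** 0.5)
    ```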

  18. Gold standard performance results as measured by F1

    • figshare.com
    xls
    Updated Jun 2, 2023
    Cite
    Igor Mozetič; Luis Torgo; Vitor Cerqueira; Jasmina Smailović (2023). Gold standard performance results as measured by F1 [Dataset]. http://doi.org/10.1371/journal.pone.0194317.t003
    Explore at:
    xls. Available download formats
    Dataset updated
    Jun 2, 2023
    Dataset provided by
    PLOS ONE
    Authors
    Igor Mozetič; Luis Torgo; Vitor Cerqueira; Jasmina Smailović
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    The baseline, F1 = 0, indicates that all negative and positive examples are classified incorrectly.

  19. AG News (News articles)

    • kaggle.com
    zip
    Updated Nov 20, 2022
    Cite
    The Devastator (2022). AG News (News articles) [Dataset]. https://www.kaggle.com/datasets/thedevastator/new-dataset-for-text-classification-ag-news/code
    Explore at:
    zip (11,831,597 bytes). Available download formats
    Dataset updated
    Nov 20, 2022
    Authors
    The Devastator
    License

    https://creativecommons.org/publicdomain/zero/1.0/

    Description

    AG News (News articles)

    News Articles Text Classification

    Source

    Huggingface Hub: link

    About this dataset

    The ag_news dataset provides a new opportunity for text classification research. It is a large dataset consisting of a training set of 120,000 examples and a test set of 7,600 examples. The examples are split evenly across four topic classes: World, Sports, Business, and Sci/Tech. This makes the dataset well-suited for research into text classification methods.

    How to use the dataset

    If you're looking to do text classification research, the ag_news dataset is a great dataset to use. It consists of a training set of 120,000 examples and a test set of 7,600 examples, split evenly across the four topic classes. The data is well-balanced and should be suitable for many different text classification tasks; a minimal baseline is sketched below.
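    As a quick-start sketch, the baseline below uses TF-IDF features with logistic regression; it assumes local copies of the train.csv and test.csv files described in the Columns section at the end of this entry:

    ```python
    import pandas as pd
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import accuracy_score

    # Load the CSVs described under Columns (text: article body, label: class id).
    train = pd.read_csv("train.csv")
    test = pd.read_csv("test.csv")

    # Bag-of-words baseline: unigram/bigram TF-IDF plus logistic regression.
    vec = TfidfVectorizer(max_features=50_000, ngram_range=(1, 2))
    X_train = vec.fit_transform(train["text"])
    X_test = vec.transform(test["text"])

    clf = LogisticRegression(max_iter=1000)
    clf.fit(X_train, train["label"])

    print("test accuracy:", accuracy_score(test["label"], clf.predict(X_test)))
    ```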

    Research Ideas

    • This dataset can be used to train a text classifier that automatically categorizes news articles into the four topic classes.
    • This dataset can be used to develop a system that identifies a news article's topic from its text alone.
    • This dataset can be used to study how coverage of the four topic categories differs across media outlets.

    Acknowledgements

    AG is a collection of more than 1 million news articles. News articles have been gathered from more than 2000 news sources by ComeToMyHead in more than 1 year of activity. ComeToMyHead is an academic news search engine that has been running since July 2004. The dataset is provided by the academic community for research purposes in data mining (clustering, classification, etc.), information retrieval (ranking, search, etc.), XML, data compression, data streaming, and any other non-commercial activity. For more information, please refer to http://www.di.unipi.it/~gulli/AG_corpus_of_news_articles.html .

    License

    License: CC0 1.0 Universal (CC0 1.0) - Public Domain Dedication No Copyright - You can copy, modify, distribute and perform the work, even for commercial purposes, all without asking permission. See Other Information.

    Columns

    File: train.csv

    | Column name | Description |
    |:---|:---|
    | text | The text of the news article. (string) |
    | label | The label of the news article. (integer) |

    File: test.csv

    | Column name | Description |
    |:---|:---|
    | text | The text of the news article. (string) |
    | label | The label of the news article. (integer) |

  20. Human Activity Classification Dataset

    • kaggle.com
    zip
    Updated May 8, 2024
    Cite
    Rabie El Kharoua (2024). Human Activity Classification Dataset [Dataset]. https://www.kaggle.com/datasets/rabieelkharoua/human-activity-classification-dataset
    Explore at:
    zip (314,064,223 bytes). Available download formats
    Dataset updated
    May 8, 2024
    Authors
    Rabie El Kharoua
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description


    • Data Collection:

      • Collected by members of the WISDM (Wireless Sensor Data Mining) Lab at Fordham University.
      • Utilized accelerometer and gyroscope sensors from smartphones and smartwatches.
      • 51 subjects participated in performing 18 diverse activities of daily living.
      • Each activity was performed for 3 minutes per subject, resulting in 54 minutes of data per subject.
      • Activities encompassed basic ambulation-related tasks, hand-based activities of daily living, and eating activities.
    • Activity Categories:

      • Basic ambulation-related activities: walking, jogging, climbing stairs.
      • Hand-based activities of daily living: brushing teeth, folding clothes.
      • Eating activities: eating pasta, eating chips.
    • Data Description:

      • Contains low-level time-series sensor data from phone accelerometers, phone gyroscopes, watch accelerometers, and watch gyroscopes.
      • Each time-series data is labeled with the activity being performed and a subject identifier.
      • Suitable for building and evaluating biometric models as well as activity recognition models.
    • Data Transformation:

      • Researchers employed a sliding window approach to transform time-series data into labeled examples.
      • Scripts for performing the transformation are provided along with the transformed data (a minimal sliding-window sketch follows this list).
    • Availability:

      • The dataset is accessible from the UCI Machine Learning Repository under the name "WISDM Smartphone and Smartwatch Activity and Biometrics Dataset."
    • Dataset Name: WISDM Smartphone and Smartwatch Activity and Biometrics Dataset

    • Subjects and Tasks:

      • Data collected from 51 subjects.
      • Each subject performed 18 tasks, with each task lasting 3 minutes.
    • Data Collection Setup:

      • Subjects wore a smartwatch on their dominant hand and carried a smartphone in their pocket.
      • A custom app controlled data collection on both devices.
      • Sensors used: accelerometer and gyroscope on both smartphone and smartwatch.
    • Sensor Characteristics:

      • Data collected at a rate of 20 Hz (every 50ms).
      • Four total sensors: accelerometer and gyroscope on both smartphone and smartwatch.
    • Device Specifications:

      • Smartphone: Google Nexus 5/5X or Samsung Galaxy S5 running Android 6.0 (Marshmallow).
      • Smartwatch: LG G Watch running Android Wear 1.5.
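    The sliding-window transformation mentioned under Data Transformation above can be sketched as follows. The 10-second window (200 samples at 20 Hz) and 50% overlap are illustrative assumptions; the scripts shipped with the dataset define the canonical parameters:

    ```python
    import numpy as np

    def sliding_windows(signal: np.ndarray, labels: np.ndarray,
                        window: int = 200, step: int = 100):
        """Segment an (n_samples, 3) sensor stream into labeled windows.

        At 20 Hz, window=200 covers 10 s and step=100 gives 50% overlap
        (assumed values, not necessarily the WISDM defaults).
        """
        X, y = [], []
        for start in range(0, len(signal) - window + 1, step):
            end = start + window
            # Label each window by the majority activity code inside it.
            codes, counts = np.unique(labels[start:end], return_counts=True)
            X.append(signal[start:end])
            y.append(codes[np.argmax(counts)])
        return np.stack(X), np.array(y)

    # Example with synthetic data shaped like one 3-minute sensor stream:
    signal = np.random.randn(3600, 3)          # 3 min at 20 Hz, x/y/z axes
    labels = np.repeat(np.array(["A"]), 3600)  # one activity code per sample
    X, y = sliding_windows(signal, labels)
    print(X.shape, y.shape)                    # (35, 200, 3) (35,)
    ```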

    SUMMARY INFORMATION FOR THE DATASET

    | Information | Details |
    |:---|:---|
    | Number of subjects | 51 |
    | Number of activities | 18 |
    | Minutes collected per activity | 3 |
    | Sensor polling rate | 20 Hz |
    | Smartphone used | Google Nexus 5/5X or Samsung Galaxy S5 |
    | Smartwatch used | LG G Watch |
    | Number of raw measurements | 15,630,426 |

    THE 18 ACTIVITIES REPRESENTED IN THE DATASET

    | Activity | Activity Code |
    |:---|:---|
    | Walking | A |
    | Jogging | B |
    | Stairs | C |
    | Sitting | D |
    | Standing | E |
    | Typing | F |
    | Brushing Teeth | G |
    | Eating Soup | H |
    | Eating Chips | I |
    | Eating Pasta | J |
    | Drinking from Cup | K |
    | Eating Sandwich | L |
    | Kicking (Soccer Ball) | M |
    | Playing Catch w/Tennis Ball | O |
    | Dribbling (Basketball) | P |
    | Writing | Q |
    | Clapping | R |
    | Folding Clothes | S |
    • Non-hand-oriented activities:

      • Walking
      • Jogging
      • Stairs
      • Standing
      • Kicking
    • Hand-oriented activities (General):

      • Dribbling
      • Playing catch
      • Typing
      • Writing
      • Clapping
      • Brushing teeth
      • Folding clothes
    • Hand-oriented activities (eating):

      • Eating pasta
      • Eating soup
      • Eating sandwich
      • Eating chips
      • Drinking

    DEFINITION OF ELEMENTS IN RAW DATA MEASUREMENTS

    | Field Name | Description |
    |:---|:---|
    | Subject-id | Type: Symbolic numeric identifier. Uniquely identifies the subject. Range: 1600-1650. |
    | Activity code | Type: Symbolic single letter. Range: A-S (no "N" value) |
    | Time | ... |