25 datasets found
  1. Automated Indicator Enrichment Market Research Report 2033

    • dataintelo.com
    csv, pdf, pptx
    Updated Sep 30, 2025
    Cite
    Dataintelo (2025). Automated Indicator Enrichment Market Research Report 2033 [Dataset]. https://dataintelo.com/report/automated-indicator-enrichment-market
    Explore at:
    Available download formats: csv, pdf, pptx
    Dataset updated
    Sep 30, 2025
    Dataset authored and provided by
    Dataintelo
    License

    https://dataintelo.com/privacy-and-policy

    Time period covered
    2024 - 2032
    Area covered
    Global
    Description

    Automated Indicator Enrichment Market Outlook



    According to our latest research, the global Automated Indicator Enrichment market size reached USD 1.26 billion in 2024, reflecting robust adoption across sectors. The market is expected to grow at a CAGR of 16.7% during the forecast period, with the market size projected to reach USD 4.01 billion by 2033. This growth is primarily driven by the increasing sophistication of cyber threats and the urgent need for organizations to automate threat intelligence processes, enabling faster and more accurate response to security incidents. The convergence of AI, machine learning, and security automation technologies is further accelerating the adoption of automated indicator enrichment solutions globally.
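    The projection above follows standard compound-growth arithmetic; a minimal sketch relating the reported endpoints (note that the implied endpoint-to-endpoint rate depends on which base year the report compounds from):

```python
def project(base: float, cagr: float, years: int) -> float:
    """Compound a base value at a constant annual growth rate (CAGR)."""
    return base * (1.0 + cagr) ** years

def implied_cagr(base: float, target: float, years: int) -> float:
    """Back out the constant annual rate linking two endpoint values."""
    return (target / base) ** (1.0 / years) - 1.0

# Reported endpoints: USD 1.26 billion (2024) to USD 4.01 billion (2033).
rate = implied_cagr(1.26, 4.01, 2033 - 2024)
print(f"implied endpoint-to-endpoint rate: {rate:.1%}")
```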




    One of the key growth factors for the Automated Indicator Enrichment market is the escalating volume and complexity of cyber threats targeting organizations of all sizes. With threat actors employing advanced tactics, techniques, and procedures (TTPs), traditional manual threat analysis processes are proving inadequate. Automated indicator enrichment enables security teams to automatically contextualize, validate, and prioritize threat indicators, significantly reducing the mean time to detect (MTTD) and respond (MTTR) to incidents. The proliferation of endpoints, cloud workloads, and interconnected digital assets has necessitated a scalable approach to threat intelligence, further fueling demand for automated solutions that can process vast amounts of threat data in real time.




    Another significant driver is the increasing regulatory pressure on organizations to maintain robust cybersecurity postures and ensure compliance with international standards such as GDPR, HIPAA, and PCI DSS. Automated indicator enrichment solutions facilitate compliance management by providing auditable, consistent, and timely threat intelligence workflows. This not only helps organizations avoid costly penalties but also enhances their overall security posture. The market is also benefitting from the growing awareness among enterprises regarding the benefits of automation in reducing human error, improving operational efficiency, and enabling proactive security measures. As a result, both large enterprises and small and medium enterprises (SMEs) are investing in advanced automated indicator enrichment platforms to stay ahead of evolving cyber threats.




    The rapid advancements in artificial intelligence (AI) and machine learning (ML) technologies have also played a pivotal role in shaping the Automated Indicator Enrichment market. Modern solutions leverage AI and ML algorithms to enrich threat indicators with contextual data from multiple sources, including threat intelligence feeds, internal logs, and external databases. This automated enrichment process enhances the accuracy of threat detection and enables security analysts to focus on high-priority incidents. Additionally, the integration of automated indicator enrichment tools with Security Information and Event Management (SIEM) and Security Orchestration, Automation, and Response (SOAR) platforms is creating new opportunities for seamless, end-to-end security automation, further driving market growth.
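    The enrichment step described above (pulling context for an indicator from multiple sources, then prioritizing it) can be sketched as follows; the feeds, field names, and scoring rule here are hypothetical stand-ins for live threat-intelligence feeds and SIEM/SOAR integrations:

```python
# Hypothetical in-memory feeds; a real pipeline would query live sources.
REPUTATION_FEED = {"203.0.113.7": {"score": 92, "category": "botnet"}}
GEO_FEED = {"203.0.113.7": {"country": "ZZ", "asn": 64496}}

def enrich_indicator(ioc: str, feeds: dict) -> dict:
    """Merge context for one indicator from several lookup sources."""
    context = {name: feed[ioc] for name, feed in feeds.items() if ioc in feed}
    # Illustrative prioritization rule: high reputation score => high priority.
    score = context.get("reputation", {}).get("score", 0)
    return {"indicator": ioc, "context": context,
            "priority": "high" if score >= 80 else "low"}

enriched = enrich_indicator("203.0.113.7",
                            {"reputation": REPUTATION_FEED, "geo": GEO_FEED})
print(enriched["priority"])  # → high
```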




    From a regional perspective, North America currently dominates the Automated Indicator Enrichment market, accounting for the largest share in 2024, followed closely by Europe and the Asia Pacific. The presence of major cybersecurity vendors, high adoption rates of advanced security solutions, and stringent regulatory frameworks are key factors contributing to North America's leadership. Meanwhile, Asia Pacific is expected to witness the fastest growth over the forecast period, driven by increasing digital transformation initiatives, rising cybercrime rates, and growing investments in cybersecurity infrastructure across emerging economies such as India, China, and Southeast Asia. Europe continues to show strong growth potential, particularly in sectors like BFSI, healthcare, and government, where data protection and compliance are top priorities.



    Component Analysis



    The Automated Indicator Enrichment market is segmented by component into software and services, each playing a vital role in the ecosystem. The software segment currently holds the largest market share, owing to the increasing deployment of advanced enrichment platforms that leverage AI, ML, and big data analytics to automate the enrichment of threat indicators. These platforms

  2. Data from: Enriching the ant tree of life: enhanced UCE bait set for...

    • data.niaid.nih.gov
    • datadryad.org
    • +1 more
    zip
    Updated Feb 14, 2017
    Cite
    Michael G. Branstetter; John T. Longino; Philip S. Ward; Brant C. Faircloth (2017). Enriching the ant tree of life: enhanced UCE bait set for genome-scale phylogenetics of ants and other Hymenoptera [Dataset]. http://doi.org/10.5061/dryad.89n87
    Explore at:
    Available download formats: zip
    Dataset updated
    Feb 14, 2017
    Dataset provided by
    University of California, Davis
    Louisiana State University of Alexandria
    University of Utah
    Authors
    Michael G. Branstetter; John T. Longino; Philip S. Ward; Brant C. Faircloth
    License

    https://spdx.org/licenses/CC0-1.0.html

    Area covered
    Global
    Description
    1. Targeted enrichment of conserved genomic regions (e.g., ultraconserved elements or UCEs) has emerged as a promising tool for inferring evolutionary history in many organismal groups. Because the UCE approach is still relatively new, much remains to be learned about how best to identify UCE loci and design baits to enrich them.

    2. We test an updated UCE identification and bait design workflow for the insect order Hymenoptera, with a particular focus on ants. The new strategy augments a previous bait design for Hymenoptera by (a) changing the parameters by which conserved genomic regions are identified and retained, and (b) increasing the number of genomes used for locus identification and bait design. We perform in vitro validation of the approach in ants by synthesizing an ant-specific bait set that targets UCE loci and a set of “legacy” phylogenetic markers. Using this bait set, we generate new data for 84 taxa (16/17 ant subfamilies) and extract loci from an additional 17 genome-enabled taxa. We then use these data to examine UCE capture success and phylogenetic performance across ants. We also test the workability of extracting legacy markers from enriched samples and combining the data with published data sets.

    3. The updated bait design (hym-v2) contained a total of 2,590 targeted UCE loci for Hymenoptera, significantly increasing the number of loci relative to the original bait set (hym-v1; 1,510 loci). Across 38 genome-enabled Hymenoptera and 84 enriched samples, experiments demonstrated a high and unbiased capture success rate, with a mean enrichment rate of 2,214 loci per sample. Phylogenomic analyses of ants produced a robust tree that included strong support for previously uncertain relationships. Complementing the UCE results, we successfully enriched legacy markers, combined the data with published Sanger data sets, and generated a comprehensive ant phylogeny containing 1,060 terminals.

    4. Overall, the new UCE bait design strategy resulted in an enhanced bait set for genome-scale phylogenetics in ants and likely all of Hymenoptera. Our in vitro tests demonstrate the utility of the updated design workflow, providing evidence that this approach could be applied to any organismal group with available genomic information.

  3. Enriched Tourism Dataset Paris (POIs)

    • data.mendeley.com
    • figshare.com
    Updated Oct 30, 2024
    Cite
    Ramon Hermoso (2024). Enriched Tourism Dataset Paris (POIs) [Dataset]. http://doi.org/10.17632/vh4g4g2322.1
    Explore at:
    Dataset updated
    Oct 30, 2024
    Authors
    Ramon Hermoso
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Area covered
    Paris
    Description

    This dataset contains the Paris subset of the Tourpedia dataset, specifically focusing on points of interest (POIs) categorized as attractions (dataset available at http://tour-pedia.org/download/paris-attraction.csv). The original dataset comprises 4,351 entries that encompass a variety of attractions across Paris, providing details on several attributes for each POI. These attributes include a unique identifier, POI name, category, location information (address), latitude, longitude, specific details, and user-generated reviews. The review fields contain textual feedback from users, aggregated from platforms such as Google Places, Foursquare, and Facebook, offering a qualitative insight into each location.

    However, due to the initial dataset's high proportion of incomplete or inconsistently structured entries, a rigorous cleaning process was implemented. This process entailed the removal of erroneous and incomplete data points, ultimately refining the dataset to 477 entries that meet criteria for quality and structural coherence. These selected entries were subjected to further validation to ensure data integrity, enabling a more accurate representation of Paris' attractions.

    • Paris.csv It contains columns including a unique identifier, POI name, category, location information (address), latitude, longitude, specific details, and user-generated reviews. Those reviews were previously retrieved and pre-processed from Google Places, Foursquare, and Facebook, and come in different formats: all words, only nouns, nouns + verbs, nouns + adjectives, and nouns + verbs + adjectives.

    • Paris_annotated.csv It contains the ground truth relating to the previous dataset, with manual annotations made by humans on the categorisation of each of the POIs into 12 different pre-defined categories. It has the following columns:

      • POI name
      • POI's address
      • One column for each of the above categories. 1 means that the POI belongs to the category while blank indicates that it does not.
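    The 1/blank annotation scheme described above maps naturally onto booleans when the file is parsed; a minimal sketch on an in-memory sample (the two category columns below are illustrative; the real file has 12):

```python
import csv
import io

# Two-row sample mimicking the described Paris_annotated.csv layout.
sample = """name,address,Museum,Park
Louvre,Rue de Rivoli,1,
Tuileries,Place de la Concorde,,1
"""

def load_annotations(text: str) -> list:
    """Parse rows, mapping '1' -> True and blank -> False per category."""
    rows = []
    for row in csv.DictReader(io.StringIO(text)):
        cats = {k: v == "1" for k, v in row.items()
                if k not in ("name", "address")}
        rows.append({"name": row["name"], "address": row["address"], **cats})
    return rows

pois = load_annotations(sample)
print(pois[0])  # → {'name': 'Louvre', 'address': 'Rue de Rivoli', 'Museum': True, 'Park': False}
```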
  4. Data Sheet 1_Data enrichment for semantic segmentation of point clouds for...

    • frontiersin.figshare.com
    pdf
    Updated Jun 11, 2025
    Cite
    David Crampen; Joerg Blankenbach (2025). Data Sheet 1_Data enrichment for semantic segmentation of point clouds for the generation of geometric-semantic road models.pdf [Dataset]. http://doi.org/10.3389/fbuil.2025.1607375.s001
    Explore at:
    Available download formats: pdf
    Dataset updated
    Jun 11, 2025
    Dataset provided by
    Frontiers
    Authors
    David Crampen; Joerg Blankenbach
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Digitalizing highway infrastructure is gaining interest in Germany and other countries due to the need for greater efficiency and sustainability. The maintenance of the built infrastructure accounts for nearly 30% of greenhouse gas emissions in Germany. To address this, Digital Twins are emerging as tools to optimize road systems. A Digital Twin of a built asset relies on a geometric-semantic as-is model of the area of interest, where an essential step for automated model generation is the semantic segmentation of reality capture data. While most approaches handle data without considering real-world context, our approach leverages existing geospatial data to enrich the data foundation through an adaptive feature extraction workflow. This workflow is adaptable to various model architectures, from deep learning methods like PointNet++ and PointNeXt to traditional machine learning models such as Random Forest. Our four-step workflow significantly boosts performance, improving overall accuracy by 20% and unweighted mean Intersection over Union (mIoU) by up to 43.47%. The target application is the semantic segmentation of point clouds in road environments. Additionally, the proposed modular workflow can be easily customized to fit diverse data sources and enhance semantic segmentation performance in a model-agnostic way.
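    For reference, the mIoU quoted above is the unweighted mean of per-class intersection-over-union; a minimal sketch of the metric on toy labels (not the authors' evaluation code):

```python
def mean_iou(pred: list, truth: list, classes: list) -> float:
    """Unweighted mean IoU: average of TP / (TP + FP + FN) over classes."""
    ious = []
    for c in classes:
        tp = sum(1 for p, t in zip(pred, truth) if p == c and t == c)
        fp = sum(1 for p, t in zip(pred, truth) if p == c and t != c)
        fn = sum(1 for p, t in zip(pred, truth) if p != c and t == c)
        if tp + fp + fn:  # skip classes absent from both label sets
            ious.append(tp / (tp + fp + fn))
    return sum(ious) / len(ious)

# Toy labels for three classes (e.g., road, curb, vegetation).
pred  = [0, 0, 1, 1, 2, 2, 2, 0]
truth = [0, 0, 1, 2, 2, 2, 1, 0]
print(round(mean_iou(pred, truth, [0, 1, 2]), 3))  # → 0.611
```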

  5. Startup Data | Startup Founders Worldwide Contact Data | Verified Profiles...

    • datarade.ai
    Updated Oct 27, 2021
    Cite
    Success.ai (2021). Startup Data | Startup Founders Worldwide Contact Data | Verified Profiles with Work Emails & Phone Numbers | Best Price Guaranteed [Dataset]. https://datarade.ai/data-products/success-ai-b2b-contact-data-170m-global-work-emails-pho-success-ai-be33
    Explore at:
    Available download formats: .bin, .json, .xml, .csv, .xls, .sql, .txt
    Dataset updated
    Oct 27, 2021
    Dataset provided by
    Area covered
    Wallis and Futuna, Nauru, Afghanistan, Bahrain, Rwanda, Pitcairn, Mayotte, Curaçao, Madagascar, Guinea-Bissau
    Description

    Success.ai’s Startup Data with Contact Data for Startup Founders Worldwide provides businesses with unparalleled access to key entrepreneurs and decision-makers shaping the global startup landscape. With data sourced from over 170 million verified professional profiles, this dataset offers essential contact details, including work emails and direct phone numbers, for founders in various industries and regions.

    Whether you’re targeting tech innovators in Silicon Valley, fintech entrepreneurs in Europe, or e-commerce trailblazers in Asia, Success.ai ensures that your outreach efforts reach the right individuals at the right time.

    Why Choose Success.ai’s Startup Founders Data?

    1. Comprehensive Contact Information
       • Access verified work emails, phone numbers, and LinkedIn profiles for founders and key startup executives worldwide.
       • AI-driven validation ensures 99% accuracy, providing reliable data for effective outreach.

    2. Global Reach Across Startup Ecosystems
       • Includes profiles of startup founders from tech, healthcare, fintech, sustainability, and other emerging sectors.
       • Covers North America, Europe, Asia-Pacific, South America, and the Middle East, helping you connect with founders on a global scale.

    3. Continuously Updated Datasets
       • Real-time updates mean you always have the latest contact information, ensuring your outreach is timely and relevant.

    4. Ethical and Compliant
       • Adheres to GDPR, CCPA, and global data privacy regulations, ensuring ethical and compliant use of data.

    Data Highlights

    • 170M+ Verified Professional Profiles: Includes startup founders and their teams across a range of industries.
    • 50M Work Emails: AI-validated for precision in communication.
    • 30M Company Profiles: Gain insights into the startups’ size, location, and industry focus.
    • 700M Global Professional Profiles: Enriched data for comprehensive outreach and analysis.

    Key Features of the Dataset:

    1. Founder Decision-Maker Profiles
       • Identify and connect with founders and key executives who drive strategy, funding, and product decisions.
       • Engage with individuals who can approve partnerships, investments, and collaborations.

    2. Advanced Filters for Precision Targeting
       • Filter by industry, funding stage, region, or startup size to narrow down your outreach efforts.
       • Ensure your campaigns target the most relevant contacts for your products, services, or investment opportunities.

    3. AI-Driven Enrichment
       • Profiles are enriched with actionable data, offering insights that help tailor your messaging and improve response rates.

    Strategic Use Cases:

    1. Investor Relations and Funding Opportunities
       • Connect with founders seeking investment, pitch your venture capital or angel investment services, and establish long-term partnerships.

    2. Business Development and Partnerships
       • Offer collaboration opportunities, strategic alliances, and joint ventures to startups in need of new market entries or product expansions.

    3. Marketing and Sales Campaigns
       • Launch targeted email and phone outreach to founders who match your ideal customer profile, driving product adoption and long-term client relationships.

    4. Recruitment and Talent Acquisition
       • Reach founders who may be open to recruitment partnerships or HR solutions, helping them build strong teams and scale effectively.

    Why Choose Success.ai?

    1. Best Price Guarantee
       • Enjoy top-quality, verified startup founder data at competitive prices, ensuring maximum return on investment.

    2. Seamless Integration
       • Easily integrate verified contact data into your CRM or marketing platforms via APIs or customizable downloads.

    3. Data Accuracy with AI Validation
       • With 99% data accuracy, you can trust the information to guide meaningful and productive outreach campaigns.

    4. Customizable and Scalable Solutions
       • Tailor the dataset to your needs, focusing on specific industries, regions, or funding stages, and easily scale as your business grows.

    APIs for Enhanced Functionality:

    1. Data Enrichment API
       • Enrich your existing CRM records with verified founder contact data, adding valuable insights for targeted engagements.

    2. Lead Generation API
       • Automate lead generation and streamline your campaigns, ensuring efficient and scalable outreach to startup founders worldwide.

    Leverage Success.ai’s B2B Contact Data for Startup Founders Worldwide to connect with the entrepreneurs driving innovation across global markets. With verified work emails, phone numbers, and continuously updated profiles, your outreach efforts become more impactful, timely, and effective.

    Experience AI-validated accuracy and our Best Price Guarantee. Contact Success.ai today to learn how our B2B contact data solutions can help you engage with the startup founders who matter most.

    No one beats us on price. Period.

  6. Data from: Sampling Enrichment toward Target Structures Using Hybrid...

    • datasetcatalog.nlm.nih.gov
    • plos.figshare.com
    Updated Sep 28, 2016
    Cite
    Li, Yunqi; Chen, Wenduo; Różycki, Bartosz; Shi, Ce; Yang, Kecheng; Cui, Fengchao (2016). Sampling Enrichment toward Target Structures Using Hybrid Molecular Dynamics-Monte Carlo Simulations [Dataset]. https://datasetcatalog.nlm.nih.gov/dataset?q=0001579261
    Explore at:
    Dataset updated
    Sep 28, 2016
    Authors
    Li, Yunqi; Chen, Wenduo; Różycki, Bartosz; Shi, Ce; Yang, Kecheng; Cui, Fengchao
    Description

    Sampling enrichment toward a target state, an analogue of the improvement of sampling efficiency (SE), is critical both in the refinement of protein structures and in the generation of near-native structure ensembles for the exploration of structure-function relationships. We developed a hybrid molecular dynamics (MD)-Monte Carlo (MC) approach to enrich the sampling toward target structures. In this approach, higher SE is achieved by perturbing conventional MD simulations with an MC structure-acceptance judgment, which is based on the coincidence degree of small-angle X-ray scattering (SAXS) intensity profiles between the simulation structures and the target structure. We found that the hybrid simulations could significantly improve SE by making the top-ranked models much closer to the target structures in both secondary and tertiary structure. Specifically, for the 20 mono-residue peptides, when the initial structures had a root-mean-squared deviation (RMSD) from the target structure smaller than 7 Å, the hybrid MD-MC simulations afforded, on average, 0.83 Å and 1.73 Å in RMSD closer to the target than the parallel MD simulations at 310 K and 370 K, respectively. Meanwhile, the average SE values increased by 13.2% and 15.7%. The enrichment of sampling becomes more significant when the target states are gradually detectable in the MD-MC simulations in comparison with the parallel MD simulations, providing >200% improvement in SE. We also tested the hybrid MD-MC approach on real protein systems; the results showed that SE improved for 3 out of 5 real proteins. Overall, this work presents an efficient way of utilizing solution SAXS to improve protein structure prediction and refinement, as well as the generation of near-native structures for function annotation.
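    The MC structure-acceptance judgment described above can be sketched as a Metropolis-style test on the discrepancy between simulated and target SAXS profiles; the criterion below (a chi-square-style discrepancy with an effective temperature) is an illustrative stand-in for the authors' coincidence-degree measure, not their exact scheme:

```python
import math
import random

def chi2(profile: list, target: list) -> float:
    """Chi-square-style discrepancy between two SAXS intensity profiles."""
    return sum((p - t) ** 2 for p, t in zip(profile, target))

def accept(candidate: list, current: list, target: list,
           temp: float = 1.0) -> bool:
    """Metropolis test: always keep improvements, sometimes keep regressions."""
    delta = chi2(candidate, target) - chi2(current, target)
    return delta <= 0 or random.random() < math.exp(-delta / temp)
```

In the hybrid scheme, conventional MD generates candidate structures and this judgment decides whether each one is kept or discarded, biasing the trajectory toward the target.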

  7. Data from: Enriched semantic markup for postgraduate programs in Latin...

    • scielo.figshare.com
    jpeg
    Updated May 30, 2023
    Cite
    Sandra Milena Roa-Martínez; Silvana Aparecida Borsetti Gregorio Vidotti; Juan Antonio Pastor-Sánchez (2023). Enriched semantic markup for postgraduate programs in Latin America [Dataset]. http://doi.org/10.6084/m9.figshare.7304630.v1
    Explore at:
    Available download formats: jpeg
    Dataset updated
    May 30, 2023
    Dataset provided by
    SciELO (http://www.scielo.org/)
    Authors
    Sandra Milena Roa-Martínez; Silvana Aparecida Borsetti Gregorio Vidotti; Juan Antonio Pastor-Sánchez
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Area covered
    Latin America
    Description

    ABSTRACT Enriched semantic markup gives meaning to content and enables interoperability between machines, improving how information is presented to users, for example through rich snippets, which expand the information shown by search engines. The objective was to semantically analyze and enrich the content of the postgraduate programs delivered to users, so that it is interoperable and contributes to the community through reuse and the proposed extension of a new entity. The methodology was descriptive, based on the compilation and systematization of qualitative and quantitative information and on the analysis and characterization of the content of the web pages studied and of the Schema.org vocabulary. As a result, a semantically enriched markup proposal is presented for the postgraduate programs offered by some Latin American universities, contained in a new entity named ProgramaPosgrado and based on the Schema.org vocabulary. It was concluded that enriched semantic markup using rich snippets is a true Semantic Web application that adds visibility and interoperability to Web content, verifying that Schema.org is a vocabulary that can be extended for use in different fields.
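    Markup proposals of this kind are typically serialized as JSON-LD; the sketch below builds an instance of the proposed ProgramaPosgrado entity (every property other than the entity name is an illustrative assumption, not the authors' final vocabulary):

```python
import json

# Illustrative instance of the proposed Schema.org extension entity.
programa = {
    "@context": "https://schema.org",
    "@type": "ProgramaPosgrado",  # proposed extension entity from the article
    "name": "Master's in Information Science",  # hypothetical program
    "provider": {"@type": "CollegeOrUniversity",
                 "name": "Example University"},
    "educationalLevel": "Postgraduate",
}
print(json.dumps(programa, indent=2))
```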

  8. 1961 1% Teaching Census Microdata for Great Britain

    • datacatalogue.ukdataservice.ac.uk
    Updated Mar 18, 2019
    + more versions
    Cite
    UK Data Service (2019). 1961 1% Teaching Census Microdata for Great Britain [Dataset]. http://doi.org/10.5255/UKDA-SN-8274-1
    Explore at:
    Dataset updated
    Mar 18, 2019
    Dataset provided by
    UK Data Service (https://ukdataservice.ac.uk/)
    Area covered
    United Kingdom
    Description

    The 1961 Census Microdata Teaching Dataset for Great Britain: 1% Sample: Open Access dataset was created from existing digital records from the 1961 Census. It can be used as a 'taster' file for 1961 Census data and is freely available for anyone to download under an Open Government Licence.

    The file was created under a project known as Enhancing and Enriching Historic Census Microdata Samples (EEHCM), which was funded by the Economic and Social Research Council with input from the Office for National Statistics and National Records of Scotland. The project ran from 2012-2014 and was led from the UK Data Archive, University of Essex, in collaboration with the Cathie Marsh Institute for Social Research (CMIST) at the University of Manchester and the Census Offices. In addition to the 1961 data, the team worked on files from the 1971 Census and 1981 Census.

    The original 1961 records preceded current data archival standards and were created before microdata sets for secondary use were anticipated. A process of data recovery and quality checking was necessary to maximise their utility for current researchers, though some imperfections remain (see the User Guide for details). Three other 1961 Census datasets have been created:

    • SN 8272 - 1961 Census Microdata Individual File for Great Britain: 5% Sample, which contains information on individuals in larger local authorities;
    • SN 8273 - 1961 Census Microdata Household File for Great Britain: 0.95% Sample, which links household members together to allow individuals to be understood within their household context. SNs 8272 and 8273 are both available to registered UK Data Service users based in the United Kingdom (see Access section for non-UK access restrictions); and
    • SN 8275 - 1961 Census Microdata for Great Britain: 9% Sample: Secure Access, which comprises a larger population sample and so contains sufficient information to constitute personal data, meaning that it is only available to Accredited Researchers, under restrictive Secure Access conditions.

  9. Supplementary materials for article focused on traffic information...

    • data.niaid.nih.gov
    • zenodo.org
    Updated Aug 11, 2024
    Cite
    Simunek, Milan; Smutny, Zdenek (2024). Supplementary materials for article focused on traffic information enrichment: Animation and complete overview of all road segments on which basis summary test results were calculated [Dataset]. https://data.niaid.nih.gov/resources?id=zenodo_4372333
    Explore at:
    Dataset updated
    Aug 11, 2024
    Dataset provided by
    Prague University of Economics and Business
    Authors
    Simunek, Milan; Smutny, Zdenek
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Supplementary materials (Appendix A and B) for the article:

    Traffic Information Enrichment: Creating Long-Term Traffic Speed Prediction Ensemble Model for Better Navigation through Waypoints

    Abstract: Traffic speed prediction for a selected road segment from a short-term and long-term perspective is among the fundamental issues of intelligent transportation systems (ITS). Over the past two decades, many artefacts (e.g., models) have been designed to deal with traffic speed prediction. However, no satisfactory solution has been found for long-term prediction over days and weeks using vast spatial and temporal data. This article introduces a long-term traffic speed prediction ensemble model using country-scale historic traffic data from 37,002 km of roads, which constitutes 66% of all roads in the Czech Republic. The designed model comprises three submodels and combines parametric and nonparametric approaches in order to acquire a good-quality prediction that can enrich available real-time traffic information. Furthermore, the model is set into a conceptual design that anticipates its use for improving navigation through waypoints (e.g., delivery service, goods distribution, police patrol) and estimating arrival time. The model validation is carried out using the same network of roads, with the model predicting traffic speed over a period of 1 week. According to the performed validation of average speed prediction at a given hour, the designed model achieves good results, with a mean absolute error of 4.67 km/h. The achieved results indicate that the designed solution can effectively predict long-term speed information using large-scale spatial and temporal data, and that it is suitable for use in ITS.
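    The reported 4.67 km/h figure is a mean absolute error; for reference, a minimal sketch of the metric on illustrative speed values (not the article's data):

```python
def mean_absolute_error(predicted: list, observed: list) -> float:
    """Average absolute deviation between predicted and observed speeds."""
    return sum(abs(p - o) for p, o in zip(predicted, observed)) / len(predicted)

# Toy hourly average speeds in km/h (illustrative values only).
pred = [62.0, 55.0, 48.0]
obs  = [60.0, 58.0, 50.0]
print(round(mean_absolute_error(pred, obs), 2))  # → 2.33
```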

    Simunek, M., & Smutny, Z. (2021). Traffic Information Enrichment: Creating Long-Term Traffic Speed Prediction Ensemble Model for Better Navigation through Waypoints. Applied Sciences, 11(1), 315. https://doi.org/10.3390/app11010315

    Appendix A Examples of the deviation between the average speed and the FreeFlowSpeed for selected hours.

    Appendix B The text file provides a complete overview of all road segments on which basis summary test results were calculated in Section 6 of the article.

  10. 1971 9% Controlled Census Microdata for Great Britain

    • datacatalogue.ukdataservice.ac.uk
    Updated Mar 18, 2019
    Cite
    UK Data Service (2019). 1971 9% Controlled Census Microdata for Great Britain [Dataset]. http://doi.org/10.5255/UKDA-SN-8271-1
    Explore at:
    Dataset updated
    Mar 18, 2019
    Dataset provided by
    UK Data Service (https://ukdataservice.ac.uk/)
    Area covered
    United Kingdom
    Description

    The 1971 Census Microdata for Great Britain: 9% Sample: Secure Access dataset was created from existing digital records from the 1971 Census. It comprises a larger population sample than the other files available from the 1971 Census (see below) and so contains sufficient information to constitute personal data, meaning that it is only available to Accredited Researchers, under restrictive Secure Access conditions. See Access section for further details.

    The file was created under a project known as Enhancing and Enriching Historic Census Microdata Samples (EEHCM), which was funded by the Economic and Social Research Council with input from the Office for National Statistics and National Records of Scotland. The project ran from 2012-2014 and was led from the UK Data Archive, University of Essex, in collaboration with the Cathie Marsh Institute for Social Research (CMIST) at the University of Manchester and the Census Offices. In addition to the 1971 data, the team worked on files from the 1961 Census and 1981 Census.

    The original 1971 records preceded current data archival standards and were created before microdata sets for secondary use were anticipated. A process of data recovery and quality checking was necessary to maximise their utility for current researchers, though some imperfections remain (see the User Guide for details).

    Three other 1971 Census datasets have been created; users should obtain the other datasets in the series first to see whether they are sufficient for their research needs before considering making an application for this study (SN 8271), the Secure Access version:

    • SN 8268 - 1971 Census Microdata Individual File for Great Britain: 5% Sample, which contains information on individuals in larger local authorities;
    • SN 8269 - 1971 Census Microdata Household File for Great Britain: 0.95% Sample, which links household members together to allow individuals to be understood within their household context. SNs 8268 and 8269 are both available to registered UK Data Service users based in the United Kingdom (see Access section for non-UK access restrictions); and
    • SN 8270 - 1971 Census Microdata Teaching Dataset for Great Britain: 1% Sample: Open Access, which can be used as a taster file and is freely available for anyone to download under an Open Government Licence.

  11. Data from: Quantitative Analysis of Human Cerebrospinal Fluid Proteins Using...

    • acs.figshare.com
    xls
    Updated Jun 5, 2023
    Cite
    Priscille Giron; Loïc Dayon; Natacha Turck; Christine Hoogland; Jean-Charles Sanchez (2023). Quantitative Analysis of Human Cerebrospinal Fluid Proteins Using a Combination of Cysteine Tagging and Amine-Reactive Isobaric Labeling [Dataset]. http://doi.org/10.1021/pr100535f.s001
    Explore at:
    xlsAvailable download formats
    Dataset updated
    Jun 5, 2023
    Dataset provided by
    ACS Publications
    Authors
    Priscille Giron; Loïc Dayon; Natacha Turck; Christine Hoogland; Jean-Charles Sanchez
    License

    Attribution-NonCommercial 4.0 (CC BY-NC 4.0): https://creativecommons.org/licenses/by-nc/4.0/
    License information was derived automatically

    Description

    Highly complex and dynamic protein mixtures are hardly comprehensively resolved by direct shotgun proteomic analysis. As many proteins of biological interest are of low abundance, numerous analytical methodologies have been developed to reduce sample complexity and go deeper into proteomes. The present work describes an analytical strategy to perform cysteinyl-peptide subset enrichment and relative quantification through successive cysteine and amine-isobaric tagging. A cysteine-reactive covalent capture tag (C3T) allowed derivatization of cysteines and specific isolation on a covalent capture (CC) resin. The 6-plex amine-reactive tandem mass tags (TMT) served for relative quantification of the targeted peptides. The strategy was first evaluated on a model protein mixture with increasing concentrations to assess the specificity of the enrichment and the quantitative performances of the workflow. It was then applied to human cerebrospinal fluid (CSF) from post-mortem and ante-mortem samples. These studies confirmed the specificity of the C3T and the CC technique to cysteine-containing peptides. The model protein mixture analysis showed high precision and accuracy of the quantification with coefficients of variation and mean absolute errors of less than 10% on average. The CSF experiments demonstrated the potential of the strategy to study complex biological samples and identify differential brain-related proteins. In addition, the quantification data were highly correlated with a classical TMT experiment (i.e., without C3T cysteine-tagging and enrichment steps). Altogether, these results legitimate the use of this quantitative C3T strategy to enrich and relatively quantify cysteine-containing peptides in complex mixtures.

  12. What are we missing by using hydrophilic enrichment? Improving bacterial...

    • data.niaid.nih.gov
    xml
    Updated Oct 30, 2020
    + more versions
    Cite
    Nichollas Scott; Nichollas Scott (2020). What are we missing by using hydrophilic enrichment? Improving bacterial glycoproteome coverage using total proteome and FAIMS analysis.- Scripts [Dataset]. https://data.niaid.nih.gov/resources?id=pxd021913
    Explore at:
    xmlAvailable download formats
    Dataset updated
    Oct 30, 2020
    Dataset provided by
    University of Melbourne
    Department of Microbiology and Immunology, University of Melbourne at the Peter Doherty Institute for Infection and Immunity, Melbourne 3000, Australia
    Authors
    Nichollas Scott; Nichollas Scott
    Variables measured
    Proteomics
    Description

    Hydrophilic Interaction Liquid Chromatography (HILIC) glycopeptide enrichment is an indispensable tool for the high-throughput characterisation of glycoproteomes. Despite its utility, HILIC enrichment is associated with a number of shortcomings, including requiring large amounts of starting material, potentially introducing chemical artefacts such as formylation when high concentrations of formic acid are used, and biasing/under-sampling specific classes of glycopeptides. Here we investigate HILIC enrichment independent approaches for the study of bacterial glycoproteomes. Using three Burkholderia species (B. cenocepacia, B. dolosa and B. ubonensis) we demonstrate that short aliphatic O-linked glycopeptides are typically absent from HILIC enrichments yet are readily identified in whole proteome samples. Using high-field asymmetric waveform ion mobility spectrometry (FAIMS) fractionation we show that at high compensation voltages (CVs) short aliphatic glycopeptides can be enriched from complex samples, providing an alternative means to identify glycopeptides recalcitrant to hydrophilic based enrichment. Combining whole proteome and FAIMS analysis we show that the observable glycoproteome of these Burkholderia species is at least 30% larger than initially thought. Excitingly, the ability to enrich glycopeptides using FAIMS appears generally applicable, with the N-linked glycopeptides of Campylobacter fetus subsp. fetus also enrichable at high FAIMS CVs. Taken together, these results demonstrate that FAIMS provides an alternative means to access glycopeptides and is a valuable tool for glycoproteomic analysis.

  13. 1961 Census Microdata for Great Britain: 9% Sample: Secure Access

    • datacatalogue.cessda.eu
    Updated Nov 28, 2024
    Cite
    Enhancing and Enriching Historic Census Microdata Project (EEHCM); Office for National Statistics (2024). 1961 Census Microdata for Great Britain: 9% Sample: Secure Access [Dataset]. http://doi.org/10.5255/UKDA-SN-8275-1
    Explore at:
    Dataset updated
    Nov 28, 2024
    Authors
    Enhancing and Enriching Historic Census Microdata Project (EEHCM); Office for National Statistics
    Time period covered
    Jan 1, 2012 - Jan 1, 2014
    Area covered
    United Kingdom
    Variables measured
    Individuals, National
    Measurement technique
    Compilation or synthesis of existing material
    Description

    Abstract copyright UK Data Service and data collection copyright owner.


    The 1961 Census Microdata for Great Britain: 9% Sample: Secure Access dataset was created from existing digital records from the 1961 Census. It comprises a larger population sample than the other files available from the 1961 Census (see below) and so contains sufficient information to constitute personal data, meaning that it is only available to Accredited Researchers, under restrictive Secure Access conditions. See Access section for further details.

    The file was created under a project known as Enhancing and Enriching Historic Census Microdata Samples (EEHCM), which was funded by the Economic and Social Research Council with input from the Office for National Statistics and National Records of Scotland. The project ran from 2012-2014 and was led from the UK Data Archive, University of Essex, in collaboration with the Cathie Marsh Institute for Social Research (CMIST) at the University of Manchester and the Census Offices. In addition to the 1961 data, the team worked on files from the 1971 Census and 1981 Census.

    The original 1961 records preceded current data archival standards and were created before microdata sets for secondary use were anticipated. A process of data recovery and quality checking was necessary to maximise their utility for current researchers, though some imperfections remain (see the User Guide for details).

    Three other 1961 Census datasets have been created; users should obtain the other datasets in the series first to see whether they are sufficient for their research needs before considering making an application for this study (SN 8275), the Secure Access version:
    • SN 8272 - 1961 Census Microdata Individual File for Great Britain: 5% Sample, which contains information on individuals in larger local authorities;
    • SN 8273 - 1961 Census Microdata Household File for Great Britain: 0.95% Sample, which links household members together to allow individuals to be understood within their household context. SNs 8272 and 8273 are both available to registered UK Data Service users based in the United Kingdom (see Access section for non-UK access restrictions); and
    • SN 8274 - 1961 Census Microdata Teaching Dataset for Great Britain: 1% Sample: Open Access, which can be used as a taster file and is freely available for anyone to download under an Open Government Licence.

    Main Topics:

    The file contains information on individuals in local authorities and counties. Topics covered include demographics, for example family and household circumstances, employment and qualifications, migration, commuting, car ownership, and housing.

  14. 1981 Census Microdata Individual File for Great Britain: 5% Sample

    • datacatalogue.cessda.eu
    Updated Nov 28, 2024
    Cite
    Office for National Statistics; Enhancing and Enriching Historic Census Microdata Project (EEHCM) (2024). 1981 Census Microdata Individual File for Great Britain: 5% Sample [Dataset]. http://doi.org/10.5255/UKDA-SN-8241-1
    Explore at:
    Dataset updated
    Nov 28, 2024
    Authors
    Office for National Statistics; Enhancing and Enriching Historic Census Microdata Project (EEHCM)
    Time period covered
    Jan 1, 2012 - Jan 1, 2014
    Area covered
    United Kingdom
    Variables measured
    Individuals, Families/households, National
    Measurement technique
    Compilation or synthesis of existing material
    Description

    Abstract copyright UK Data Service and data collection copyright owner.


    The 1981 Census Microdata Individual File for Great Britain: 5% Sample dataset was created from existing digital records from the 1981 Census under a project known as Enhancing and Enriching Historic Census Microdata Samples (EEHCM), which was funded by the Economic and Social Research Council with input from the Office for National Statistics and National Records of Scotland. The project ran from 2012-2014 and was led from the UK Data Archive, University of Essex, in collaboration with the Cathie Marsh Institute for Social Research (CMIST) at the University of Manchester and the Census Offices. In addition to the 1981 data, the team worked on files from the 1961 Census and 1971 Census.

    The original 1981 records preceded current data archival standards and were created before microdata sets for secondary use were anticipated. A process of data recovery and quality checking was necessary to maximise their utility for current researchers, though some imperfections remain (see the User Guide for details). Three other 1981 Census datasets have been created:

    • SN 8242 - 1981 Census Microdata Household File for Great Britain: 0.95% Sample, which links household members together to allow individuals to be understood within their household context, and is available to registered UK Data Service users based in the United Kingdom (see Access section for non-UK access restrictions);
    • SN 8243 - 1981 Census Microdata Teaching Dataset for Great Britain: 1% Sample: Open Access, which can be used as a taster file and is freely available for anyone to download under an Open Government Licence; and
    • SN 8248 - 1981 Census Microdata for Great Britain: 9% Sample: Secure Access, which comprises a larger population sample and so contains sufficient information to constitute personal data, meaning that it is only available to Accredited Researchers, under restrictive Secure Access conditions.

    Main Topics:

    The file contains information on individuals in larger local authorities. Topics covered include demographics, for example family and household circumstances, employment and qualifications, migration, commuting, car ownership, and housing.

  15. California - NWS Watches and Warnings - Dataset - CKAN

    • nationaldataplatform.org
    Updated Feb 28, 2024
    + more versions
    Cite
    (2024). California - NWS Watches and Warnings - Dataset - CKAN [Dataset]. https://nationaldataplatform.org/catalog/dataset/california-nws-watches-and-warnings
    Explore at:
    Dataset updated
    Feb 28, 2024
    License

    CC0 1.0 Universal Public Domain Dedication: https://creativecommons.org/publicdomain/zero/1.0/
    License information was derived automatically

    Area covered
    California
    Description

    This feature service depicts the National Weather Service (NWS) watches, warnings, and advisories within the United States. Watches and warnings are classified into 43 categories.

    A warning is issued when a hazardous weather or hydrologic event is occurring, imminent, or likely. A warning means weather conditions pose a threat to life or property. People in the path of the storm need to take protective action.

    A watch is used when the risk of a hazardous weather or hydrologic event has increased significantly, but its occurrence, location, or timing is still uncertain. It is intended to provide enough lead time so those who need to set their plans in motion can do so. A watch means that hazardous weather is possible. People should have a plan of action in case a storm threatens, and they should listen for later information and possible warnings, especially when planning travel or outdoor activities.

    An advisory is issued when a hazardous weather or hydrologic event is occurring, imminent, or likely. Advisories are for less serious conditions than warnings that cause significant inconvenience and, if caution is not exercised, could lead to situations that may threaten life or property.

    Source: National Weather Service RSS-CAP Warnings and Advisories (Public Alerts); National Weather Service Boundary Overlays (AWIPS Shapefile Database).

    Sample Data: See the Sample Layer item for sample data during weather inactivity.

    Update Frequency: The service is updated every 5 minutes using the Aggregated Live Feeds methodology. The overlay data is checked and updated daily from the official AWIPS Shapefile Database.

    Area Covered: United States and Territories.

    What can you do with this layer?
    • Customize the display of each attribute by using the Change Style option for any layer.
    • Query the layer to display only specific types of weather watches and warnings.
    • Add to a map with other weather data layers to provide insight on hazardous weather events.
    • Use ArcGIS Online analysis tools, such as Enrich Data, to determine the potential impact of weather events on populations.

    This map is provided for informational purposes and is not monitored 24/7 for accuracy and currency.

    Additional information on Watches and Warnings.

  16. 1961 Census Microdata Household File for Great Britain: 0.95% Sample

    • datacatalogue.cessda.eu
    Updated Nov 28, 2024
    + more versions
    Cite
    Enhancing and Enriching Historic Census Microdata Project (EEHCM); Office for National Statistics (2024). 1961 Census Microdata Household File for Great Britain: 0.95% Sample [Dataset]. http://doi.org/10.5255/UKDA-SN-8273-1
    Explore at:
    Dataset updated
    Nov 28, 2024
    Authors
    Enhancing and Enriching Historic Census Microdata Project (EEHCM); Office for National Statistics
    Time period covered
    Jan 1, 2012 - Jan 1, 2014
    Area covered
    United Kingdom
    Variables measured
    Families/households, National
    Measurement technique
    Compilation or synthesis of existing material
    Description

    Abstract copyright UK Data Service and data collection copyright owner.


    The 1961 Census Microdata Household File for Great Britain: 0.95% Sample dataset was created from existing digital records from the 1961 Census under a project known as Enhancing and Enriching Historic Census Microdata Samples (EEHCM), which was funded by the Economic and Social Research Council with input from the Office for National Statistics and National Records of Scotland. The project ran from 2012-2014 and was led from the UK Data Archive, University of Essex, in collaboration with the Cathie Marsh Institute for Social Research (CMIST) at the University of Manchester and the Census Offices. In addition to the 1961 data, the team worked on files from the 1971 Census and 1981 Census.

    The original 1961 records preceded current data archival standards and were created before microdata sets for secondary use were anticipated. A process of data recovery and quality checking was necessary to maximise their utility for current researchers, though some imperfections remain (see the User Guide for details). Three other 1961 Census datasets have been created:
    • SN 8272 - 1961 Census Microdata Individual File for Great Britain: 5% Sample, which contains information on individuals in larger local authorities, and is available to registered UK Data Service users based in the United Kingdom (see Access section for non-UK access restrictions);
    • SN 8274 - 1961 Census Microdata Teaching Dataset for Great Britain: 1% Sample: Open Access, which can be used as a taster file and is freely available for anyone to download under an Open Government Licence; and
    • SN 8275 - 1961 Census Microdata for Great Britain: 9% Sample: Secure Access, which comprises a larger population sample and so contains sufficient information to constitute personal data, meaning that it is only available to Accredited Researchers, under restrictive Secure Access conditions.

    Main Topics:

    The file links household members together to allow individuals to be understood within their household context. Topics covered include demographics, for example family and household circumstances, employment and qualifications, migration, commuting, car ownership, and housing.

  17. Ground truth labels - Amazon movie reviews dataset

    • kaggle.com
    zip
    Updated Jul 8, 2017
    Cite
    Konstantinos Bazakos (2017). Ground truth labels - Amazon movie reviews dataset [Dataset]. https://www.kaggle.com/thebuzz/ground-truth-labels-amazon-movie-reviews-dataset
    Explore at:
    zip(6829166 bytes)Available download formats
    Dataset updated
    Jul 8, 2017
    Authors
    Konstantinos Bazakos
    License

    CC0 1.0 Universal Public Domain Dedication: https://creativecommons.org/publicdomain/zero/1.0/

    Description

    Addition of ground truth labels on Amazon movie reviews

    (Screenshot: http://i.imgur.com/aDVUwMz.png)

    What is it?

    This is a side project for my thesis “Classification/Clustering Techniques for Large Web Data Collections”.

    My main goal was to provide a new, enriched, ground truth labeled dataset to the Machine Learning community. All labels have been collected by crawling/scraping Amazon.com over a period of some months. By labels I mean the categories in which the products are classified (see the green underlined labels in the screenshot below).

    (Screenshot: http://i.imgur.com/mAiuoO6.png)

    Please, if you feel you can make any contribution that will improve this dataset, fork it on github.com.

    The original dataset

    The Amazon Movie Reviews dataset consists of 7,911,684 reviews that Amazon users left between Aug 1997 and Oct 2012.

    Data format:

    • product/productId: B00006HAXW
    • review/userId: A1RSDE90N6RSZF
    • review/profileName: Joseph M. Kotow
    • review/helpfulness: 9/9
    • review/score: 5.0
    • review/time: 1042502400
    • review/summary: Pittsburgh - Home of the OLDIES
    • review/text: I have all of the doo wop DVD's and this one is as good or better than the 1st ones. Remember once these performers are gone, we'll never get to see them again. Rhino did an excellent job and if you like or love doo wop and Rock n Roll you'll LOVE this DVD!!

    where:

    • product/productId: asin, e.g. amazon.com/dp/B00006HAXW
    • review/userId: id of the user, e.g. A1RSDE90N6RSZF
    • review/profileName: name of the user
    • review/helpfulness: fraction of users who found the review helpful
    • review/score: rating of the product
    • review/time: time of the review (unix time)
    • review/summary: review summary
    • review/text: text of the review
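As the field list notes, review/time is stored as plain unix seconds. A minimal sketch of decoding the sample record's timestamp into a calendar date (the review_date helper is purely illustrative, not part of the dataset's tooling):

```python
from datetime import datetime, timezone

# Convert a review/time value (unix seconds, stored as a string in the
# plain-text records) to a UTC calendar date. Hypothetical helper.
def review_date(unix_seconds):
    return datetime.fromtimestamp(int(unix_seconds), tz=timezone.utc).strftime('%Y-%m-%d')

print(review_date('1042502400'))  # sample review's timestamp -> 2003-01-14
```

Using `timezone.utc` with `fromtimestamp` avoids skew from the local machine's timezone when comparing review dates across the corpus.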

    The new labeled dataset

    All the collected data (for every ASIN of the SNAP dataset, ~253k products covering ~8m reviews) are stored in a CSV file, labels.csv, in the following format:

    • ASIN: unique identifier for the product
    • Categories: [label, label, label,..., label]

    The new data format will be:

    • product/productId: B00006HAXW
    • review/userId: A1RSDE90N6RSZF
    • review/profileName: Joseph M. Kotow
    • review/helpfulness: 9/9
    • review/score: 5.0
    • review/time: 1042502400
    • review/summary: Pittsburgh - Home of the OLDIES
    • review/text: I have all of the doo wop DVD's and this one is as good or better than the 1st ones. Remember once these performers are gone, we'll never get to see them again. Rhino did an excellent job and if you like or love doo wop and Rock n Roll you'll LOVE this DVD!!
    • product/categories: ['CDs & Vinyl', 'Pop', 'Oldies', 'Doo Wop']

    Instructions

    Follow the steps below to get the enriched dataset:

    1. Download the original dataset from the SNAP website (~ 3.3 GB compressed) and put it in the root folder of the repository (where you can find also the labels.csv file).

    2. Execute the Python file enrich.py (available in the GitHub project) so that the new enriched, multi-labeled dataset is exported. The name of the new file should be output.txt.gz.

    Note: please be patient, as the Python script will take a while to parse all these reviews.

    The Python script generates a new compressed file that is identical to the original one, but with an extra feature (product/categories).

    It applies a mapping between the ASIN values in both files and adds the product's label data to every review instance of that product, as an extra column.

    Here is the code:

    import gzip
    import csv
    import ast

    def look_up(asin, diction):
        # Return the category labels for an ASIN, or an empty list if unknown.
        try:
            return diction[asin]
        except KeyError:
            return []

    def load_labels():
        # Build a dictionary mapping ASIN -> list of category labels from labels.csv.
        labels_dictionary = {}
        with open('labels.csv', mode='r') as infile:
            csvreader = csv.reader(infile)
            next(csvreader)  # skip the header row
            for rows in csvreader:
                # The second column holds a Python-style list literal,
                # e.g. "['CDs & Vinyl', 'Pop', 'Oldies', 'Doo Wop']"
                labels_dictionary[rows[0]] = ast.literal_eval(rows[1])
        return labels_dictionary

    def parse(filename):
        labels_dict = load_labels()
        f = gzip.open(filename, 'rt')  # text mode, so lines are str not bytes
        entry = {}
        for l in f:
            l = l.strip()
            colonPos = l.find(':')
            if colonPos == -1:
                # Blank line: the current review record is complete.
                yield entry
                entry = {}
                continue
            eName = l[:colonPos]
            rest = l[colonPos+2:]
            entry[eName] = rest
            if eName == 'product/productId':
                # Attach the crawled category labels for this product.
                entry['product/categories'] = look_up(rest, labels_dict)
        if entry:
            yield entry  # last record, if the file does not end with a blank line
    
  18. Other Events

    • prep-response-portal.napsgfoundation.org
    Updated Apr 28, 2020
    + more versions
    Cite
    NAPSG Foundation (2020). Other Events [Dataset]. https://prep-response-portal.napsgfoundation.org/datasets/other-events
    Explore at:
    Dataset updated
    Apr 28, 2020
    Dataset authored and provided by
    NAPSG Foundation
    Area covered
    Description

    This is a saved copy of the NWS Weather Watches and Warnings layer, filtered just for wildfire-related warnings.

    Details from the original item: https://www.arcgis.com/home/item.html?id=a6134ae01aad44c499d12feec782b386

    This feature service depicts the National Weather Service (NWS) watches, warnings, and advisories within the United States. Watches and warnings are classified into 43 categories.

    A warning is issued when a hazardous weather or hydrologic event is occurring, imminent, or likely. A warning means weather conditions pose a threat to life or property. People in the path of the storm need to take protective action.

    A watch is used when the risk of a hazardous weather or hydrologic event has increased significantly, but its occurrence, location, or timing is still uncertain. It is intended to provide enough lead time so those who need to set their plans in motion can do so. A watch means that hazardous weather is possible. People should have a plan of action in case a storm threatens, and they should listen for later information and possible warnings, especially when planning travel or outdoor activities.

    An advisory is issued when a hazardous weather or hydrologic event is occurring, imminent, or likely. Advisories are for less serious conditions than warnings that cause significant inconvenience and, if caution is not exercised, could lead to situations that may threaten life or property.

    Source: National Weather Service RSS-CAP Warnings and Advisories (Public Alerts); National Weather Service Boundary Overlays (AWIPS Shapefile Database).

    Update Frequency: The service is updated every 5 minutes using the Aggregated Live Feeds methodology. The overlay data is checked and updated daily from the official AWIPS Shapefile Database.

    Area Covered: United States and Territories.

    What can you do with this layer?
    • Customize the display of each attribute by using the Change Style option for any layer.
    • Query the layer to display only specific types of weather watches and warnings.
    • Add to a map with other weather data layers to provide insight on hazardous weather events.
    • Use ArcGIS Online analysis tools, such as Enrich Data, to determine the potential impact of weather events on populations.

    This map is provided for informational purposes and is not monitored 24/7 for accuracy and currency.

  19. Business Survey on ICT, 2007 - West Bank and Gaza

    • pcbs.gov.ps
    Updated Jan 28, 2020
    + more versions
    Cite
    Palestinian Central Bureau of Statistics (2020). Business Survey on ICT, 2007 - West Bank and Gaza [Dataset]. https://www.pcbs.gov.ps/PCBS-Metadata-en-v5.2/index.php/catalog/460
    Explore at:
    Dataset updated
    Jan 28, 2020
    Dataset authored and provided by
    Palestinian Central Bureau of Statistics (https://pcbs.gov/)
    Time period covered
    2008
    Area covered
    Gaza, Gaza Strip, West Bank
    Description

    Abstract

    Objectives of the Survey

    The main objective of this survey is to provide statistical data on ICT for enterprises in the Palestinian Territory. The specific objectives can be summarized as follows:

    • Enriching ICT statistical data on the actual use of and access to ICT by economic enterprises.
    • Identifying the characteristics of the tools and means of ICT used in economic activity, by type of economic activity and size of enterprise.
    • Providing the opportunity for international and regional comparisons, which helps to situate the Palestinian Territory among the world's technological countries.
    • Assisting planners and policy makers in understanding the current status of the technology-based economy in the Palestinian Territory, which helps to meet the future needs of the Palestinian economy.

    Geographic coverage

    The data are representative at the region level (West Bank, Gaza Strip).

    Analysis unit

    enterprises

    Universe

    The enterprises in the Palestinian Territory

    Kind of data

    Sample survey data [ssd]

    Sampling procedure

    Sample Design

    The sample is a one-stage stratified random sample. Strata of enterprises with fewer than 30 workers and enterprises with 30 or more workers were both included. Enterprises were stratified at three levels:

    • First level: geographical classification into two regions, the West Bank and the Gaza Strip.
    • Second level: economic activity, classified according to the International Industrial Classification of Economic Activities.
    • Third level: employment size category, classified by number of employees:
    1. Enterprises with fewer than 5 employees.
    2. Enterprises with 5-10 employees.
    3. Enterprises with 11-29 employees.
    4. Enterprises with 30 employees and over.

    Mode of data collection

    Face-to-face [f2f]

    Research instrument

    The Survey Questionnaire

    In light of the identified data requirements, the survey instrument was developed following a review of international recommendations and of countries' experiences in this area, and following discussion with stakeholders through a workshop at PCBS to discuss the procedures and indicators of the survey.

    In addition to identification information and data quality control, BICT 2007 survey instrument consists of three main sections, namely:

    Section one: Includes readiness, access to ICT; this section contains a collection of examples about the existence of the necessary infrastructure for the use of technology and tools and instruments in the business, such as the availability of the computer and Internet service. It also provides a range of sophisticated devices associated with the use of technology such as telephone, fax, mobile phone, printers, and other related issues.

    Section two: includes a series of questions about the use of Internet and computer networks in various activities and projects of economic enterprises, such as using the Internet, and networks to conduct commercial transactions buying and selling, and obstacles faced by Palestinian enterprises in the use of networks and Internet in their economic activities and implementation electronically of commercial transactions.

    Section three: includes questions about the future direction of the enterprises in the use of means and tools of ICT, as well as expenditures for some tools and means of ICT that have been adopted.

    Cleaning operations

    Data Editing

    The project's management developed a clear mechanism for editing the data and trained the team of editors accordingly. The mechanism was as follows:

    • Receiving completed questionnaires on a daily basis;
    • Checking each questionnaire to make sure that it was complete and that the data covered all eligible enterprises, with checks also focused on the accuracy of the answers to the questions; and
    • Returning incomplete questionnaires, as well as those with errors, to the field for completion.

    Response rate

    The survey sample consists of about 2,966 enterprises; 2,604 enterprises completed the interview, of which 1,746 were in the West Bank and 858 in the Gaza Strip. The response rate was 92.2%.

    Sampling error estimates

    Detailed information on the sampling Error is available in the Survey Report.

    Data appraisal

    Detailed information on the data appraisal is available in the Survey Report.

  20.

    1961 0.95% Safeguarded Household Census Microdata for Great Britain

    • datacatalogue.ukdataservice.ac.uk
    Updated Jan 6, 2023
    + more versions
    UK Data Service (2023). 1961 0.95% Safeguarded Household Census Microdata for Great Britain [Dataset]. http://doi.org/10.5255/UKDA-SN-8273-1
    Explore at:
    Dataset updated
    Jan 6, 2023
    Dataset provided by
    UK Data Service (https://ukdataservice.ac.uk/)
    Area covered
    United Kingdom
    Description

    The 1961 Census Microdata Household File for Great Britain: 0.95% Sample dataset was created from existing digital records from the 1961 Census under a project known as Enhancing and Enriching Historic Census Microdata Samples (EEHCM), which was funded by the Economic and Social Research Council with input from the Office for National Statistics and National Records of Scotland. The project ran from 2012-2014 and was led from the UK Data Archive, University of Essex, in collaboration with the Cathie Marsh Institute for Social Research (CMIST) at the University of Manchester and the Census Offices. In addition to the 1961 data, the team worked on files from the 1971 Census and 1981 Census.

    The original 1961 records preceded current data archival standards and were created before microdata sets for secondary use were anticipated. A process of data recovery and quality checking was necessary to maximise their utility for current researchers, though some imperfections remain (see the User Guide for details). Three other 1961 Census datasets have been created:

    • SN 8272 - 1961 Census Microdata Individual File for Great Britain: 5% Sample, which contains information on individuals in larger local authorities, and is available to registered UK Data Service users based in the United Kingdom (see Access section for non-UK access restrictions);
    • SN 8274 - 1961 Census Microdata Teaching Dataset for Great Britain: 1% Sample: Open Access, which can be used as a taster file and is freely available for anyone to download under an Open Government Licence; and
    • SN 8275 - 1961 Census Microdata for Great Britain: 9% Sample: Secure Access, which comprises a larger population sample and so contains sufficient information to constitute personal data, meaning that it is only available to Accredited Researchers, under restrictive Secure Access conditions.

Dataintelo (2025). Automated Indicator Enrichment Market Research Report 2033 [Dataset]. https://dataintelo.com/report/automated-indicator-enrichment-market

Automated Indicator Enrichment Market Research Report 2033

Explore at:
csv, pdf, pptx (available download formats)
Dataset updated
Sep 30, 2025
Dataset authored and provided by
Dataintelo
License

https://dataintelo.com/privacy-and-policy

Time period covered
2024 - 2032
Area covered
Global
Description

Automated Indicator Enrichment Market Outlook



According to our latest research, the global Automated Indicator Enrichment market size reached USD 1.26 billion in 2024, reflecting robust adoption across sectors. The market is expected to grow at a CAGR of 16.7% during the forecast period, with the market size projected to reach USD 4.01 billion by 2033. This growth is primarily driven by the increasing sophistication of cyber threats and the urgent need for organizations to automate threat intelligence processes, enabling faster and more accurate response to security incidents. The convergence of AI, machine learning, and security automation technologies is further accelerating the adoption of automated indicator enrichment solutions globally.




One of the key growth factors for the Automated Indicator Enrichment market is the escalating volume and complexity of cyber threats targeting organizations of all sizes. With threat actors employing advanced tactics, techniques, and procedures (TTPs), traditional manual threat analysis processes are proving inadequate. Automated indicator enrichment enables security teams to automatically contextualize, validate, and prioritize threat indicators, significantly reducing the mean time to detect (MTTD) and respond (MTTR) to incidents. The proliferation of endpoints, cloud workloads, and interconnected digital assets has necessitated a scalable approach to threat intelligence, further fueling demand for automated solutions that can process vast amounts of threat data in real time.
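The contextualize-validate-prioritize flow described above can be sketched as a small pipeline. Everything here (the mock feed, the asset database, field names, and scoring weights) is hypothetical and not drawn from any vendor's product:

```python
# Hedged sketch of one automated indicator enrichment step: take a raw
# indicator, attach context from (mock) sources, and assign a priority so
# analysts see high-risk items first. All sources and weights are illustrative.

MOCK_FEED = {  # stand-in for an external threat-intelligence feed
    "203.0.113.7": {"reputation": "malicious", "first_seen": "2024-11-02"},
}
MOCK_ASSET_DB = {  # stand-in for an internal asset inventory
    "203.0.113.7": {"seen_on_critical_asset": True},
}

def enrich(indicator: str) -> dict:
    context = {
        "indicator": indicator,
        "feed": MOCK_FEED.get(indicator, {}),
        "internal": MOCK_ASSET_DB.get(indicator, {}),
    }
    score = 0
    if context["feed"].get("reputation") == "malicious":
        score += 50  # external reputation signal
    if context["internal"].get("seen_on_critical_asset"):
        score += 40  # internal exposure signal
    context["priority"] = "high" if score >= 70 else "low"
    return context
```

The point of the automation is that this lookup-and-score step runs on every incoming indicator without analyst involvement, which is what drives down the MTTD and MTTR figures mentioned above.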




Another significant driver is the increasing regulatory pressure on organizations to maintain robust cybersecurity postures and ensure compliance with international standards such as GDPR, HIPAA, and PCI DSS. Automated indicator enrichment solutions facilitate compliance management by providing auditable, consistent, and timely threat intelligence workflows. This not only helps organizations avoid costly penalties but also enhances their overall security posture. The market is also benefitting from the growing awareness among enterprises regarding the benefits of automation in reducing human error, improving operational efficiency, and enabling proactive security measures. As a result, both large enterprises and small and medium enterprises (SMEs) are investing in advanced automated indicator enrichment platforms to stay ahead of evolving cyber threats.




The rapid advancements in artificial intelligence (AI) and machine learning (ML) technologies have also played a pivotal role in shaping the Automated Indicator Enrichment market. Modern solutions leverage AI and ML algorithms to enrich threat indicators with contextual data from multiple sources, including threat intelligence feeds, internal logs, and external databases. This automated enrichment process enhances the accuracy of threat detection and enables security analysts to focus on high-priority incidents. Additionally, the integration of automated indicator enrichment tools with Security Information and Event Management (SIEM) and Security Orchestration, Automation, and Response (SOAR) platforms is creating new opportunities for seamless, end-to-end security automation, further driving market growth.




From a regional perspective, North America currently dominates the Automated Indicator Enrichment market, accounting for the largest share in 2024, followed closely by Europe and Asia Pacific. The presence of major cybersecurity vendors, high adoption rates of advanced security solutions, and stringent regulatory frameworks are key factors contributing to North America's leadership. Meanwhile, Asia Pacific is expected to witness the fastest growth over the forecast period, driven by increasing digital transformation initiatives, rising cybercrime rates, and growing investments in cybersecurity infrastructure across emerging economies such as India and China and across Southeast Asia. Europe continues to show strong growth potential, particularly in sectors like BFSI, healthcare, and government, where data protection and compliance are top priorities.



Component Analysis



The Automated Indicator Enrichment market is segmented by component into software and services, each playing a vital role in the ecosystem. The software segment currently holds the largest market share, owing to the increasing deployment of advanced enrichment platforms that leverage AI, ML, and big data analytics to automate the enrichment of threat indicators. These platforms
