Xverum empowers tech-driven companies to elevate their solutions by providing comprehensive global company data. With over 50 million company profiles, we help you enrich and expand your data, conduct extensive company analysis, and tailor your digital strategies accordingly.
Top 5 characteristics of company data from Xverum:
Monthly Updates: Stay informed about any changes in company data with over 40 data attributes per profile.
3.5x Higher Refresh Rate: Stay ahead of the competition with the freshest prospect data available as you won't find any profile older than 120 days.
5x Better Quality of Company Data: High-quality data means more precise prospecting and data enrichment in your strategies.
100% GDPR and CCPA Compliant: Build digital strategies using legitimate data.
Global Coverage: Access data from over 200 countries, ensuring you have the right audience data you need, wherever you operate.
At Xverum, we're committed to providing you with real-time B2B data to fuel your success. We are happy to learn more about your specific needs and deliver custom company data according to your requirements.
https://dataintelo.com/privacy-and-policy
According to our latest research, the global Automated Indicator Enrichment market size reached USD 1.26 billion in 2024, reflecting robust adoption across sectors. The market is expected to grow at a CAGR of 16.7% during the forecast period, with the market size projected to reach USD 4.01 billion by 2033. This growth is primarily driven by the increasing sophistication of cyber threats and the urgent need for organizations to automate threat intelligence processes, enabling faster and more accurate response to security incidents. The convergence of AI, machine learning, and security automation technologies is further accelerating the adoption of automated indicator enrichment solutions globally.
One of the key growth factors for the Automated Indicator Enrichment market is the escalating volume and complexity of cyber threats targeting organizations of all sizes. With threat actors employing advanced tactics, techniques, and procedures (TTPs), traditional manual threat analysis processes are proving inadequate. Automated indicator enrichment enables security teams to automatically contextualize, validate, and prioritize threat indicators, significantly reducing the mean time to detect (MTTD) and respond (MTTR) to incidents. The proliferation of endpoints, cloud workloads, and interconnected digital assets has necessitated a scalable approach to threat intelligence, further fueling demand for automated solutions that can process vast amounts of threat data in real time.
Another significant driver is the increasing regulatory pressure on organizations to maintain robust cybersecurity postures and ensure compliance with international standards such as GDPR, HIPAA, and PCI DSS. Automated indicator enrichment solutions facilitate compliance management by providing auditable, consistent, and timely threat intelligence workflows. This not only helps organizations avoid costly penalties but also enhances their overall security posture. The market is also benefitting from the growing awareness among enterprises regarding the benefits of automation in reducing human error, improving operational efficiency, and enabling proactive security measures. As a result, both large enterprises and small and medium enterprises (SMEs) are investing in advanced automated indicator enrichment platforms to stay ahead of evolving cyber threats.
The rapid advancements in artificial intelligence (AI) and machine learning (ML) technologies have also played a pivotal role in shaping the Automated Indicator Enrichment market. Modern solutions leverage AI and ML algorithms to enrich threat indicators with contextual data from multiple sources, including threat intelligence feeds, internal logs, and external databases. This automated enrichment process enhances the accuracy of threat detection and enables security analysts to focus on high-priority incidents. Additionally, the integration of automated indicator enrichment tools with Security Information and Event Management (SIEM) and Security Orchestration, Automation, and Response (SOAR) platforms is creating new opportunities for seamless, end-to-end security automation, further driving market growth.
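To make the enrichment step concrete, here is a minimal sketch, in Python, of how a pipeline might contextualize and prioritize an indicator of compromise by merging context from several feeds. The feed contents, confidence weights, and priority threshold are invented for illustration and do not represent any vendor's product.

```python
# Minimal sketch of automated indicator enrichment (illustrative only).
# Feed names, confidence scores, and the threshold are hypothetical.
from dataclasses import dataclass, field

@dataclass
class Indicator:
    value: str                      # e.g., an IP address or file hash
    ioc_type: str                   # "ip", "domain", "hash", ...
    context: dict = field(default_factory=dict)
    score: float = 0.0

def enrich(indicator: Indicator, feeds: list) -> Indicator:
    """Merge context from each feed and accumulate a priority score."""
    for feed in feeds:
        hit = feed.get(indicator.value)
        if hit:
            indicator.context[hit["source"]] = hit
            indicator.score += hit.get("confidence", 0.0)
    return indicator

# Two tiny in-memory "feeds" standing in for real threat intel sources.
feed_a = {"198.51.100.7": {"source": "feed_a", "tag": "botnet-c2", "confidence": 0.8}}
feed_b = {"198.51.100.7": {"source": "feed_b", "tag": "scanner", "confidence": 0.3}}

ioc = enrich(Indicator("198.51.100.7", "ip"), [feed_a, feed_b])
priority = "high" if ioc.score >= 1.0 else "low"   # illustrative threshold
print(ioc.context, ioc.score, priority)
```

In a production pipeline this scoring step is what feeds prioritized indicators into SIEM/SOAR workflows, so analysts see contextualized, ranked alerts rather than raw feeds.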
From a regional perspective, North America currently dominates the Automated Indicator Enrichment market, accounting for the largest share in 2024, followed closely by Europe and the Asia Pacific. The presence of major cybersecurity vendors, high adoption rates of advanced security solutions, and stringent regulatory frameworks are key factors contributing to North America's leadership. Meanwhile, Asia Pacific is expected to witness the fastest growth over the forecast period, driven by increasing digital transformation initiatives, rising cybercrime rates, and growing investments in cybersecurity infrastructure across emerging economies such as India, China, and Southeast Asia. Europe continues to show strong growth potential, particularly in sectors like BFSI, healthcare, and government, where data protection and compliance are top priorities.
The Automated Indicator Enrichment market is segmented by component into software and services, each playing a vital role in the ecosystem. The software segment currently holds the largest market share, owing to the increasing deployment of advanced enrichment platforms that leverage AI, ML, and big data analytics to automate the enrichment of threat indicators.
According to our latest research, the global AI-Powered Knowledge Graph market size reached USD 2.45 billion in 2024, demonstrating robust momentum driven by rising enterprise adoption of AI-driven data structuring tools. The market is expected to expand at a CAGR of 25.8% from 2025 to 2033, reaching a projected value of USD 19.1 billion by 2033. This significant growth is fueled by the increasing demand for advanced data integration, real-time analytics, and intelligent automation across diverse industry verticals. The market’s acceleration is underpinned by a confluence of digital transformation initiatives, surging investments in AI infrastructure, and the growing need for contextual data insights to drive business decisions.
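As a quick arithmetic check on projections like these, the compound annual growth rate relates the start and end values by end = start × (1 + r)^n. The snippet below assumes a 2024 base value and nine compounding years to 2033, which implies a rate close to the cited 25.8%.

```python
# CAGR sanity check: end = start * (1 + r) ** n
start, end, years = 2.45, 19.1, 9            # USD billion, 2024 -> 2033
cagr = (end / start) ** (1 / years) - 1
print(f"implied CAGR: {cagr:.1%}")           # ~25.6%, close to the cited 25.8%
```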
The primary growth factor propelling the AI-Powered Knowledge Graph market is the exponential rise in data generation and the urgent need for organizations to derive meaningful, actionable intelligence from vast, disparate data sources. Modern enterprises are inundated with both structured and unstructured data originating from internal systems, customer interactions, social media, IoT devices, and external databases. Traditional data management tools are increasingly inadequate for extracting context-rich insights at scale. AI-powered knowledge graphs leverage advanced machine learning and natural language processing to semantically link data points, enabling enterprises to create a holistic, interconnected view of their information landscape. This capability not only enhances data discoverability and accessibility but also supports intelligent automation, predictive analytics, and personalized customer experiences, all of which are critical for maintaining competitive advantage in today’s digital economy.
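The semantic-linking idea can be illustrated with a toy graph of subject-predicate-object triples; the entities and relations below are invented, and a production system would layer ML-based entity resolution and NLP extraction on top of this structure.

```python
# Toy knowledge graph as subject-predicate-object triples (invented data).
triples = [
    ("AcmeCorp", "acquired", "DataCo"),
    ("DataCo",   "produces", "SensorFeed"),
    ("SensorFeed", "feeds",  "ChurnModel"),
]

def neighbors(entity, triples):
    """Return entities directly linked to `entity` in either direction."""
    out = set()
    for s, p, o in triples:
        if s == entity:
            out.add(o)
        if o == entity:
            out.add(s)
    return out

# Two-hop context for AcmeCorp: follow links twice to surface indirect ties,
# the kind of interconnected view the paragraph above describes.
hop1 = neighbors("AcmeCorp", triples)
hop2 = set().union(*(neighbors(e, triples) for e in hop1)) - {"AcmeCorp"}
print(hop1, hop2)   # {'DataCo'} {'SensorFeed'}
```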
Another key driver for the AI-Powered Knowledge Graph market is the growing focus on digital transformation across sectors such as BFSI, healthcare, retail, and manufacturing. Organizations in these industries are under pressure to modernize their IT infrastructure, optimize operations, and deliver superior customer engagement. AI-powered knowledge graphs play a pivotal role in these transformation initiatives by breaking down data silos, enriching data with contextual meaning, and enabling seamless integration of information across platforms and business units. The ability to automate knowledge discovery and reasoning processes streamlines compliance, risk management, and decision-making, which is particularly valuable in highly regulated sectors. Furthermore, the adoption of cloud-based deployment models is accelerating, offering scalability, flexibility, and cost efficiencies that further stimulate market growth.
The proliferation of AI and machine learning technologies, coupled with rapid advancements in natural language understanding, has significantly expanded the capabilities and applications of knowledge graphs. Modern AI-powered knowledge graphs can ingest, process, and interlink data from a multitude of sources in real time, supporting advanced use cases such as fraud detection, recommendation engines, and information retrieval. The integration of AI enables knowledge graphs to evolve dynamically, learning from new data and user interactions to continuously improve accuracy and relevance. This adaptability is particularly valuable as organizations face ever-changing business environments and increasingly complex data ecosystems. As a result, the market is witnessing heightened interest from both large enterprises and SMEs seeking to harness the full potential of their data assets.
Regionally, North America continues to dominate the AI-Powered Knowledge Graph market, accounting for the largest revenue share in 2024, owing to the early adoption of AI technologies, strong presence of leading vendors, and significant investments in digital infrastructure. Europe follows closely, driven by stringent data regulations and a robust ecosystem of technology innovators. Meanwhile, the Asia Pacific region is experiencing the fastest growth, propelled by expanding digital economies, increasing cloud adoption, and supportive government initiatives. Latin America and the Middle East & Africa are also emerging as promising markets, albeit from a smaller base, as enterprises in these regions accelerate their digital transformation journeys. The global market’s trajectory is thus shaped by a combination of technological innovation, industry-specific requirements, and regional economic dynamics.
1. Targeted enrichment of conserved genomic regions (e.g., ultraconserved elements or UCEs) has emerged as a promising tool for inferring evolutionary history in many organismal groups. Because the UCE approach is still relatively new, much remains to be learned about how best to identify UCE loci and design baits to enrich them.
2. We test an updated UCE identification and bait design workflow for the insect order Hymenoptera, with a particular focus on ants. The new strategy augments a previous bait design for Hymenoptera by (a) changing the parameters by which conserved genomic regions are identified and retained, and (b) increasing the number of genomes used for locus identification and bait design. We perform in vitro validation of the approach in ants by synthesizing an ant-specific bait set that targets UCE loci and a set of “legacy” phylogenetic markers. Using this bait set, we generate new data for 84 taxa (16/17 ant subfamilies) and extract loci from an additional 17 genome-enabled taxa. We then use these data to examine UCE capture success and phylogenetic performance across ants. We also test the feasibility of extracting legacy markers from enriched samples and combining those data with published data sets.
3. The updated bait design (hym-v2) contained a total of 2,590 targeted UCE loci for Hymenoptera, significantly increasing the number of loci relative to the original bait set (hym-v1; 1,510 loci). Across 38 genome-enabled Hymenoptera and 84 enriched samples, experiments demonstrated a high and unbiased capture success rate, with a mean enrichment rate of 2,214 loci per sample. Phylogenomic analyses of ants produced a robust tree that included strong support for previously uncertain relationships. Complementing the UCE results, we successfully enriched legacy markers, combined the data with published Sanger data sets, and generated a comprehensive ant phylogeny containing 1,060 terminals.
4. Overall, the new UCE bait design strategy resulted in an enhanced bait set for genome-scale phylogenetics in ants and likely all of Hymenoptera. Our in vitro tests demonstrate the utility of the updated design workflow, providing evidence that this approach could be applied to any organismal group with available genomic information.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Digitalizing highway infrastructure is gaining interest in Germany and other countries due to the need for greater efficiency and sustainability. The maintenance of the built infrastructure accounts for nearly 30% of greenhouse gas emissions in Germany. To address this, Digital Twins are emerging as tools to optimize road systems. A Digital Twin of a built asset relies on a geometric-semantic as-is model of the area of interest, where an essential step for automated model generation is the semantic segmentation of reality capture data. While most approaches handle data without considering real-world context, our approach leverages existing geospatial data to enrich the data foundation through an adaptive feature extraction workflow. This workflow is adaptable to various model architectures, from deep learning methods like PointNet++ and PointNeXt to traditional machine learning models such as Random Forest. Our four-step workflow significantly boosts performance, improving overall accuracy by 20% and unweighted mean Intersection over Union (mIoU) by up to 43.47%. The target application is the semantic segmentation of point clouds in road environments. Additionally, the proposed modular workflow can be easily customized to fit diverse data sources and enhance semantic segmentation performance in a model-agnostic way.
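A minimal sketch of the enrichment idea follows, assuming the point cloud is an N×3 NumPy array and the existing geospatial data is a coarse raster of prior road-class labels (both invented here): each point gains extra feature channels before being passed to any segmentation model, which is what keeps the workflow model-agnostic.

```python
import numpy as np

# Illustrative sketch: augment XYZ points with geospatial context channels.
# The grid resolution and the `road_prior` raster are hypothetical stand-ins
# for real geospatial data (e.g., an existing road map).
rng = np.random.default_rng(0)
points = rng.uniform(0, 100, size=(1000, 3))          # N x 3 point cloud (m)

cell = 10.0                                           # raster cell size (m)
road_prior = rng.integers(0, 2, size=(10, 10))        # 0/1 road prior per cell

ix = np.clip((points[:, 0] // cell).astype(int), 0, 9)
iy = np.clip((points[:, 1] // cell).astype(int), 0, 9)
prior = road_prior[ix, iy]                            # per-point map context

height = points[:, 2] - points[:, 2].min()            # simple local feature
enriched = np.column_stack([points, prior, height])   # N x 5 feature matrix
print(enriched.shape)   # (1000, 5) -> feed to PointNet++, Random Forest, etc.
```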
Success.ai’s Startup Data with Contact Data for Startup Founders Worldwide provides businesses with unparalleled access to key entrepreneurs and decision-makers shaping the global startup landscape. With data sourced from over 170 million verified professional profiles, this dataset offers essential contact details, including work emails and direct phone numbers, for founders in various industries and regions.
Whether you’re targeting tech innovators in Silicon Valley, fintech entrepreneurs in Europe, or e-commerce trailblazers in Asia, Success.ai ensures that your outreach efforts reach the right individuals at the right time.
Why Choose Success.ai’s Startup Founders Data?
AI-driven validation ensures 99% accuracy, providing reliable data for effective outreach.
Global Reach Across Startup Ecosystems
Includes profiles of startup founders from tech, healthcare, fintech, sustainability, and other emerging sectors.
Covers North America, Europe, Asia-Pacific, South America, and the Middle East, helping you connect with founders on a global scale.
Continuously Updated Datasets
Real-time updates mean you always have the latest contact information, ensuring your outreach is timely and relevant.
Ethical and Compliant
Adheres to GDPR, CCPA, and global data privacy regulations, ensuring ethical and compliant use of data.
Data Highlights
Key Features of the Dataset:
Engage with individuals who can approve partnerships, investments, and collaborations.
Advanced Filters for Precision Targeting
Filter by industry, funding stage, region, or startup size to narrow down your outreach efforts.
Ensure your campaigns target the most relevant contacts for your products, services, or investment opportunities.
AI-Driven Enrichment
Profiles are enriched with actionable data, offering insights that help tailor your messaging and improve response rates.
Strategic Use Cases:
Connect with founders seeking investment, pitch your venture capital or angel investment services, and establish long-term partnerships.
Business Development and Partnerships
Offer collaboration opportunities, strategic alliances, and joint ventures to startups in need of new market entries or product expansions.
Marketing and Sales Campaigns
Launch targeted email and phone outreach to founders who match your ideal customer profile, driving product adoption and long-term client relationships.
Recruitment and Talent Acquisition
Reach founders who may be open to recruitment partnerships or HR solutions, helping them build strong teams and scale effectively.
Why Choose Success.ai?
Enjoy top-quality, verified startup founder data at competitive prices, ensuring maximum return on investment.
Seamless Integration
Easily integrate verified contact data into your CRM or marketing platforms via APIs or customizable downloads.
Data Accuracy with AI Validation
With 99% data accuracy, you can trust the information to guide meaningful and productive outreach campaigns.
Customizable and Scalable Solutions
Tailor the dataset to your needs, focusing on specific industries, regions, or funding stages, and easily scale as your business grows.
APIs for Enhanced Functionality:
Enrich your existing CRM records with verified founder contact data, adding valuable insights for targeted engagements.
Lead Generation API
Automate lead generation and streamline your campaigns, ensuring efficient and scalable outreach to startup founders worldwide.
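As an illustration of the integration pattern described above, the sketch below shows how a CRM record might be enriched over a generic REST endpoint. The URL, parameters, and response fields are hypothetical placeholders, not Success.ai's documented API.

```python
import requests

# Hypothetical enrichment call -- endpoint, params, and response fields are
# placeholders for illustration, not Success.ai's documented API.
API_URL = "https://api.example.com/v1/enrich"   # placeholder URL
API_KEY = "YOUR_API_KEY"

def enrich_contact(email: str) -> dict:
    """Fetch enrichment fields for one CRM contact (illustrative)."""
    resp = requests.get(
        API_URL,
        params={"email": email},
        headers={"Authorization": f"Bearer {API_KEY}"},
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()   # e.g., {"name": ..., "company": ..., "phone": ...}

# record = enrich_contact("founder@startup.example")
```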
Leverage Success.ai’s B2B Contact Data for Startup Founders Worldwide to connect with the entrepreneurs driving innovation across global markets. With verified work emails, phone numbers, and continuously updated profiles, your outreach efforts become more impactful, timely, and effective.
Experience AI-validated accuracy and our Best Price Guarantee. Contact Success.ai today to learn how our B2B contact data solutions can help you engage with the startup founders who matter most.
No one beats us on price. Period.
Sampling enrichment toward a target state, an analogue of the improvement of sampling efficiency (SE), is critical both in the refinement of protein structures and in the generation of near-native structure ensembles for the exploration of structure-function relationships. We developed a hybrid molecular dynamics (MD)-Monte Carlo (MC) approach to enrich the sampling toward the target structures. In this approach, higher SE is achieved by perturbing conventional MD simulations with an MC structure-acceptance judgment, which is based on the degree of coincidence between the small-angle x-ray scattering (SAXS) intensity profiles of the simulation structures and the target structure. We found that the hybrid simulations could significantly improve SE by making the top-ranked models much closer to the target structures in both secondary and tertiary structure. Specifically, for the 20 mono-residue peptides, when the initial structures had a root-mean-squared deviation (RMSD) from the target structure smaller than 7 Å, the hybrid MD-MC simulations afforded models that were, on average, 0.83 Å and 1.73 Å closer in RMSD to the target than parallel MD simulations at 310 K and 370 K, respectively. Meanwhile, the average SE values were also increased by 13.2% and 15.7%. The enrichment of sampling becomes more significant when the target states become gradually detectable in the MD-MC simulations compared with the parallel MD simulations, providing a >200% improvement in SE. We also tested the hybrid MD-MC approach on real protein systems; the results showed that SE was improved for 3 out of 5 proteins. Overall, this work presents an efficient way of utilizing solution SAXS to improve protein structure prediction and refinement, as well as the generation of near-native structures for function annotation.
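The acceptance judgment at the heart of the hybrid scheme can be sketched as a Metropolis-style test on the discrepancy between the simulated and target SAXS profiles. The chi-square-style discrepancy and the effective inverse temperature below are schematic assumptions, not the authors' exact criterion.

```python
import numpy as np

# Schematic MC acceptance step for the hybrid MD-MC idea (not the authors'
# exact criterion): accept a new conformation when its SAXS profile agrees
# better with the target, with Metropolis-style tolerance otherwise.
def saxs_discrepancy(i_sim, i_target):
    """Chi-square-like mismatch between two SAXS intensity profiles."""
    return np.sum((i_sim - i_target) ** 2 / (i_target + 1e-12))

def accept(i_old, i_new, i_target, beta=5.0, rng=np.random.default_rng()):
    d_old = saxs_discrepancy(i_old, i_target)
    d_new = saxs_discrepancy(i_new, i_target)
    if d_new <= d_old:                       # closer to target: always accept
        return True
    return rng.random() < np.exp(-beta * (d_new - d_old))  # tolerated uphill move

# Toy profiles on a common q-grid (illustrative numbers only).
q = np.linspace(0.01, 0.5, 50)
i_target = np.exp(-q * 8)
i_old = np.exp(-q * 6)
i_new = np.exp(-q * 7)                       # closer to target than i_old
print(accept(i_old, i_new, i_target))        # True
```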
Attribution-NonCommercial 4.0 (CC BY-NC 4.0): https://creativecommons.org/licenses/by-nc/4.0/
License information was derived automatically
The Human Proteome Project (HPP) aims to map the entire set of human proteins through a systematic effort using all emerging techniques, which would enhance understanding of human biology and lay a foundation for the development of medical applications. To date, 2,563 missing proteins (MPs, PE2–4) remain undetected even with the most sensitive protein detection approaches. Herein, we propose that enrichment of low-abundance proteins benefits the discovery of MPs. ProteoMiner is an equalizing technique that reduces high-abundance proteins and enriches low-abundance proteins in biological fluids. With Triton X-100/TBS buffer extraction, ProteoMiner enrichment, and peptide fractionation, 20 MPs (each supported by at least two non-nested unique peptides of more than eight amino acids) with 60 unique peptides were identified from four human tissues, including eight membrane/secreted proteins and five nuclear proteins. Fifteen of them were then confirmed with two non-nested unique peptides (≥9 a.a.) that matched well with their chemically synthesized counterparts in a PRM assay. Hence, these results demonstrate ProteoMiner as a powerful means of discovering MPs.
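The evidence criterion used here, at least two non-nested unique peptides of more than eight amino acids, is essentially a string filter; a sketch with invented peptide sequences follows.

```python
# Sketch of the missing-protein evidence filter: keep proteins supported by
# at least two unique peptides longer than 8 aa, where neither peptide is a
# substring of another ("non-nested"). Sequences below are invented.
def passes_mp_filter(peptides, min_len=9, min_count=2):
    long_peps = [p for p in set(peptides) if len(p) >= min_len]
    non_nested = [
        p for p in long_peps
        if not any(p != q and p in q for q in long_peps)
    ]
    return len(non_nested) >= min_count

print(passes_mp_filter(["LVEALYLVCGER", "GFFYTPK"]))       # False: one too short
print(passes_mp_filter(["LVEALYLVCGER", "HLVDEPQNLIK"]))   # True: two non-nested
print(passes_mp_filter(["LVEALYLVCGER", "EALYLVCGE"]))     # False: nested pair
```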
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Supplementary materials (Appendix A and B) for the article:
Traffic Information Enrichment: Creating Long-Term Traffic Speed Prediction Ensemble Model for Better Navigation through Waypoints
Abstract: Traffic speed prediction for a selected road segment from a short-term and long-term perspective is among the fundamental issues of intelligent transportation systems (ITS). Over the course of the past two decades, many artefacts (e.g., models) dealing with traffic speed prediction have been designed. However, no satisfactory solution has been found for the issue of long-term prediction over days and weeks using vast spatial and temporal data. This article aims to introduce a long-term traffic speed prediction ensemble model using country-scale historic traffic data from 37,002 km of roads, which constitutes 66% of all roads in the Czech Republic. The designed model comprises three submodels and combines parametric and nonparametric approaches in order to acquire a good-quality prediction that can enrich available real-time traffic information. Furthermore, the model is set into a conceptual design intended for use in improving navigation through waypoints (e.g., delivery service, goods distribution, police patrol) and estimating arrival times. The model validation is carried out using the same network of roads, with the model predicting traffic speed over a period of 1 week. According to the performed validation of average speed prediction at a given hour, the designed model achieves good results, with a mean absolute error of 4.67 km/h. The achieved results indicate that the designed solution can effectively predict long-term speed information using large-scale spatial and temporal data, and that it is suitable for use in ITS.
Simunek, M., & Smutny, Z. (2021). Traffic Information Enrichment: Creating Long-Term Traffic Speed Prediction Ensemble Model for Better Navigation through Waypoints. Applied Sciences, 11(1), 315. https://doi.org/10.3390/app11010315
Appendix A
Examples of the deviation between the average speed and the FreeFlowSpeed for selected hours.
Appendix B
The text file provides a complete overview of all road segments on which basis summary test results were calculated in Section 6 of the article.
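For reference, the mean absolute error of 4.67 km/h reported in the abstract is the standard MAE, the average of the absolute differences between predicted and observed speeds; the snippet below computes it for an invented toy sample.

```python
# Mean absolute error, as used to validate the speed predictions.
# MAE = (1/n) * sum(|predicted_i - observed_i|); values below are invented.
predicted = [82.0, 74.5, 68.0, 90.0]   # km/h
observed  = [78.0, 80.0, 65.0, 88.5]   # km/h

mae = sum(abs(p - o) for p, o in zip(predicted, observed)) / len(observed)
print(f"MAE = {mae:.2f} km/h")          # 3.50 km/h for this toy sample
```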
The main objective of this survey is to provide statistical data on ICT for the enterprises in the Palestinian Territory. The specific objectives can be summarized in the following points:
· Enriching ICT statistical data on the actual use of and access to ICT by economic enterprises.
· Identifying the characteristics of the ICT tools and means used in economic activity, by type of economic activity and size of enterprise.
· Providing opportunities for international and regional comparisons, which helps to situate the Palestinian Territory among the world's technologically advanced countries.
· Assisting planners and policy makers in understanding the current status of the technology-based economy in the Palestinian Territory, which helps to meet the future needs of the Palestinian economy.
The data are representative at the region level (West Bank, Gaza Strip).
Enterprises
The enterprises in the Palestinian Territory
Sample survey data [ssd]
The sample is a one-stage stratified random sample. Strata containing fewer than 30 enterprises, as well as enterprises employing 30 or more workers, were included. Enterprises were stratified at three levels:
First level: geographical classification into two regions, the West Bank and the Gaza Strip.
Second level: economic activity, classified according to the International Industrial Classification for Economic Activities.
Third level: employment size category, classified by number of employees as follows:
1. Enterprises with fewer than 5 employees.
2. Enterprises with 5-10 employees.
3. Enterprises with 11-29 employees.
4. Enterprises with 30 or more employees.
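A schematic of the three-level stratification, assuming a simple in-memory enterprise frame and a flat 10% within-stratum sampling fraction (both invented for illustration, not the survey's actual allocation):

```python
import random

# Illustrative one-stage stratified sample: stratify by region, activity,
# and employment size class, then sample within each stratum.
random.seed(1)

def size_class(workers):
    if workers < 5:   return "1-4"
    if workers <= 10: return "5-10"
    if workers <= 29: return "11-29"
    return "30+"

frame = [{"id": i,
          "region": random.choice(["West Bank", "Gaza Strip"]),
          "activity": random.choice(["trade", "services", "industry"]),
          "workers": random.randint(1, 60)}
         for i in range(1000)]

strata = {}
for ent in frame:
    key = (ent["region"], ent["activity"], size_class(ent["workers"]))
    strata.setdefault(key, []).append(ent)

sample = []
for key, members in strata.items():
    k = max(1, round(0.10 * len(members)))   # at least one per stratum
    sample.extend(random.sample(members, k))
print(len(strata), "strata,", len(sample), "sampled enterprises")
```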
Face-to-face [f2f]
The Survey Questionnaire: In light of the identified data requirements, the survey instrument was developed following a review of international recommendations and of other countries' experiences in this area, and after discussion with stakeholders through a workshop at PCBS on the survey's procedures and indicators.
In addition to identification information and data quality control, BICT 2007 survey instrument consists of three main sections, namely:
Section one: includes readiness and access to ICT. This section asks about the existence of the infrastructure necessary for the use of technology in the business, such as the availability of computers and Internet service, and about a range of devices associated with the use of technology, such as telephones, fax machines, mobile phones, printers, and other related equipment.
Section two: includes a series of questions about the use of the Internet and computer networks in the various activities of economic enterprises, such as using the Internet and networks to conduct commercial transactions (buying and selling), the obstacles Palestinian enterprises face in using networks and the Internet in their economic activities, and the electronic execution of commercial transactions.
Section three: includes questions about the enterprises' future plans for the use of ICT tools and means, as well as expenditures on some of the ICT tools and means that have been adopted.
Data Editing: The project's management developed a clear mechanism for editing the data and trained the team of editors accordingly. The mechanism was as follows:
· Receiving completed questionnaires on a daily basis;
· Checking each questionnaire to make sure it was complete and that the data covered all eligible enterprises, with checks also focused on the accuracy of the answers;
· Returning incomplete questionnaires, as well as those with errors, to the field for completion.
The survey sample consisted of 2,966 enterprises, of which 2,604 completed the interview: 1,746 in the West Bank and 858 in the Gaza Strip. The response rate was 92.2%.
Detailed information on the sampling Error is available in the Survey Report, downloadable under the "Resources" tab.
Detailed information on the data appraisal is available in the Survey Report, downloadable under the "Resources" tab.
This is a saved copy of the NWS Weather Watches and Warnings layer, filtered just for wildfire-related warnings. Details from the original item: https://www.arcgis.com/home/item.html?id=a6134ae01aad44c499d12feec782b386
This feature service depicts the National Weather Service (NWS) watches, warnings, and advisories within the United States. Watches and warnings are classified into 43 categories.
A warning is issued when a hazardous weather or hydrologic event is occurring, imminent, or likely. A warning means weather conditions pose a threat to life or property. People in the path of the storm need to take protective action.
A watch is used when the risk of a hazardous weather or hydrologic event has increased significantly, but its occurrence, location, or timing is still uncertain. It is intended to provide enough lead time so those who need to set their plans in motion can do so. A watch means that hazardous weather is possible. People should have a plan of action in case a storm threatens, and they should listen for later information and possible warnings, especially when planning travel or outdoor activities.
An advisory is issued when a hazardous weather or hydrologic event is occurring, imminent, or likely. Advisories are for less serious conditions than warnings that cause significant inconvenience and, if caution is not exercised, could lead to situations that may threaten life or property.
Source: National Weather Service RSS-CAP Warnings and Advisories: Public Alerts; National Weather Service Boundary Overlays: AWIPS Shapefile Database
Update Frequency: The service is updated every 5 minutes using the Aggregated Live Feeds methodology. The overlay data is checked and updated daily from the official AWIPS Shapefile Database.
Area Covered: United States and Territories
What can you do with this layer?
Customize the display of each attribute by using the Change Style option for any layer.
Query the layer to display only specific types of weather watches and warnings.
Add to a map with other weather data layers to provide insight on hazardous weather events.
Use ArcGIS Online analysis tools, such as Enrich Data, to determine the potential impact of weather events on populations.
This map is provided for informational purposes and is not monitored 24/7 for accuracy and currency.
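To illustrate the "query the layer" workflow mentioned above, a feature layer like this can be filtered through the standard ArcGIS REST query operation. The service URL and the Event field/value below are placeholders; substitute the real endpoint from the item page.

```python
import requests

# Query an ArcGIS feature service for specific warning types via the standard
# REST `query` operation. The URL and the field/value in `where` are
# placeholders -- use the real endpoint and schema from the item page.
LAYER_URL = "https://services.example.com/arcgis/rest/services/NWS/FeatureServer/0"

params = {
    "where": "Event = 'Red Flag Warning'",   # assumed field/value for illustration
    "outFields": "*",
    "f": "json",
}
resp = requests.get(f"{LAYER_URL}/query", params=params, timeout=30)
resp.raise_for_status()
features = resp.json().get("features", [])
print(f"{len(features)} wildfire-related warnings returned")
```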
CC0 1.0 Universal Public Domain Dedication: https://creativecommons.org/publicdomain/zero/1.0/
License information was derived automatically
This feature service depicts the National Weather Service (NWS) watches, warnings, and advisories within the United States. Watches and warnings are classified into 43 categories.
A warning is issued when a hazardous weather or hydrologic event is occurring, imminent, or likely. A warning means weather conditions pose a threat to life or property. People in the path of the storm need to take protective action.
A watch is used when the risk of a hazardous weather or hydrologic event has increased significantly, but its occurrence, location, or timing is still uncertain. It is intended to provide enough lead time so those who need to set their plans in motion can do so. A watch means that hazardous weather is possible. People should have a plan of action in case a storm threatens, and they should listen for later information and possible warnings, especially when planning travel or outdoor activities.
An advisory is issued when a hazardous weather or hydrologic event is occurring, imminent, or likely. Advisories are for less serious conditions than warnings that cause significant inconvenience and, if caution is not exercised, could lead to situations that may threaten life or property.
Source: National Weather Service RSS-CAP Warnings and Advisories: Public Alerts; National Weather Service Boundary Overlays: AWIPS Shapefile Database
Sample Data: See the Sample Layer Item for sample data during weather inactivity.
Update Frequency: The service is updated every 5 minutes using the Aggregated Live Feeds methodology. The overlay data is checked and updated daily from the official AWIPS Shapefile Database.
Area Covered: United States and Territories
What can you do with this layer?
Customize the display of each attribute by using the Change Style option for any layer.
Query the layer to display only specific types of weather watches and warnings.
Add to a map with other weather data layers to provide insight on hazardous weather events.
Use ArcGIS Online analysis tools, such as Enrich Data, to determine the potential impact of weather events on populations.
This map is provided for informational purposes and is not monitored 24/7 for accuracy and currency. Additional information on Watches and Warnings.
Attribution-NonCommercial 4.0 (CC BY-NC 4.0): https://creativecommons.org/licenses/by-nc/4.0/
License information was derived automatically
Highly complex and dynamic protein mixtures can hardly be comprehensively resolved by direct shotgun proteomic analysis. As many proteins of biological interest are of low abundance, numerous analytical methodologies have been developed to reduce sample complexity and go deeper into proteomes. The present work describes an analytical strategy to perform cysteinyl-peptide subset enrichment and relative quantification through successive cysteine and amine isobaric tagging. A cysteine-reactive covalent capture tag (C3T) allowed derivatization of cysteines and specific isolation on a covalent capture (CC) resin. The 6-plex amine-reactive tandem mass tags (TMT) served for relative quantification of the targeted peptides. The strategy was first evaluated on a model protein mixture with increasing concentrations to assess the specificity of the enrichment and the quantitative performance of the workflow. It was then applied to human cerebrospinal fluid (CSF) from post-mortem and ante-mortem samples. These studies confirmed the specificity of the C3T and the CC technique for cysteine-containing peptides. The model protein mixture analysis showed high precision and accuracy of the quantification, with coefficients of variation and mean absolute errors of less than 10% on average. The CSF experiments demonstrated the potential of the strategy to study complex biological samples and identify differential brain-related proteins. In addition, the quantification data were highly correlated with a classical TMT experiment (i.e., without the C3T cysteine-tagging and enrichment steps). Altogether, these results legitimate the use of this quantitative C3T strategy to enrich and relatively quantify cysteine-containing peptides in complex mixtures.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
As an important means for enterprises to acquire technological resources, mergers and acquisitions deserve in-depth study for their impact on technological innovation and the mechanisms underlying it. Using data on mergers and acquisitions by Chinese A-share companies listed in Shanghai and Shenzhen from 2007 to 2020, the causal effects of mergers and acquisitions on technological innovation, and the mechanisms through which they operate, are identified and tested using the Difference-in-Differences method. The study finds that mergers and acquisitions have a long-term, sustained, technological innovation-enhancing effect on firms. Mechanism tests show that mergers and acquisitions can promote the technological innovation of enterprises by improving production efficiency, enriching digital knowledge, and enhancing market power. A heterogeneity analysis shows that the innovation-enhancing effect of mergers and acquisitions is more significant when the deals meet domestic merger and acquisition requirements, when the transaction size is small, and when the enterprises involved are not state-owned. It is suggested that enterprises and the government should use multiple measures, while considering the impact of heterogeneity, to take full advantage of the positive effects of mergers and acquisitions on technological innovation.
This record contains raw data related to the article “Artificial intelligence processing electronic health records to identify commonalities and comorbidities cluster at Immuno Center Humanitas”.
Background: Comorbidities are common in chronic inflammatory conditions and require a multidisciplinary treatment approach. Understanding the link between a single disease and its comorbidities is important for appropriate treatment and management. We evaluate the ability of an NLP-based knowledge discovery process to detect information about pathologies, patients' phenotypes, doctors' prescriptions, and commonalities in electronic medical records by extracting information from the free narrative text written by clinicians during medical visits, thereby extracting valuable information and enriching real-world evidence data from a multidisciplinary setting.
Methods: We collected clinical notes written in the last 3 years at the Allergy Department of Humanitas Research Hospital and used them to look for diseases that cluster together as comorbidities associated with the main pathology of our patients, and for the extent of systemic corticosteroid prescription, thus evaluating the ability of NLP-based knowledge discovery tools to extract structured information from free text.
Results: We found that the 3 most frequent comorbidities appearing in our clusters were asthma, rhinitis, and urticaria, and that 991 (of 2,057) patients suffered from at least one of these comorbidities. The pairs that co-occur particularly often are oral allergy syndrome and urticaria (131 patients), angioedema and urticaria (105 patients), and rhinitis and asthma (227 patients). With regard to the volume of systemic corticosteroids prescribed by our clinicians, we found it was lower than in the therapy patients followed before coming to our attention, with the exception of two diseases: chronic obstructive pulmonary disease and angioedema.
Conclusions: This analysis seems to be valid and is confirmed by data from the literature. This means that NLP tools could play a significant role in many other fields of medical research, as they may help identify other important, and possibly previously neglected, clusters of patients with comorbidities and commonalities. Another potential benefit of this approach lies in its ability to foster a multidisciplinary approach, using the same drugs to treat pathologies normally treated by physicians in different branches of medicine, thus saving resources and improving the pharmacological management of patients.
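The cluster counts reported above reduce to counting diagnosis pairs that co-occur within the same patient; a minimal version of that computation, over invented patient records, is shown below.

```python
from collections import Counter
from itertools import combinations

# Count how often pairs of diagnoses co-occur in the same patient.
# Patient records below are invented; a real pipeline would take these
# labels from the NLP extraction step.
patients = [
    {"rhinitis", "asthma"},
    {"urticaria", "angioedema"},
    {"rhinitis", "asthma", "urticaria"},
    {"oral allergy syndrome", "urticaria"},
]

pair_counts = Counter()
for diagnoses in patients:
    for pair in combinations(sorted(diagnoses), 2):
        pair_counts[pair] += 1

for pair, n in pair_counts.most_common(3):
    print(pair, n)   # e.g., ('asthma', 'rhinitis') 2
```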
Attribution-NonCommercial 4.0 (CC BY-NC 4.0): https://creativecommons.org/licenses/by-nc/4.0/
License information was derived automatically
Metal oxide affinity chromatography (MOAC) has become a prominent method for enriching phosphopeptides prior to their analysis by liquid chromatography-mass spectrometry. To overcome limitations in material design, we previously reported the use of nanocasting as a means to generate metal oxide spheres with tailored properties. Here, we report on the application of two oxides, tin dioxide (stannia) and titanium dioxide (titania), to the analysis of the HeLa phosphoproteome. In combination with nanoflow LC-MS/MS analysis on a linear ion trap-Fourier transform ion cyclotron resonance instrument, we identified 619 phosphopeptides using the new stannia material and 896 phosphopeptides using titania prepared in-house. We also compared the newly developed materials to a commercial titania material using an established enrichment protocol. Both titania materials yielded a comparable total number of phosphopeptides, but the overlap of the two data sets was less than one-third. Although fewer peptides were identified using stannia, the complementarity of SnO2-based MOAC was demonstrated, as more than 140 phosphopeptides were exclusively identified by this material.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
An updated strong-motion database of Iranian earthquakes has been used to propose empirical equations for the prediction of peak ground velocity (PGV), peak ground acceleration (PGA), and 5%-damped spectral accelerations (SA) up to 10.0 s for the geometric mean of the horizontal components. Some records from NGA-West2 were added to the database to enrich it at near-source distances for large-magnitude events. A lack of data at near-source distances caused lower accuracy in previous Iranian Ground Motion Models (GMMs) compared with the current study. In this work, the regression analyses were performed on a truncated database, which yields unbiased results. We used 3,015 acceleration time series from 594 earthquakes, after truncation of the data, to develop the new GMM. The provided model is valid for Joyner-Boore distances ranging from 0 km to 180 km, magnitudes ranging from 4 to 7.9, and Vs30 ranging from 150 m/s to 1500 m/s.
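For context, GMMs of this kind typically adopt a functional form along the following lines, shown here as a generic illustration rather than the authors' exact equation: magnitude scaling, geometric spreading in Joyner-Boore distance with a pseudo-depth term, and a Vs30 site term.

```latex
% Generic GMM functional form (illustrative, not this paper's exact equation)
\ln Y = c_1 + c_2 M + c_3 M^2
      + (c_4 + c_5 M)\,\ln\!\sqrt{R_{JB}^2 + h^2}
      + c_6 \ln\!\left(\frac{V_{S30}}{V_{\mathrm{ref}}}\right) + \varepsilon
```

Here Y is the intensity measure (PGA, PGV, or SA at a given period), the c_i and the pseudo-depth h are regression coefficients, V_ref is a reference shear-wave velocity, and ε is the residual term.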