Xverum empowers tech-driven companies to elevate their solutions by providing comprehensive global company data. With over 50 million comprehensive company profiles, we help you enrich and expand your data, conduct extensive company analysis, and tailor your digital strategies accordingly.
Top 5 characteristics of company data from Xverum:
Monthly Updates: Stay informed about any changes in company data with over 40 data attributes per profile.
3.5x Higher Refresh Rate: Stay ahead of the competition with the freshest prospect data available; no profile is older than 120 days.
5x Better Quality of Company Data: High-quality data means more precise prospecting and data enrichment in your strategies.
100% GDPR and CCPA Compliant: Build digital strategies using legitimate data.
Global Coverage: Access data from over 200 countries, ensuring you have the right audience data you need, wherever you operate.
At Xverum, we're committed to providing you with real-time B2B data to fuel your success. We are happy to learn more about your specific needs and deliver custom company data according to your requirements.
CC0 1.0: https://spdx.org/licenses/CC0-1.0.html
Targeted enrichment of conserved genomic regions (e.g., ultraconserved elements or UCEs) has emerged as a promising tool for inferring evolutionary history in many organismal groups. Because the UCE approach is still relatively new, much remains to be learned about how best to identify UCE loci and design baits to enrich them.
We test an updated UCE identification and bait design workflow for the insect order Hymenoptera, with a particular focus on ants. The new strategy augments a previous bait design for Hymenoptera by (a) changing the parameters by which conserved genomic regions are identified and retained, and (b) increasing the number of genomes used for locus identification and bait design. We perform in vitro validation of the approach in ants by synthesizing an ant-specific bait set that targets UCE loci and a set of "legacy" phylogenetic markers. Using this bait set, we generate new data for 84 taxa (representing 16 of the 17 ant subfamilies) and extract loci from an additional 17 genome-enabled taxa. We then use these data to examine UCE capture success and phylogenetic performance across ants. We also test the feasibility of extracting legacy markers from enriched samples and combining the data with published data sets.
The updated bait design (hym-v2) targeted a total of 2,590 UCE loci for Hymenoptera, significantly increasing the number of loci relative to the original bait set (hym-v1; 1,510 loci). Across 38 genome-enabled Hymenoptera and 84 enriched samples, experiments demonstrated a high and unbiased capture success rate, with a mean enrichment of 2,214 loci per sample. Phylogenomic analyses of ants produced a robust tree that included strong support for previously uncertain relationships. Complementing the UCE results, we successfully enriched legacy markers, combined the data with published Sanger data sets, and generated a comprehensive ant phylogeny containing 1,060 terminals.
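To make the capture-success numbers above concrete, here is a minimal sketch of how a per-sample enrichment summary can be computed from a table of recovered loci. The sample names and counts are invented for illustration, and this is not the authors' actual pipeline, which relies on dedicated UCE software (e.g., the PHYLUCE package) for locus extraction.

```python
# Minimal sketch: summarizing UCE capture success per enriched sample.
# Sample names and locus counts below are invented for illustration.
TOTAL_TARGETED = 2590  # loci targeted by the hym-v2 bait set (from the abstract)

loci_recovered = {     # hypothetical per-sample recovery counts
    "sample_01": 2301,
    "sample_02": 2155,
    "sample_03": 2187,
}

for sample, n in sorted(loci_recovered.items()):
    print(f"{sample}: {n}/{TOTAL_TARGETED} loci ({n / TOTAL_TARGETED:.1%})")

mean_rate = sum(loci_recovered.values()) / len(loci_recovered)
print(f"mean enrichment rate: {mean_rate:.0f} loci per sample")
```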
Overall, the new UCE bait design strategy resulted in an enhanced bait set for genome-scale phylogenetics in ants and likely all of Hymenoptera. Our in vitro tests demonstrate the utility of the updated design workflow, providing evidence that this approach could be applied to any organismal group with available genomic information.
According to our latest research, the global AI-Powered Knowledge Graph market size reached USD 2.45 billion in 2024, demonstrating robust momentum driven by rising enterprise adoption of AI-driven data structuring tools. The market is expected to expand at a CAGR of 25.8% from 2025 to 2033, reaching a projected value of USD 19.1 billion by 2033. This significant growth is fueled by increasing demand for advanced data integration, real-time analytics, and intelligent automation across diverse industry verticals. The market's acceleration is underpinned by a confluence of digital transformation initiatives, surging investments in AI infrastructure, and the growing need for contextual data insights to drive business decisions.
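As a quick arithmetic check, the projected 2033 figure follows from compounding the 2024 base at the stated CAGR; the small difference from the reported USD 19.1 billion comes from rounding in the published CAGR.

```python
# Verify the report's projection: USD 2.45 billion (2024) compounded at a
# 25.8% CAGR over the nine years from 2024 to 2033.
base_2024 = 2.45            # USD billion, reported 2024 market size
cagr = 0.258                # reported compound annual growth rate
years = 2033 - 2024         # nine compounding periods

projected_2033 = base_2024 * (1 + cagr) ** years
print(f"USD {projected_2033:.2f} billion")  # ~19.3, consistent with the reported 19.1
```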
The primary growth factor propelling the AI-Powered Knowledge Graph market is the exponential rise in data generation and the urgent need for organizations to derive meaningful, actionable intelligence from vast, disparate data sources. Modern enterprises are inundated with both structured and unstructured data originating from internal systems, customer interactions, social media, IoT devices, and external databases. Traditional data management tools are increasingly inadequate for extracting context-rich insights at scale. AI-powered knowledge graphs leverage advanced machine learning and natural language processing to semantically link data points, enabling enterprises to create a holistic, interconnected view of their information landscape. This capability not only enhances data discoverability and accessibility but also supports intelligent automation, predictive analytics, and personalized customer experiences, all of which are critical for maintaining competitive advantage in today’s digital economy.
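The semantic-linking idea is easy to see in miniature: entities become nodes, typed relationships become edges, and facts from separate systems can then be traversed as one graph. The sketch below uses the rdflib library with invented entities; real AI-powered knowledge graphs layer ML-driven entity extraction and linking on top of this substrate.

```python
# Toy knowledge graph: facts from different sources joined via shared entities.
from rdflib import Graph, Namespace, RDF

EX = Namespace("http://example.org/")
g = Graph()

g.add((EX.AcmeCorp, RDF.type, EX.Company))           # from an internal CRM
g.add((EX.AcmeCorp, EX.headquarteredIn, EX.Berlin))  # from an external registry
g.add((EX.Order42, EX.placedBy, EX.AcmeCorp))        # from a transactions system

# Traverse: in which city is the customer behind Order42 headquartered?
for _, _, customer in g.triples((EX.Order42, EX.placedBy, None)):
    for _, _, city in g.triples((customer, EX.headquarteredIn, None)):
        print(city)  # http://example.org/Berlin
```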
Another key driver for the AI-Powered Knowledge Graph market is the growing focus on digital transformation across sectors such as BFSI, healthcare, retail, and manufacturing. Organizations in these industries are under pressure to modernize their IT infrastructure, optimize operations, and deliver superior customer engagement. AI-powered knowledge graphs play a pivotal role in these transformation initiatives by breaking down data silos, enriching data with contextual meaning, and enabling seamless integration of information across platforms and business units. The ability to automate knowledge discovery and reasoning processes streamlines compliance, risk management, and decision-making, which is particularly valuable in highly regulated sectors. Furthermore, the adoption of cloud-based deployment models is accelerating, offering scalability, flexibility, and cost efficiencies that further stimulate market growth.
The proliferation of AI and machine learning technologies, coupled with rapid advancements in natural language understanding, has significantly expanded the capabilities and applications of knowledge graphs. Modern AI-powered knowledge graphs can ingest, process, and interlink data from a multitude of sources in real time, supporting advanced use cases such as fraud detection, recommendation engines, and information retrieval. The integration of AI enables knowledge graphs to evolve dynamically, learning from new data and user interactions to continuously improve accuracy and relevance. This adaptability is particularly valuable as organizations face ever-changing business environments and increasingly complex data ecosystems. As a result, the market is witnessing heightened interest from both large enterprises and SMEs seeking to harness the full potential of their data assets.
Regionally, North America continues to dominate the AI-Powered Knowledge Graph market, accounting for the largest revenue share in 2024, owing to the early adoption of AI technologies, strong presence of leading vendors, and significant investments in digital infrastructure. Europe follows closely, driven by stringent data regulations and a robust ecosystem of technology innovators. Meanwhile, the Asia Pacific region is experiencing the fastest growth, propelled by expanding digital economies, increasing cloud adoption, and supportive government initiatives. Latin America and the Middle East & Africa are also emerging as promising markets, albeit from a smaller base, as enterprises in these regions accelerate their digital transformation journeys. The global market's trajectory is thus shaped by a combination of technological innovation, industry-specific requirements, and regional economic dynamics.
Success.ai’s Startup Data with Contact Data for Startup Founders Worldwide provides businesses with unparalleled access to key entrepreneurs and decision-makers shaping the global startup landscape. With data sourced from over 170 million verified professional profiles, this dataset offers essential contact details, including work emails and direct phone numbers, for founders in various industries and regions.
Whether you’re targeting tech innovators in Silicon Valley, fintech entrepreneurs in Europe, or e-commerce trailblazers in Asia, Success.ai ensures that your outreach efforts reach the right individuals at the right time.
Why Choose Success.ai’s Startup Founders Data?
AI-driven validation ensures 99% accuracy, providing reliable data for effective outreach.
Global Reach Across Startup Ecosystems
Includes profiles of startup founders from tech, healthcare, fintech, sustainability, and other emerging sectors.
Covers North America, Europe, Asia-Pacific, South America, and the Middle East, helping you connect with founders on a global scale.
Continuously Updated Datasets
Real-time updates mean you always have the latest contact information, ensuring your outreach is timely and relevant.
Ethical and Compliant
Adheres to GDPR, CCPA, and global data privacy regulations, ensuring ethical and compliant use of data.
Data Highlights
Key Features of the Dataset:
Engage with individuals who can approve partnerships, investments, and collaborations.
Advanced Filters for Precision Targeting
Filter by industry, funding stage, region, or startup size to narrow down your outreach efforts.
Ensure your campaigns target the most relevant contacts for your products, services, or investment opportunities.
AI-Driven Enrichment
Profiles are enriched with actionable data, offering insights that help tailor your messaging and improve response rates.
Strategic Use Cases:
Connect with founders seeking investment, pitch your venture capital or angel investment services, and establish long-term partnerships.
Business Development and Partnerships
Offer collaboration opportunities, strategic alliances, and joint ventures to startups in need of new market entries or product expansions.
Marketing and Sales Campaigns
Launch targeted email and phone outreach to founders who match your ideal customer profile, driving product adoption and long-term client relationships.
Recruitment and Talent Acquisition
Reach founders who may be open to recruitment partnerships or HR solutions, helping them build strong teams and scale effectively.
Why Choose Success.ai?
Enjoy top-quality, verified startup founder data at competitive prices, ensuring maximum return on investment.
Seamless Integration
Easily integrate verified contact data into your CRM or marketing platforms via APIs or customizable downloads.
Data Accuracy with AI Validation
With 99% data accuracy, you can trust the information to guide meaningful and productive outreach campaigns.
Customizable and Scalable Solutions
Tailor the dataset to your needs, focusing on specific industries, regions, or funding stages, and easily scale as your business grows.
APIs for Enhanced Functionality:
Data Enrichment API
Enrich your existing CRM records with verified founder contact data, adding valuable insights for targeted engagements.
Lead Generation API
Automate lead generation and streamline your campaigns, ensuring efficient and scalable outreach to startup founders worldwide.
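A typical integration might look like the sketch below. Note that every endpoint, parameter, and response field here is a hypothetical placeholder for illustration, not Success.ai's documented API; consult the vendor's actual API reference for the real interface.

```python
# Hypothetical sketch only: the endpoint, parameters, and response fields
# below are assumptions for illustration, not Success.ai's documented API.
import requests

API_KEY = "YOUR_API_KEY"                    # placeholder credential
BASE_URL = "https://api.success.ai/v1"      # hypothetical base URL

resp = requests.get(
    f"{BASE_URL}/founders",                 # hypothetical endpoint
    headers={"Authorization": f"Bearer {API_KEY}"},
    params={"industry": "fintech", "region": "Europe", "funding_stage": "seed"},
    timeout=30,
)
resp.raise_for_status()
for founder in resp.json().get("results", []):   # hypothetical response shape
    print(founder.get("name"), founder.get("work_email"))
```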
Leverage Success.ai’s B2B Contact Data for Startup Founders Worldwide to connect with the entrepreneurs driving innovation across global markets. With verified work emails, phone numbers, and continuously updated profiles, your outreach efforts become more impactful, timely, and effective.
Experience AI-validated accuracy and our Best Price Guarantee. Contact Success.ai today to learn how our B2B contact data solutions can help you engage with the startup founders who matter most.
No one beats us on price. Period.
Unfortunately, no README file was found for the datano extension, limiting the ability to provide a detailed and comprehensive description. The following description is therefore based on the extension name and general assumptions about data annotation tools within the CKAN ecosystem.
The datano extension for CKAN, presumably short for "data annotation," likely aims to enhance datasets with annotations, metadata enrichment, and quality control features directly within the CKAN environment. It potentially introduces functionality for adding textual descriptions, classifications, or other forms of annotation to datasets to improve their discoverability, usability, and overall value. The extension could provide an interface for users to collaboratively annotate data, thereby enriching dataset descriptions and making the data more useful for various purposes.
Key Features (Assumed):
* Dataset Annotation Interface: Provides a user-friendly interface within CKAN for adding structured or unstructured annotations to datasets and associated resources, allowing a richer understanding of the data's content, purpose, and usage.
* Collaborative Annotation: Supports multiple users collaboratively annotating datasets, fostering knowledge sharing and collective understanding of the data.
* Annotation Versioning: Maintains a history of annotations, enabling users to track changes and revert to previous versions if necessary.
* Annotation Search: Allows users to search for datasets based on annotations, enabling quick discovery of relevant data based on specific criteria.
* Metadata Enrichment: Integrates annotations with existing metadata, extending metadata schemas to support more detailed descriptions and contextual information.
* Quality Control Features: Includes options to rate, validate, or flag annotations to ensure they are accurate and relevant, improving overall data quality.
Use Cases (Assumed):
1. Data Discovery Improvement: Enables users to find specific datasets more easily by searching on annotations and enriched metadata.
2. Data Quality Enhancement: Allows data curators to improve dataset quality by adding annotations that clarify the data's meaning, provenance, and limitations.
3. Collaborative Data Projects: Facilitates collaborative annotation efforts in which multiple users contribute their knowledge and insights to enrich datasets.
Technical Integration (Assumed): The datano extension would likely integrate with CKAN's existing plugin framework, adding new UI elements for annotation management and search. It could leverage CKAN's API for programmatic access to annotations and use CKAN's security model to manage access permissions.
Benefits & Impact (Assumed): By implementing the datano extension, CKAN users could improve data discoverability, quality, and collaborative potential. It could help data curators refine the understanding and management of data, making datasets easier to search and understand and promoting data-driven decision-making.
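For readers unfamiliar with CKAN's extension mechanism, the sketch below shows the general shape such a plugin would take. It uses CKAN's real plugin interfaces (SingletonPlugin, IConfigurer, IPackageController), but since no datano source code was available, the class body is an illustrative assumption, not the extension's actual implementation.

```python
# Illustrative CKAN plugin skeleton (CKAN 2.10 interface names); the datano
# extension's real code was unavailable, so this body is an assumption.
import ckan.plugins as plugins
import ckan.plugins.toolkit as toolkit


class DatanoPlugin(plugins.SingletonPlugin):
    plugins.implements(plugins.IConfigurer)
    plugins.implements(plugins.IPackageController, inherit=True)

    # IConfigurer: register templates/assets for an annotation UI.
    def update_config(self, config_):
        toolkit.add_template_directory(config_, "templates")
        toolkit.add_public_directory(config_, "public")

    # IPackageController: surface annotations when a dataset is viewed.
    def before_dataset_view(self, pkg_dict):
        pkg_dict.setdefault("extras", []).append(
            {"key": "annotations", "value": "[]"}  # hypothetical storage key
        )
        return pkg_dict
```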
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Digitalizing highway infrastructure is gaining interest in Germany and other countries due to the need for greater efficiency and sustainability. The maintenance of the built infrastructure accounts for nearly 30% of greenhouse gas emissions in Germany. To address this, Digital Twins are emerging as tools to optimize road systems. A Digital Twin of a built asset relies on a geometric-semantic as-is model of the area of interest, where an essential step for automated model generation is the semantic segmentation of reality capture data. While most approaches handle data without considering real-world context, our approach leverages existing geospatial data to enrich the data foundation through an adaptive feature extraction workflow. This workflow is adaptable to various model architectures, from deep learning methods like PointNet++ and PointNeXt to traditional machine learning models such as Random Forest. Our four-step workflow significantly boosts performance, improving overall accuracy by 20% and unweighted mean Intersection over Union (mIoU) by up to 43.47%. The target application is the semantic segmentation of point clouds in road environments. Additionally, the proposed modular workflow can be easily customized to fit diverse data sources and enhance semantic segmentation performance in a model-agnostic way.
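A minimal sketch of the model-agnostic idea follows: enrich per-point features with context derived from existing geospatial data, then train any classifier (here, scikit-learn's Random Forest). The feature names, toy geometry, and labels are illustrative assumptions, not the paper's actual four-step workflow.

```python
# Sketch: geospatial feature enrichment for point-cloud classification.
# All data here is synthetic; the "enrichment" feature is the distance from
# each point to an assumed mapped road centerline.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
n = 1000
xyz = rng.uniform(0, 100, size=(n, 3))         # raw point coordinates
intensity = rng.uniform(0, 1, size=(n, 1))     # LiDAR intensity

road_y = 50.0                                  # assumed road centerline position
dist_to_road = np.abs(xyz[:, 1:2] - road_y)    # enrichment from geospatial data

X_enriched = np.hstack([xyz, intensity, dist_to_road])
y = (dist_to_road.ravel() < 5).astype(int)     # toy labels: road vs. non-road

clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X_enriched, y)
print("training accuracy:", clf.score(X_enriched, y))
```

Because the enrichment happens at the feature level, the same extra columns could feed PointNet++, PointNeXt, or any other architecture, which is the sense in which the workflow is model-agnostic.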
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Supplementary materials (Appendix A and B) for the article:
Traffic Information Enrichment: Creating Long-Term Traffic Speed Prediction Ensemble Model for Better Navigation through Waypoints
Abstract: Traffic speed prediction for a selected road segment from a short-term and long-term perspective is among the fundamental issues of intelligent transportation systems (ITS). Over the past two decades, many artefacts (e.g., models) dealing with traffic speed prediction have been designed. However, no satisfactory solution has been found for long-term prediction over days and weeks using vast spatial and temporal data. This article introduces a long-term traffic speed prediction ensemble model using country-scale historic traffic data from 37,002 km of roads, which constitutes 66% of all roads in the Czech Republic. The designed model comprises three submodels and combines parametric and nonparametric approaches in order to acquire a good-quality prediction that can enrich available real-time traffic information. Furthermore, the model is set into a conceptual design that anticipates its use for improving navigation through waypoints (e.g., delivery service, goods distribution, police patrol) and estimating arrival times. The model validation is carried out on the same network of roads, with the model predicting traffic speed over a period of 1 week. According to the performed validation of average speed prediction at a given hour, the designed model achieves good results, with a mean absolute error of 4.67 km/h. The achieved results indicate that the designed solution can effectively predict long-term speed information using large-scale spatial and temporal data, and that it is suitable for use in ITS.
Simunek, M., & Smutny, Z. (2021). Traffic Information Enrichment: Creating Long-Term Traffic Speed Prediction Ensemble Model for Better Navigation through Waypoints. Applied Sciences, 11(1), 315. https://doi.org/10.3390/app11010315
Appendix A Examples of the deviation between the average speed and the FreeFlowSpeed for selected hours.
Appendix B The text file provides a complete overview of all road segments on which basis summary test results were calculated in Section 6 of the article.
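To illustrate the blending idea behind such an ensemble, the sketch below combines a parametric submodel (hour-of-week mean speed) with a nonparametric learner and scores the blend by mean absolute error. The synthetic data, the equal weighting, and the two-submodel structure are assumptions for illustration; the article's actual design uses three submodels on country-scale data.

```python
# Illustrative ensemble: parametric (hour-of-week means) + nonparametric
# (gradient boosting) submodels blended with equal weights, scored by MAE.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.metrics import mean_absolute_error

rng = np.random.default_rng(1)
hours = rng.integers(0, 168, size=2000)  # hour of week per observation
speed = 80 - 15 * np.sin(hours / 168 * 2 * np.pi) + rng.normal(0, 5, size=2000)

# Parametric submodel: historical mean speed for each hour of the week.
overall = speed.mean()
hour_mean = np.array([speed[hours == h].mean() if (hours == h).any() else overall
                      for h in range(168)])
pred_param = hour_mean[hours]

# Nonparametric submodel: gradient boosting on the same feature.
gbm = GradientBoostingRegressor(random_state=0).fit(hours.reshape(-1, 1), speed)
pred_nonparam = gbm.predict(hours.reshape(-1, 1))

ensemble = 0.5 * pred_param + 0.5 * pred_nonparam  # simple equal-weight blend
print(f"MAE: {mean_absolute_error(speed, ensemble):.2f} km/h")
```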
Abstract copyright UK Data Service and data collection copyright owner.
The 1981 Census Microdata Teaching Dataset for Great Britain: 1% Sample: Open Access dataset was created from existing digital records from the 1981 Census. It can be used as a 'taster' file for 1981 Census data and is freely available for anyone to download under an Open Government Licence.
The file was created under a project known as Enhancing and Enriching Historic Census Microdata Samples (EEHCM), which was funded by the Economic and Social Research Council with input from the Office for National Statistics and National Records of Scotland. The project ran from 2012-2014 and was led from the UK Data Archive, University of Essex, in collaboration with the Cathie Marsh Institute for Social Research (CMIST) at the University of Manchester and the Census Offices. In addition to the 1981 data, the team worked on files from the 1961 Census and 1971 Census.
The original 1981 records preceded current data archival standards and were created before microdata sets for secondary use were anticipated. A process of data recovery and quality checking was necessary to maximise their utility for current researchers, though some imperfections remain (see the User Guide for details). Three other 1981 Census datasets have also been created.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
This feature service depicts the National Weather Service (NWS) watches, warnings, and advisories within the United States. Watches and warnings are classified into 43 categories.
A warning is issued when a hazardous weather or hydrologic event is occurring, imminent, or likely. A warning means weather conditions pose a threat to life or property. People in the path of the storm need to take protective action.
A watch is used when the risk of a hazardous weather or hydrologic event has increased significantly, but its occurrence, location, or timing is still uncertain. It is intended to provide enough lead time so that those who need to set their plans in motion can do so. A watch means that hazardous weather is possible. People should have a plan of action in case a storm threatens, and they should listen for later information and possible warnings, especially when planning travel or outdoor activities.
An advisory is issued when a hazardous weather or hydrologic event is occurring, imminent, or likely. Advisories are for less serious conditions than warnings that cause significant inconvenience and, if caution is not exercised, could lead to situations that may threaten life or property.
Source: National Weather Service RSS-CAP Warnings and Advisories (Public Alerts); National Weather Service Boundary Overlays (AWIPS Shapefile Database)
Sample Data: See the Sample Layer item for sample data during weather inactivity.
Update Frequency: The service is updated every 5 minutes using the Aggregated Live Feeds methodology. The overlay data is checked and updated daily from the official AWIPS Shapefile Database.
Area Covered: United States and Territories
What can you do with this layer?
Customize the display of each attribute by using the Change Style option for any layer.
Query the layer to display only specific types of weather watches and warnings.
Add to a map with other weather data layers to provide insight on hazardous weather events.
Use ArcGIS Online analysis tools, such as Enrich Data, to determine the potential impact of weather events on populations.
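Querying the layer for specific event types works through the standard ArcGIS REST query interface. The sketch below is a minimal illustration: the service URL is a placeholder, and the field names (Event, Severity, Expires) are assumptions to be replaced with the layer's actual endpoint and schema.

```python
# Sketch of an ArcGIS REST query against the feature service. The URL below
# is a placeholder and the field names are assumed; look up the real query
# endpoint and schema for this layer in ArcGIS Online.
import requests

QUERY_URL = "https://<host>/arcgis/rest/services/NWS_Watches_Warnings/FeatureServer/0/query"

params = {
    "where": "Event IN ('Tornado Warning', 'Flash Flood Warning')",  # assumed field
    "outFields": "Event,Severity,Expires",
    "returnGeometry": "false",
    "f": "json",
}
features = requests.get(QUERY_URL, params=params, timeout=30).json().get("features", [])
for feat in features:
    print(feat["attributes"])
```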
Abstract copyright UK Data Service and data collection copyright owner.
The 1981 Census Microdata Individual File for Great Britain: 5% Sample dataset was created from existing digital records from the 1981 Census under a project known as Enhancing and Enriching Historic Census Microdata Samples (EEHCM), which was funded by the Economic and Social Research Council with input from the Office for National Statistics and National Records of Scotland. The project ran from 2012-2014 and was led from the UK Data Archive, University of Essex, in collaboration with the Cathie Marsh Institute for Social Research (CMIST) at the University of Manchester and the Census Offices. In addition to the 1981 data, the team worked on files from the 1961 Census and 1971 Census.
The original 1981 records preceded current data archival standards and were created before microdata sets for secondary use were anticipated. A process of data recovery and quality checking was necessary to maximise their utility for current researchers, though some imperfections remain (see the User Guide for details). Three other 1981 Census datasets have also been created.
The main objective of this survey is to provide statistical data on ICT for enterprises in the Palestinian Territory. The specific objectives can be summarized in the following points:
· Enriching ICT statistical data on the actual use of, and access to, ICT by economic enterprises.
· Identifying the characteristics of the ICT tools and means used in economic activity, by type of economic activity and size of enterprise.
· Providing opportunities for international and regional comparisons, which helps situate the Palestinian Territory among the world's technologically advanced countries.
· Assisting planners and policy makers in understanding the current status of the technology-based economy in the Palestinian Territory, which helps to meet the future needs of the Palestinian economy.
The data are representative at the region level (West Bank, Gaza Strip).
Enterprises
The enterprises in the Palestinian Territory
Sample survey data [ssd]
The sample is a single-stage stratified random sample. It covered both strata of enterprises with fewer than 30 workers and enterprises with 30 or more workers. Enterprises were stratified at three levels:
First level: geographical classification of the enterprises into two regions, the West Bank and the Gaza Strip.
Second level: economic activity of the enterprises, classified according to the International Standard Industrial Classification of economic activities.
Third level: employment size category of the enterprises, classified by number of employees as follows:
1. Enterprises with fewer than 5 employees.
2. Enterprises with 5-10 employees.
3. Enterprises with 11-29 employees.
4. Enterprises with 30 or more employees.
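As a rough illustration of this three-level design, the pandas sketch below draws a one-stage stratified sample from an invented enterprise frame; the frame, strata sizes, and sampling fraction are placeholders, not the survey's actual design or weights.

```python
# Illustrative one-stage stratified sample over invented enterprise data;
# the strata mirror the survey's three levels (region x activity x size).
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
n = 400
frame = pd.DataFrame({
    "region": rng.choice(["West Bank", "Gaza Strip"], n),
    "activity": rng.choice(["retail", "manufacturing", "services"], n),
    "size_class": rng.choice(["<5", "5-10", "11-29", "30+"], n),
})

strata = ["region", "activity", "size_class"]
sample = (
    frame.groupby(strata, group_keys=False)
         .apply(lambda g: g.sample(frac=0.2, random_state=0))  # sample within stratum
)
print(f"{len(sample)} enterprises sampled from "
      f"{frame.groupby(strata).ngroups} strata")
```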
Face-to-face [f2f]
The Survey Questionnaire: Once the data requirements had been identified, the survey instrument was developed following a review of international recommendations and of other countries' experiences in this area, and after discussion with stakeholders through a workshop at PCBS with data producers to discuss the survey's indicators.
In addition to identification information and data quality control, BICT 2007 survey instrument consists of three main sections, namely:
Section one: covers readiness and access to ICT. This section asks about the existence of the infrastructure necessary for using technology in the business, such as the availability of computers and Internet service, as well as about a range of devices associated with technology use, such as telephones, fax machines, mobile phones, printers, and other related items.
Section two: includes a series of questions about the use of the Internet and computer networks in the various activities of economic enterprises, such as using the Internet and networks to conduct commercial transactions (buying and selling), the obstacles Palestinian enterprises face in using networks and the Internet in their economic activities, and the electronic execution of commercial transactions.
Section three: includes questions about the enterprises' future plans for using ICT tools and means, as well as expenditure on some of the ICT tools and means that have been adopted.
Data Editing: The project's management developed a clear mechanism for editing the data and trained the team of editors accordingly. The mechanism was as follows:
· Receiving completed questionnaires on a daily basis.
· Checking each questionnaire to make sure it was complete and that the data covered all eligible enterprises, with checks also focusing on the accuracy of answers to the questions.
· Returning incomplete questionnaires, as well as those with errors, to the field for completion.
The survey sample consists of about 2,966 enterprises; 2,604 completed the interview, of which 1,746 were in the West Bank and 858 in the Gaza Strip. The response rate was 92.2%.
Detailed information on the sampling Error is available in the Survey Report, downloadable under the "Resources" tab.
Detailed information on the data appraisal is available in the Survey Report, downloadable under the "Resources" tab.
CC0 1.0 Universal Public Domain Dedication: https://creativecommons.org/publicdomain/zero/1.0/
License information was derived automatically
This feature service depicts the National Weather Service (NWS) watches, warnings, and advisories within the United States. Watches and warnings are classified into 43 categories.
A warning is issued when a hazardous weather or hydrologic event is occurring, imminent, or likely. A warning means weather conditions pose a threat to life or property. People in the path of the storm need to take protective action.
A watch is used when the risk of a hazardous weather or hydrologic event has increased significantly, but its occurrence, location, or timing is still uncertain. It is intended to provide enough lead time so that those who need to set their plans in motion can do so. A watch means that hazardous weather is possible. People should have a plan of action in case a storm threatens, and they should listen for later information and possible warnings, especially when planning travel or outdoor activities.
An advisory is issued when a hazardous weather or hydrologic event is occurring, imminent, or likely. Advisories are for less serious conditions than warnings that cause significant inconvenience and, if caution is not exercised, could lead to situations that may threaten life or property.
Source: National Weather Service RSS-CAP Warnings and Advisories (Public Alerts); National Weather Service Boundary Overlays (AWIPS Shapefile Database)
Sample Data: See the Sample Layer item for sample data during weather inactivity.
Update Frequency: The service is updated every 5 minutes using the Aggregated Live Feeds methodology. The overlay data is checked and updated daily from the official AWIPS Shapefile Database.
Area Covered: United States and Territories
What can you do with this layer?
Customize the display of each attribute by using the Change Style option for any layer.
Query the layer to display only specific types of weather watches and warnings.
Add to a map with other weather data layers to provide insight on hazardous weather events.
Use ArcGIS Online analysis tools, such as Enrich Data, to determine the potential impact of weather events on populations.
This map is provided for informational purposes and is not monitored 24/7 for accuracy and currency. Additional information on Watches and Warnings is available from the National Weather Service.
Attribution-NonCommercial 4.0 (CC BY-NC 4.0): https://creativecommons.org/licenses/by-nc/4.0/
License information was derived automatically
The Human Proteome Project (HPP) aims to map the entire set of human proteins through a systematic effort built on all emerging techniques, which would enhance understanding of human biology and lay a foundation for the development of medical applications. To date, 2,563 missing proteins (MPs, PE2-4) remain undetected even with the most sensitive protein detection approaches. Herein, we propose that enrichment of low-abundance proteins benefits the discovery of MPs. ProteoMiner is an equalizing technique that reduces high-abundance proteins and enriches low-abundance proteins in biological liquids. With Triton X-100/TBS buffer extraction, ProteoMiner enrichment, and peptide fractionation, 20 MPs (each supported by at least two non-nested unique peptides of more than eight amino acids) with 60 unique peptides were identified from four human tissues, including eight membrane/secreted proteins and five nucleus proteins. Fifteen of them were then confirmed with two non-nested unique peptides (≥9 amino acids) identified by matching well with their chemically synthesized peptides in PRM assays. Hence, these results demonstrate ProteoMiner as a powerful means for the discovery of MPs.
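The evidence criterion above (at least two non-nested unique peptides longer than eight amino acids) is simple to state programmatically. The sketch below encodes it with invented peptide sequences; it is a paraphrase of the stated rule, not the authors' validation software.

```python
# Sketch of the stated MP evidence criterion: >= 2 non-nested unique peptides
# of more than eight amino acids. The peptide sequences are invented.
def non_nested(peptides):
    """Keep peptides that are not substrings of another peptide in the set."""
    return [p for p in peptides
            if not any(p != q and p in q for q in peptides)]

def passes_mp_criterion(peptides, min_len=9, min_count=2):
    candidates = [p for p in set(peptides) if len(p) >= min_len]
    return len(non_nested(candidates)) >= min_count

peptides = ["LVNELTEFAK", "LVNELTEFAKTCVADESHAGCEK", "SLHTLFGDELCK"]
# The 10-mer is nested inside the 23-mer, leaving two non-nested peptides.
print(passes_mp_criterion(peptides))  # True
```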
Abstract copyright UK Data Service and data collection copyright owner.
The 1981 Census Microdata Household File for Great Britain: 0.95% Sample dataset was created from existing digital records from the 1981 Census under a project known as Enhancing and Enriching Historic Census Microdata Samples (EEHCM), which was funded by the Economic and Social Research Council with input from the Office for National Statistics and National Records of Scotland. The project ran from 2012-2014 and was led from the UK Data Archive, University of Essex, in collaboration with the Cathie Marsh Institute for Social Research (CMIST) at the University of Manchester and the Census Offices. In addition to the 1981 data, the team worked on files from the 1961 Census and 1971 Census.
The original 1981 records preceded current data archival standards and were created before microdata sets for secondary use were anticipated. A process of data recovery and quality checking was necessary to maximise their utility for current researchers, though some imperfections remain (see the User Guide for details). Three other 1981 Census datasets have also been created.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Mergers and acquisitions are an important means for enterprises to acquire technological resources, so their impact on technological innovation, and the mechanisms underlying it, deserve in-depth study. Using merger and acquisition data for Chinese A-share companies listed in Shanghai and Shenzhen from 2007 to 2020, the causal effects of mergers and acquisitions on technological innovation, and the mechanisms through which they operate, are identified and tested using the Difference-in-Differences method. The study finds that mergers and acquisitions have a long-term, sustained effect that enhances firms' technological innovation. Mechanism tests show that mergers and acquisitions can promote the technological innovation of enterprises by improving production efficiency, enriching digital knowledge, and enhancing market power. A heterogeneity analysis shows that the innovation-enhancing effect of mergers and acquisitions is more significant when the deals are domestic, when the transaction size is small, and when the enterprises involved are not state-owned. It is suggested that enterprises and the government should use multiple measures, while considering the impact of heterogeneity, to take full advantage of the positive effects of mergers and acquisitions on technological innovation.
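For readers unfamiliar with the named method, a minimal difference-in-differences estimate can be obtained from the interaction term of a two-way regression. The sketch below uses statsmodels with simulated data; the variable names and the patent-count outcome are illustrative assumptions, not the study's actual specification.

```python
# Minimal difference-in-differences sketch: the treated:post interaction
# coefficient is the DiD estimate. Data and variable names are invented.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 500
df = pd.DataFrame({
    "treated": rng.integers(0, 2, n),   # 1 = firm completed an M&A deal
    "post": rng.integers(0, 2, n),      # 1 = observation after the deal year
})
# Simulated outcome: +2 patents for treated firms after the deal.
df["patents"] = 5 + 2 * df["treated"] * df["post"] + rng.normal(0, 1, n)

model = smf.ols("patents ~ treated * post", data=df).fit()
print(model.params["treated:post"])     # DiD estimate, close to the true 2
```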
Attribution-NonCommercial 4.0 (CC BY-NC 4.0): https://creativecommons.org/licenses/by-nc/4.0/
License information was derived automatically
Hydrophilic interaction liquid chromatography (HILIC) glycopeptide enrichment is an indispensable tool for the high-throughput characterization of glycoproteomes. Despite its utility, HILIC enrichment is associated with a number of shortcomings, including requiring large amounts of starting material, potentially introducing chemical artifacts such as formylation when high concentrations of formic acid are used, and biasing/undersampling specific classes of glycopeptides. Here, we investigate enrichment-independent approaches for the study of bacterial glycoproteomes. Using three Burkholderia species (Burkholderia cenocepacia, Burkholderia dolosa, and Burkholderia ubonensis), we demonstrate that short aliphatic O-linked glycopeptides are typically absent from HILIC enrichments, yet are readily identified in whole proteome samples. Using high-field asymmetric waveform ion mobility spectrometry (FAIMS) fractionation, we show that at high compensation voltages (CVs), short aliphatic glycopeptides can be enriched from complex samples, providing an alternative means to identify glycopeptides recalcitrant to hydrophilic-based enrichment. Combining whole proteome and FAIMS analyses, we show that the observable glycoproteome of these Burkholderia species is at least 25% larger than initially thought. Excitingly, the ability to enrich glycopeptides using FAIMS appears generally applicable, with the N-linked glycopeptides of Campylobacter fetus subsp. fetus also being enrichable at high FAIMS CVs. Taken together, these results demonstrate that FAIMS provides an alternative means to access glycopeptides and is a valuable tool for glycoproteomic analysis.
Attribution-NonCommercial 4.0 (CC BY-NC 4.0): https://creativecommons.org/licenses/by-nc/4.0/
License information was derived automatically
Metal oxide affinity chromatography (MOAC) has become a prominent method to enrich phosphopeptides prior to their analysis by liquid chromatography−mass spectrometry. To overcome limitations in material design, we have previously reported the use of nanocasting as a means to generate metal oxide spheres with tailored properties. Here, we report on the application of two oxides, tin dioxide (stannia) and titanium dioxide (titania), for the analysis of the HeLa phosphoproteome. In combination with nanoflow LC−MS/MS analysis on a linear ion trap-Fourier transform ion cyclotron resonance instrument, we identified 619 phosphopeptides using the new stannia material, and 896 phosphopeptides using titania prepared in house. We also compared the newly developed materials to commercial titania material using an established enrichment protocol. Both titania materials yielded a comparable total number of phosphopeptides, but the overlap of the two data sets was less than one-third. Although fewer peptides were identified using stannia, the complementarity of SnO2-based MOAC could be shown as more than 140 phosphopeptides were exclusively identified by this material.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
An updated strong-motion database of Iranian earthquakes has been used to propose empirical equations for the prediction of peak ground velocity (PGV), peak ground acceleration (PGA), and 5%-damped spectral acceleration (SA) up to 10.0 s for the geometric mean of horizontal components. Some records from NGA-West2 are added to the database to enrich it at near-source distances for large-magnitude events. A lack of data at near-source distances reduced the accuracy of previous Iranian Ground Motion Models (GMMs) compared with the current study. In this work, the regression analyses were performed on a truncated database, which yields unbiased results. We used 3015 acceleration time series from 594 earthquakes, after truncation of the data, to develop a new GMM. The resulting model is valid for Joyner-Boore distances ranging from 0 km to 180 km, magnitudes ranging from 4 to 7.9, and Vs30 ranging from 150 m/s to 1500 m/s.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Impact of corporate mergers and acquisitions on technological innovation.