100+ datasets found
  1. Data Source Type

    • opencontext.org
    Updated Sep 29, 2022
    + more versions
    Cite
    David G. Anderson; Joshua Wells; Stephen Yerka; Sarah Whitcher Kansa; Eric C. Kansa (2022). Data Source Type [Dataset]. https://opencontext.org/predicates/6aeff869-47cf-4a32-920c-2ad037458bf9
    Explore at:
    Dataset updated
    Sep 29, 2022
    Dataset provided by
    Open Context
    Authors
    David G. Anderson; Joshua Wells; Stephen Yerka; Sarah Whitcher Kansa; Eric C. Kansa
    License

    CC0 1.0 Universal Public Domain Dedication: https://creativecommons.org/publicdomain/zero/1.0/
    License information was derived automatically

    Description

    An Open Context "predicates" dataset item. Open Context publishes structured data as granular, URL identified Web resources. This "Variables" record is part of the "Digital Index of North American Archaeology (DINAA)" data publication.

  2. Addresses (Open Data)

    • catalog.data.gov
    • data.tempe.gov
    • +13 more
    Updated Sep 20, 2025
    + more versions
    Cite
    City of Tempe (2025). Addresses (Open Data) [Dataset]. https://catalog.data.gov/dataset/addresses-open-data
    Explore at:
    Dataset updated
    Sep 20, 2025
    Dataset provided by
    City of Tempe
    Description

    This dataset is a compilation of address point data for the City of Tempe. It contains a point location and the official address (as defined by the Building Safety Division of Community Development) for all occupiable units and any other official addresses in the City. Several additional attributes may be populated for an address, but not every attribute is populated for every address.
    Contact: Lynn Flaaen-Hanna, Development Services Specialist
    Contact E-mail Link: Map that Lets You Explore and Export Address Data
    Data Source: The initial dataset was created by combining several datasets and then reviewing the information to remove duplicates and identify errors. This published dataset is the system of record for Tempe addresses going forward, with the address information created and maintained by the Building Safety Division of Community Development.
    Data Source Type: ESRI ArcGIS Enterprise Geodatabase
    Preparation Method: N/A
    Publish Frequency: Weekly
    Publish Method: Automatic
    Data Dictionary

  3. Alternative Data Market Report

    • archivemarketresearch.com
    doc, pdf, ppt
    Updated Aug 23, 2025
    Cite
    Archive Market Research (2025). Alternative Data Market Report [Dataset]. https://www.archivemarketresearch.com/reports/alternative-data-market-5021
    Explore at:
    doc, ppt, pdf (available download formats)
    Dataset updated
    Aug 23, 2025
    Dataset authored and provided by
    Archive Market Research
    License

    https://www.archivemarketresearch.com/privacy-policy

    Time period covered
    2025 - 2033
    Area covered
    global
    Variables measured
    Market Size
    Description

    The Alternative Data Market was valued at USD 7.20 billion in 2023 and is projected to reach USD 126.50 billion by 2032, exhibiting a CAGR of 50.6% during the forecast period. The alternative data market covers the use and processing of information that does not appear in conventional financial databases, such as social media posts, satellite images, credit card transactions, and web traffic. It is used mostly in finance for making investment decisions, managing risk, and analyzing competitors, giving a more general view of market trends and consumer attitudes. Demand for data from unconventional sources is rising as firms strive to stay ahead in highly competitive markets. Current trends include the use of AI and machine learning to process large data sets and the broadening adoption of alternative data in industries beyond finance.

    Recent developments include:
    • In April 2023, Thinknum Alternative Data launched new data fields for its employee sentiment datasets, allowing people analytics teams and investors to use them as an 'employee NPS' proxy and to support highly rated employers in setting up interviews through employee referrals.
    • In September 2022, Thinknum Alternative Data announced its plan to combine data from Similarweb, SensorTower, Thinknum, Caplight, and Pathmatics with Lagoon, a sophisticated infrastructure platform, to deliver an alternative data source for investment research, due diligence, deal sourcing and origination, and post-acquisition strategies in private markets.
    • In May 2022, M Science LLC launched a consumer spending trends platform providing daily, weekly, monthly, and semi-annual visibility into consumer behaviors and competitive benchmarking. The platform provided real-time insights into consumer spending patterns for Australian brands and unparalleled business performance analysis.
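
    The standard compound-growth formula is CAGR = (V_end / V_start)^(1/n) − 1. A minimal check of the endpoint figures quoted above, assuming a 2023 base and a 9-year horizon to 2032 (the report's quoted 50.6% may be defined over a different window):

```python
# Implied compound annual growth rate from the two endpoint valuations above.
start_usd_bn = 7.20    # 2023 valuation, USD billion
end_usd_bn = 126.50    # 2032 projection, USD billion
years = 2032 - 2023    # 9-year horizon (an assumption about the window)

cagr = (end_usd_bn / start_usd_bn) ** (1 / years) - 1
print(f"implied CAGR: {cagr:.1%}")  # roughly 37.5% under these assumptions
```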

  4. Global Web Data | Web Scraping Data | Job Postings Data | Source: Company...

    • datarade.ai
    .json
    + more versions
    Cite
    PredictLeads, Global Web Data | Web Scraping Data | Job Postings Data | Source: Company Website | 214M+ Records [Dataset]. https://datarade.ai/data-products/predictleads-web-data-web-scraping-data-job-postings-dat-predictleads
    Explore at:
    .json (available download formats)
    Dataset authored and provided by
    PredictLeads
    Area covered
    French Guiana, Bonaire, Bosnia and Herzegovina, Comoros, Kuwait, Virgin Islands (British), Northern Mariana Islands, Guadeloupe, Kosovo, El Salvador
    Description

    PredictLeads Job Openings Data provides high-quality hiring insights sourced directly from company websites - not job boards. Using advanced web scraping technology, our dataset offers real-time access to job trends, salaries, and skills demand, making it a valuable resource for B2B sales, recruiting, investment analysis, and competitive intelligence.

    Key Features:

    ✅ 214M+ Job Postings Tracked – Data sourced from 92 million company websites worldwide.
    ✅ 7.1M+ Active Job Openings – Updated in real time to reflect hiring demand.
    ✅ Salary & Compensation Insights – Extract salary ranges, contract types, and job seniority levels.
    ✅ Technology & Skill Tracking – Identify emerging tech trends and industry demands.
    ✅ Company Data Enrichment – Link job postings to employer domains, firmographics, and growth signals.
    ✅ Web Scraping Precision – Directly sourced from employer websites for unmatched accuracy.

    Primary Attributes:

    • id (string, UUID) – Unique identifier for the job posting.
    • type (string, constant: "job_opening") – Object type.
    • title (string) – Job title.
    • description (string) – Full job description, extracted from the job listing.
    • url (string, URL) – Direct link to the job posting.
    • first_seen_at – Timestamp when the job was first detected.
    • last_seen_at – Timestamp when the job was last detected.
    • last_processed_at – Timestamp when the job data was last processed.

    Job Metadata:

    • contract_types (array of strings) – Type of employment (e.g., "full time", "part time", "contract").
    • categories (array of strings) – Job categories (e.g., "engineering", "marketing").
    • seniority (string) – Seniority level of the job (e.g., "manager", "non_manager").
    • status (string) – Job status (e.g., "open", "closed").
    • language (string) – Language of the job posting.
    • location (string) – Full location details as listed in the job description.
    • location_data (array of objects) – Structured location details:
      • city (string, nullable) – City where the job is located.
      • state (string, nullable) – State or region of the job location.
      • zip_code (string, nullable) – Postal/ZIP code.
      • country (string, nullable) – Country where the job is located.
      • region (string, nullable) – Broader geographical region.
      • continent (string, nullable) – Continent name.
      • fuzzy_match (boolean) – Indicates whether the location was inferred.

    Salary Data (salary_data)

    • salary (string) – Salary range extracted from the job listing.
    • salary_low (float, nullable) – Minimum salary in original currency.
    • salary_high (float, nullable) – Maximum salary in original currency.
    • salary_currency (string, nullable) – Currency of the salary (e.g., "USD", "EUR").
    • salary_low_usd (float, nullable) – Converted minimum salary in USD.
    • salary_high_usd (float, nullable) – Converted maximum salary in USD.
    • salary_time_unit (string, nullable) – Time unit for the salary (e.g., "year", "month", "hour").

    Occupational Data (onet_data) (object, nullable)

    • code (string, nullable) – ONET occupation code.
    • family (string, nullable) – Broad occupational family (e.g., "Computer and Mathematical").
    • occupation_name (string, nullable) – Official ONET occupation title.

    Additional Attributes:

    • tags (array of strings, nullable) – Extracted skills and keywords (e.g., "Python", "JavaScript").

    📌 Trusted by enterprises, recruiters, and investors for high-precision job market insights.

    PredictLeads Dataset: https://docs.predictleads.com/v3/guide/job_openings_dataset
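
    As a rough illustration of how a record with the attributes above might be consumed, here is a hedged sketch in Python; the sample values are invented, and the exact JSON layout is an assumption based on the attribute list, not PredictLeads' documented schema:

```python
import json

# Hypothetical sample record shaped after the attribute list above;
# all values are invented for illustration.
sample = '''{
  "id": "0f8b7c1a-1111-2222-3333-444455556666",
  "type": "job_opening",
  "title": "Senior Backend Engineer",
  "contract_types": ["full time"],
  "seniority": "manager",
  "salary_data": {
    "salary_low_usd": 120000.0,
    "salary_high_usd": 150000.0,
    "salary_time_unit": "year"
  },
  "location_data": [
    {"city": "Berlin", "country": "Germany", "fuzzy_match": false}
  ]
}'''

job = json.loads(sample)

def midpoint_salary_usd(job):
    """Midpoint of the USD salary band, or None if either bound is missing."""
    salary = job.get("salary_data") or {}
    low, high = salary.get("salary_low_usd"), salary.get("salary_high_usd")
    return (low + high) / 2 if low is not None and high is not None else None

print(midpoint_salary_usd(job))  # 135000.0
```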

  5. Seair Exim Solutions

    • seair.co.in
    Updated Mar 31, 2015
    + more versions
    Cite
    Seair Exim (2015). Seair Exim Solutions [Dataset]. https://www.seair.co.in
    Explore at:
    .bin, .xml, .csv, .xls (available download formats)
    Dataset updated
    Mar 31, 2015
    Dataset provided by
    Seair Info Solutions
    Authors
    Seair Exim
    Area covered
    United States
    Description

    Subscribers can find export and import data for 23 countries by HS code or product name. This demo is helpful for market analysis.

  6. Hourly air temperature in degrees Fahrenheit and three-digit data-source...

    • catalog.data.gov
    • search.dataone.org
    • +1 more
    Updated Jul 6, 2024
    + more versions
    Cite
    U.S. Geological Survey (2024). Hourly air temperature in degrees Fahrenheit and three-digit data-source flag associated with the data, January 1, 1948 - September 30, 2015 [Dataset]. https://catalog.data.gov/dataset/hourly-air-temperature-in-degrees-fahrenheit-and-three-digit-data-source-flag-associate-30
    Explore at:
    Dataset updated
    Jul 6, 2024
    Dataset provided by
    U.S. Geological Survey
    Description

    The text file "Air temperature.txt" contains hourly data and an associated data-source flag from January 1, 1948, to September 30, 2015. The primary source of the data is the Argonne National Laboratory, Illinois. The first four columns give the year, month, day, and hour of the observation. Column 5 is the data in degrees Fahrenheit. Column 6 is the three-digit data-source flag, which indicates whether the air temperature data are original or missing, the method used to fill missing periods, and any other transformations of the data. These flags consist of a three-digit sequence of the form "xyz". Users of the data should consult Over and others (2010) for detailed documentation of this hourly data-source flag series. Reference Cited: Over, T.M., Price, T.H., and Ishii, A.L., 2010, Development and analysis of a meteorological database, Argonne National Laboratory, Illinois: U.S. Geological Survey Open File Report 2010-1220, 67 p., http://pubs.usgs.gov/of/2010/1220/.
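
    A minimal parsing sketch for one record of this file, assuming the six columns are whitespace-delimited (the sample line is invented; the actual layout should be confirmed against the documentation):

```python
def parse_line(line):
    """Parse one whitespace-delimited record of 'Air temperature.txt'
    (assumed layout: year month day hour temp_F flag)."""
    year, month, day, hour, temp, flag = line.split()
    return {
        "timestamp": (int(year), int(month), int(day), int(hour)),
        "temp_f": float(temp),
        "flag": flag,  # three-digit "xyz" code; see Over and others (2010)
    }

rec = parse_line("1948 01 01 00 28.4 100")  # invented sample line
print(rec)
```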

  7. Data from: A Large-scale Dataset of (Open Source) License Text Variants

    • data.niaid.nih.gov
    Updated Mar 31, 2022
    Cite
    Stefano Zacchiroli (2022). A Large-scale Dataset of (Open Source) License Text Variants [Dataset]. https://data.niaid.nih.gov/resources?id=zenodo_6379163
    Explore at:
    Dataset updated
    Mar 31, 2022
    Dataset authored and provided by
    Stefano Zacchiroli
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    We introduce a large-scale dataset of the complete texts of free/open source software (FOSS) license variants. To assemble it we have collected from the Software Heritage archive—the largest publicly available archive of FOSS source code with accompanying development history—all versions of files whose names are commonly used to convey licensing terms to software users and developers. The dataset consists of 6.5 million unique license files that can be used to conduct empirical studies on open source licensing, training of automated license classifiers, natural language processing (NLP) analyses of legal texts, as well as historical and phylogenetic studies on FOSS licensing. Additional metadata about shipped license files are also provided, making the dataset ready to use in various contexts; they include: file length measures, detected MIME type, detected SPDX license (using ScanCode), example origin (e.g., GitHub repository), oldest public commit in which the license appeared. The dataset is released as open data as an archive file containing all deduplicated license blobs, plus several portable CSV files for metadata, referencing blobs via cryptographic checksums.

    For more details see the included README file and companion paper:

    Stefano Zacchiroli. A Large-scale Dataset of (Open Source) License Text Variants. In Proceedings of the 2022 Mining Software Repositories Conference (MSR 2022), 23-24 May 2022, Pittsburgh, Pennsylvania, United States. ACM 2022.

    If you use this dataset for research purposes, please acknowledge its use by citing the above paper.

  8. Success.ai | LinkedIn Company Data – Access 70M Companies & 700M Profiles at...

    • datarade.ai
    Updated Jan 1, 2022
    + more versions
    Cite
    Success.ai (2022). Success.ai | LinkedIn Company Data – Access 70M Companies & 700M Profiles at Unbeatable Prices [Dataset]. https://datarade.ai/data-products/success-ai-linkedin-company-data-access-70m-companies-7-success-ai
    Explore at:
    .bin, .json, .xml, .csv, .xls, .sql, .txt (available download formats)
    Dataset updated
    Jan 1, 2022
    Dataset provided by
    Area covered
    Ascension and Tristan da Cunha, Tunisia, Georgia, Martinique, Niger, Suriname, Singapore, Macedonia (the former Yugoslav Republic of), Portugal, India
    Description

    Maximize your business potential with Success.ai's LinkedIn Company and Contact Data, a comprehensive solution designed to empower your business with strategic insights drawn from one of the largest professional networks in the world. This extensive dataset includes in-depth profiles from over 700 million professionals and 70 million companies globally, making it a goldmine for businesses aiming to enhance their marketing strategies, refine competitive intelligence, and drive robust B2B lead generation.

    Transform Your Email Marketing Efforts With Success.ai, tap into highly detailed and direct contact data to personalize your communications effectively. By accessing a vast array of email addresses, personalize your outreach efforts to dramatically improve engagement rates and conversion possibilities.

    Data Enrichment for Comprehensive Insights Integrate enriched LinkedIn data seamlessly into your CRM or any analytical system to gain a comprehensive understanding of your market landscape. This enriched view helps you navigate through complex business environments, enhancing decision-making and strategic planning.

    Elevate Your Online Marketing Deploy targeted and precision-based online marketing campaigns leveraging detailed professional data from LinkedIn. Tailor your messages and offers based on specific professional demographics, industry segments, and more, to optimize engagement and maximize online marketing ROI.

    Digital Advertising Optimized Utilize LinkedIn’s precise company and professional data to create highly targeted digital advertising campaigns. By understanding the profiles of key decision-makers, tailor your advertising strategies to resonate well with your target audience, ensuring high impact and better expenditure returns.

    Accelerate B2B Lead Generation Identify and connect directly with key stakeholders and decision-makers to shorten your sales cycles and close deals quicker. With access to high-level contacts in your industry, streamline your lead generation process and enhance the efficiency of your sales funnel.

    Why Partner with Success.ai for LinkedIn Data?
    - Competitive Pricing Assurance: Success.ai guarantees the most aggressive pricing, ensuring you receive unbeatable value for your investment in high-quality professional data.
    - Global Data Access: With coverage extending across 195 countries, tap into a rich reservoir of professional information, covering diverse industries and market segments.
    - High Data Accuracy: Backed by advanced AI technology and manual validation processes, our data accuracy rate stands at 99%, providing you with reliable and actionable insights.
    - Custom Data Integration: Receive tailored data solutions that fit seamlessly into your existing business processes, delivered in formats such as CSV and Parquet for easy integration.
    - Ethical Data Compliance: Our data sourcing and processing practices are fully compliant with global standards, ensuring ethical and responsible use of data.
    - Industry-wide Applications: Whether you're in technology, finance, healthcare, or any other sector, our data solutions are designed to meet your specific industry needs.

    Strategic Use Cases for Enhanced Business Performance
    - Email Marketing: Leverage accurate contact details for personalized and effective email marketing campaigns.
    - Online Marketing and Digital Advertising: Use detailed demographic and professional data to refine your online presence and digital ad targeting.
    - Data Enrichment and B2B Lead Generation: Enhance your databases and accelerate your lead generation with enriched, up-to-date data.
    - Competitive Intelligence and Market Research: Stay ahead of the curve by using our data for deep market analysis and competitive research.

    With Success.ai, you’re not just accessing data; you’re unlocking a gateway to strategic business growth and enhanced market positioning. Start with Success.ai today to leverage our LinkedIn Company Data and transform your business operations with precision and efficiency.

    Did we mention that we'll beat any price on the market? Try us.

  9. Circa 1956 Land Area in Coastal Louisiana - Original Data Source - National...

    • catalog.data.gov
    • data.usgs.gov
    • +1 more
    Updated Sep 17, 2025
    + more versions
    Cite
    U.S. Geological Survey (2025). Circa 1956 Land Area in Coastal Louisiana - Original Data Source - National Wetlands Inventory - Revisions to Georectification [Dataset]. https://catalog.data.gov/dataset/circa-1956-land-area-in-coastal-louisiana-original-data-source-national-wetlands-inventory
    Explore at:
    Dataset updated
    Sep 17, 2025
    Dataset provided by
    United States Geological Survey (http://www.usgs.gov/)
    Area covered
    Louisiana
    Description

    The dataset presented here represents a circa 1956 land/water delineation of coastal Louisiana used as part of a larger study to quantify landscape changes from 1932 to 2016. The original dataset was created by the U.S. Fish and Wildlife Service, Office of Biological Services. The USGS Wetland and Aquatic Research Center altered the original data by improving the geo-rectification in specific areas known to contain geo-rectification error, most notably in coastal wetland areas in the vicinity of Four League Bay in western Terrebonne Basin. The dataset contains two categories, land and water. For the purposes of this effort, areas characterized by emergent vegetation, upland, wetland forest, or scrub-shrub were classified as land, while open water, aquatic beds, and mudflats were classified as water. For additional information regarding this dataset (other than geo-rectification revisions), please contact the dataset originator, the U.S. Fish and Wildlife Service (USFWS).

  10. Alternative Data Market Analysis North America, Europe, APAC, South America,...

    • technavio.com
    pdf
    Updated Jan 17, 2025
    Cite
    Technavio (2025). Alternative Data Market Analysis North America, Europe, APAC, South America, Middle East and Africa - US, Canada, China, UK, Mexico, Germany, Japan, India, Italy, France - Size and Forecast 2025-2029 [Dataset]. https://www.technavio.com/report/alternative-data-market-industry-analysis
    Explore at:
    pdf (available download formats)
    Dataset updated
    Jan 17, 2025
    Dataset provided by
    TechNavio
    Authors
    Technavio
    License

    https://www.technavio.com/content/privacy-notice

    Time period covered
    2025 - 2029
    Area covered
    Mexico, United States, Canada
    Description


    Alternative Data Market Size 2025-2029

    The alternative data market is projected to grow by USD 60.32 billion at a CAGR of 52.5% from 2024 to 2029. Increased availability and diversity of data sources will drive the alternative data market.

    Major Market Trends & Insights

    North America dominated the market, accounting for 56% of growth during the forecast period.
    By Type - Credit and debit card transactions segment was valued at USD 228.40 billion in 2023
    By End-user - BFSI segment accounted for the largest market revenue share in 2023
    

    Market Size & Forecast

    Market Opportunities: USD 6.00 million
    Market Future Opportunities: USD 60318.00 million
    CAGR from 2024 to 2029 : 52.5%
    

    Market Summary

    The market represents a dynamic and rapidly expanding landscape, driven by the increasing availability and diversity of data sources. With the rise of alternative data-driven investment strategies, businesses and investors are increasingly relying on non-traditional data to gain a competitive edge. Core technologies, such as machine learning and natural language processing, are transforming the way alternative data is collected, analyzed, and utilized. Despite its potential, the market faces challenges related to data quality and standardization. According to a recent study, alternative data accounts for only 10% of the total data used in financial services, yet 45% of firms surveyed reported issues with data quality.
    Service types, including data providers, data aggregators, and data analytics firms, are addressing these challenges by offering solutions to ensure data accuracy and reliability. Regions such as North America and Europe are leading the adoption of alternative data, with Europe projected to grow at a significant rate due to increasing regulatory support for alternative data usage. The market's continuous evolution is influenced by various factors, including technological advancements, changing regulations, and emerging trends in data usage.
    

    What will be the Size of the Alternative Data Market during the forecast period?


    How is the Alternative Data Market Segmented?

    The alternative data industry research report provides comprehensive data (region-wise segment analysis), with forecasts and estimates in 'USD million' for the period 2025-2029, as well as historical data from 2019-2023 for the following segments.

    Type
    • Credit and debit card transactions
    • Social media
    • Mobile application usage
    • Web scraped data
    • Others

    End-user
    • BFSI
    • IT and telecommunication
    • Retail
    • Others

    Geography
    • North America: US, Canada, Mexico
    • Europe: France, Germany, Italy, UK
    • APAC: China, India, Japan
    • Rest of World (ROW)

    By Type Insights

    The credit and debit card transactions segment is estimated to witness significant growth during the forecast period.

    Alternative data derived from credit and debit card transactions plays a significant role in offering valuable insights for market analysts, financial institutions, and businesses. This data category is segmented into credit card and debit card transactions. Credit card transactions serve as a rich source of information on consumers' discretionary spending, revealing their luxury spending tendencies and credit management skills. Debit card transactions, on the other hand, shed light on essential spending habits, budgeting strategies, and daily expenses, providing insights into consumers' practical needs and lifestyle choices. Market analysts and financial institutions utilize this data to enhance their strategies and customer experiences.

    Natural language processing (NLP) and sentiment analysis tools help extract valuable insights from this data. Anomaly detection systems enable the identification of unusual spending patterns, while data validation techniques ensure data accuracy. Risk management frameworks and hypothesis testing methods are employed to assess potential risks and opportunities. Data visualization dashboards and machine learning models facilitate data exploration and trend analysis. Data quality metrics and signal processing methods ensure data reliability and accuracy. Data governance policies and real-time data streams enable timely access to data. Time series forecasting, clustering techniques, and high-frequency data analysis provide insights into trends and patterns.

    Model training datasets and model evaluation metrics are essential for model development and performance assessment. Data security protocols are crucial to protect sensitive financial information. Economic indicators and compliance regulations play a role in the context of this market. Unstructured data analysis, data cleansing pipelines, and statistical significance are essential for deriving meaningful insights from this data.

  11. Data Use in Academia Dataset

    • datacatalog.worldbank.org
    csv, utf-8
    Updated Nov 27, 2023
    Cite
    Semantic Scholar Open Research Corpus (S2ORC) (2023). Data Use in Academia Dataset [Dataset]. https://datacatalog.worldbank.org/search/dataset/0065200?version=1
    Explore at:
    utf-8, csv (available download formats)
    Dataset updated
    Nov 27, 2023
    Dataset provided by
    Semantic Scholar Open Research Corpus (S2ORC)
    Brian William Stacy
    License

    https://datacatalog.worldbank.org/public-licenses?fragment=cc

    Description

    This dataset contains metadata (title, abstract, date of publication, field, etc) for around 1 million academic articles. Each record contains additional information on the country of study and whether the article makes use of data. Machine learning tools were used to classify the country of study and data use.


    Our data source of academic articles is the Semantic Scholar Open Research Corpus (S2ORC) (Lo et al. 2020). The corpus contains more than 130 million English language academic papers across multiple disciplines. The papers included in the Semantic Scholar corpus are gathered directly from publishers, from open archives such as arXiv or PubMed, and crawled from the internet.


    We placed some restrictions on the articles to make them usable and relevant for our purposes. First, only articles with an abstract and a parsed PDF or LaTeX file are included in the analysis. The full text of the abstract is necessary to classify the country of study and whether the article uses data. The parsed PDF or LaTeX file is important for extracting key information such as the date of publication and field of study. This restriction eliminated a large number of articles in the original corpus. Around 30 million articles remain after keeping only articles with a parsable (i.e., suitable for digital processing) PDF, and around 26% of those 30 million are eliminated when removing articles without an abstract. Second, only articles from the years 2000 to 2020 were considered. This restriction eliminated an additional 9% of the remaining articles. Finally, articles from the following fields of study were excluded, as we aim to focus on fields that are likely to use data produced by countries' national statistical systems: Biology, Chemistry, Engineering, Physics, Materials Science, Environmental Science, Geology, History, Philosophy, Math, Computer Science, and Art. Fields that are included are: Economics, Political Science, Business, Sociology, Medicine, and Psychology. This third restriction eliminated around 34% of the remaining articles. From an initial corpus of 136 million articles, this resulted in a final corpus of around 10 million articles.
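
    The three restrictions described above can be sketched as a simple filter; the article structure and field names here are illustrative assumptions, not the study's actual code:

```python
# Fields excluded as unlikely to use national-statistical-system data.
EXCLUDED_FIELDS = {
    "Biology", "Chemistry", "Engineering", "Physics", "Materials Science",
    "Environmental Science", "Geology", "History", "Philosophy", "Math",
    "Computer Science", "Art",
}

def keep(article):
    """Apply the three restrictions: abstract + parsable full text,
    publication year 2000-2020, and an in-scope field of study."""
    return (
        bool(article.get("abstract"))
        and article.get("has_parsed_fulltext", False)
        and 2000 <= article.get("year", 0) <= 2020
        and article.get("field") not in EXCLUDED_FIELDS
    )

print(keep({"abstract": "…", "has_parsed_fulltext": True,
            "year": 2010, "field": "Economics"}))  # True
```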


    Due to the intensive computer resources required, a set of 1,037,748 articles were randomly selected from the 10 million articles in our restricted corpus as a convenience sample.


    The empirical approach employed in this project utilizes text mining with Natural Language Processing (NLP). The goal of NLP is to extract structured information from raw, unstructured text. In this project, NLP is used to extract the country of study and whether the paper makes use of data. We will discuss each of these in turn.


    To determine the country or countries of study in each academic article, two approaches are employed based on information found in the title, abstract, or topic fields. The first approach uses regular expression searches based on the presence of ISO3166 country names. A defined set of country names is compiled, and the presence of these names is checked in the relevant fields. This approach is transparent, widely used in social science research, and easily extended to other languages. However, there is a potential for exclusion errors if a country’s name is spelled non-standardly.
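
    A minimal sketch of the regular-expression approach described above, using a small hypothetical subset of the ISO 3166 country-name list:

```python
import re

# Hypothetical subset of the compiled ISO 3166 country-name list.
COUNTRY_NAMES = ["Brazil", "Kenya", "United States", "Viet Nam"]
PATTERN = re.compile(
    r"\b(" + "|".join(map(re.escape, COUNTRY_NAMES)) + r")\b", re.IGNORECASE
)

def countries_mentioned(text):
    """Return the set of listed country names found in a title or abstract."""
    return {match.group(1).title() for match in PATTERN.finditer(text)}

print(countries_mentioned("We analyze household survey data from Kenya and Brazil."))
```

    As the text notes, this approach misses non-standard spellings, which is why it is paired with NER.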


    The second approach is based on Named Entity Recognition (NER), which uses machine learning to identify objects in text, implemented with the spaCy Python library. The NER algorithm extracts named entities from text and is used in this project to identify the countries of study in the academic articles. SpaCy supports multiple languages and has been trained on multiple spellings of countries, overcoming some of the limitations of the regular expression approach. If a country is identified by either the regular expression search or NER, it is linked to the article. Note that one article can be linked to more than one country.
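
    The linkage rule above (either detector suffices, and one article may map to several countries) can be sketched as follows; the NER step is stubbed out here as a stand-in for spaCy's pretrained pipeline:

```python
# The real NER step would use spaCy (entities labeled "GPE"); stubbed here.
def ner_countries(text):
    # stand-in for: {ent.text for ent in nlp(text).ents if ent.label_ == "GPE"}
    known = {"Vietnam", "Kenya", "Brazil"}  # hypothetical recognized names
    return {name for name in known if name in text}

def link_countries(text, regex_hits):
    """Link an article to every country flagged by either detector."""
    return sorted(set(regex_hits) | ner_countries(text))

print(link_countries("Evidence from Vietnam and Kenya", {"Kenya"}))
# ['Kenya', 'Vietnam']
```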


    The second task is to classify whether the paper uses data. A supervised machine learning approach is employed: 3,500 publications were first randomly selected and manually labeled by human raters using the Mechanical Turk (MTurk) service.[1] To make sure the human raters had a similar and appropriate definition of data in mind, they were given the following instructions before seeing their first paper:


    Each of these documents is an academic article. The goal of this study is to measure whether a specific academic article is using data and from which country the data came.

    There are two classification tasks in this exercise:

    1. Identifying whether an academic article is using data from any country

    2. Identifying from which country that data came.

    For task 1, we are looking specifically at the use of data. Data is any information that has been collected, observed, generated, or created to produce research findings. For example, a study that reports findings or analysis using survey data uses data. Some clues that a study does use data include whether a survey or census is described, a statistical model is estimated, or a table of means or summary statistics is reported.

    After an article is classified as using data, please note the type of data used. The options are population or business census, survey data, administrative data, geospatial data, private sector data, and other data. If no data is used, then mark "Not applicable". In cases where multiple data types are used, please click multiple options.[2]

    For task 2, we are looking at the country or countries that are studied in the article. In some cases, no country may be applicable; for instance, the research may be theoretical and have no specific country application. In other cases, the research article may involve multiple countries. In these cases, select all countries that are discussed in the paper.

    We expect between 10 and 35 percent of all articles to use data.


    The median amount of time that a worker spent on an article, measured as the time between when the worker accepted the article for classification and when the classification was submitted, was 25.4 minutes. If human raters were used exclusively, rather than machine learning tools, then the corpus of 1,037,748 articles examined in this study would take around 50 years of continuous human work time to review, at a cost of $3,113,244 (assuming a cost of $3 per article, as was paid to the MTurk workers).
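A quick back-of-envelope check of these figures; note that "50 years" only works out if read as continuous, round-the-clock time rather than 8-hour workdays:

```python
# Figures from the text: 1,037,748 articles, a median of 25.4 minutes
# per article, and $3 paid per article to MTurk workers.
N_ARTICLES = 1_037_748
MINUTES_PER_ARTICLE = 25.4
COST_PER_ARTICLE = 3

total_cost = N_ARTICLES * COST_PER_ARTICLE      # $3,113,244
total_minutes = N_ARTICLES * MINUTES_PER_ARTICLE

# Expressed as continuous (24 hours a day) time:
years = total_minutes / (60 * 24 * 365.25)      # ~50 years
```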


    A model is next trained on the 3,500 labelled articles. We use a distilled version of the BERT (Bidirectional Encoder Representations from Transformers) model to encode raw text into a numeric format suitable for predictions (Devlin et al. 2018). BERT is pre-trained on a large corpus comprising the Toronto Book Corpus and Wikipedia. The distilled version, DistilBERT, is a compressed model that is 60% of the size of BERT, retains 97% of its language understanding capabilities, and is 60% faster (Sanh et al. 2019). We use PyTorch (Paszke et al. 2019) to produce a model that classifies articles based on the labeled data. Of the 3,500 articles that were hand-coded by the MTurk workers, 900 are fed to the machine learning model; 900 articles were selected because of computational limitations in training the NLP model. A classification of “uses data” was assigned if the model predicted that an article used data with at least 90% confidence.
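A hedged sketch of the prediction step with Hugging Face Transformers and PyTorch. The checkpoint path `./uses-data-model` and the label ordering (index 1 = "uses data") are illustrative assumptions, not details from the text:

```python
def classify_uses_data(prob: float, threshold: float = 0.9) -> bool:
    """An article is labelled "uses data" only when the model's predicted
    probability clears the 90% confidence threshold described above."""
    return prob >= threshold


def predict_uses_data_prob(text: str) -> float:
    """Predicted probability that an article uses data.

    Assumes (hypothetically) that a DistilBERT checkpoint fine-tuned on
    the labelled articles was saved to ./uses-data-model. Imports are
    kept inside the function so the threshold helper above carries no
    heavy dependencies.
    """
    import torch
    from transformers import AutoModelForSequenceClassification, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained("./uses-data-model")
    model = AutoModelForSequenceClassification.from_pretrained("./uses-data-model")
    inputs = tokenizer(text, return_tensors="pt", truncation=True, max_length=512)
    with torch.no_grad():
        logits = model(**inputs).logits
    return torch.softmax(logits, dim=-1)[0, 1].item()
```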


    The performance of the models classifying articles to countries and as using data or not can be compared against the classifications made by the human raters, which we treat as the ground truth. This may underestimate model performance if the raters at times got the allocation wrong in a way that would not apply to the model; for instance, a human rater could mistake the Republic of Korea for the Democratic People’s Republic of Korea. If humans and the model make the same kinds of errors, then the performance reported here will be overestimated.
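With human labels as ground truth, accuracy on the held-out set is simply the share of articles where the model agrees with the raters. A toy sketch (the label vectors below are illustrative, not the study's):

```python
def accuracy(y_true, y_pred):
    """Share of held-out articles where the model matches the
    human (ground-truth) label."""
    assert len(y_true) == len(y_pred)
    return sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)


# Toy held-out set: 1 = "uses data", 0 = "does not use data".
human_labels = [1, 0, 1, 1, 0, 1, 0, 1]
model_labels = [1, 0, 1, 0, 0, 1, 0, 1]
held_out_accuracy = accuracy(human_labels, model_labels)  # 7/8 = 0.875
```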


    The model was able to predict whether an article made use of data with 87% accuracy, evaluated on the set of articles held out of model training. The correlation between the number of articles written about each country using data, estimated under the two approaches, is given in the figure below. The number of articles represents an aggregate total of

  12. Google Capstone Project - BellaBeats

    • kaggle.com
    Updated Jan 5, 2023
    Jason Porzelius (2023). Google Capstone Project - BellaBeats [Dataset]. https://www.kaggle.com/datasets/jasonporzelius/google-capstone-project-bellabeats
    Explore at:
    Croissant (a format for machine-learning datasets; learn more at mlcommons.org/croissant)
    Dataset updated
    Jan 5, 2023
    Dataset provided by
    Kaggle (http://kaggle.com/)
    Authors
    Jason Porzelius
    Description

    Introduction: I have chosen to complete a data analysis project for the second course option, Bellabeats, Inc., using a locally installed program, Excel, for both my data analysis and my visualizations. This choice was made primarily because I live in a remote area with limited bandwidth and inconsistent internet access, so completing a capstone project using web-based programs such as R Studio, SQL Workbench, or Google Sheets was not feasible. I was further limited in which option to choose, as the datasets for the ride-share project option were larger than my version of Excel would accept.

    In the scenario provided, I will be acting as a Junior Data Analyst in support of the Bellabeats, Inc. executive team and data analytics team. This combined team has decided to use an existing public dataset in hopes that the findings from that dataset might reveal insights which will assist in Bellabeats's marketing strategies for future growth. My task is to provide data-driven insights into business tasks provided by the Bellabeats, Inc. executive and data analysis team.

    In order to accomplish this task, I will complete all parts of the Data Analysis Process (Ask, Prepare, Process, Analyze, Share, Act). In addition, I will break each part of the Data Analysis Process into three sections to provide clarity and accountability: Guiding Questions, Key Tasks, and Deliverables. For the sake of space and to avoid repetition, I will record the deliverables for each Key Task directly under the numbered Key Task, using an asterisk (*) as an identifier.

    Section 1 - Ask: A. Guiding Questions: Who are the key stakeholders and what are their goals for the data analysis project? What is the business task that this data analysis project is attempting to solve?

    B. Key Tasks:
    1. Identify key stakeholders and their goals for the data analysis project.
    *The key stakeholders for this project are as follows:
    -Urška Sršen and Sando Mur, co-founders of Bellabeats, Inc.
    -The Bellabeats marketing analytics team, of which I am a member.
    2. Identify the business task.
    *The business task is:
    -As provided by co-founder Urška Sršen, the business task for this project is to gain insight into how consumers are using their non-Bellabeats smart devices in order to guide upcoming marketing strategies for the company which will help drive future growth. Specifically, the researcher was tasked with applying insights driven by the data analysis process to one Bellabeats product and presenting those insights to Bellabeats stakeholders.

    Section 2 - Prepare: A. Guiding Questions: Where is the data stored and organized? Are there any problems with the data? How does the data help answer the business question?

    B. Key Tasks:
    1. Research and communicate the source of the data, and how it is stored/organized, to stakeholders.
    *The data source used for our case study is FitBit Fitness Tracker Data. This dataset is stored in Kaggle and was made available through user Mobius in an open-source format. Therefore, the data is public and available to be copied, modified, and distributed, all without asking the user for permission. These datasets were generated by respondents to a distributed survey via Amazon Mechanical Turk, reportedly (see the credibility section directly below) between 03/12/2016 and 05/12/2016.
    *Reportedly (see the credibility section directly below), thirty eligible Fitbit users consented to the submission of personal tracker data, including output related to steps taken, calories burned, time spent sleeping, heart rate, and distance traveled. This data was broken down into minute-, hour-, and day-level totals and is stored in 18 CSV documents. I downloaded all 18 documents onto my local laptop and decided to use 2 documents for the purposes of this project, as they were files which had merged activity and sleep data from the other documents. All unused documents were permanently deleted from the laptop. The 2 files used were:
    -sleepDay_merged.csv
    -dailyActivity_merged.csv
    2. Identify and communicate to stakeholders any problems found with the data related to credibility and bias.
    *As will be presented more specifically in the Process section, the data seems to have credibility issues related to the reported time frame of the data collected. The metadata seems to indicate that the data collected covered roughly 2 months of FitBit tracking; however, upon my initial data processing, I found that only 1 month of data was reported.
    *As will be presented more specifically in the Process section, the data has credibility issues related to the number of individuals who reported FitBit data. Specifically, the metadata communicates that 30 individual users agreed to report their tracking data, but my initial data processing uncovered 33 individual IDs in the dailyActivity_merged dataset.
    *Due to the small number of participants (...

  13. Global Financial Inclusion (Global Findex) Database 2021 - Malta

    • catalog.ihsn.org
    • microdata.worldbank.org
    Updated Dec 16, 2022
    + more versions
    Development Research Group, Finance and Private Sector Development Unit (2022). Global Financial Inclusion (Global Findex) Database 2021 - Malta [Dataset]. https://catalog.ihsn.org/catalog/10475
    Explore at:
    Dataset updated
    Dec 16, 2022
    Dataset authored and provided by
    Development Research Group, Finance and Private Sector Development Unit
    Time period covered
    2021
    Area covered
    Malta
    Description

    Abstract

    The fourth edition of the Global Findex offers a lens into how people accessed and used financial services during the COVID-19 pandemic, when mobility restrictions and health policies drove increased demand for digital services of all kinds.

    The Global Findex is the world's most comprehensive database on financial inclusion. It is also the only global demand-side data source allowing for global and regional cross-country analysis to provide a rigorous and multidimensional picture of how adults save, borrow, make payments, and manage financial risks. Global Findex 2021 data were collected from nationally representative surveys of about 128,000 adults in more than 120 economies. The latest edition follows the 2011, 2014, and 2017 editions, and it includes a number of new series measuring financial health and resilience and contains more granular data on digital payment adoption, including merchant and government payments.

    The Global Findex is an indispensable resource for financial service practitioners, policy makers, researchers, and development professionals.

    Geographic coverage

    National coverage

    Analysis unit

    Individual

    Kind of data

    Observation data/ratings [obs]

    Sampling procedure

    In most developing economies, Global Findex data have traditionally been collected through face-to-face interviews. Surveys are conducted face-to-face in economies where telephone coverage represents less than 80 percent of the population or where in-person surveying is the customary methodology. However, because of ongoing COVID-19 related mobility restrictions, face-to-face interviewing was not possible in some of these economies in 2021. Phone-based surveys were therefore conducted in 67 economies that had been surveyed face-to-face in 2017. These 67 economies were selected for inclusion based on population size, phone penetration rate, COVID-19 infection rates, and the feasibility of executing phone-based methods where Gallup would otherwise conduct face-to-face data collection, while complying with all government-issued guidance throughout the interviewing process. Gallup takes both mobile phone and landline ownership into consideration. According to Gallup World Poll 2019 data, when face-to-face surveys were last carried out in these economies, at least 80 percent of adults in almost all of them reported mobile phone ownership. All samples are probability-based and nationally representative of the resident adult population. Phone surveys were not a viable option in 17 economies that had been part of previous Global Findex surveys, however, because of low mobile phone ownership and surveying restrictions. Data for these economies will be collected in 2022 and released in 2023.

    In economies where face-to-face surveys are conducted, the first stage of sampling is the identification of primary sampling units. These units are stratified by population size, geography, or both, and clustering is achieved through one or more stages of sampling. Where population information is available, sample selection is based on probabilities proportional to population size; otherwise, simple random sampling is used. Random route procedures are used to select sampled households. Unless an outright refusal occurs, interviewers make up to three attempts to survey the sampled household. To increase the probability of contact and completion, attempts are made at different times of the day and, where possible, on different days. If an interview cannot be obtained at the initial sampled household, a simple substitution method is used. Respondents are randomly selected within the selected households. Each eligible household member is listed, and the hand-held survey device randomly selects the household member to be interviewed. For paper surveys, the Kish grid method is used to select the respondent. In economies where cultural restrictions dictate gender matching, respondents are randomly selected from among all eligible adults of the interviewer's gender.

    In traditionally phone-based economies, respondent selection follows the same procedure as in previous years, using random digit dialing or a nationally representative list of phone numbers. In most economies where mobile phone and landline penetration is high, a dual sampling frame is used.

    The same respondent selection procedure is applied to the new phone-based economies. Dual-frame (landline and mobile phone) random digit dialing is used where landline presence and use are 20 percent or higher based on historical Gallup estimates. Mobile phone random digit dialing is used in economies with limited to no landline presence (less than 20 percent).

    For landline respondents in economies where mobile phone or landline penetration is 80 percent or higher, random selection of respondents is achieved by using either the latest birthday or household enumeration method. For mobile phone respondents in these economies or in economies where mobile phone or landline penetration is less than 80 percent, no further selection is performed. At least three attempts are made to reach a person in each household, spread over different days and times of day.

    Sample size for Malta is 1000.

    Mode of data collection

    Landline and mobile telephone

    Research instrument

    Questionnaires are available on the website.

    Sampling error estimates

    Estimates of standard errors (which account for sampling error) vary by country and indicator. For country-specific margins of error, please refer to the Methodology section and corresponding table in Demirgüç-Kunt, Asli, Leora Klapper, Dorothe Singer, Saniya Ansar. 2022. The Global Findex Database 2021: Financial Inclusion, Digital Payments, and Resilience in the Age of COVID-19. Washington, DC: World Bank.

  14. Open Source Data Labeling Tool Report

    • marketresearchforecast.com
    doc, pdf, ppt
    Updated Mar 7, 2025
    + more versions
    Market Research Forecast (2025). Open Source Data Labeling Tool Report [Dataset]. https://www.marketresearchforecast.com/reports/open-source-data-labeling-tool-28519
    Explore at:
    ppt, doc, pdf (available download formats)
    Dataset updated
    Mar 7, 2025
    Dataset authored and provided by
    Market Research Forecast
    License

    https://www.marketresearchforecast.com/privacy-policy

    Time period covered
    2025 - 2033
    Area covered
    Global
    Variables measured
    Market Size
    Description

    The open-source data labeling tool market is experiencing robust growth, driven by the increasing demand for high-quality training data in the burgeoning artificial intelligence (AI) and machine learning (ML) sectors. The market's expansion is fueled by several key factors. Firstly, the rising adoption of AI across various industries, including healthcare, automotive, and finance, necessitates large volumes of accurately labeled data. Secondly, open-source tools offer a cost-effective alternative to proprietary solutions, making them attractive to startups and smaller companies with limited budgets. Thirdly, the collaborative nature of open-source development fosters continuous improvement and innovation, leading to more sophisticated and user-friendly tools. While the cloud-based segment currently dominates due to scalability and accessibility, on-premise solutions maintain a significant share, especially among organizations with stringent data security and privacy requirements. The geographical distribution reveals strong growth in North America and Europe, driven by established tech ecosystems and early adoption of AI technologies. However, the Asia-Pacific region is expected to witness significant growth in the coming years, fueled by increasing digitalization and government initiatives promoting AI development. The market faces some challenges, including the need for skilled data labelers and the potential for inconsistencies in data quality across different open-source tools. Nevertheless, ongoing developments in automation and standardization are expected to mitigate these concerns. The forecast period of 2025-2033 suggests a continued upward trajectory for the open-source data labeling tool market. 
Assuming a conservative CAGR of 15% (a reasonable estimate given the rapid advancements in AI and the increasing need for labeled data), and a 2025 market size of $500 million (a plausible figure considering the significant investments in the broader AI market), the market is projected to reach approximately $1.8 billion by 2033. This growth will be further shaped by the ongoing development of new features, improved user interfaces, and the integration of advanced techniques such as active learning and semi-supervised learning within open-source tools. The competitive landscape is dynamic, with both established players and emerging startups contributing to the innovation and expansion of this crucial segment of the AI ecosystem. Companies are focusing on improving the accuracy, efficiency, and accessibility of their tools to cater to a growing and diverse user base.

  15. Hourly wind speed in miles per hour and three-digit data-source flag...

    • catalog.data.gov
    • data.usgs.gov
    • +3more
    Updated Jul 6, 2024
    + more versions
    U.S. Geological Survey (2024). Hourly wind speed in miles per hour and three-digit data-source flag associated with the data, January 1, 1948 - September 30, 2015 [Dataset]. https://catalog.data.gov/dataset/hourly-wind-speed-in-miles-per-hour-and-three-digit-data-source-flag-associated-with-th-30
    Explore at:
    Dataset updated
    Jul 6, 2024
    Dataset provided by
    U.S. Geological Survey
    Description

    The text file "Wind speed.txt" contains hourly data and the associated data-source flag from January 1, 1948, to September 30, 2015. The primary source of the data is the Argonne National Laboratory, Illinois. The first four columns give the year, month, day, and hour of the observation. Column 5 is the wind speed in miles per hour. Column 6 is the three-digit data-source flag, which documents the wind-speed data processing: it indicates whether the data are original or missing, the method that was used to fill missing periods, and any other transformations of the data. The data-source flags consist of a three-digit sequence of the form "xyz" that describes the origin and transformations of the data values. Users of the data should consult Over and others (2010) for detailed documentation of this hourly data-source flag series. Reference Cited: Over, T.M., Price, T.H., and Ishii, A.L., 2010, Development and analysis of a meteorological database, Argonne National Laboratory, Illinois: U.S. Geological Survey Open File Report 2010-1220, 67 p., http://pubs.usgs.gov/of/2010/1220/.
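The six-column layout described above can be read with a short parser. The whitespace delimiter and the field names here are assumptions based on this description, not the file's documented specification:

```python
def parse_wind_line(line: str) -> dict:
    """Parse one record of the hourly series: year, month, day, hour,
    wind speed in miles per hour, and the three-digit "xyz" data-source
    flag documented in Over and others (2010)."""
    year, month, day, hour, speed, flag = line.split()
    return {
        "year": int(year),
        "month": int(month),
        "day": int(day),
        "hour": int(hour),
        "speed_mph": float(speed),
        "flag": flag,  # kept as a string so leading zeros survive
    }


# Hypothetical record in the described column order:
record = parse_wind_line("1948 1 1 1 10.4 100")
```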

  16. Seair Exim Solutions

    • seair.co.in
    + more versions
    Seair Exim, Seair Exim Solutions [Dataset]. https://www.seair.co.in
    Explore at:
    .bin, .xml, .csv, .xls (available download formats)
    Dataset provided by
    Seair Info Solutions PVT LTD
    Authors
    Seair Exim
    Area covered
    United States
    Description

    Subscribers can find out export and import data of 23 countries by HS code or product’s name. This demo is helpful for market analysis.

  17. ODM Data Analysis—A tool for the automatic validation, monitoring and...

    • plos.figshare.com
    • datasetcatalog.nlm.nih.gov
    mp4
    Updated May 31, 2023
    Tobias Johannes Brix; Philipp Bruland; Saad Sarfraz; Jan Ernsting; Philipp Neuhaus; Michael Storck; Justin Doods; Sonja Ständer; Martin Dugas (2023). ODM Data Analysis—A tool for the automatic validation, monitoring and generation of generic descriptive statistics of patient data [Dataset]. http://doi.org/10.1371/journal.pone.0199242
    Explore at:
    mp4 (available download formats)
    Dataset updated
    May 31, 2023
    Dataset provided by
    PLOS ONE
    Authors
    Tobias Johannes Brix; Philipp Bruland; Saad Sarfraz; Jan Ernsting; Philipp Neuhaus; Michael Storck; Justin Doods; Sonja Ständer; Martin Dugas
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Introduction: A required step in presenting the results of clinical studies is the declaration of participants' demographic and baseline characteristics, as required by FDAAA 801. The common workflow for this task is to export the clinical data from the electronic data capture system in use and import it into statistical software such as SAS or IBM SPSS. This software requires trained users, who have to implement the analysis individually for each item. These expenditures may become an obstacle for small studies. The objective of this work is to design, implement, and evaluate an open-source application, called ODM Data Analysis, for the semi-automatic analysis of clinical study data.

    Methods: The system requires clinical data in the CDISC Operational Data Model format. After a file is uploaded, its syntax and the data-type conformity of the collected data are validated. The completeness of the study data is determined, and basic statistics, including illustrative charts for each item, are generated. Datasets from four clinical studies have been used to evaluate the application's performance and functionality.

    Results: The system is implemented as an open-source web application (available at https://odmanalysis.uni-muenster.de) and is also provided as a Docker image, which enables easy distribution and installation on local systems. Study data is only stored in the application while the calculations are performed, which is compliant with data-protection requirements. Analysis times are below half an hour, even for larger studies with over 6,000 subjects.

    Discussion: Medical experts have confirmed the usefulness of this application for gaining an overview of their collected study data for monitoring purposes and for generating descriptive statistics without further user interaction. The semi-automatic analysis has its limitations and cannot replace the complex analysis of statisticians, but it can be used as a starting point for their examination and reporting.

  18. Annual salary of U.S. pediatricians 2018, by data source

    • statista.com
    Updated Jul 10, 2025
    Statista (2025). Annual salary of U.S. pediatricians 2018, by data source [Dataset]. https://www.statista.com/statistics/963205/pediatric-compensation-us-by-source/
    Explore at:
    Dataset updated
    Jul 10, 2025
    Dataset authored and provided by
    Statista (http://statista.com/)
    Area covered
    United States
    Description

    This statistic depicts the annual compensation among pediatricians in the U.S. according to different sources (organizations), as of 2018. According to Integrated Healthcare Strategies, annual salaries for pediatricians averaged some *** thousand U.S. dollars.

  19. North America Data | USA, Canada | Decision Makers, Owner, Founder, CEO |...

    • datarade.ai
    Updated Aug 31, 2024
    Exellius Systems (2024). North America Data | USA, Canada | Decision Makers, Owner, Founder, CEO | 99M+ Contacts | 100% Work Verified Emails, Direct Dials | 16+ Attributes [Dataset]. https://datarade.ai/data-products/north-south-america-data-usa-canada-brazil-decision-m-exellius-systems
    Explore at:
    .bin, .json, .xml, .csv, .xls, .sql, .txt (available download formats)
    Dataset updated
    Aug 31, 2024
    Dataset authored and provided by
    Exellius Systems
    Area covered
    Canada, Americas, United States, Brazil
    Description

    Welcome to the North America Data: Your Gateway to Strategic Connections Across the Americas

    In today’s fast-evolving business landscape, having the right data at your fingertips is crucial for success. North America Data offers an unmatched resource designed to empower businesses by providing access to key decision-makers across the vast and diverse markets of North and South America. Our meticulously curated database serves as the cornerstone of your strategic outreach efforts, enabling you to connect with the right people in the right places at the right time.

    What Makes Our Data Unique?

    Depth and Precision
    Our database is more than just a collection of names and contact details—it’s a gateway to deep, actionable insights about the people who shape industries. We go beyond basic data points to offer a nuanced understanding of top executives, owners, founders, and influencers. Whether you're looking to connect with a CEO of a Fortune 500 company or a founder of a dynamic startup, our data provides the precision you need to identify and engage the most relevant decision-makers.

    Our Data Sourcing Excellence

    Reliability and Integrity
    Our data is sourced from a variety of authoritative channels, ensuring that every entry is both reliable and relevant. We draw from respected business directories, publicly available records, and proprietary research methodologies. Each piece of data undergoes a rigorous vetting process, meticulously checked for accuracy, to ensure that you can trust the integrity of the information you receive.

    Primary Use-Cases and Industry Verticals

    Versatility Across Sectors
    The North America Data is a versatile tool designed to meet the needs of a wide range of industries. Whether you're in finance, manufacturing, technology, healthcare, retail, hospitality, energy, transportation, or any other sector, our database provides the critical insights necessary to drive your business forward. Use our data to:

    • Expand Market Presence: Identify and connect with key players in new markets.
    • Forge Strategic Partnerships: Reach out to potential collaborators and investors.
    • Conduct Market Research: Gain a deeper understanding of industry trends and dynamics.

      Seamless Integration with Broader Data Solutions

    Comprehensive Business Intelligence
    Our North America Data is not an isolated resource; it’s a vital component of our comprehensive business intelligence suite. When combined with our global datasets, it provides a holistic view of the global business landscape. This integrated approach enables businesses to make well-informed decisions, tapping into insights that span across continents and sectors.

    Geographical Coverage Across the Americas

    Pan-American Reach
    Our database covers the entirety of North and South America, offering a robust range of contacts across numerous countries, including but not limited to:

    • United States
    • Canada
    • Mexico
    • Brazil
    • Argentina
    • Colombia
    • Chile
    • Peru
    • Venezuela
    • And many more

      Extensive Industry Coverage

    Tailored to Your Sector
    We cater to a vast array of industries, ensuring that no matter your focus, our database has the coverage you need. Key industries include:

    • Finance: Access to high-level contacts in banking, investment, and financial services.
    • Manufacturing: Connect with decision-makers in production, logistics, and supply chain management.
    • Technology: Engage with leaders in software, hardware, and IT services.
    • Healthcare: Reach out to executives in hospitals, pharmaceuticals, and biotech.
    • Retail: Identify key players in e-commerce, brick-and-mortar stores, and consumer goods.
    • Hospitality: Connect with industry leaders in hotels, travel, and leisure.
    • Energy: Tap into contacts within oil, gas, renewable energy, and utilities.
    • Transportation: Engage with decision-makers in logistics, shipping, and infrastructure.

      Comprehensive Employee Size and Revenue Data

    Insights Across Business Sizes
    Our database doesn’t just provide contact information—it also includes detailed data on employee size and revenue. Whether you’re targeting small startups, mid-sized enterprises, or large multinational corporations, our database has the depth to accommodate your needs. We offer insights into:

    • Employee Size: From small businesses with 1-10 employees to large enterprises with 10,000+ employees.
    • Revenue Size: Covering companies ranging from early-stage startups to global giants in the Fortune 500.

      Empower Your Business with Unmatched Data Access

    Unlock Opportunities Across the Americas
    With the North America Data, you gain access to a powerful resource designed to unlock endless opportunities for growth and success. Whether you’re looking to break into new markets, establish strong business relationships, or enhance your market intelligence, our database equips you with the tools you need to excel.

    Explore the North America Data today and embark on a journey toward business excellence in the dynamic ma...

  20. Annual salary of U.S. neurologists 2018, by data source

    • statista.com
    Updated Jul 7, 2025
    Statista (2025). Annual salary of U.S. neurologists 2018, by data source [Dataset]. https://www.statista.com/statistics/963182/neurology-compensation-us-by-source/
    Explore at:
    Dataset updated
    Jul 7, 2025
    Dataset authored and provided by
    Statista (http://statista.com/)
    Area covered
    United States
    Description

    This statistic depicts the annual compensation among neurologists in the U.S. according to different sources (organizations), as of 2018. According to Integrated Healthcare Strategies, annual salaries for neurologists averaged some *** thousand U.S. dollars.
