100+ datasets found
  1. Health Insurance Data

    • www-acc.healthinformationportal.eu
    • healthinformationportal.eu
    html
    Updated Sep 13, 2022
    Cite
    Zavod za zdravstveno zavarovanje Slovenije (2022). Health Insurance Data [Dataset]. https://www-acc.healthinformationportal.eu/services/find-data?page=29
    Explore at:
    Available download formats: html
    Dataset updated
    Sep 13, 2022
    Dataset authored and provided by
    Zavod za zdravstveno zavarovanje Slovenije
    Variables measured
    sex, title, topics, acronym, country, funding, language, data_owners, description, contact_name, and 13 more
    Measurement technique
    Administrative data
    Dataset funded by
    Health Insurance Fund Slovenia
    Description

    The website shows data on the plan and implementation of the health services program by individual health activities (VZD):

    • hospital medical activity,
    • general outpatient medical activity,
    • specialist outpatient medical activity,
    • dental practice,
    • other health activities,
    • activity of accommodation facilities for patient care,
    • social care without accommodation for the elderly and disabled,
    • production of pharmaceutical preparations,
    • retail trade in specialized stores with pharmaceutical products,
    • compulsory social security activity.

    Within each activity, the data for each period are shown separately for individual contractors and in aggregate, by regional units of ZZZS, and at the level of Slovenia as a whole.

    Data on the plan and implementation of the health services program are expressed in accounting units (e.g. points, quotients, weights, groups of comparable cases, non-medical care days, care, days...), which are used to calculate the work performed in each activity.

    The publication of information about the plan and implementation of the program on the ZZZS website is primarily intended for the professional public. The displayed program plan for an individual contractor refers to the defined billing period (for example, the plan for the period 1-3 201X is calculated as 3/12 of the annual plan agreed in the contract).
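
    The 3/12 pro-rating rule in the example above can be sketched as follows; the function name and the annual figure are illustrative, not from ZZZS:

```python
# Pro-rate an annual contract plan to a billing period: as described above,
# the plan for months 1-3 is 3/12 of the annual plan agreed in the contract.
def period_plan(annual_plan: float, months_in_period: int) -> float:
    """Return the share of the annual plan attributable to a billing period."""
    return annual_plan * months_in_period / 12

# Example: a contractor with an (illustrative) annual plan of 120,000 points
print(period_plan(120_000, 3))  # 30000.0
```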

    The data on the implementation of the program represent the services that an individual provider delivered to insured persons during the accounting period. These data do not cover persons insured in accordance with the European legal order and bilateral social security agreements. Data for individual contractors are classified by regional unit based on the contractor's headquarters. The content of the "number of cases" data is defined in the Instruction on recording and accounting for medical services and issued materials.

    The Institute reserves the right to change the data if irregularities are discovered after publication on the Internet.

  2. Coresignal | Web Data | Company Data | Global / 71M+ Records / Largest Professional Network / Updated Daily

    • datarade.ai
    .json, .csv
    Updated Feb 21, 2024
    Cite
    Coresignal (2024). Coresignal | Web Data | Company Data | Global / 71M+ Records / Largest Professional Network / Updated Daily [Dataset]. https://datarade.ai/data-products/coresignal-web-data-company-data-global-69m-records-coresignal
    Explore at:
    Available download formats: .json, .csv
    Dataset updated
    Feb 21, 2024
    Dataset authored and provided by
    Coresignal
    Area covered
    Finland, Trinidad and Tobago, New Zealand, State of, Yemen, Sweden, Hong Kong, United Kingdom, Nauru, Libya
    Description

    Our Web Data dataset includes such data points as company name, location, headcount, industry, and size, among others. It offers extensive fresh and historical data, including even companies that operate in stealth mode.

    For lead generation

    With millions of companies worldwide, Web Company Database helps you filter potential clients based on custom criteria and speed up the conversion process.

    Use cases

    1. Filter potential clients according to location, size, and other criteria
    2. Enrich your existing database
    3. Improve conversion rates
    4. Use predictive models to identify potential leads
    5. Group your leads in segments for more accurate targeting

    For market and business analysis

    Our Web Company Data provides information about millions of companies, allowing you to find your competitors and see their weaknesses and strengths.

    Use cases

    1. Pinpoint your competitors
    2. Learn about your competitors' size, headcount, and revenue
    3. Prepare a data-driven plan for the next quarter

    For Investors

    We recommend B2B Web Data for investors to discover and evaluate businesses with the highest potential.

    Gain strategic business insights, enhance decision-making, and maintain algorithms that signal investment opportunities with Coresignal’s global B2B Web Dataset.

    Use cases

    1. Screen startups and industries showing early signs of growth
    2. Identify companies hungry for the next investment
    3. Check if a startup is about to reach the next maturity phase
    4. Identify and predict a startup's potential at the founding moment
    5. Choose companies that fit you in terms of size and headcount

    For sales prospecting

    B2B Web Database saves time your employees would otherwise use to search for potential clients manually.

    Use cases

    1. Make a short list of the top prospects
    2. Define which companies are large or small enough to buy your product
    3. Based on the revenue, determine which companies are ready to convert
    4. Sort the companies by their distance from your warehouse to identify where selling would no longer yield satisfactory profit
  3. Near Real-time Data Access Portal

    • find.data.gov.scot
    • dtechtive.com
    Updated Sep 20, 2023
    Cite
    Scottish and Southern Electricity Networks (2023). Near Real-time Data Access Portal [Dataset]. https://find.data.gov.scot/datasets/42715
    Explore at:
    Dataset updated
    Sep 20, 2023
    Dataset provided by
    Scottish and Southern Electricity Networks
    Area covered
    Scotland
    Description

    The Near Real-time Data Access (NeRDA) Portal makes near real-time data available to our stakeholders and interested parties. We're helping the transition to a smart, flexible system that connects large-scale energy generation right down to the solar panels and electric vehicles installed in homes, businesses and communities across the country.

    In line with our Open Networks approach, the NeRDA portal is live and makes available power-flow information from our EHV, HV, and LV networks, taking in data from a number of sources, including SCADA PowerOn, our installed low-voltage monitoring equipment, our load model forecasting tool, our connectivity model, and our Long-Term Development Statement (LTDS).

    Making near real-time data accessible from DNOs facilitates economic and efficient development and operation in the transition to a low-carbon economy. NeRDA is a key enabler for the delivery of Net Zero: by opening network data, it creates opportunities for flexibility markets, helping to identify the best locations to invest in flexible resources and to connect faster. You can access this information via our near real-time Dashboard and download portions of data, or connect to our API and receive an ongoing stream of near real-time data.
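
    A consumer of such a stream might look like the sketch below; the payload shape and field names are assumptions for illustration, not the documented NeRDA API schema:

```python
# Hypothetical sketch: keep the most recent power-flow reading per feeder
# from a near-real-time feed. Field names are assumed for illustration.
from typing import Dict, List

def latest_reading_per_feeder(readings: List[dict]) -> Dict[str, float]:
    """Return the latest power-flow reading (kW) for each feeder."""
    latest: Dict[str, dict] = {}
    for r in readings:
        feeder = r["feeder_id"]
        if feeder not in latest or r["timestamp"] > latest[feeder]["timestamp"]:
            latest[feeder] = r
    return {f: r["power_kw"] for f, r in latest.items()}

sample = [
    {"feeder_id": "LV-001", "timestamp": "2023-09-20T10:00Z", "power_kw": 42.0},
    {"feeder_id": "LV-001", "timestamp": "2023-09-20T10:05Z", "power_kw": 45.5},
    {"feeder_id": "LV-002", "timestamp": "2023-09-20T10:05Z", "power_kw": 13.2},
]
print(latest_reading_per_feeder(sample))  # {'LV-001': 45.5, 'LV-002': 13.2}
```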

  4. Data from: Finding Relationships

    • johnsnowlabs.com
    csv
    Updated Jan 20, 2021
    Cite
    John Snow Labs (2021). Finding Relationships [Dataset]. https://www.johnsnowlabs.com/marketplace/finding-relationships/
    Explore at:
    Available download formats: csv
    Dataset updated
    Jan 20, 2021
    Dataset authored and provided by
    John Snow Labs
    Area covered
    N/A
    Description

    This dataset provides information on relationships between concepts or atoms known to the Metathesaurus for the semantic type "Finding". For asymmetrical relationships, the dataset contains one row for each direction of the relationship.

  5. SPARKESX: Single-dish PARKES data sets for finding the uneXpected - Part 3

    • data.csiro.au
    • researchdata.edu.au
    Updated Aug 30, 2022
    + more versions
    Cite
    SukYee Yong; George Hobbs; Minh Huynh; Vivien Rolland; Lars Petersson; Ray Norris; Shi Dai; Rui Luo; Andrew Zic (2022). SPARKESX: Single-dish PARKES data sets for finding the uneXpected - Part 3 [Dataset]. http://doi.org/10.25919/4g8p-gd74
    Explore at:
    Dataset updated
    Aug 30, 2022
    Dataset provided by
    CSIRO (http://www.csiro.au/)
    Authors
    SukYee Yong; George Hobbs; Minh Huynh; Vivien Rolland; Lars Petersson; Ray Norris; Shi Dai; Rui Luo; Andrew Zic
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Dataset funded by
    International Centre for Radio Astronomy Research
    Macquarie University
    University of Western Sydney
    CSIRO (http://www.csiro.au/)
    Description

    We present the Single-dish PARKES data sets for finding the uneXpected (SPARKESX), a compilation of real and simulated high-time resolution observations. SPARKESX comprises three mock surveys from the Parkes ''Murriyang'' radio telescope. A broad selection of simulated and injected expected signals (such as pulsars, fast radio bursts), poorly known signals (such as the features expected from flare stars) and unknown unknowns are generated for each survey. We provide a baseline by presenting how successful a typical pipeline based on the standard pulsar search software, PRESTO, is at finding the injected signals.

    The dataset is designed to aid in the development of new search algorithms, including image processing, machine learning, and deep learning. The raw data, ground truth labels, and baseline are provided.

    The collection is split into 4 parts; see collections in related links.

    • Part 1 - Ground truth labels, injected images, multibeam dataset
    • Part 2 - PAF dataset
    • Part 3 - PAF dataset
    • Part 4 - PAF dataset

    Publication: SPARKESX: Single-dish PARKES data sets for finding the uneXpected - A data challenge (Yong et al. 2022, submitted)

    Lineage: The injected signals and simulated data were created using CSIRO's open-source simulateSearch software. The real data from the multibeam survey were acquired from the CSIRO Data Access Portal.

  6. Find Environmental Data: Mapping

    • fed.dcceew.gov.au
    Updated Mar 27, 2023
    Cite
    Dept of Climate Change, Energy, the Environment & Water (2023). Find Environmental Data: Mapping [Dataset]. https://fed.dcceew.gov.au/datasets/find-environmental-data-mapping
    Explore at:
    Dataset updated
    Mar 27, 2023
    Dataset authored and provided by
    Dept of Climate Change, Energy, the Environment & Water
    Description

    This Guide is designed to assist you with adding and viewing data on a map within the Department of Climate Change, Energy, the Environment and Water's Find Environmental Data (FED) geospatial data catalogue. This Guide assumes that you are familiar with locating data within FED. For further assistance, see the Finding Data Guide.

  7. Saudi Arabia Phone Number Data

    • listtodata.com
    .csv, .xls, .txt
    Updated Jul 17, 2025
    Cite
    List to Data (2025). Saudi Arabia Phone Number Data [Dataset]. https://listtodata.com/saudi-arabia-number-data
    Explore at:
    Available download formats: .csv, .xls, .txt
    Dataset updated
    Jul 17, 2025
    Dataset authored and provided by
    List to Data
    License

    CC0 1.0 Universal Public Domain Dedication: https://creativecommons.org/publicdomain/zero/1.0/
    License information was derived automatically

    Time period covered
    Jan 1, 2025 - Dec 31, 2025
    Area covered
    Saudi Arabia
    Variables measured
    phone number, email address, full name, address, city, state, gender, age, income, IP address
    Description

    Saudi Arabia phone number data is another important collection of phone numbers. These numbers come from trusted sources, and we carefully check every number, so you only get real numbers from reliable places. Furthermore, this data includes source URLs that show where the numbers came from, which adds transparency. If you have questions, support is available 24/7. Moreover, the phone data has an opt-in feature; with customer support always on hand, you can feel confident using this data.

    Saudi Arabia number data is a special collection of phone numbers from people living in Saudi Arabia. Each number in this database is verified for accuracy, and any invalid number is replaced with a valid one at no extra cost. The data comes from people who have given permission; this respect for privacy makes it a great tool for businesses. At List to Data, we help you find important phone numbers easily and quickly.

  8. Data from: Handling Reference Questions

    • search.dataone.org
    Updated Dec 28, 2023
    Cite
    Vince Gray (2023). Handling Reference Questions [Dataset]. http://doi.org/10.5683/SP3/M7HSLD
    Explore at:
    Dataset updated
    Dec 28, 2023
    Dataset provided by
    Borealis
    Authors
    Vince Gray
    Description

    This presentation shows how to respond to the reference question "I need data": how to help the patron and find out what they want, where to find the data, and what to do if you cannot find it.

  9. DISCOVER-AQ Colorado Deployment NREL-Golden Ground Site Data - Dataset -...

    • data.nasa.gov
    • data.staging.idas-ds1.appdat.jsc.nasa.gov
    Updated Apr 1, 2025
    + more versions
    Cite
    nasa.gov (2025). DISCOVER-AQ Colorado Deployment NREL-Golden Ground Site Data - Dataset - NASA Open Data Portal [Dataset]. https://data.nasa.gov/dataset/discover-aq-colorado-deployment-nrel-golden-ground-site-data-e3f62
    Explore at:
    Dataset updated
    Apr 1, 2025
    Dataset provided by
    NASA (http://nasa.gov/)
    Area covered
    Golden, Colorado
    Description

    DISCOVERAQ_Colorado_Ground_NREL-Golden_Data contains data collected at the NREL-Golden ground site during the Colorado (Denver) deployment of NASA's DISCOVER-AQ field study. This data product contains data for only the Colorado deployment, and data collection is complete.

    Understanding the factors that contribute to near-surface pollution is difficult using only satellite-based observations. The incorporation of surface-level measurements from aircraft and ground-based platforms provides the crucial information necessary to validate and expand upon the use of satellites in understanding near-surface pollution. Deriving Information on Surface conditions from Column and Vertically Resolved Observations Relevant to Air Quality (DISCOVER-AQ) was a four-year campaign conducted in collaboration between NASA Langley Research Center, NASA Goddard Space Flight Center, NASA Ames Research Center, and multiple universities to improve the use of satellites to monitor air quality for public health and environmental benefit. Through targeted airborne and ground-based observations, DISCOVER-AQ enabled more effective use of current and future satellites to diagnose ground-level conditions influencing air quality.

    DISCOVER-AQ employed two NASA aircraft, the P-3B and King Air, with the P-3B completing in-situ spiral profiling of the atmosphere (aerosol properties, meteorological variables, and trace gas species). The King Air conducted both passive and active remote sensing of the atmospheric column extending below the aircraft to the surface. Data from an existing network of surface air quality monitors, AERONET sun photometers, Pandora UV/vis spectrometers, and model simulations were also collected. Further, DISCOVER-AQ employed many surface monitoring sites, with measurements being made on the ground in conjunction with the aircraft. The B200 and P-3B conducted flights in Baltimore-Washington, D.C. in 2011, Houston, TX in 2013, San Joaquin Valley, CA in 2013, and Denver, CO in 2014. These regions were targeted due to being in violation of the National Ambient Air Quality Standards (NAAQS).

    The first objective of DISCOVER-AQ was to determine and investigate correlations between surface measurements and satellite column observations for the trace gases ozone (O3), nitrogen dioxide (NO2), and formaldehyde (CH2O) to understand how satellite column observations can diagnose surface conditions. DISCOVER-AQ also had the objective of using surface-level measurements to understand how satellites measure diurnal variability and what factors control it. Lastly, DISCOVER-AQ aimed to explore horizontal scales of variability, such as regions with steep gradients and urban plumes.

  10. Accidents at work

    • www-acc.healthinformationportal.eu
    • healthinformationportal.eu
    html
    Updated Apr 28, 2022
    Cite
    Nacionalni Inštitut za Javno Zdravje (NIJZ) (2022). Accidents at work [Dataset]. https://www-acc.healthinformationportal.eu/services/find-data?page=29
    Explore at:
    Available download formats: html
    Dataset updated
    Apr 28, 2022
    Dataset authored and provided by
    Nacionalni Inštitut za Javno Zdravje (NIJZ)
    License

    MIT License: https://opensource.org/licenses/MIT
    License information was derived automatically

    Variables measured
    sex, title, topics, acronym, country, funding, language, data_owners, description, age_range_to, and 19 more
    Measurement technique
    Administrative data
    Dataset funded by
    State budget
    Description

    Monitoring of injuries that occurred to employees and the self-employed while working, on business trips, and (in special cases) also on the way to and from work.

  11. DISCOVER-AQ Maryland Deployment Edgewood Ground Site Data - Dataset - NASA...

    • data.nasa.gov
    • data.staging.idas-ds1.appdat.jsc.nasa.gov
    Updated Apr 1, 2025
    + more versions
    Cite
    nasa.gov (2025). DISCOVER-AQ Maryland Deployment Edgewood Ground Site Data - Dataset - NASA Open Data Portal [Dataset]. https://data.nasa.gov/dataset/discover-aq-maryland-deployment-edgewood-ground-site-data-02d32
    Explore at:
    Dataset updated
    Apr 1, 2025
    Dataset provided by
    NASA (http://nasa.gov/)
    Area covered
    Edgewood, Maryland
    Description

    DISCOVERAQ_Maryland_Ground_Edgewood_Data contains data collected at the Edgewood ground site during the Maryland (Baltimore-Washington) deployment of NASA's DISCOVER-AQ field study. This data product contains data for only the Maryland deployment, and data collection is complete.

    Understanding the factors that contribute to near-surface pollution is difficult using only satellite-based observations. The incorporation of surface-level measurements from aircraft and ground-based platforms provides the crucial information necessary to validate and expand upon the use of satellites in understanding near-surface pollution. Deriving Information on Surface conditions from Column and Vertically Resolved Observations Relevant to Air Quality (DISCOVER-AQ) was a four-year campaign conducted in collaboration between NASA Langley Research Center, NASA Goddard Space Flight Center, NASA Ames Research Center, and multiple universities to improve the use of satellites to monitor air quality for public health and environmental benefit. Through targeted airborne and ground-based observations, DISCOVER-AQ enabled more effective use of current and future satellites to diagnose ground-level conditions influencing air quality.

    DISCOVER-AQ employed two NASA aircraft, the P-3B and King Air, with the P-3B completing in-situ spiral profiling of the atmosphere (aerosol properties, meteorological variables, and trace gas species). The King Air conducted both passive and active remote sensing of the atmospheric column extending below the aircraft to the surface. Data from an existing network of surface air quality monitors, AERONET sun photometers, Pandora UV/vis spectrometers, and model simulations were also collected. Further, DISCOVER-AQ employed many surface monitoring sites, with measurements being made on the ground in conjunction with the aircraft. The B200 and P-3B conducted flights in Baltimore-Washington, D.C. in 2011, Houston, TX in 2013, San Joaquin Valley, CA in 2013, and Denver, CO in 2014. These regions were targeted due to being in violation of the National Ambient Air Quality Standards (NAAQS).

    The first objective of DISCOVER-AQ was to determine and investigate correlations between surface measurements and satellite column observations for the trace gases ozone (O3), nitrogen dioxide (NO2), and formaldehyde (CH2O) to understand how satellite column observations can diagnose surface conditions. DISCOVER-AQ also had the objective of using surface-level measurements to understand how satellites measure diurnal variability and what factors control it. Lastly, DISCOVER-AQ aimed to explore horizontal scales of variability, such as regions with steep gradients and urban plumes.

  12. A study of the impact of data sharing on article citations using journal...

    • plos.figshare.com
    • dataverse.harvard.edu
    • +1more
    docx
    Updated Jun 1, 2023
    Cite
    Garret Christensen; Allan Dafoe; Edward Miguel; Don A. Moore; Andrew K. Rose (2023). A study of the impact of data sharing on article citations using journal policies as a natural experiment [Dataset]. http://doi.org/10.1371/journal.pone.0225883
    Explore at:
    docxAvailable download formats
    Dataset updated
    Jun 1, 2023
    Dataset provided by
    PLOS (http://plos.org/)
    Authors
    Garret Christensen; Allan Dafoe; Edward Miguel; Don A. Moore; Andrew K. Rose
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    This study estimates the effect of data sharing on the citations of academic articles, using journal policies as a natural experiment. We begin by examining 17 high-impact journals that have adopted the requirement that data from published articles be publicly posted. We match these 17 journals to 13 journals without policy changes and find that empirical articles published just before their change in editorial policy have citation rates with no statistically significant difference from those published shortly after the shift. We then ask whether this null result stems from poor compliance with data sharing policies, and use the data sharing policy changes as instrumental variables to examine more closely two leading journals in economics and political science with relatively strong enforcement of new data policies. We find that articles that make their data available receive 97 additional citations (estimate standard error of 34). We conclude that: a) authors who share data may be rewarded eventually with additional scholarly citations, and b) data-posting policies alone do not increase the impact of articles published in a journal unless those policies are enforced.
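
    The instrumental-variables step described above can be sketched with two-stage least squares (2SLS) on synthetic data; the data here are noiseless so the estimator recovers the true effect exactly, and all names and numbers are illustrative, not from the study:

```python
# Minimal 2SLS sketch: instrument z (journal policy change) shifts the
# endogenous regressor x (data sharing), which affects the outcome y (citations).
import numpy as np

rng = np.random.default_rng(0)
z = rng.normal(size=(200, 1))  # instrument: exposure to the policy change
x = 2.0 * z                    # first stage: policy shifts data sharing
y = 3.0 * x                    # true causal effect of sharing on citations

# Stage 1: regress x on z and keep the fitted values
beta_zx, *_ = np.linalg.lstsq(z, x, rcond=None)
x_hat = z @ beta_zx

# Stage 2: regress y on the fitted values x_hat
beta_iv, *_ = np.linalg.lstsq(x_hat, y, rcond=None)
print(round(float(beta_iv[0, 0]), 6))  # 3.0
```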

  13. NTD Annual Data View - Service (by Agency)

    • data.transportation.gov
    application/rdfxml +5
    Updated Dec 16, 2024
    Cite
    Federal Transit Administration (2024). NTD Annual Data View - Service (by Agency) [Dataset]. https://data.transportation.gov/Public-Transit/NTD-Annual-Data-View-Service-by-Agency-/6y83-7vuw
    Explore at:
    Available download formats: csv, application/rdfxml, application/rssxml, json, tsv, xml
    Dataset updated
    Dec 16, 2024
    Dataset authored and provided by
    Federal Transit Administration
    License

    https://www.usa.gov/government-works

    Description

    Provides transit agency-wide totals of service data for applicable agencies reporting to the National Transit Database in the 2022 and 2023 report years. This view is based on the "2022 - 2023 NTD Annual Data - Service (by Mode and Time Period)" dataset, which displays the same data at a lower level of aggregation; this view displays the data at a higher level (by agency).
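
    Rolling the mode-level view up to the agency level is a plain group-by aggregation; a sketch with pandas follows (column names are illustrative, not the actual NTD schema):

```python
# Sketch: aggregate mode-level service records up to agency-wide totals.
# Column names are illustrative, not the actual NTD schema.
import pandas as pd

by_mode = pd.DataFrame({
    "agency": ["Metro", "Metro", "CityBus"],
    "mode": ["Bus", "Rail", "Bus"],
    "unlinked_trips": [1_000, 4_000, 500],
})

# Agency-level view: sum the service measure across modes
by_agency = by_mode.groupby("agency", as_index=False)["unlinked_trips"].sum()
# Metro -> 5000, CityBus -> 500
```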

    NTD Data Tables organize and summarize data from the 2022 and 2023 National Transit Database in a manner that is more useful for quick reference and summary analysis. The parent dataset is based on the 2022 and 2023 Service database files.

    In years 2015-2021, you can find this data in the "Service" data table on NTD Program website, at https://transit.dot.gov/ntd/ntd-data.

    In versions of the data tables from before 2014, you can find data on service in the file called "Transit Operating Statistics: Service Supplied and Consumed."

    If you have any other questions about this table, please contact the NTD Help Desk at NTDHelp@dot.gov.

  14. UFG Open Data - Sites - CKAN Ecosystem Catalog

    • catalog.civicdataecosystem.org
    Updated May 14, 2025
    Cite
    (2025). UFG Open Data - Sites - CKAN Ecosystem Catalog [Dataset]. https://catalog.civicdataecosystem.org/dataset/ufg-open-data
    Explore at:
    Dataset updated
    May 14, 2025
    Description

    The UFG Open Data Portal is the tool made available by the Federal University of Goiás to make its data accessible and usable, so that everyone can find and use its public information. The portal values simplicity and organization so that you can easily find the data and information you need. It also aims to promote communication between the academic community and UFG, in order to make the university even better.

    The portal aims to make available any and all types of data within the scope of UFG, including data on teaching, research, outreach, materials, assets, processes, etc. It functions as a large catalog that facilitates the search for and use of data published by the university's Pro-Rectories. Access to information is provided for in the Federal Constitution and the Universal Declaration of Human Rights.

    Open Data is the publication and dissemination of public data and information on the Internet, organized so as to allow its reuse in digital applications developed by society. This gives the community a better understanding of the university and supports access to public services, oversight of public accounts, and participation in the planning and development of public policies.

    On November 18, 2011, the Law on Access to Public Information (Law 12.527/2011) was enacted, regulating access to data and information held by the government. This law is a milestone for the democratization of public information and stipulates, among other technical requirements, that information requested by citizens must follow technological criteria aligned with the "3 laws of open data". Within this context, the UFG Open Data Portal is the tool built by the university to centralize the search for and access to public data and information.

    Access the UFG Open Data Plan. Access the Open Data Publication Process at UFG. Brazilian Open Data Portal: http://dados.gov.br/

  15. Bond Analytics | Financial Data

    • lseg.com
    Updated Nov 25, 2024
    Cite
    LSEG (2024). Bond Analytics | Financial Data [Dataset]. https://www.lseg.com/en/data-analytics/financial-data/analytics/pricing-analytics/bond-analytics
    Explore at:
    Available download formats: csv, json, python, user interface, xml
    Dataset updated
    Nov 25, 2024
    Dataset provided by
    London Stock Exchange Group (http://www.londonstockexchangegroup.com/)
    Authors
    LSEG
    License

    https://www.lseg.com/en/policies/website-disclaimer

    Description

    Get Bond Analytics from LSEG to better analyze government and corporate bonds, preferred shares, inflation-linked bonds and municipal bonds.

  16. Job Offers Web Scraping Search

    • kaggle.com
    Updated Feb 11, 2023
    Cite
    The Devastator (2023). Job Offers Web Scraping Search [Dataset]. https://www.kaggle.com/datasets/thedevastator/job-offers-web-scraping-search
    Explore at:
    Croissant (a format for machine-learning datasets; learn more at mlcommons.org/croissant)
    Dataset updated
    Feb 11, 2023
    Dataset provided by
    Kaggle (http://kaggle.com/)
    Authors
    The Devastator
    License

    https://creativecommons.org/publicdomain/zero/1.0/

    Description

    Job Offers Web Scraping Search

    Targeted Results to Find the Optimal Work Solution

    By [source]

    About this dataset

    This dataset collects job offers from web scraping, filtered according to specific keywords, locations and times. This data gives users rich and precise search capabilities to uncover the best working solution for them. With the information collected, users can explore options that match their personal situation, skill set and preferences in terms of location and schedule. The columns provide detailed information on job titles, employer names, locations and time frames, as well as other necessary parameters, so you can make a smart choice for your next career opportunity.



    How to use the dataset

    This dataset is a great resource for those looking to find an optimal work solution based on keywords, location and time parameters. With this information, users can quickly and easily search through job offers that best fit their needs. Here are some tips on how to use this dataset to its fullest potential:

    • Start by identifying what type of job offer you want to find. The keyword column will help you narrow down your search by allowing you to search for job postings that contain the word or phrase you are looking for.

    • Next, consider where the job is located – the Location column tells you where in the world each posting is from, so make sure it's somewhere that suits your needs!

    • Finally, consider when the position is available. The Time frame column indicates when each posting was made and whether it is a full-time, part-time, or casual/temporary role, so check that it meets your requirements before applying!

    • Additionally, if details such as hours per week or further schedule information matter to you, that information is provided in the Horari and Temps_Oferta columns. Once all three criteria are ticked off – keywords, location and time frame – look at the Empresa (company name) and Nom_Oferta (offer name) columns to get an idea of who would be employing you should you land the job!

      All these pieces of data together should give any motivated individual what they need to seek out an optimal work solution. Keep hunting, and good luck!

    Research Ideas

    • Machine learning can be used to group job offers, making it easier to identify similarities and differences between them and letting users target their search for a work solution more precisely.
    • The data can be used to compare job offers across different areas or types of jobs, enabling users to make better-informed decisions about their career options and goals.
    • It may also provide insight into the local job market, enabling companies and employers to identify potential for new opportunities or trends that may previously have gone unnoticed.

    Acknowledgements

    If you use this dataset in your research, please credit the original authors.

    License

    License: CC0 1.0 Universal (CC0 1.0) - Public Domain Dedication No Copyright - You can copy, modify, distribute and perform the work, even for commercial purposes, all without asking permission. See Other Information.

    Columns

    File: web_scraping_information_offers.csv

    | Column name  | Description                         |
    |:-------------|:------------------------------------|
    | Nom_Oferta   | Name of the job offer. (String)     |
    | Empresa      | Company offering the job. (String)  |
    | Ubicació     | Location of the job offer. (String) |
    | Temps_Oferta | Time of the job offer. (String)     |
    | Horari       | Schedule of the job offer. (String) |
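
As an illustration of searching on these columns, the CSV can be filtered with nothing more than the standard library. This is a minimal sketch: the file name comes from the listing above, but the case-insensitive substring matching is an illustrative choice, not part of the dataset.

```python
import csv

def filter_offers(path, keyword=None, location=None):
    """Return job-offer rows matching an optional keyword and/or location.

    Column names (Nom_Oferta, Ubicació) follow the column table above;
    the matching logic itself is only a sketch.
    """
    with open(path, newline="", encoding="utf-8") as f:
        rows = list(csv.DictReader(f))
    if keyword:
        rows = [r for r in rows if keyword.lower() in r["Nom_Oferta"].lower()]
    if location:
        rows = [r for r in rows if location.lower() in r["Ubicació"].lower()]
    return rows
```

For example, `filter_offers("web_scraping_information_offers.csv", keyword="analyst", location="Barcelona")` would narrow the listings to analyst roles in Barcelona.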


  17. Data from: Replication package for the paper: "A Study on the Pythonic...

    • zenodo.org
    zip
    Updated Nov 10, 2023
    Cite
    Anonymous; Anonymous (2023). Replication package for the paper: "A Study on the Pythonic Functional Constructs' Understandability" [Dataset]. http://doi.org/10.5281/zenodo.10101383
    Explore at:
    zip (available download formats)
    Dataset updated
    Nov 10, 2023
    Dataset provided by
    Zenodo (http://zenodo.org/)
    Authors
    Anonymous; Anonymous
    License

    Attribution 4.0 (CC BY 4.0) (https://creativecommons.org/licenses/by/4.0/)
    License information was derived automatically

    Description

    Replication Package for A Study on the Pythonic Functional Constructs' Understandability

    This package contains several folders and files with code and data used in the study.


    examples/
    Contains the code snippets used as objects of the study, named as reported in Table 1, summarizing the experiment design.

    RQ1-RQ2-files-for-statistical-analysis/
    Contains three .csv files used as input for conducting the statistical analysis and drawing the graphs for addressing the first two research questions of the study. Specifically:

    - ConstructUsage.csv contains the declared frequency usage of the three functional constructs object of the study. This file is used to draw Figure 4.
    - RQ1.csv contains the collected data used for the mixed-effect logistic regression relating the use of functional constructs with the correctness of the change task, and the logistic regression relating the use of map/reduce/filter functions with the correctness of the change task.
    - RQ1Paired-RQ2.csv contains the collected data used for the ordinal logistic regression of the relationship between the perceived ease of understanding of the functional constructs and (i) participants' usage frequency, and (ii) constructs' complexity (except for map/reduce/filter).
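
Before fitting the regressions described above, a quick sanity check on RQ1.csv-style data is the raw odds ratio between construct use and task correctness. The sketch below is illustrative only: the column names `uses_functional` and `correct` are assumptions, not the actual headers of RQ1.csv, and the package's own analyses live in the R script and notebook listed below.

```python
import csv

def odds_ratio(path, exposure_col="uses_functional", outcome_col="correct"):
    """Odds ratio between a binary exposure (functional construct used)
    and a binary outcome (change task correct), from a CSV of 0/1 columns.

    Column names are hypothetical placeholders for illustration.
    """
    counts = {(e, o): 0 for e in (0, 1) for o in (0, 1)}
    with open(path, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            counts[(int(row[exposure_col]), int(row[outcome_col]))] += 1
    a, b = counts[(1, 1)], counts[(1, 0)]  # exposed: correct / incorrect
    c, d = counts[(0, 1)], counts[(0, 0)]  # unexposed: correct / incorrect
    return (a * d) / (b * c)
```

An odds ratio near 1 would suggest no association; the mixed-effect logistic regression in the replication package is the authoritative analysis.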

    inter-rater-RQ3-files/
    Contains four .csv files used as input for computing the inter-rater agreement for the manual labeling used for addressing RQ3. Specifically, you will find one file for each functional construct, i.e., comprehension.csv, lambda.csv, and mrf.csv, and a different file used for highlighting the reasons why participants prefer to use the procedural paradigm, i.e., procedural.csv.

    Questionnaire-Example.pdf
    This file contains the questionnaire submitted to one of the ten experimental groups within our controlled experiment. Other questionnaires are similar, except for the code snippets used for the first section, i.e., change tasks, and the second section, i.e., comparison tasks.

    RQ2ManualValidation.csv
    This file contains the results of the manual validation being done to sanitize the answers provided by our participants used for addressing RQ2. Specifically, we coded the behavior description using four different levels: (i) correct, (ii) somewhat correct, (iii) wrong, and (iv) automatically generated.

    RQ3ManualValidation.xlsx
    This file contains the results of the open coding applied to address our third research question. Specifically, you will find four sheets, one for each functional construct and one for the procedural paradigm. For each sheet, you will find the provided answers together with the categories assigned to them.

    Appendix.pdf
    This file contains the results of the logistic regression relating the use of map, filter, and reduce functions with the correctness of the change task, not shown in the paper.

    FuncConstructs-Statistics.r
    This file contains an R script that you can reuse to re-run all the analyses conducted and discussed in the paper.

    FuncConstructs-Statistics.ipynb
    This file contains the code to re-execute all the analyses conducted in the paper as a notebook.

  18. Data from: Correlation matrices.

    • plos.figshare.com
    xlsx
    Updated May 31, 2023
    Cite
    Bálint Maczák; Gergely Vadai; András Dér; István Szendi; Zoltán Gingl (2023). Correlation matrices. [Dataset]. http://doi.org/10.1371/journal.pone.0261718.s001
    Explore at:
    xlsx (available download formats)
    Dataset updated
    May 31, 2023
    Dataset provided by
    PLOS ONE
    Authors
    Bálint Maczák; Gergely Vadai; András Dér; István Szendi; Zoltán Gingl
    License

    Attribution 4.0 (CC BY 4.0) (https://creativecommons.org/licenses/by/4.0/)
    License information was derived automatically

    Description

    Our analyses are based on 148×148 time- and frequency-domain correlation matrices. A correlation matrix covers all the possible use cases of every activity metric listed in the article. With these activity metrics and different preprocessing methods, we were able to calculate 148 different activity signals from multiple datasets of a single measurement. Each cell of a correlation matrix contains the mean and standard deviation of the calculated Pearson’s correlation coefficients between two types of activity signals based on 42 different subjects’ 10-days-long motion. The small correlation matrices presented both in the article and in the appendixes are derived from these 148×148 correlation matrices.

    This published Excel workbook contains multiple sheets labelled according to their content. The mean and standard deviation values for both time- and frequency-domain correlations can be found on their own separate sheet. Moreover, we reproduced the correlation matrix with an alternatively parametrized digital filter, which doubled the number of sheets to 8. In the Excel workbook, we used the same notation for both the datasets and activity metrics as presented in this article, with an extension to the PIM metric: PIMs denotes the PIM metric where we used Simpson’s 3/8 rule integration method, PIMr indicates the PIM metric where we calculated the integral by simple numerical integration (Riemann sum). (XLSX)
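
For intuition, the two PIM variants mentioned above differ only in the numerical integration rule applied to the rectified signal. The following is a simplified sketch under assumed conventions (unit sampling step, absolute-value rectification); it is not the authors' exact pipeline, and the Pearson helper shows how one cell-level correlation between two activity signals could be computed.

```python
def pim_riemann(signal, dt=1.0):
    """PIM via simple numerical integration (Riemann sum) of |signal|."""
    return sum(abs(x) for x in signal) * dt

def pim_simpson38(signal, dt=1.0):
    """PIM via composite Simpson's 3/8 rule.

    Requires the number of sample intervals (len(signal) - 1)
    to be a multiple of 3.
    """
    vals = [abs(x) for x in signal]
    n = len(vals) - 1
    assert n > 0 and n % 3 == 0
    total = vals[0] + vals[-1]
    for i in range(1, n):
        # Interior points at multiples of 3 get weight 2, others weight 3.
        total += (2 if i % 3 == 0 else 3) * vals[i]
    return 3 * dt / 8 * total

def pearson(x, y):
    """Pearson's correlation coefficient between two equal-length signals."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    dx = [a - mx for a in x]
    dy = [b - my for b in y]
    num = sum(a * b for a, b in zip(dx, dy))
    den = (sum(a * a for a in dx) * sum(b * b for b in dy)) ** 0.5
    return num / den
```

On smooth signals the two integration rules agree closely, which is consistent with PIMs and PIMr occupying nearby cells in the published matrices.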

  19. Mobile Location Data | Get The Latest Insights on Consumer Visitation...

    • datarade.ai
    .csv
    + more versions
    Cite
    GapMaps, Mobile Location Data | Get The Latest Insights on Consumer Visitation Patterns to Make Informed Business Decisions | Foot Traffic Data | Location Data [Dataset]. https://datarade.ai/data-products/gapmaps-mobile-location-data-by-azira-global-mobile-locatio-gapmaps
    Explore at:
    .csv (available download formats)
    Dataset authored and provided by
    GapMaps
    Area covered
    Canada, United States
    Description

    GapMaps Mobile Location Data uses location data on mobile phones sourced by Azira which is collected from smartphone apps when the users have given their permission to track their location. It can shed light on consumer visitation patterns (“where from” and “where to”), frequency of visits, profiles of consumers and much more.

    Businesses can use mobile location data to answer key questions, including:
    - What is the demographic profile of customers visiting my locations?
    - What is my primary catchment, and where within that catchment do most of my customers travel from to reach my locations?
    - What points of interest drive customers to my locations (i.e., work, shopping, recreation, hotel or education facilities in the area)?
    - How far do customers travel to visit my locations?
    - Where are the potential gaps in my store network for new developments?
    - What is the sales impact on an existing store if a new store is opened nearby?
    - Is my marketing strategy targeted to the right audience?
    - Where are my competitors' customers coming from?

    Mobile location data provides a range of benefits that make it a valuable addition to location intelligence services, including:
    - Real-time
    - Low-cost at high scale
    - Accurate
    - Flexible
    - Non-proprietary
    - Empirical

    Azira has created robust screening methods to evaluate the quality of mobile location data collected from multiple sources, ensuring that its data lake contains only the highest-quality mobile location data.

    This includes partnering with trusted location SDK providers that obtain proper end-user consent to track location when the user downloads an application, can detect device movement and visits, and use GPS to determine location coordinates.

    Data received from partners is put through Azira's data quality algorithm, which discards data points that receive a low quality score.

    Use cases in Europe will be considered on a case-by-case basis.
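
At its simplest, the screening step described above amounts to thresholding a per-point quality score. The sketch below is purely hypothetical: the `quality` field and the 0.7 cutoff are invented for illustration, and Azira's actual algorithm is proprietary.

```python
def screen_points(points, min_score=0.7):
    """Keep only location records whose quality score meets the threshold.

    `points` are dicts carrying a hypothetical 'quality' field; both the
    field name and the default cutoff are illustrative assumptions.
    """
    return [p for p in points if p["quality"] >= min_score]
```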

  20. Fused Image dataset for convolutional neural Network-based crack Detection...

    • data.niaid.nih.gov
    • explore.openaire.eu
    • +1more
    Updated Apr 20, 2023
    Cite
    Carlos Canchila (2023). Fused Image dataset for convolutional neural Network-based crack Detection (FIND) [Dataset]. https://data.niaid.nih.gov/resources?id=zenodo_6383043
    Explore at:
    Dataset updated
    Apr 20, 2023
    Dataset provided by
    Wei Song
    Shanglian Zhou
    Carlos Canchila
    License

    Attribution 4.0 (CC BY 4.0) (https://creativecommons.org/licenses/by/4.0/)
    License information was derived automatically

    Description

    The “Fused Image dataset for convolutional neural Network-based crack Detection” (FIND) is a large-scale image dataset with pixel-level ground truth crack data for deep learning-based crack segmentation analysis. It features four types of image data including raw intensity image, raw range (i.e., elevation) image, filtered range image, and fused raw image. The FIND dataset consists of 2500 image patches (dimension: 256x256 pixels) and their ground truth crack maps for each of the four data types.

    The images contained in this dataset were collected from multiple bridge decks and roadways under real-world conditions. A laser scanning device was adopted for data acquisition such that the captured raw intensity and raw range images have pixel-to-pixel location correspondence (i.e., spatial co-registration feature). The filtered range data were generated by applying frequency domain filtering to eliminate image disturbances (e.g., surface variations, and grooved patterns) from the raw range data [1]. The fused image data were obtained by combining the raw range and raw intensity data to achieve cross-domain feature correlation [2,3]. Please refer to [4] for a comprehensive benchmark study performed using the FIND dataset to investigate the impact from different types of image data on deep convolutional neural network (DCNN) performance.
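
The frequency-domain filtering step can be illustrated on a 1-D range profile: transform, zero the bins carrying high-frequency periodic disturbances such as grooves, and transform back. The sketch below uses a naive DFT so it is self-contained; the actual method in [1] operates on 2-D range images with a carefully chosen filter, and the low-pass cutoff here is an arbitrary illustration.

```python
import cmath

def dft(x):
    """Naive O(n^2) discrete Fourier transform (for illustration only)."""
    n = len(x)
    return [sum(x[t] * cmath.exp(-2j * cmath.pi * k * t / n) for t in range(n))
            for k in range(n)]

def idft(X):
    """Inverse DFT, returning the real part of the reconstruction."""
    n = len(X)
    return [sum(X[k] * cmath.exp(2j * cmath.pi * k * t / n) for k in range(n)).real / n
            for t in range(n)]

def lowpass(profile, keep):
    """Zero every DFT bin whose frequency index exceeds `keep`
    (mirror bins included), suppressing periodic high-frequency
    disturbances in a range profile."""
    X = dft(profile)
    n = len(X)
    for k in range(n):
        if min(k, n - k) > keep:
            X[k] = 0
    return idft(X)
```

A profile alternating at the highest representable frequency is removed entirely, while a flat (groove-free) profile passes through unchanged.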

    If you share or use this dataset, please cite [4] and [5] in any relevant documentation.

    In addition, an image dataset for crack classification has also been published at [6].

    References:

    [1] Shanglian Zhou, & Wei Song. (2020). Robust Image-Based Surface Crack Detection Using Range Data. Journal of Computing in Civil Engineering, 34(2), 04019054. https://doi.org/10.1061/(asce)cp.1943-5487.0000873

    [2] Shanglian Zhou, & Wei Song. (2021). Crack segmentation through deep convolutional neural networks and heterogeneous image fusion. Automation in Construction, 125. https://doi.org/10.1016/j.autcon.2021.103605

    [3] Shanglian Zhou, & Wei Song. (2020). Deep learning–based roadway crack classification with heterogeneous image data fusion. Structural Health Monitoring, 20(3), 1274-1293. https://doi.org/10.1177/1475921720948434

    [4] Shanglian Zhou, Carlos Canchila, & Wei Song. (2023). Deep learning-based crack segmentation for civil infrastructure: data types, architectures, and benchmarked performance. Automation in Construction, 146. https://doi.org/10.1016/j.autcon.2022.104678

    [5] Shanglian Zhou, Carlos Canchila, & Wei Song. (2022). Fused Image dataset for convolutional neural Network-based crack Detection (FIND) [Data set]. Zenodo. https://doi.org/10.5281/zenodo.6383044

    [6] Wei Song, & Shanglian Zhou. (2020). Laser-scanned roadway range image dataset (LRRD). Laser-scanned Range Image Dataset from Asphalt and Concrete Roadways for DCNN-based Crack Classification, DesignSafe-CI. https://doi.org/10.17603/ds2-bzv3-nc78

The data on the implementation of the program represents the implementation of the program at an individual provider for insured persons who benefited from medical services from him during the accounting period. Data on the realization of the program do not refer to persons insured in accordance with the European legal order and bilateral agreements on social security. Data for individual contractors are classified by regional units based on the contractor's headquarters. The content of the data on the "number of cases" is defined in the Instruction on recording and accounting for medical services and issued materials.

The institute reserves the right to change the data, in the event of subsequently discovered irregularities after already published on the Internet.
