The ETCBC database of the Hebrew Bible (formerly known as the WIVU database) contains the scholarly text of the Hebrew Bible with linguistic markup. A previous version can be found in EASY (see the link below). The present dataset is an improvement in many ways:
(A) It contains a new version of the data, called ETCBC4. The content has been heavily updated, with new linguistic annotations, a better organisation of them, and many additions and corrections.
(B) The data format is now the Linguistic Annotation Framework (see below), in contrast with the previous version, which was archived as a database dump in a specialised format, Emdros (see the link below).
(C) A new tool, LAF-Fabric, has been added to process the ETCBC4 version directly from its LAF representation. The picture on this page shows a few samples of what can be done with it.
(D) Extensive documentation is provided, including a description of all the computing steps involved in getting the data into LAF format.
Since 2012 there has been an ISO standard for the stand-off markup of language resources: the Linguistic Annotation Framework (LAF).
As a result of the SHEBANQ project (see link below), funded by CLARIN-NL and carried out by the ETCBC and DANS, we have created a tool, LAF-Fabric, with which we can convert Emdros databases of the ETCBC into LAF and then do data-analytic work by means of, for example, IPython notebooks. This has been used for the Hebrew Bible, but it can also be applied to the Syriac text in CALAP (see link below).
This dataset contains a folder laf with the LAF files; the necessary declarations are contained in the folder decl. Among these declarations are feature declaration documents in TEI format (see link below), with hyperlinks to concept definitions in ISOcat (see link below). For completeness, the ISOcat definitions are repeated in the feature declaration documents. These definitions are terse; they are documented more fully in the folder documentation.
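The stand-off principle behind LAF can be illustrated in a few lines of Python: the primary text is never modified, while nodes and annotations refer into it by character offsets. This is a hypothetical sketch of the idea only, not the ETCBC4 schema or the LAF-Fabric API.

```python
# Minimal illustration of stand-off markup: the primary text stays
# untouched; annotations point into it by character offsets.
text = "In the beginning God created"

# Each "node" identifies a region of the primary data.
nodes = [
    {"id": "n1", "start": 0, "end": 2},    # "In"
    {"id": "n2", "start": 3, "end": 6},    # "the"
    {"id": "n3", "start": 7, "end": 16},   # "beginning"
]

# Annotations attach features to nodes without touching the text.
annotations = [
    {"node": "n1", "feature": "part_of_speech", "value": "preposition"},
    {"node": "n3", "feature": "part_of_speech", "value": "noun"},
]

def surface(node_id):
    """Recover the surface text that a node covers."""
    node = next(n for n in nodes if n["id"] == node_id)
    return text[node["start"]:node["end"]]

for ann in annotations:
    print(surface(ann["node"]), ann["feature"], ann["value"])
```

Because annotations live outside the text, new annotation layers can be added or corrected without ever rewriting the source, which is what makes the repeated ETCBC updates tractable.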
Gene Expression Omnibus (GEO) is a public functional genomics data repository supporting MIAME-compliant data submissions. The GEO DataSets database stores original submitter-supplied records (Series, Samples, and Platforms) as well as curated DataSets.
After May 3, 2024, this dataset and webpage will no longer be updated because hospitals are no longer required to report data on COVID-19 hospital admissions, and hospital capacity and occupancy data, to HHS through CDC’s National Healthcare Safety Network. Data voluntarily reported to NHSN after May 1, 2024, will be available starting May 10, 2024, at COVID Data Tracker Hospitalizations.
If any facilities are currently in suspense regarding CoP requirements (because they are in a work plan or for other related reasons), this report shows them. These CCNs will not be included in the tab listing all other hospitals or in any summary counts while in suspense.
01/05/2024 – As of FAQ 6, the following optional fields have been added to this report:
total_adult_patients_hospitalized_confirmed_influenza
total_pediatric_patients_hospitalized_confirmed_influenza
previous_day_admission_adult_influenza_confirmed
previous_day_admission_pediatric_influenza_confirmed
staffed_icu_adult_patients_confirmed_influenza
staffed_icu_pediatric_patients_confirmed_influenza
total_adult_patients_hospitalized_confirmed_rsv
total_pediatric_patients_hospitalized_confirmed_rsv
previous_day_admission_adult_rsv_confirmed
previous_day_admission_pediatric_rsv_confirmed
staffed_icu_adult_patients_confirmed_rsv
staffed_icu_pediatric_patients_confirmed_rsv
6/17/2023 - With the new 28-day compliance reporting period, CoP reports will be posted every 4 weeks.
9/12/2021 - To view other COVID-19 Hospital Data Coverage datasets, follow this link to view the summary page: https://healthdata.gov/stories/s/ws49-ddj5
As of FAQ3, the following fields are federally inactive and will no longer be included in this report:
previous_week_personnel_covid_vaccinated_doses_administered
total_personnel_covid_vaccinated_doses_none
total_personnel_covid_vaccinated_doses_one
total_personnel_covid_vaccinated_doses_all
total_personnel
previous_week_patients_covid_vaccinated_doses_one
previous_week_patients_covid_vaccinated_doses_all
In January 2025, around ***** percent of Germany had 5G coverage. Only *** percent was a so-called dead zone, an area with no 2G, 4G, or 5G coverage. The number of 5G base stations has increased significantly in recent years.
https://choosealicense.com/licenses/odbl/
COVID-19 Hospital Data Coverage Summary
Description
After May 3, 2024, this dataset and webpage will no longer be updated because hospitals are no longer required to report data on COVID-19 hospital admissions, and hospital capacity and occupancy data, to HHS through CDC’s National Healthcare Safety Network. Data voluntarily reported to NHSN after May 1, 2024, will be available starting May 10, 2024, at COVID Data Tracker Hospitalizations.
This report shows a summary of… See the full description on the dataset page: https://huggingface.co/datasets/HHS-Official/covid-19-hospital-data-coverage-summary.
Open Database License (ODbL) v1.0: https://www.opendatacommons.org/licenses/odbl/1.0/
License information was derived automatically
After May 3, 2024, this dataset and webpage will no longer be updated because hospitals are no longer required to report data on COVID-19 hospital admissions, and hospital capacity and occupancy data, to HHS through CDC’s National Healthcare Safety Network. Data voluntarily reported to NHSN after May 1, 2024, will be available starting May 10, 2024, at COVID Data Tracker Hospitalizations.
This report shows the facility details.
01/05/2024 – As of FAQ 6, the following optional fields have been added to this report:
6/17/2023 - With the new 28-day compliance reporting period, CoP reports will be posted every 4 weeks.
9/12/2021 - To view other COVID-19 Hospital Data Coverage datasets, follow this link to view summary page: https://healthdata.gov/stories/s/ws49-ddj5
08/10/2022 - As of FAQ3, the following fields are federally inactive and will no longer be included in this report:
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
This paper presents a large-scale document-level comparison of two major bibliographic data sources: Scopus and Dimensions. The focus is on the differences in their coverage of documents at two levels of aggregation: by country and by institution. The main goal is to analyze whether Dimensions offers opportunities for bibliometric analysis at the country and institutional levels as good as those it offers at the global level. Differences in the completeness and accuracy of citation links are also studied. The results allow a profile of Dimensions to be drawn in terms of its coverage by country and institution. Dimensions’ coverage is more than 25% greater than that of Scopus, which is consistent with previous studies. However, the main finding of this study is the lack of affiliation data in a large fraction of Dimensions documents: close to half of all documents in Dimensions are not associated with any country of affiliation, while the proportion of documents without this data in Scopus is much lower. This mainly limits the possibilities that Dimensions can offer for bibliometric analyses at the country and institutional levels. Both of these aspects are highly pragmatic considerations for information retrieval and for the design of policies on the use of scientific databases in research evaluation.
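The coverage gap described above can be measured with a simple metric: the share of documents lacking any affiliation country. The following is an illustrative sketch with made-up toy records, not the paper's actual code or data.

```python
# Toy document records: each has a DOI and a (possibly empty) list
# of affiliation countries, as might be extracted from a database.
dimensions_docs = [
    {"doi": "10.1/a", "countries": ["NL"]},
    {"doi": "10.1/b", "countries": []},      # no affiliation data
    {"doi": "10.1/c", "countries": []},
    {"doi": "10.1/d", "countries": ["ES", "US"]},
]

def share_without_country(docs):
    """Fraction of documents with no affiliation country at all."""
    missing = sum(1 for d in docs if not d["countries"])
    return missing / len(docs)

print(share_without_country(dimensions_docs))  # 0.5
```

Computing this share separately for each database makes the two sources directly comparable on the dimension the study highlights.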
Our location data powers the most advanced address validation solutions for enterprise backend and frontend systems.
A global, standardized, self-hosted location dataset containing all administrative divisions, cities, and zip codes for 247 countries.
All geospatial data for address data validation is updated weekly to maintain the highest data quality, including challenging countries such as China, Brazil, Russia, and the United Kingdom.
Use cases for the Address Validation at Zip Code Level Database (Geospatial data)
Address capture and address validation
Address autocomplete
Address verification
Reporting and Business Intelligence (BI)
Master Data Management
Logistics and Supply Chain Management
Sales and Marketing
Product Features
Dedicated features to deliver best-in-class user experience
Multi-language support including address names in local and foreign languages
Comprehensive city definitions across countries
Data export methodology
Our location data packages are offered in various formats, including .csv. All geospatial data for address validation is optimized for seamless integration with popular systems like Esri ArcGIS, Snowflake, QGIS, and more.
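A .csv export at zip-code level lends itself to a straightforward validation lookup. The sketch below is hypothetical: the column names and sample rows are illustrative, not the vendor's actual schema.

```python
# Validate addresses against a zip-code-level CSV export by indexing
# rows on (country, zip) and comparing the claimed city.
import csv
import io

# A tiny stand-in for a few rows of such an export.
sample_csv = """country_code,zip_code,city,admin_division
DE,10115,Berlin,Berlin
DE,80331,Munich,Bavaria
GB,SW1A 1AA,London,Greater London
"""

# Index rows by (country, zip) for O(1) validation lookups.
index = {
    (row["country_code"], row["zip_code"]): row
    for row in csv.DictReader(io.StringIO(sample_csv))
}

def validate(country_code, zip_code, city):
    """Check that a zip code exists and matches the claimed city."""
    row = index.get((country_code, zip_code))
    return row is not None and row["city"].lower() == city.lower()

print(validate("DE", "10115", "Berlin"))   # True
print(validate("DE", "10115", "Hamburg"))  # False
```

With weekly data refreshes, only the index needs rebuilding; the lookup logic in application code stays unchanged.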
Why do companies choose our location databases
Enterprise-grade service
Full control over security, speed, and latency
Reduce integration time and cost by 30%
Weekly updates for the highest quality
Seamlessly integrated into your software
Note: Custom address validation packages are available. Please submit a request via the above contact button for more details.
Gene2Phenotype (G2P) is a detailed collection of expert-curated gene-disease associations with information on allelic requirement, observed variant classes, and disease mechanism.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Background: In Brazil, studies that map electronic healthcare databases in order to assess their suitability for pharmacoepidemiologic research are lacking. We aimed to identify, catalogue, and characterize Brazilian data sources for Drug Utilization Research (DUR).
Methods: The present study is part of the project entitled “Publicly Available Data Sources for Drug Utilization Research in Latin American (LatAm) Countries.” A network of Brazilian health experts was assembled to map secondary administrative data from healthcare organizations that might provide information related to medication use. A multi-phase approach, including internet searches of institutional government websites, traditional bibliographic databases, and experts’ input, was used for mapping the data sources. The reviewers searched, screened, and selected the data sources independently; disagreements were resolved by consensus. Data sources were grouped into the following categories: 1) automated databases; 2) Electronic Medical Records (EMR); 3) national surveys or datasets; 4) adverse event reporting systems; and 5) others. Each data source was characterized by accessibility, geographic granularity, setting, type of data (aggregate or individual-level), and years of coverage. We also searched for publications related to each data source.
Results: A total of 62 data sources were identified and screened; 38 met the eligibility criteria for inclusion and were fully characterized. We grouped 23 (60%) as automated databases, four (11%) as adverse event reporting systems, four (11%) as EMRs, three (8%) as national surveys or datasets, and four (11%) as other types. Eighteen (47%) were classified as publicly and conveniently accessible online, providing information at the national level. Most of them offered more than 5 years of comprehensive data coverage and presented data at both the individual and aggregated levels. No information about population coverage was found. Drug coding is not uniform; each data source has its own coding system, depending on the purpose of the data. At least one scientific publication was found for each publicly available data source.
Conclusions: There are several types of data sources for DUR in Brazil, but a uniform system for drug classification and data quality evaluation does not exist. The extent of population covered per year is unknown. Our comprehensive and structured inventory reveals a need for full characterization of these data sources.
The National Grid Reference System for Northern Ireland is composed of 292 sheets and provides the naming convention for the 1:10 000 scale mapping. The 10k Grid is published here as Open Data and can be used as a reference system for the download of the DTM and the Mid Scale Raster map.
The OSNI 1:10,000 raster grid is referenced to the Irish Grid, with each tile covering a quarter of an Irish Grid sheet. Each tile covers 4.8 km x 3.2 km and there are 1058 tiles covering Northern Ireland.
Please note for Open Data NI users: the Esri REST API is not broken; it will not open on its own in a web browser, but it can be copied and used in desktop applications and web maps.
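Given the fixed 4.8 km x 3.2 km tile size, locating the tile that contains an Irish Grid coordinate reduces to integer division. The tile origin and column/row numbering below are purely illustrative; the real OSNI sheet-naming convention differs.

```python
# Locate which 4.8 km x 3.2 km raster tile an Irish Grid coordinate
# falls in, assuming a tile grid anchored at the Irish Grid origin.
TILE_W = 4800  # tile width in metres
TILE_H = 3200  # tile height in metres

def tile_indices(easting, northing):
    """Column/row of the tile containing an Irish Grid point (metres)."""
    return easting // TILE_W, northing // TILE_H

# Belfast city centre is roughly at Irish Grid (333000, 374000).
col, row = tile_indices(333000, 374000)
print(col, row)  # 69 116 under this illustrative scheme
```

In practice one would look up the resulting column/row pair in the published grid shapefile to obtain the actual sheet name for download.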
CC0 1.0 Universal Public Domain Dedication: https://creativecommons.org/publicdomain/zero/1.0/
License information was derived automatically
Microsoft Access database for the bibliometric analysis found in the article: Elaine M. Lasda Bergman, "Finding Citations to Social Work Literature: The Relative Benefits of Using Web of Science, Scopus, or Google Scholar", The Journal of Academic Librarianship, Volume 38, Issue 6, November 2012, Pages 370-379, ISSN 0099-1333, http://dx.doi.org/10.1016/j.acalib.2012.08.002 (http://www.sciencedirect.com/science/article/pii/S009913331200119X).
Abstract: Past studies of citation coverage of Web of Science, Scopus, and Google Scholar do not demonstrate a consistent pattern that can be applied to the interdisciplinary mix of resources used in social work research. To determine the utility of these tools to social work researchers, an analysis of citing references to well-known social work journals was conducted. Web of Science had the fewest citing references and almost no variety in source format. Scopus provided higher citation counts, but the pattern of coverage was similar to Web of Science. Google Scholar provided substantially more citing references, but only a relatively small percentage of them were unique scholarly journal articles. The patterns of database coverage were replicated when the citations were broken out for each journal separately. The results of this analysis demonstrate the need to determine what resources constitute scholarly research and reflect the need for future researchers to consider the merits of each database before undertaking their research. This study will be of interest to scholars in library and information science as well as social work, as it facilitates a greater understanding of the strengths and limitations of each database and brings to light important considerations for conducting future research.
Keywords: Citation analysis; Social work; Scopus; Web of Science; Google Scholar
Speedeon's Consumer Marketing Prospect Database has incredible depth and coverage. Our database includes:
- 217+ Million Individuals
- 118+ Million Households
- 1,000+ Attributes
- Predictive Models at ZIP+4 Level
Users can choose from powerful attributes such as age, gender, marital status, presence of children, income and affluence, housing data, personal interests, ailment data, investments and insurance needs, ethnicity, and occupation.
Our consumer prospect data is sourced from over 120 different sources, such as survey and warranty data, deeds, internet-sourced data, various opt-in data, and non-private, non-FCRA credit data from credit bureaus.
All our data is multi-sourced, carefully analyzed, and undergoes extensive processing to ensure the highest quality lifestyle data that is compliant with all privacy and security regulations.
Speedeon helps clients utilize this data for CRM enrichment, segmentation, prospecting, and predictive modeling. By working with Speedeon, clients can easily activate this rich audience data for:
- Direct mail
- Email
- Digital & mobile display
- Social media marketing like Facebook & Instagram
- Advanced TV
This shows the location of each of our 50K tiles. The 50K Grid is published here as Open Data and can be used as a reference system for the download of the 50m DTM.
The OSNI Largescale grid is referenced to the Irish Grid, with all of Northern Ireland covered by a grid of 17,335 tiles. Urban areas are covered by 1:1250 tiles and rural areas by 1:2500 tiles. Each 1:1250 tile covers an area of 0.6 km x 0.4 km and each 1:2500 tile covers an area of 1.2 km x 0.8 km.
Please note for Open Data NI users: the Esri REST API is not broken; it will not open on its own in a web browser, but it can be copied and used in desktop applications and web maps.
A GIS polygon shapefile outlining the extent of the 14 individual DEM sections that comprise the seamless, 2-meter resolution DEM for the open-coast region of the San Francisco Bay Area (outside of the Golden Gate Bridge), extending from Half Moon Bay to Bodega Head along the north-central California coastline. The goal was to integrate the most recent high-resolution bathymetric and topographic datasets available (for example, Light Detection and Ranging (lidar) topography, multibeam and single-beam sonar bathymetry) into a seamless surface model extending offshore at least 3 nautical miles and inland beyond the +20 meter elevation contour.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
United States HIC: 25 to 34 Yrs: Uncovered data was reported at 6,906.000 Person th in 2016. This records a decrease from the previous number of 7,128.400 Person th for 2015. United States HIC: 25 to 34 Yrs: Uncovered data is updated yearly, averaging 9,817.933 Person th from Mar 1999 (Median) to 2016, with 18 observations. The data reached an all-time high of 11,565.538 Person th in 2010 and a record low of 6,906.000 Person th in 2016. United States HIC: 25 to 34 Yrs: Uncovered data remains in active status in CEIC and is reported by the US Census Bureau. The data is categorized under Global Database’s USA – Table US.G082: Health Insurance Coverage.
The Texas Department of Insurance, Division of Workers’ Compensation (DWC) publishes a quarterly report of employers with active Texas workers’ compensation insurance coverage. Employers with coverage are called “subscribers.” Texas does not require most private employers to have workers' compensation insurance coverage. Insurance carriers report coverage data to DWC using the International Association of Industrial Accident Boards and Commissions (IAIABC) Proof of Coverage (POC) Release 2.1 electronic data interchange (EDI) standard. The National Council on Compensation Insurance (NCCI) collects the POC data for DWC. POC filings are the source of this data set. Visit the DWC Employer Coverage Page for more information.
Open Database License (ODbL) v1.0: https://www.opendatacommons.org/licenses/odbl/1.0/
License information was derived automatically
🇺🇸 United States
Digital Terrestrial Television (DTT) Coverage Database