With Versium REACH's Firmographic Append tool in the Business to Business Direct product suite, you can append valuable firmographic data to your customer and prospect contact lists. With only a few available attributes, you can tap into Versium's industry-leading identity resolution engine and proprietary database to append rich firmographic data. To append data you will only need any one of the following:
- Email
- Business Domain
- Business Name, Address, City, State
- Business Name, Phone
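As an illustration only, a request for one of these key combinations might be assembled like this; the endpoint URL, header name, and parameter names below are assumptions made for the sketch, not Versium's documented API contract:

```python
# Illustrative sketch only: the endpoint, header name, and parameter
# names are assumptions, not Versium's documented API contract.
API_URL = "https://api.versium.com/v2/firmographic"  # hypothetical endpoint

def build_append_request(api_key, **keys):
    """Build a request for one accepted key combination
    (e.g. email alone, or business name + phone)."""
    accepted = [
        {"email"},
        {"domain"},
        {"business", "address", "city", "state"},
        {"business", "phone"},
    ]
    supplied = set(keys)
    if not any(combo <= supplied for combo in accepted):
        raise ValueError(f"need one of {accepted}, got {sorted(supplied)}")
    return {
        "url": API_URL,
        "headers": {"x-versium-api-key": api_key},  # header name assumed
        "params": keys,
    }

req = build_append_request("MY_KEY", business="Acme Corp", phone="206-555-0100")
# An actual call would then be an ordinary HTTP GET, e.g.:
# requests.get(req["url"], headers=req["headers"], params=req["params"])
```

The helper only validates that one complete key combination is present before a request would be sent.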
With Versium REACH Demographic Append you will have access to many different attributes for enriching your data.
These attributes fall into four output types: Basic; Household, Financial and Auto; Lifestyle and Interests; and Political and Donor.
Here is a list of what sorts of attributes are available for each output type:
Basic:
- Senior in Household
- Young Adult in Household
- Small Office or Home Office
- Online Purchasing Indicator
- Language
- Marital Status
- Working Woman in Household
- Single Parent
- Online Education
- Occupation
- Gender
- DOB (MM/YY)
- Age Range
- Religion
- Ethnic Group
- Presence of Children
- Education Level
- Number of Children
Household, Financial and Auto:
- Household Income
- Dwelling Type
- Credit Card Holder Bank
- Upscale Card Holder
- Estimated Net Worth
- Length of Residence
- Credit Rating
- Home Own or Rent
- Home Value
- Home Year Built
- Number of Credit Lines
- Auto Year
- Auto Make
- Auto Model
- Home Purchase Date
- Refinance Date
- Refinance Amount
- Loan to Value
- Refinance Loan Type
- Home Purchase Price
- Mortgage Purchase Amount
- Mortgage Purchase Loan Type
- Mortgage Purchase Date
- 2nd Most Recent Mortgage Amount
- 2nd Most Recent Mortgage Loan Type
- 2nd Most Recent Mortgage Date
- 2nd Most Recent Mortgage Interest Rate Type
- Refinance Rate Type
- Mortgage Purchase Interest Rate Type
- Home Pool
Lifestyle and Interests:
- Mail Order Buyer
- Pets
- Magazines
- Reading
- Current Affairs and Politics
- Dieting and Weight Loss
- Travel
- Music
- Consumer Electronics
- Arts
- Antiques
- Home Improvement
- Gardening
- Cooking
- Exercise
- Sports
- Outdoors
- Womens Apparel
- Mens Apparel
- Investing
- Health and Beauty
- Decorating and Furnishing
Political and Donor:
- Donor Environmental
- Donor Animal Welfare
- Donor Arts and Culture
- Donor Childrens Causes
- Donor Environmental or Wildlife
- Donor Health
- Donor International Aid
- Donor Political
- Donor Conservative Politics
- Donor Liberal Politics
- Donor Religious
- Donor Veterans
- Donor Unspecified
- Donor Community
- Party Affiliation
With Versium REACH's Contact Append or Contact Append Plus you can add consumer contact data, including multiple phone numbers or mobile-only, to your list of customers or prospects. With Versium REACH you are connected to our proprietary database of over 300 million consumers, 1 billion emails, and over 150 million households in the United States. Through either our API or platform, you can have contact data appended to your records with any of the following supplied values:
- Email Address
- Phone
- Postal Address, City, State, ZIP
- First Name, Last Name, City, State
- First Name, Last Name, ZIP
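For instance, an input file keyed on first name, last name, and ZIP could be prepared as below; the column headers are placeholders, so match them to whatever the REACH platform or API actually expects:

```python
import pandas as pd

# One accepted key combination: First Name, Last Name, ZIP.
# Column names here are placeholders, not required headers.
prospects = pd.DataFrame([
    {"first_name": "Ada", "last_name": "Lovelace", "zip": "98101"},
    {"first_name": "Alan", "last_name": "Turing", "zip": "10001"},
])

# Verify the key columns are present before uploading.
required = {"first_name", "last_name", "zip"}
missing = required - set(prospects.columns)
assert not missing, f"missing key columns: {missing}"

prospects.to_csv("prospects_for_append.csv", index=False)
```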
This table feeds multiple apps for Tippecanoe County. Its columns are the bare minimum details needed to generate a GRM. When new GRM data points are collected, they should be appended here.
We turn your incomplete contact records into complete customer profiles by filling in the missing pieces. Whether you need emails, phone numbers, company details, or deeper insights, we validate what you have and add what you don't.
https://creativecommons.org/publicdomain/zero/1.0/
The real motivation behind creating this dataset was to work on an IoT health-monitoring device project.
The columns include heart rate, SysBP, DiaBP, height, weight, BMI, etc.; these parameters are necessary for predicting heart condition.
The height/weight tables with heart rate are taken from this website:
https://www.mymathtables.com/chart/health-wellness/height-weight-table-for-all-ages.html
The following code has been used to generate the data according to research from different resources on the web:

```python
import numpy as np
import pandas as pd

N = 500_000
age = np.random.randint(1, 70, N)
sex = np.random.randint(0, 2, N)
SysBP = np.random.randint(105, 147, N)
DiaBP = np.random.randint(73, 120, N)
HR = np.random.randint(78, 200, N)
weightKg = np.random.randint(2, 120, N)
heightCm = np.random.randint(48, 185, N)
BMI = weightKg / heightCm / heightCm * 10000  # kg/m^2

cols = ['age', 'sex', 'SysBP', 'DiaBP', 'HR', 'weightKg', 'heightCm', 'BMI', 'indication']
data = []
for a, s, sbp, dbp, hr, w, h, bmi in zip(age, sex, SysBP, DiaBP, HR, weightKg, heightCm, BMI):
    # Discard implausible records outright.
    if bmi > 40 or bmi < 10 or a < 20 or w < 45:
        continue
    # "Healthy" reference ranges per age bracket (ranges for ages under 20
    # are omitted here: those records are already discarded above). A record
    # inside its bracket's ranges, or any adult with a healthy BMI, is
    # labelled 0 (no indication); everything else is labelled 1.
    if (20 < a <= 30 and 17 < bmi < 31 and 108 < sbp <= 134 and 75 <= dbp <= 84
            and 94 < hr <= 190 and 28 < w < 80 and 137 <= h <= 180):
        indication = 0
    elif (30 < a <= 40 and 17 < bmi < 31 and 110 < sbp <= 135 and 81 <= dbp <= 86
            and 93 <= hr <= 180 and 50 < w < 90 and 137 <= h <= 213):
        indication = 0
    elif (40 < a <= 50 and 17 < bmi < 31 and 112 < sbp <= 140 and 79 <= dbp <= 89
            and 90 <= hr <= 170 and 50 < w < 90 and 137 <= h <= 213):
        indication = 0
    elif (50 < a <= 90 and 17 < bmi < 31 and 116 < sbp <= 147 and 81 <= dbp <= 91
            and 85 <= hr <= 160 and 50 < w < 90 and 137 <= h <= 213):
        indication = 0
    elif 20 <= a < 90 and 17 < bmi < 31:
        indication = 0
    else:
        indication = 1
    data.append(dict(zip(cols, [a, s, sbp, dbp, hr, w, h, np.round(bmi), indication])))

df1 = pd.DataFrame(data)
df1.to_csv("Health_heart_experimental.csv")
```
Observed linkages between consumer and B2B emails and website domains, categorized into IAB classification codes. Hashed emails can be linked to plain-text emails to append all consumer and B2B data fields for a full view of the individual and their online intent and behavior.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
This repository contains 5 Sentinel-2 Level-2A (12 bands) images that are part of the BigEarthNet Dataset (Sumbul et al. 2019, https://bigearth.net/). The images in this dataset focus on coastal areas.
The image data for each scene and band were upscaled to a common ground sample distance of 10m per pixel using linear interpolation. Furthermore, all bands of each scene were combined into a single NumPy array and stored into separate .npy binary files. Data processing was performed by Linus Scheibenreif, University of St. Gallen.
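The upscaling step can be sketched roughly as follows, using scipy's zoom with linear interpolation as a stand-in for the routine actually used:

```python
import numpy as np
from scipy.ndimage import zoom

def upscale_band(band: np.ndarray, src_gsd: float, dst_gsd: float = 10.0) -> np.ndarray:
    """Resample a band to the target ground sample distance (m/pixel)
    using linear interpolation (order=1)."""
    factor = src_gsd / dst_gsd
    return zoom(band, factor, order=1)

# A 20 m band of 60x60 pixels becomes 120x120 pixels at 10 m per pixel;
# stacking all resampled bands then yields one array per scene.
band_20m = np.random.rand(60, 60)
band_10m = upscale_band(band_20m, src_gsd=20.0)
```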
The data can be easily read in with Python using the following code snippet:
import os
import numpy as np

data = []
for filename in os.listdir('data/'):
    if filename.endswith('.npy'):
        data.append(np.load(open(os.path.join('data', filename), 'rb'),
                            allow_pickle=True))
data = np.array(data)
This repository also contains the file coastal_labels.json, which contains polygons for labels grassland, forest, water and sand, using the YOLO format.
This dataset is provided mainly for teaching purposes under the Creative Commons Attribution 4.0 International licence. BigEarthNet data are provided under the Community Data License Agreement (Permissive, Version 1.0).
Michael Mommert, Stuttgart University of Applied Sciences, 2025-03-07
With Versium REACH's Consumer to Business tool you can unlock professional data for prospects who have only provided their personal email address. Versium's industry-leading identity resolution engine will locate and append the prospect's business email and/or firmographic data for their business.
With over 60 million business professionals and 30+ million businesses in Versium's proprietary database, you will greatly increase your ability to identify key contact points and attributes that you were missing before.
Trace-metal concentrations in sediment and in the clam Limecola petalum (World Register of Marine Species, 2020; formerly reported as Macoma balthica and M. petalum), clam reproductive activity, and benthic macroinvertebrate community structure were investigated in a mudflat located 1 kilometer south of the discharge of the Palo Alto Regional Water Quality Control Plant (PARWQCP) in south San Francisco Bay, California. This report includes data collected by the U.S. Geological Survey (USGS) starting in January 2019. These data append to long-term datasets extending back to 1974. This dataset supports the City of Palo Alto’s Near-Field Receiving-Water Monitoring Program, initiated in 1994. This data release is presented as two datasets each on its own child page. The first child page contains clam tissue metals data, sediment metals data, percentage fine sediment, total organic carbon, and the salinity of the overlying water. The second child page contains clam reproduction and benthic community data. Please read the metadata file corresponding to each dataset for complete details.
https://www.usa.gov/government-works
After May 3, 2024, this dataset and webpage will no longer be updated because hospitals are no longer required to report data on COVID-19 hospital admissions, and hospital capacity and occupancy data, to HHS through CDC’s National Healthcare Safety Network. Data voluntarily reported to NHSN after May 1, 2024, will be available starting May 10, 2024, at COVID Data Tracker Hospitalizations.
The following dataset provides facility-level data for hospital utilization aggregated on a weekly basis (Sunday to Saturday). These are derived from reports with facility-level granularity across two main sources: (1) HHS TeleTracking, and (2) reporting provided directly to HHS Protect by state/territorial health departments on behalf of their healthcare facilities.
The hospital population includes all hospitals registered with Centers for Medicare & Medicaid Services (CMS) as of June 1, 2020. It includes non-CMS hospitals that have reported since July 15, 2020. It does not include psychiatric, rehabilitation, Indian Health Service (IHS) facilities, U.S. Department of Veterans Affairs (VA) facilities, Defense Health Agency (DHA) facilities, and religious non-medical facilities.
For a given entry, the term “collection_week” signifies the start of the period that is aggregated. For example, a “collection_week” of 2020-11-15 means the average/sum/coverage of the elements captured from that given facility starting and including Sunday, November 15, 2020, and ending and including reports for Saturday, November 21, 2020.
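The Sunday-to-Saturday convention means any report date maps to a unique collection_week; a small helper illustrating the mapping:

```python
from datetime import date, timedelta

def collection_week(d: date):
    """Return the (Sunday start, Saturday end) of the collection week
    containing d. Python's weekday() counts Monday as 0, so Sunday is 6."""
    days_since_sunday = (d.weekday() + 1) % 7
    start = d - timedelta(days=days_since_sunday)
    return start, start + timedelta(days=6)

# Matches the example above: a report from Wednesday 2020-11-18 falls in
# the collection_week starting Sunday 2020-11-15, ending Saturday 2020-11-21.
start, end = collection_week(date(2020, 11, 18))
```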
Reported element names include a suffix of either “_coverage”, “_sum”, or “_avg”.
The file will be updated weekly. No statistical analysis is applied to impute non-response. For averages, calculations are based on the number of values collected for a given hospital in that collection week. Suppression is applied to the file for sums and averages less than four (4). In these cases, the field will be replaced with “-999,999”.
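Because suppression uses a sentinel value rather than a blank, it should be converted to missing before computing statistics; a minimal pandas sketch (the column name is hypothetical):

```python
import pandas as pd

SUPPRESSED = -999999  # sentinel written into suppressed sums/averages

# Hypothetical column with two suppressed cells.
df = pd.DataFrame({"total_beds_7_day_sum": [12, SUPPRESSED, 40, SUPPRESSED]})

# Convert the sentinel to NaN so it cannot skew aggregates.
clean = df.replace(SUPPRESSED, float("nan"))
mean_beds = clean["total_beds_7_day_sum"].mean()  # averages only real values
```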
A story page was created to display both corrected and raw datasets and can be accessed at this link: https://healthdata.gov/stories/s/nhgk-5gpv
This data is preliminary and subject to change as more data become available. Data is available starting on July 31, 2020.
Sometimes, reports for a given facility will be provided to both HHS TeleTracking and HHS Protect. When this occurs, to ensure that there are not duplicate reports, deduplication is applied according to prioritization rules within HHS Protect.
For influenza fields listed in the file, the current HHS guidance marks these fields as optional. As a result, coverage of these elements are varied.
For recent updates to the dataset, scroll to the bottom of the dataset description.
On May 3, 2021, the following fields have been added to this data set.
On May 8, 2021, this data set has been converted to a corrected data set. The corrections applied to this data set are to smooth out data anomalies caused by keyed-in data errors. To help determine which records have had corrections made, an additional Boolean field called is_corrected has been added.
On May 13, 2021, vaccination fields were changed from sum to max or min fields. This reflects the maximum or minimum number reported for that metric in a given week.
On June 7, 2021, vaccination fields were changed from max or min fields to Wednesday-reported-only. This reflects that the number reported for that metric is only reported on Wednesdays in a given week.
On September 20, 2021, the source was updated to use an analytic dataset.
On January 19, 2022, the following fields have been added to this dataset:
On April 28, 2022, the following pediatric fields have been added to this dataset:
On October 24, 2022, the data includes more analytical calculations in efforts to provide a cleaner dataset. For a raw version of this dataset, please follow this link: https://healthdata.gov/Hospital/COVID-19-Reported-Patient-Impact-and-Hospital-Capa/uqq2-txqb
Due to changes in reporting requirements, after June 19, 2023, a collection week is defined as starting on a Sunday and ending on the next Saturday.
Attribution-NonCommercial-ShareAlike 4.0 (CC BY-NC-SA 4.0): https://creativecommons.org/licenses/by-nc-sa/4.0/
License information was derived automatically
The data was collected on 2024-04-05 and contains 3492 problems.
It was cleaned via the following script:
import json
import csv
from io import TextIOWrapper

def clean(data: dict):
    questions = data['data']['problemsetQuestionList']['questions']
    for q in questions:
        yield {
            'id': q['frontendQuestionId'],
            'difficulty': q['difficulty'],
            'title': q['title'],
            'titleCn': q['titleCn'],
            'titleSlug': q['titleSlug'],
            'paidOnly': q['paidOnly'],
            'acRate': round(q['acRate'], 3),
            'topicTags': [t['name'] for t in q['topicTags']],
        }

def out_jsonl(f: TextIOWrapper):
    for id in range(0, 35):
        with open(f'data/{id}.json', encoding='u8') as f2:
            data = json.load(f2)
        for q in clean(data):
            f.write(json.dumps(q, ensure_ascii=False))
            f.write('\n')

def out_json(f: TextIOWrapper):
    l = []
    for id in range(0, 35):
        with open(f'data/{id}.json', encoding='u8') as f2:
            data = json.load(f2)
        for q in clean(data):
            l.append(q)
    json.dump(l, f, ensure_ascii=False)

def out_csv(f: TextIOWrapper):
    writer = csv.DictWriter(f, fieldnames=[
        'id', 'difficulty', 'title', 'titleCn', 'titleSlug', 'paidOnly', 'acRate', 'topicTags'
    ])
    writer.writeheader()
    for id in range(0, 35):
        with open(f'data/{id}.json', encoding='u8') as f2:
            data = json.load(f2)
        writer.writerows(clean(data))

with open('data.jsonl', 'w', encoding='u8') as f:
    out_jsonl(f)
with open('data.json', 'w', encoding='u8') as f:
    out_json(f)
with open('data.csv', 'w', encoding='u8', newline='') as f:
    out_csv(f)
Open Government Licence 3.0: http://www.nationalarchives.gov.uk/doc/open-government-licence/version/3/
License information was derived automatically
API for www.legislation.gov.uk - launched by The National Archives on 29/07/2010 - giving access to the statute book at various levels, for various times, as reusable html fragments, xml and rdf. The API is RESTful and uses content negotiation, so full access to the data can be achieved using http requests. Alternatively, just append data.xml or data.rdf to any legislation page on the website to return the underlying data. The full API is also available from http://legislation.data.gov.uk.
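For example, appending data.xml to a page URL (the specific legislation item below is purely illustrative) gives the underlying XML representation:

```python
# Any legislation page URL can be turned into a data URL by appending
# data.xml (or data.rdf). The item path below is illustrative only.
page = "https://www.legislation.gov.uk/ukpga/2010/24"
data_url = page + "/data.xml"

# Retrieving it is then an ordinary HTTP GET, e.g.:
# import requests
# xml = requests.get(data_url).text
```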
There are many useful strategies for preparing GIS data for Next Generation 9-1-1. One step of preparation is making sure that all of the required fields exist (and are sometimes populated) before loading into the system. While some localities add needed fields to their local data, others use an extract, transform, and load (ETL) process to transform their local data into a Next Generation 9-1-1 GIS data model, and still others may do a combination of both.
There are several strategies and considerations when loading data into a Next Generation 9-1-1 GIS data model. The best place to start is a GIS data model schema template: an empty file with the needed data layout to which you can append your data. Here are some resources to help you out:
1) The National Emergency Number Association (NENA) has a GIS template available on the Next Generation 9-1-1 GIS Data Model page.
2) The NENA GIS Data Model template uses a WGS84 coordinate system and pre-builds many domains. The slides from the Virginia NG9-1-1 User Group meeting in May 2021 explain these elements and offer some tips and suggestions for working with them, including tips on using the field calculator. Click the "open" button at the top right of this screen or here to view this information.
3) VGIN adapted the NENA GIS Data Model into versions for Virginia State Plane North and Virginia State Plane South, as Virginia recommends uploading in your local coordinates and having the upload tools consistently transform your data to the WGS84 (4326) parameters required by the Next Generation 9-1-1 system. These customized versions only include the Site Structure Address Point and Street Centerlines feature classes. Address Point domains are set for address number, state, and country. Street Centerline domains are set for address ranges, parity, one way, state, and country.
4) A sample extract, transform, and load (ETL) for NG9-1-1 upload script is available here.
Additional resources and recommendations on GIS-related topics are available on the VGIN 9-1-1 & GIS page.
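The field-alignment part of such an ETL can be sketched in plain pandas (every field name and the mapping below are hypothetical; the authoritative target names come from the NENA GIS Data Model template), with reprojection to WGS84 (EPSG:4326) left to the GIS step:

```python
import pandas as pd

# Hypothetical local-to-template field mapping; the authoritative
# target names come from the NENA GIS Data Model template.
FIELD_MAP = {"ADDR_NUM": "Add_Number", "ST_NAME": "St_Name"}
REQUIRED = ["Add_Number", "St_Name", "State", "Country"]

def align_to_template(df: pd.DataFrame) -> pd.DataFrame:
    """Rename local fields to template names and add any required
    fields that are missing (left empty for later population)."""
    out = df.rename(columns=FIELD_MAP)
    for field in REQUIRED:
        if field not in out.columns:
            out[field] = None
    return out

local = pd.DataFrame({"ADDR_NUM": [101], "ST_NAME": ["Main St"]})
aligned = align_to_template(local)
# aligned now has Add_Number, St_Name, State, Country columns;
# reprojection to WGS84 (EPSG:4326) would follow in the GIS.
```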
Suppl. Append. 1 (SupplApp1.csv): DNA sequence data from GenBank.
Suppl. Append. 2 (SupplApp2.csv): Scored character states and literature sources.
Suppl. Append. 3 (SupplApp3.pdf): Biogeographic models and model fit statistics. Results for (A) Gesneriaceae, (B) Gesnerioideae, and (C) Didymocarpoideae. Abbreviations: Par. = free parameters; lnLik = log-likelihood; AIC = Akaike Information Criterion; AICc = Akaike Information Criterion, corrected; ΔAICc = change in AICc; AICw = AIC weights; BIC = Bayesian Information Criterion; ΔBIC = change in BIC; DEC = Dispersal Extinction Cladogenesis model; DIVALIKE = BioGeoBEARS implementation of DIVA model; BAYAREALIKE = BioGeoBEARS implementation of BayArea model; s = subset sympatry; J = founder-event speciation.
Suppl. Append. 4 (SupplApp4.pdf): Summary of gene sequences used in the present study.
Suppl. Append. 5 (SupplApp5.pdf): Taxonomic comments and conclusions of the revised phylogenetic hypotheses for the Gesneriaceae.
Suppl. Append. 6 (SupplApp6.pdf): Stem and crown age estimates for Gesneriaceae clades and outgroups. For comparison, the ages of stems and crowns from Petrova et al. (2015), Perret et al. (2013), Woo et al. (2011), Bell et al. (2010), and Roalson et al. (2008) are provided. Estimation methods are indicated below reference names. Dates are indicated as Mean (Minimum, Maximum). Abbreviations: BEAST, Bayesian Evolutionary Analysis Sampling Trees; PL, penalized likelihood.
Suppl. Append. 7 (SupplApp7.pdf): GeoSSE model testing. Results for (A) Africa and Madagascar, (B) Temperate and Tropical Andes, (C) Amazon and Atlantic Brazil, (D) Caribbean and West Indies, and (E) Pacific and Southeast Asia. Gray boxes denote the model with the best fit. Significance of constrained models versus the unconstrained (full) model is assessed as follows: N.S., P>0.1; *, P<0.1; **, P<0.05; ***, P<0.001. Rate categories: λA, speciation in focal area (endemic species); λB, speciation in all other areas combined; λAB, speciation in widespread species; μA, extinction in focal area (endemic species); μB, extinction in all other areas combined; qA, dispersal out of focal area; qB, dispersal out of all other areas into focal area. Abbreviations: Df = degrees of freedom; lnLik = log-likelihood; AIC = Akaike Information Criterion; AICc = Akaike Information Criterion, corrected; ΔAICc = change in AICc; AICw = Akaike weights; LRT = likelihood ratio test; BIC = Bayesian Information Criterion; ΔBIC = change in BIC.
Suppl. Append. 8 (SupplApp8.pdf): SIMMAP ancestral character estimations of flower characters. Results for flower color in (A) Gesneriaceae, (B) Gesnerioideae, (C) Didymocarpoideae; corolla shape in (D) Gesneriaceae, (E) Gesnerioideae, (F) Didymocarpoideae; pollination syndrome in (G) Gesneriaceae, (H) Gesnerioideae, (I) Didymocarpoideae.
Suppl. Append. 9 (SupplApp9.pdf): SIMMAP ancestral character estimations of epiphytism and growth form characters. Results for Gesneriaceae for (A) epiphytism and (B) unifoliate growth form.
Suppl. Append. 10 (SupplApp10.pdf): Geiger statistics for phylogenetic signal (λ), trait evolution at speciation (κ), and rate increase over time (δ). Significance of model fit with the addition of λ, κ, and δ parameters against the null model is assessed as follows: N.S., not significant; *, P<0.01; **, P<0.001. Corolla gibbosity is abbreviated "gibb." and epiphytism is abbreviated "epi."
Suppl. Append. 11 (SupplApp11.pdf): BiSSE model testing. Results for epiphytism in (A) Gesneriaceae, (B) Gesnerioideae, (C) Didymocarpoideae; ornithophily in (D) Gesneriaceae, (E) Gesnerioideae, (F) Didymocarpoideae; unifoliate growth in (G) Didymocarpoideae. Gray boxes denote the best-fitting model. Significance of constrained models versus the unconstrained (full) model is assessed as follows: N.S., P>0.1; *, P<0.1; **, P<0.05; ***, P<0.001. Rate categories: λ, speciation; μ, extinction; q, transition rate. In all cases, estimated rates for the characters of interest are indicated by λ1 and μ1, respectively. Abbreviations: Df = degrees of freedom; lnLik = log-likelihood; AIC = Akaike Information Criterion; AICc = Akaike Information Criterion, corrected; ΔAICc = change in AICc; AICw = Akaike weights; LRT = likelihood ratio test; BIC = Bayesian Information Criterion; ΔBIC = change in BIC.
Suppl. Figure 1 (SupplFig1.pdf): Gesneriaceae phylogenetic hypothesis. Numbers above branches refer to (A) aLRT and (B) ML bootstrap percentages, respectively. (C) ML phylogram with branch lengths.
Suppl. Figure 2 (SupplFig2.pdf): Calibrated Gesneriaceae phylogenetic hypothesis. Bars on branches reflect the 95% confidence interval on the time estimate. Circled numbers at nodes indicate fossil, geologic, and secondary calibration points, respectively.
Suppl. Figure 3: Historical biogeographical hypothesis for Gesneriaceae using the best-fit model BAYAREALIKE+s+J. Geographic areas: A, Temperate and Tropical Andes; B = Amazon and Atlantic Brazil; C = Central America...
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Analysis of ‘COVID-19 Reported Patient Impact and Hospital Capacity by Facility’ provided by Analyst-2 (analyst-2.ai), based on source dataset retrieved from https://catalog.data.gov/dataset/e6ff9332-7a6d-42a7-986b-3deb14475c11 on 13 February 2022.
--- Dataset description provided by original source is as follows ---
The "COVID-19 Reported Patient Impact and Hospital Capacity by Facility" dataset from the U.S. Department of Health & Human Services, filtered for Connecticut. View the full dataset and detailed metadata here: https://healthdata.gov/Hospital/COVID-19-Reported-Patient-Impact-and-Hospital-Capa/anag-cw7u
The following dataset provides facility-level data for hospital utilization aggregated on a weekly basis (Friday to Thursday). These are derived from reports with facility-level granularity across two main sources: (1) HHS TeleTracking, and (2) reporting provided directly to HHS Protect by state/territorial health departments on behalf of their healthcare facilities.
The hospital population includes all hospitals registered with Centers for Medicare & Medicaid Services (CMS) as of June 1, 2020. It includes non-CMS hospitals that have reported since July 15, 2020. It does not include psychiatric, rehabilitation, Indian Health Service (IHS) facilities, U.S. Department of Veterans Affairs (VA) facilities, Defense Health Agency (DHA) facilities, and religious non-medical facilities.
For a given entry, the term “collection_week” signifies the start of the period that is aggregated. For example, a “collection_week” of 2020-11-20 means the average/sum/coverage of the elements captured from that given facility starting and including Friday, November 20, 2020, and ending and including reports for Thursday, November 26, 2020.
Reported element names include a suffix of either “_coverage”, “_sum”, or “_avg”.
A “_coverage” suffix denotes how many times the facility reported that element during that collection week.
A “_sum” suffix denotes the sum of the reports provided for that facility for that element during that collection week.
A “_avg” suffix denotes the average of the reports provided for that facility for that element during that collection week.
The file will be updated weekly. No statistical analysis is applied to impute non-response. For averages, calculations are based on the number of values collected for a given hospital in that collection week. Suppression is applied to the file for sums and averages less than four (4). In these cases, the field will be replaced with “-999,999”.
This data is preliminary and subject to change as more data become available. Data is available starting on July 31, 2020.
Sometimes, reports for a given facility will be provided to both HHS TeleTracking and HHS Protect. When this occurs, to ensure that there are not duplicate reports, deduplication is applied according to prioritization rules within HHS Protect.
For influenza fields listed in the file, the current HHS guidance marks these fields as optional. As a result, coverage of these elements are varied.
On May 3, 2021, the following fields have been added to this data set:
- hhs_ids
- previous_day_admission_adult_covid_confirmed_7_day_coverage
- previous_day_admission_pediatric_covid_confirmed_7_day_coverage
- previous_day_admission_adult_covid_suspected_7_day_coverage
- previous_day_admission_pediatric_covid_suspected_7_day_coverage
- previous_week_personnel_covid_vaccinated_doses_administered_7_day_sum
- total_personnel_covid_vaccinated_doses_none_7_day_sum
- total_personnel_covid_vaccinated_doses_one_7_day_sum
- total_personnel_covid_vaccinated_doses_all_7_day_sum
- previous_week_patients_covid_vaccinated_doses_one_7_day_sum
- previous_week_patients_covid_vaccinated_doses_all_7_day_sum
On May 8, 2021, this data set has been converted to a corrected data set. The corrections applied to this data set are to smooth out data anomalies caused by keyed-in data errors. To help determine which records have had corrections made, an additional Boolean field called is_corrected has been added. To see the numbers as reported by the facilities, go to: https://healthdata.gov/Hospital/COVID-19-Reported-Patient-Impact-and-Hospital-Capa/uqq2-txqb
On May 13, 2021, vaccination fields were changed from sum to max or min fields. This reflects the maximum or minimum number reported for that metric in a given week.
--- Original source retains full ownership of the source dataset ---
After May 3, 2024, this dataset and webpage will no longer be updated because hospitals are no longer required to report data on COVID-19 hospital admissions, and hospital capacity and occupancy data, to HHS through CDC’s National Healthcare Safety Network. Data voluntarily reported to NHSN after May 1, 2024, will be available starting May 10, 2024, at COVID Data Tracker Hospitalizations. The following dataset provides facility-level data for hospital utilization aggregated on a weekly basis (Sunday to Saturday). These are derived from reports with facility-level granularity across two main sources: (1) HHS TeleTracking, and (2) reporting provided directly to HHS Protect by state/territorial health departments on behalf of their healthcare facilities. The hospital population includes all hospitals registered with Centers for Medicare & Medicaid Services (CMS) as of June 1, 2020. It includes non-CMS hospitals that have reported since July 15, 2020. It does not include psychiatric, rehabilitation, Indian Health Service (IHS) facilities, U.S. Department of Veterans Affairs (VA) facilities, Defense Health Agency (DHA) facilities, and religious non-medical facilities. For a given entry, the term “collection_week” signifies the start of the period that is aggregated. For example, a “collection_week” of 2020-11-15 means the average/sum/coverage of the elements captured from that given facility starting and including Sunday, November 15, 2020, and ending and including reports for Saturday, November 21, 2020. Reported elements include an append of either “_coverage”, “_sum”, or “_avg”. A “_coverage” append denotes how many times the facility reported that element during that collection week. A “_sum” append denotes the sum of the reports provided for that facility for that element during that collection week. A “_avg” append is the average of the reports provided for that facility for that element during that collection week. The file will be updated weekly. 
No statistical analysis is applied to impute non-response. For averages, calculations are based on the number of values collected for a given hospital in that collection week. Suppression is applied to the file for sums and averages less than four (4). In these cases, the field will be replaced with “-999,999”. A story page was created to display both corrected and raw datasets and can be accessed at this link: https://healthdata.gov/stories/s/nhgk-5gpv This data is preliminary and subject to change as more data become available. Data is available starting on July 31, 2020. Sometimes, reports for a given facility will be provided to both HHS TeleTracking and HHS Protect. When this occurs, to ensure that there are not duplicate reports, deduplication is applied according to prioritization rules within HHS Protect. For influenza fields listed in the file, the current HHS guidance marks these fields as optional. As a result, coverage of these elements are varied. For recent updates to the dataset, scroll to the bottom of the dataset description. On May 3, 2021, the following fields have been added to this data set. hhs_ids previous_day_admission_adult_covid_confirmed_7_day_coverage previous_day_admission_pediatric_covid_confirmed_7_day_coverage previous_day_admission_adult_covid_suspected_7_day_coverage previous_day_admission_pediatric_covid_suspected_7_day_coverage previous_week_personnel_covid_vaccinated_doses_administered_7_day_sum total_personnel_covid_vaccinated_doses_none_7_day_sum total_personnel_covid_vaccinated_doses_one_7_day_sum total_personnel_covid_vaccinated_doses_all_7_day_sum previous_week_patients_covid_vaccinated_doses_one_7_day_sum previous_week_patients_covid_vaccinated_doses_all_
The "COVID-19 Reported Patient Impact and Hospital Capacity by Facility" dataset from the U.S. Department of Health & Human Services, filtered for Connecticut. View the full dataset and detailed metadata here: https://healthdata.gov/Hospital/COVID-19-Reported-Patient-Impact-and-Hospital-Capa/anag-cw7u The following dataset provides facility-level data for hospital utilization aggregated on a weekly basis (Friday to Thursday). These are derived from reports with facility-level granularity across two main sources: (1) HHS TeleTracking, and (2) reporting provided directly to HHS Protect by state/territorial health departments on behalf of their healthcare facilities. The hospital population includes all hospitals registered with Centers for Medicare & Medicaid Services (CMS) as of June 1, 2020. It includes non-CMS hospitals that have reported since July 15, 2020. It does not include psychiatric, rehabilitation, Indian Health Service (IHS) facilities, U.S. Department of Veterans Affairs (VA) facilities, Defense Health Agency (DHA) facilities, and religious non-medical facilities. For a given entry, the term “collection_week” signifies the start of the period that is aggregated. For example, a “collection_week” of 2020-11-20 means the average/sum/coverage of the elements captured from that given facility starting and including Friday, November 20, 2020, and ending and including reports for Thursday, November 26, 2020. Reported elements include an append of either “_coverage”, “_sum”, or “_avg”. A “_coverage” append denotes how many times the facility reported that element during that collection week. A “_sum” append denotes the sum of the reports provided for that facility for that element during that collection week. A “_avg” append is the average of the reports provided for that facility for that element during that collection week. The file will be updated weekly. No statistical analysis is applied to impute non-response. 
For averages, calculations are based on the number of values collected for a given hospital in that collection week. Suppression is applied to sums and averages less than four (4); in these cases, the field is replaced with "-999,999". This data is preliminary and subject to change as more data become available. Data is available starting on July 31, 2020.

Sometimes, reports for a given facility are provided to both HHS TeleTracking and HHS Protect. When this occurs, deduplication is applied according to prioritization rules within HHS Protect to ensure there are no duplicate reports. For the influenza fields listed in the file, current HHS guidance marks these fields as optional; as a result, coverage of these elements varies.

On May 3, 2021, the following fields were added to this dataset:
- hhs_ids
- previous_day_admission_adult_covid_confirmed_7_day_coverage
- previous_day_admission_pediatric_covid_confirmed_7_day_coverage
- previous_day_admission_adult_covid_suspected_7_day_coverage
- previous_day_admission_pediatric_covid_suspected_7_day_coverage
- previous_week_personnel_covid_vaccinated_doses_administered_7_day_sum
- total_personnel_covid_vaccinated_doses_none_7_day_sum
- total_personnel_covid_vaccinated_doses_one_7_day_sum
- total_personnel_covid_vaccinated_doses_all_7_day_sum
- previous_week_patients_covid_vaccinated_doses_one_7_day_sum
- previous_week_patients_covid_vaccinated_doses_all_7_day_sum

On May 8, 2021, this dataset was converted to a corrected dataset. The corrections applied smooth out data anomalies caused by keyed-in data errors. To help identify which records have had corrections applied, an additional Boolean field called is_corrected has been added. To see the numbers as reported by the facilities, go to: https://healthdata.gov/Hospital/COVID-19-Reported-Patient-Impact-and-Hospital-Capa/uqq2-txqb

On May 13, 2021, vaccination fields were changed from sum fields to max or min fields. This reflects the maximum or minimum number reported for that week.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
This repository contains the main data of the paper "Optimal Rejection-Free Path Sampling," and the source code for generating/appending the independent RFPS-AIMMD and AIMMD runs.
Due to size constraints, the data has been split into separate repositories. The following repositories contain the trajectory files generated by the runs:
all the WQ runs: 10.5281/zenodo.14830317
chignolin, fps0: 10.5281/zenodo.14826023
chignolin, fps1: 10.5281/zenodo.14830200
chignolin, fps2: 10.5281/zenodo.14830224
chignolin, tps0: 10.5281/zenodo.14830251
chignolin, tps1: 10.5281/zenodo.14830270
chignolin, tps2: 10.5281/zenodo.14830280
The trajectory files are not required for running the main analysis, as all necessary information for machine learning and path reweighting is contained in the "PathEnsemble" object files stored in this repository. However, these trajectories are essential for projecting the path ensemble estimate onto an arbitrary set of collective variables.
To reconstruct the full dataset, please merge all the data folders you find in the supplemental repositories.
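The merge step above can be sketched with Python's standard library; the paths below are hypothetical and stand in for an unpacked supplemental Zenodo archive and this repository's data folder:

```python
import shutil
from pathlib import Path

def merge_data_folder(supplemental: Path, main: Path) -> None:
    """Copy a supplemental repository's data tree into the main one.

    Directories are merged recursively; a file present in both trees is
    overwritten by the supplemental copy (the trajectory files live only
    in the supplemental repositories, so in practice nothing collides).
    """
    shutil.copytree(supplemental, main, dirs_exist_ok=True)

# Hypothetical usage after downloading and unpacking a Zenodo archive:
# merge_data_folder(Path("zenodo_download/data"), Path("data"))
```

`dirs_exist_ok=True` (Python 3.8+) is what allows `copytree` to write into an already-existing destination tree instead of failing.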
analysis (code for analyzing the data and generating the figures of the
| paper)
|- figures.ipynb (Jupyter notebook for the analysis)
|- figures (the figures created by the Jupyter notebook)
|- ...
data (all the AIMMD and reference runs, plus general info about the
| simulated systems)
|- chignolin
|- *.py: (code for generating/appending AIMMD runs on a Workstation or
| HPC cluster via Slurm; see the "src" folder below)
|- run.gro (full system positions in the native conformation)
|- mol.pdb (only the peptide positions in the native conformation)
|- topol.top (the system's topology for the GROMACS MD engine)
|- charmm22star.ff (force field parameter files)
|- run.mdp (GROMACS MD parameters when appending a simulation)
|- randomvelocities.mdp (GROMACS MD parameters when initializing a
| simulation with random velocities)
|- signature.npy, r0.npy (parameters for defining the fraction of native
| contacts involved in the folded/unfolded states
| definition; used by params.py function
| "states_function")
|- dmax.npy, dmin.npy (parameters for defining the feature representation
| of the AIMMD NN model; used by params.py
| function "descriptors_function")
|- equilibrium (reference long equilibrium trajectory files; only the
| peptide positions are saved!)
|- run0.xtc, ..., run3.xtc
|- validation
|- validation.xtc (the validation SPs all together in an XTC file)
|- validation.npy (for each SP, collects the cumulative shooting results
after 10 two-way shooting simulations)
|- fps0 (the first AIMMD-RFPS independent run)
|- equilibriumA (the free simulations around A, already processed
| in PathEnsemble files)
|- traj000001.h5
|- traj000001.tpr (for running the simulation; in that case, please
| retrieve all the trajectory files in the right
| supplemental repository first)
|- traj000001.cpt (for appending the simulation; in that case, please
| retrieve all the trajectory files in the right
| supplemental repository first)
|- traj000002.h5 (in case of re-initialization)
|- ...
|- equilibriumB (the free simulations around B, ...)
|- ...
|- shots0
|- chain.h5 (the path sampling chain)
|- pool.h5 (the selection pool, containing the frames from which
|           shooting points are currently selected)
|- params.py (file containing the states and descriptors definitions,
| the NN fit function, and the AIMMD runs hyperparameters;
|              it can be modified to run either RFPS-AIMMD or the
|              original AIMMD algorithm)
|- initial.trr (the initial transition for path sampling)
|- manager.log (reports info about the run)
|- network
src (code for generating/appending AIMMD runs on a Workstation or HPC
| cluster via Slurm)
|- generate.py (on a Workstation: initializes the processes; on an HPC
| cluster: creates the sh file for submitting a job)
|- slurm_options.py (to customize and use in case of running on HPC)
|- manager.py (controls SP selection; reweights the paths)
|- shooter.py (performs path sampling simulations)
|- equilibrium.py (performs free simulations)
|- pathensemble.py (code of the PathEnsemble class)
|- utils.py (auxiliary functions for data production and analysis)
* To initialize a new RFPS-AIMMD (or AIMMD) run for the systems of this paper:
1. Create a "run directory" folder (same depth as "fps0")
2. Copy "initial.trr" and "params.py" from another AIMMD run folder. It is possible to change "params.py" to customize the run.
3. (On a Workstation) call:
python generate.py
Here, nsteps is the final number of path sampling steps for the run, n the number of independent path sampling chains, nA the number of independent free simulators around A, and nB the number of free simulators around B.
4. (On an HPC cluster) call:
python generate.py
sbatch .
* To append to an existing RFPS-AIMMD or AIMMD run:
1. Merge the supplemental repository with the trajectory files into this one.
2. After updating the "nsteps" parameter, just call again (on a Workstation)
python generate.py
or (on an HPC cluster)
sbatch .
* To run enhanced sampling for a new system: please keep the data structure as close as possible to the original. Different file names can cause incompatibilities. We are currently working on making this easier.
* To reproduce the figures of the paper: run the analysis/figures.ipynb notebook. Some groups of cells have to be run multiple times after changing the parameters in the preamble.
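As an illustration of how the validation data described above (cumulative shooting results after 10 two-way shooting simulations per shooting point) could be turned into committor estimates, here is a hypothetical numpy sketch. The assumed array layout (one count of B-reaching shots per shooting point) is an assumption; check validation.npy and the analysis notebook for the actual format:

```python
import numpy as np

# Assumed layout: for each shooting point (SP), the number of its
# 10 two-way shots whose forward trajectory reached state B.
n_shots = 10
results = np.array([0, 2, 5, 9, 10])   # illustrative values, not real data

# Naive per-SP committor estimate: fraction of shots that reached B.
p_B = results / n_shots

# SPs with p_B near 0.5 lie close to the transition state ensemble.
near_tse = np.abs(p_B - 0.5) < 0.2
```

With more shots per point, the binomial uncertainty of each estimate shrinks as 1/sqrt(n_shots).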
With Versium REACH Demographic Append you will have access to many different attributes for enriching your data. The available output types are: Basic; Household, Financial and Auto; Lifestyle and Interests; and Political and Donor.
Here is a list of the attributes available for each output type:
Basic:
- Senior in Household
- Young Adult in Household
- Small Office or Home Office
- Online Purchasing Indicator
- Language
- Marital Status
- Working Woman in Household
- Single Parent
- Online Education
- Occupation
- Gender
- DOB (MM/YY)
- Age Range
- Religion
- Ethnic Group
- Presence of Children
- Education Level
- Number of Children
Household, Financial and Auto:
- Household Income
- Dwelling Type
- Credit Card Holder Bank
- Upscale Card Holder
- Estimated Net Worth
- Length of Residence
- Credit Rating
- Home Own or Rent
- Home Value
- Home Year Built
- Number of Credit Lines
- Auto Year
- Auto Make
- Auto Model
- Home Purchase Date
- Refinance Date
- Refinance Amount
- Loan to Value
- Refinance Loan Type
- Home Purchase Price
- Mortgage Purchase Amount
- Mortgage Purchase Loan Type
- Mortgage Purchase Date
- 2nd Most Recent Mortgage Amount
- 2nd Most Recent Mortgage Loan Type
- 2nd Most Recent Mortgage Date
- 2nd Most Recent Mortgage Interest Rate Type
- Refinance Rate Type
- Mortgage Purchase Interest Rate Type
- Home Pool
Lifestyle and Interests:
- Mail Order Buyer
- Pets
- Magazines
- Reading
- Current Affairs and Politics
- Dieting and Weight Loss
- Travel
- Music
- Consumer Electronics
- Arts
- Antiques
- Home Improvement
- Gardening
- Cooking
- Exercise
- Sports
- Outdoors
- Womens Apparel
- Mens Apparel
- Investing
- Health and Beauty
- Decorating and Furnishing
Political and Donor:
- Donor Environmental
- Donor Animal Welfare
- Donor Arts and Culture
- Donor Childrens Causes
- Donor Environmental or Wildlife
- Donor Health
- Donor International Aid
- Donor Political
- Donor Conservative Politics
- Donor Liberal Politics
- Donor Religious
- Donor Veterans
- Donor Unspecified
- Donor Community
- Party Affiliation
With Versium REACH's Firmographic Append tool in the Business to Business Direct product suite, you can append valuable firmographic data to your customer and prospect contact lists. With only a few attributes, you can tap into Versium's industry-leading identity resolution engine and proprietary database to append rich firmographic data. To append data, you only need any one of the following:
- Email
- Business Domain
- Business Name, Address, City, State
- Business Name, Phone
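A pre-flight check that a contact record carries at least one of the accepted attribute combinations can be sketched as follows. The field names are illustrative, not Versium's actual API parameter names:

```python
# Accepted input combinations for the Firmographic Append tool,
# expressed with hypothetical field names.
ACCEPTED_COMBOS = [
    ("email",),
    ("business_domain",),
    ("business_name", "address", "city", "state"),
    ("business_name", "phone"),
]

def appendable(record: dict) -> bool:
    """Return True if the record satisfies any accepted combination,
    i.e. every field in some combination is present and non-empty."""
    return any(
        all(record.get(field) for field in combo)
        for combo in ACCEPTED_COMBOS
    )

appendable({"business_name": "Acme Co", "phone": "555-0100"})  # True
appendable({"business_name": "Acme Co", "city": "Seattle"})    # False
```

Running such a check before submitting a list helps identify records that cannot be matched, so they can be enriched or excluded up front.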