Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
This dataset contains data from California resident tax returns, with California adjusted gross income and self-assessed tax listed by zip code, covering taxable years from 1992 to the most recent tax year available.
Official statistics are produced impartially and free from political influence.
The Woodland Carbon Code is a voluntary standard, initiated in July 2011, for woodland creation projects that make claims about the carbon they sequester (take out of the atmosphere).
Woodland Carbon Code statistics are used to monitor the uptake of this voluntary standard and have been published quarterly since January 2013.
Financial overview and grant-giving statistics of Code for Progress.
Comprehensive YouTube channel statistics for Learn Code With Durgesh, featuring 346,000 subscribers and 67,055,116 total views. This dataset includes detailed performance metrics such as subscriber growth, video views, engagement rates, and estimated revenue. The channel operates in the Technology category and is based in India (IN). Track 1,553 videos with daily and monthly performance data, including view counts, subscriber changes, and earnings estimates. Analyze growth trends and engagement patterns, and compare performance against similar channels in the same category.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Data and statistical analysis scripts for a manuscript on wheat root response to nitrate, using X-ray CT and OpenSimRoot.
X-ray CT reveals 4D root system development and lateral root responses to nitrate in soil - [https://doi.org/10.1002/ppj2.20036]
The ZIP file contains:
- MCT1_Rcode.R: statistics script for the candidate single-timepoint experiment. Requires all CSV data files in the same directory; set the working directory to the location of this script and the CSV data files before running.
- MCT1... .csv: 3 CSV data files required by the R script.
- MCT2_Rcode.R: statistics script for the time-series experiment. Requires all CSV data files in the same directory; set the working directory to the location of this script and the CSV data files before running.
- MCT2... .csv: 3 CSV data files required by the R script.
- R_RooThProcessing.R: R code for aggregating root traits from the RooTh software.
- Modelling folder: OpenSimRoot files with model parameters and root data used in the manuscript.
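As a hedged illustration of the workflow described above, a minimal R session might look like the sketch below; the directory path is a hypothetical placeholder, not part of the dataset.

```r
# Minimal sketch: running one of the statistics scripts as instructed above.
# The directory path is a hypothetical placeholder; point it at the folder
# holding MCT1_Rcode.R and its CSV data files.
setwd("~/data/MCT1_experiment")
stopifnot(length(list.files(pattern = "\\.csv$")) >= 3)  # the required CSV files
source("MCT1_Rcode.R")  # runs the single-timepoint statistics
```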
This dataset contains the ICD-10 code lists used to test the sensitivity and specificity of the Clinical Practice Research Datalink (CPRD) medical code lists for dementia subtypes. The provided code lists are used to define dementia subtypes in linked data from the Hospital Episode Statistics (HES) inpatient dataset and the Office for National Statistics (ONS) death registry, which are then used as the 'gold standard' for comparison against dementia subtypes defined using the CPRD medical code lists. The CPRD medical code lists used in this comparison are available here: Venexia Walker, Neil Davies, Patrick Kehoe, Richard Martin (2017): CPRD codes: neurodegenerative diseases and commonly prescribed drugs. https://doi.org/10.5523/bris.1plm8il42rmlo2a2fqwslwckm2
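For illustration only, here is a minimal R sketch of how sensitivity and specificity against such a gold standard are computed; the 0/1 vectors are hypothetical examples, not the actual linked records.

```r
# Sketch: sensitivity and specificity of a code-list-based subtype flag
# against a 'gold standard' flag. Vectors are hypothetical illustrations.
gold <- c(1, 1, 0, 1, 0, 0, 1, 0)  # gold-standard assignment (HES/ONS linked data)
cprd <- c(1, 0, 0, 1, 0, 1, 1, 0)  # assignment from the CPRD medical code lists
sensitivity <- sum(cprd == 1 & gold == 1) / sum(gold == 1)  # true-positive rate
specificity <- sum(cprd == 0 & gold == 0) / sum(gold == 0)  # true-negative rate
c(sensitivity = sensitivity, specificity = specificity)
```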
MIT License: https://opensource.org/licenses/MIT
License information was derived automatically
Fields: Zip Code; Median household income; Unemployed (ages 16 and older); Families below 185% FPL; Children (ages 0-17) below 185% FPL; Children (ages 3-4) enrolled in preschool or nursery school; Less than high school; High school graduate; Some college or associate's degree; College graduate or higher; High school graduate or less. Values are percentages unless otherwise noted. Source information provided at: https://www.sccgov.org/sites/phd/hi/hd/Documents/City%20Profiles/Methodology/Neighborhood%20profile%20methodology_082914%20final%20for%20web.pdf
finbarr/rlvr-code-data-rust-edited dataset hosted on Hugging Face and contributed by the HF Datasets community.
MIT License: https://opensource.org/licenses/MIT
License information was derived automatically
Codes for "Dynamic Oligopoly and Price Stickiness" in Mathematica and MATLAB.Abstract: How does market concentration affect the potency of monetary policy? To tackle this question we build a model with oligopolistic sectors. We provide a formula for the response of aggregate output to monetary shocks in terms of sufficient statistics: demand elas- ticities, concentration, and markups. We calibrate our model to the evidence on pass-through, and find that higher concentration significantly amplifies non-neutrality. To isolate the strategic effects of oligopoly, we compare our model to one with monopolistic competition recalibrated to ensure firms face comparable demand functions. Finally, we compute an exact Phillips curve for our model. Qualitatively, our Phillips curve incorporates extra terms relative to the standard New Keynesian one. However, quantitatively, we show that a standard Phillips curve, appropriately recalibrated, provides an excellent approximation.
Facebook
TwitterThis dataset includes soil wet aggregate stability measurements from the Upper Mississippi River Basin LTAR site in Ames, Iowa. Samples were collected in 2021 from this long-term tillage and cover crop trial in a corn-based agroecosystem. We measured wet aggregate stability using digital photography to quantify disintegration (slaking) of submerged aggregates over time, similar to the technique described by Fajardo et al. (2016) and Rieke et al. (2021). However, we adapted the technique to larger sample numbers by using a multi-well tray to submerge 20-36 aggregates simultaneously. We used this approach to measure slaking index of 160 soil samples (2120 aggregates). This dataset includes slaking index calculated for each aggregates, and also summarized by samples. There were usually 10-12 aggregates measured per sample. We focused primarily on methodological issues, assessing the statistical power of slaking index, needed replication, sensitivity to cultural practices, and sensitivity to sample collection date. We found that small numbers of highly unstable aggregates lead to skewed distributions for slaking index. We concluded at least 20 aggregates per sample were preferred to provide confidence in measurement precision. However, the experiment had high statistical power with only 10-12 replicates per sample. Slaking index was not sensitive to the initial size of dry aggregates (3 to 10 mm diameter); therefore, pre-sieving soils was not necessary. The field trial showed greater aggregate stability under no-till than chisel plow practice, and changing stability over a growing season. These results will be useful to researchers and agricultural practitioners who want a simple, fast, low-cost method for measuring wet aggregate stability on many samples.
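As a rough sketch of working with such data, the per-sample summary of a per-aggregate slaking index might look like the R below; the data frame, column names, and values are hypothetical, and medians are shown because the text notes that a few unstable aggregates skew the distribution.

```r
# Sketch: summarizing a per-aggregate slaking index by sample.
# Column names and values are hypothetical illustrations.
set.seed(1)
slaking <- data.frame(
  sample_id     = rep(sprintf("S%02d", 1:3), each = 12),   # 12 aggregates per sample
  slaking_index = rlnorm(36, meanlog = -1, sdlog = 0.8)    # right-skewed, as noted above
)
# Report median alongside mean, since skewed distributions pull the mean upward
aggregate(slaking_index ~ sample_id, data = slaking,
          FUN = function(x) c(median = median(x), mean = mean(x), n = length(x)))
```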
Attribution-ShareAlike 4.0 (CC BY-SA 4.0): https://creativecommons.org/licenses/by-sa/4.0/
License information was derived automatically
JSON data on average statistics for children's food habits and physical activities (per postcode).
Housing code enforcement activities, including inspections and violations.
The latest update provides data for summer term 2019.
Firm-product data provide information for various research questions in international trade and innovation economics. However, working with these data requires harmonizing product classifications consistently over time to avoid internal validity issues. Harmonization is required because classification systems such as the EU's Combined Nomenclature (CN) for traded goods and Prodcom for the production of manufactured goods undergo repeated revisions. We have addressed this problem and developed an approach to harmonize product codes. This approach tracks product codes from 1995 to 2022 for CN and from 2001 to 2021 for Prodcom; additional years can be conveniently added. We provide the harmonized product codes for CN and Prodcom in the selected period's last (or first) year. Our approach is implemented in an open-source R package so that researchers can consistently track product codes for their selected period; the underlying idea is sketched below. We demonstrate the importance of harmonization using micro-level trade data for Croatia as a case study. Our approach facilitates working with firm-product data, allowing the analysis of important research questions.
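The package's actual interface is not described here, but the core idea of chaining year-to-year correspondence tables to express a code in the final year's classification can be sketched in a few lines of R; the function and table names below are hypothetical.

```r
# Sketch of chained code harmonization: follow a product code forward through
# hypothetical year-to-year correspondence tables. `concordances` is assumed to
# be a named list, where concordances[["2001"]] maps 2001 codes to 2002 codes
# via columns `from` and `to`.
harmonize_forward <- function(code, start_year, end_year, concordances) {
  codes <- code
  for (yr in start_year:(end_year - 1)) {
    tab <- concordances[[as.character(yr)]]       # table mapping yr -> yr + 1
    codes <- unique(tab$to[tab$from %in% codes])  # follow every successor code
  }
  codes  # the code(s) expressed in the end_year classification
}
```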
Download the NCSP slide set 2024 for presentational use: https://khub.net/documents/135939561/1051496671/NCSP+slide+set+2015+to+2024.odp/51bf65d0-6b2c-6488-73b3-1411a89f641f
Download the STI and NCSP infographic 2024 for presentational use: https://khub.net/documents/135939561/1051496671/Sexually+transmitted+infections+in+England+2024.pdf/389966d2-91b0-6bde-86d5-c8f218c443e5
The UK Health Security Agency (UKHSA) collects data on all local authority commissioned chlamydia tests undertaken in England, to measure screening activity.
The data provide information on chlamydia screening activity. Figures by various demographic characteristics and by geographical distribution are also included.
View the pre-release access lists for these statistics.
Previous reports, data tables, slide sets, infographics, and pre-release access lists are available online.
Our statistical practice is regulated by the Office for Statistics Regulation (OSR). The OSR sets the standards of trustworthiness, quality and value in the Code of Practice for Statistics (https://code.statisticsauthority.gov.uk/) that all producers of Official Statistics should adhere to.
This dataset contains counts of individuals certified eligible for Medi-Cal, by Month of Eligibility, Zip Code, and Sex, from Calendar Year 2005 to the most recent reportable month. Due to the amount of data presented, the dataset below has been split into three files. All files are derived from the most recent reportable month's information.
This dataset contains all data and code necessary to reproduce the analysis presented in the manuscript: Winzeler, H.E., Owens, P.R., Read, Q.D., Libohova, Z., Ashworth, A., Sauer, T. 2022. Topographic wetness index as a proxy for soil moisture in a hillslope catena: flow algorithms and map generalization. Land 11:2018. DOI: 10.3390/land11112018.

There are several steps to this analysis; the relevant scripts for each are listed below. The first step is to use the raw digital elevation model (DEM) data to produce different versions of the topographic wetness index (TWI) for the study region (Calculating TWI). Then, these TWI output files are processed, along with soil moisture (volumetric water content, VWC) time series data from a number of sensors located within the study region, to create analysis-ready data objects (Processing TWI and VWC). Next, models are fit relating TWI to soil moisture (Model fitting) and results are plotted (Visualizing main results). A number of additional analyses were also done (Additional analyses).

Input data

The DEM of the study region is archived in this dataset as SourceDEM.zip. This archive contains the DEM of the study region (DEM1.sgrd) and associated auxiliary files, all called DEM1.* with different extensions. In addition, the DEM is provided as a .tif file called USGS_one_meter_x39y400_AR_R6_WashingtonCO_2015.tif. The remaining data and code files are archived in the repository created with a GitHub release on 2022-10-11, twi-moisture-0.1.zip. The data are found in a subfolder called data:

- 2017_LoggerData_HEW.csv through 2021_HEW.csv: soil moisture (VWC) logger data for each year 2017-2021 (5 files total).
- 2882174.csv: weather data from a nearby station.
- DryPeriods2017-2021.csv: starting and ending days for dry periods 2017-2021.
- LoggerLocations.csv: geographic locations and metadata for each VWC logger.
- Logger_Locations_TWI_2017-2021.xlsx: 546 topographic wetness indexes calculated at each VWC logger location. Note: this is intermediate input created in the first step of the pipeline.

Code pipeline

To reproduce the analysis in the manuscript, run these scripts in the following order. The scripts are all found in the root directory of the repository. See the manuscript for more details on the methods.

Calculating TWI

- TerrainAnalysis.R: taking the DEM file as input, calculates 546 different topographic wetness indexes using a variety of algorithms. Each algorithm is run multiple times with different input parameters, as described in more detail in the manuscript. After performing this step, it is necessary to use the SAGA-GIS GUI to extract the TWI values for each of the sensor locations. The output generated in this way is included in this repository as Logger_Locations_TWI_2017-2021.xlsx, so it is not necessary to rerun this step, but the code is provided for completeness.

Processing TWI and VWC

- read_process_data.R: takes the raw TWI and moisture data files and processes them into analysis-ready format, saving the results as CSV.
- qc_avg_moisture.R: does additional quality control on the moisture data and averages it across different time periods.

Model fitting

Models were fit regressing soil moisture (average VWC for a certain time period) against a TWI index, with and without soil depth as a covariate. In each case, prediction performance was calculated with and without spatially blocked cross-validation; where cross-validation wasn't used, we simply used the predictions from the model fit to all the data. (An illustrative sketch of this step appears after this entry.)

- fit_combos.R: models were fit to each combination of soil moisture averaged over 57 months (all months from April 2017 to December 2021) and the 546 TWI indexes. In addition, models were fit to soil moisture averaged over years, and to the grand mean across the full study period.
- fit_dryperiods.R: models were fit to soil moisture averaged over previously identified dry periods within the study period (each 1 or 2 weeks in length), again for each of the 546 indexes.
- fit_summer.R: models were fit to the soil moisture average for the months of June-September in each of the five years, again for each of the 546 indexes.

Visualizing main results

Preliminary visualization of results was done in a series of RMarkdown notebooks. All the notebooks follow the same general format, plotting model performance (observed-predicted correlation) across different combinations of time period and characteristics of the TWI indexes being compared. The indexes are grouped by SWI versus TWI, DEM filter used, flow algorithm, and any other parameters that varied. The notebooks show the model performance metrics with and without the soil depth covariate, and with and without spatially blocked cross-validation; crossing those two factors, there are four values of model performance for each combination of time period and TWI index presented.

- performance_plots_bymonth.Rmd: using the results from the models fit to each month of data separately, prediction performance was averaged by month across the five years of data to show within-year trends.
- performance_plots_byyear.Rmd: using the results from the models fit to each month of data separately, prediction performance was averaged by year to show trends across multiple years.
- performance_plots_dry_periods.Rmd: prediction performance for the models fit to the previously identified dry periods.
- performance_plots_summer.Rmd: prediction performance for the models fit to the June-September moisture averages.

Additional analyses

Some additional analyses were done that may not be published in the final manuscript but are included here for completeness.

- 2019dryperiod.Rmd: analysis, done separately for each day, of a specific dry period in 2019.
- alldryperiodsbyday.Rmd: analysis, done separately for each day, of the same dry periods discussed above.
- best_indices.R: after fitting models, this script was used to quickly identify some of the best-performing indexes for closer scrutiny.
- wateryearfigs.R: exploratory figures showing the median and quantile interval of VWC for sensors in low and high TWI locations for each water year.

Resources in this dataset:

- Resource Title: Digital elevation model of study region. File Name: SourceDEM.zip. Description: .zip archive containing digital elevation model files for the study region; see the dataset description for more details.
- Resource Title: twi-moisture-0.1, archived git repository containing all other necessary data and code. File Name: twi-moisture-0.1.zip. Description: .zip archive containing all data and code other than the digital elevation model, generated by a GitHub release made on 2022-10-11 of the git repository hosted at https://github.com/qdread/twi-moisture (private repository). See the dataset description and the README file contained within this archive for more details.
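As a hedged illustration of the model-fitting step described above (not the repository's actual code), a minimal R sketch might look like this, with synthetic data and hypothetical column names:

```r
# Sketch of the central regression: period-averaged soil moisture (VWC) on one
# TWI index, with and without soil depth as a covariate. Data are synthetic and
# column names are hypothetical illustrations.
set.seed(1)
dat <- data.frame(twi   = rnorm(50, mean = 8, sd = 2),
                  depth = sample(c(10, 30, 50), 50, replace = TRUE))
dat$vwc <- 0.1 + 0.02 * dat$twi + rnorm(50, sd = 0.03)  # synthetic moisture values
m_nodepth <- lm(vwc ~ twi, data = dat)          # model without the depth covariate
m_depth   <- lm(vwc ~ twi + depth, data = dat)  # model with the depth covariate
# Performance metric described above: observed-predicted correlation
# (the real analysis also evaluates this under spatially blocked cross-validation)
cor(dat$vwc, fitted(m_depth))
```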
Subscribers can look up export and import data for 23 countries by HS code or product name. This demo is helpful for market analysis.
Adult respondents ages 18 and older who were ever diagnosed with asthma by a doctor. Years covered are 2011 to 2012, by zip code. Data are taken from the California Health Interview Survey Neighborhood Edition (AskCHIS NE, http://askchisne.ucla.edu/), downloaded January 2016.
"Field" = "Definition"
"ZIPCODE" = postal zip code in LA County "Zip_code" = postal zip code in LA County "PAdAsthma" = used fraction of projected 18 and older population with Asthma conditions residing in Zip Code"PAdAsthma2" = percentage of projected 18 and older population with Asthma conditions residing in Zip Code"NAdAsthma" = number of projected 18 and older population with Asthma conditions residing in Zip Code"Pop_18olde" = projected 18 and older population total residing in Zip Code
Health estimates available in AskCHIS NE (Neighborhood Edition) are model-based small area estimates (SAEs). SAEs are not direct estimates (estimates produced directly from survey data, such as those provided through AskCHIS). CHIS data and analytic results are used extensively in California in policy development, service planning and research, and are recognized and valued nationally as a model population-based health survey.
FAQ:
All health estimates in this version of AskCHIS Neighborhood Edition are based on data from the 2011-2012 California Health Interview Survey. Socio-demographic indicators come from the 2008-2012 American Community Survey (ACS) 5-year summary tables.
The population estimates in AskCHIS NE represent the CHIS 2011-2012 population sample, which excludes Californians living in group quarters (such as prisons, nursing homes, and dormitories).
While AskCHIS NE has data on all ZCTAs (Zip Code Tabulation Areas), two factors may influence our ability to display the estimates:
- A small population (under 15,000): currently, the application only shows estimates for geographic entities with populations above 15,000. If your ZCTA has a population below this threshold, the easiest way to obtain data is to combine it with a neighboring ZCTA and obtain a pooled estimate (see the sketch below).
- A high coefficient of variation: high coefficients of variation denote statistical instability.
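For illustration, a small R sketch of the suggested pooling, with hypothetical numbers (not AskCHIS NE output):

```r
# Sketch: population-weighted pooling of two small neighboring ZCTAs, plus a
# coefficient-of-variation check. All numbers are hypothetical illustrations.
est <- c(0.085, 0.102)               # prevalence estimates for the two ZCTAs
pop <- c(9000, 12000)                # adult populations; each alone is under 15,000
pooled <- sum(est * pop) / sum(pop)  # population-weighted pooled estimate
se <- 0.012                          # hypothetical standard error of the pooled estimate
cv <- se / pooled                    # a high CV denotes statistical instability
c(pooled = pooled, cv = cv)
```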