https://choosealicense.com/licenses/other/
Creator: Nicolas Mejia-Petit
Code-Preference-Pairs Dataset
Overview
This dataset was created while creating Open-Critic-GPT. Here is a brief overview: The Open-Critic-GPT dataset is a synthetic dataset created to train models in both identifying and fixing bugs in code. The dataset is generated using a unique synthetic data pipeline which involves:
- Prompting a local model with an existing code example.
- Introducing bugs into the code.
- While also having the model…

See the full description on the dataset page: https://huggingface.co/datasets/Vezora/Code-Preference-Pairs.
This dataset includes soil wet aggregate stability measurements from the Upper Mississippi River Basin LTAR site in Ames, Iowa. Samples were collected in 2021 from this long-term tillage and cover crop trial in a corn-based agroecosystem. We measured wet aggregate stability using digital photography to quantify disintegration (slaking) of submerged aggregates over time, similar to the technique described by Fajardo et al. (2016) and Rieke et al. (2021). However, we adapted the technique to larger sample numbers by using a multi-well tray to submerge 20-36 aggregates simultaneously. We used this approach to measure the slaking index of 160 soil samples (2,120 aggregates). This dataset includes the slaking index calculated for each aggregate, and also summarized by sample. There were usually 10-12 aggregates measured per sample.

We focused primarily on methodological issues, assessing the statistical power of the slaking index, the replication needed, sensitivity to cultural practices, and sensitivity to sample collection date. We found that small numbers of highly unstable aggregates lead to skewed distributions of the slaking index. We concluded that at least 20 aggregates per sample are preferred to provide confidence in measurement precision. However, the experiment had high statistical power with only 10-12 replicates per sample. The slaking index was not sensitive to the initial size of dry aggregates (3 to 10 mm diameter); therefore, pre-sieving soils was not necessary. The field trial showed greater aggregate stability under no-till than chisel plow practice, and changing stability over a growing season. These results will be useful to researchers and agricultural practitioners who want a simple, fast, low-cost method for measuring wet aggregate stability on many samples.
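As a rough illustration of how per-aggregate measurements can be summarized by sample: the formula below (relative increase in an aggregate's projected area after submersion) is an assumed form of the image-based slaking index, and the numbers are invented, not taken from this dataset.

```python
from statistics import mean

def slaking_index(initial_area, final_area):
    """Relative increase in an aggregate's projected area after
    submersion -- an assumed form of the image-based slaking metric;
    see Fajardo et al. (2016) for the published definition."""
    return (final_area - initial_area) / initial_area

# Hypothetical aggregates for one sample: (initial, final) areas in mm^2.
aggregates = [(50.0, 65.0), (48.0, 96.0), (52.0, 54.6)]
per_aggregate = [slaking_index(a0, a1) for a0, a1 in aggregates]
sample_mean = mean(per_aggregate)  # per-sample summary, as in this dataset
```

A few highly unstable aggregates (like the second one above) pull the sample mean upward, which is the skewness issue the authors describe.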
Local authorities must publish the information about their counter fraud work under the Local Government Transparency Code.
ODC Public Domain Dedication and Licence (PDDL) v1.0: http://www.opendatacommons.org/licenses/pddl/1.0/
License information was derived automatically
These department codes are maintained in the City's financial system of record. Department Groups, Divisions, Sections, Units, Sub Units and Departments are nested in the dataset from left to right. Each nested unit has both a code and an associated name.
The dataset represents a flattened tree (hierarchy) so that each leaf on the tree has its own row. Thus certain rows will have repeated codes across columns.
Data changes as needed.
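The flattened-tree layout described above can be sketched in a few lines; the hierarchy and codes below are invented stand-ins, not the City's actual department codes.

```python
def flatten(tree, path=()):
    """Walk a nested code hierarchy and emit one row per leaf.
    Each row repeats the codes of all ancestors, left to right,
    matching the flattened-tree layout described above."""
    rows = []
    for code, children in tree.items():
        if children:
            rows.extend(flatten(children, path + (code,)))
        else:
            rows.append(path + (code,))
    return rows

# Hypothetical nesting: Department Group -> Division -> Unit.
tree = {"GEN": {"GEN-ADM": {"GEN-ADM-01": {}, "GEN-ADM-02": {}},
                "GEN-FIN": {"GEN-FIN-01": {}}}}
rows = flatten(tree)
# Each leaf gets its own row; ancestor codes repeat across rows.
```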
AutoTrain Dataset for project: new_model
Dataset Description
This dataset has been automatically processed by AutoTrain for project new_model.
Languages
The BCP-47 code for the dataset's language is unk.
Dataset Structure
Data Instances
A sample from this dataset looks as follows: [ { "text":… See the full description on the dataset page: https://huggingface.co/datasets/dddb/autotrain-data-new_model.
This dataset contains all data and code necessary to reproduce the analysis presented in the manuscript: Winzeler, H.E., Owens, P.R., Read, Q.D., Libohova, Z., Ashworth, A., Sauer, T. 2022. Topographic wetness index as a proxy for soil moisture in a hillslope catena: flow algorithms and map generalization. Land 11:2018. DOI: 10.3390/land11112018.

There are several steps to this analysis. The relevant scripts for each are listed below. The first step is to use the raw digital elevation model (DEM) to produce different versions of the topographic wetness index (TWI) for the study region (Calculating TWI). Then, these TWI output files are processed, along with soil moisture (volumetric water content, or VWC) time series data from a number of sensors located within the study region, to create analysis-ready data objects (Processing TWI and VWC). Next, models are fit relating TWI to soil moisture (Model fitting) and results are plotted (Visualizing main results). A number of additional analyses were also done (Additional analyses).

Input data

The DEM of the study region is archived in this dataset as SourceDem.zip. This archive contains the DEM of the study region (DEM1.sgrd) and associated auxiliary files, all called DEM1.* with different extensions. In addition, the DEM is provided as a .tif file called USGS_one_meter_x39y400_AR_R6_WashingtonCO_2015.tif.

The remaining data and code files are archived in the repository created with a GitHub release on 2022-10-11, twi-moisture-0.1.zip. The data are found in a subfolder called data:

- 2017_LoggerData_HEW.csv through 2021_HEW.csv: soil moisture (VWC) logger data for each year 2017-2021 (5 files total).
- 2882174.csv: weather data from a nearby station.
- DryPeriods2017-2021.csv: starting and ending days for dry periods 2017-2021.
- LoggerLocations.csv: geographic locations and metadata for each VWC logger.
- Logger_Locations_TWI_2017-2021.xlsx: 546 topographic wetness indexes calculated at each VWC logger location.
Note: this last file is intermediate input created in the first step of the pipeline.

Code pipeline

To reproduce the analysis in the manuscript, run these scripts in the following order. The scripts are all found in the root directory of the repository. See the manuscript for more details on the methods.

Calculating TWI

- TerrainAnalysis.R: taking the DEM file as input, calculates 546 different topographic wetness indexes using a variety of algorithms. Each algorithm is run multiple times with different input parameters, as described in more detail in the manuscript. After performing this step, it is necessary to use the SAGA-GIS GUI to extract the TWI values for each of the sensor locations. The output generated in this way is included in this repository as Logger_Locations_TWI_2017-2021.xlsx, so it is not necessary to rerun this step, but the code is provided for completeness.

Processing TWI and VWC

- read_process_data.R: takes raw TWI and moisture data files and processes them into analysis-ready format, saving the results as CSV.
- qc_avg_moisture.R: does additional quality control on the moisture data and averages it across different time periods.

Model fitting

Models were fit regressing soil moisture (average VWC for a certain time period) against a TWI index, with and without soil depth as a covariate. In each case, for both the model without depth and the model with depth, prediction performance was calculated with and without spatially blocked cross-validation. Where cross-validation wasn't used, we simply used the predictions from the model fit to all the data.

- fit_combos.R: models were fit to each combination of soil moisture averaged over 57 months (all months from April 2017 to December 2021) and 546 TWI indexes. In addition, models were fit to soil moisture averaged over years, and to the grand mean across the full study period.
- fit_dryperiods.R: models were fit to soil moisture averaged over previously identified dry periods within the study period (each 1 or 2 weeks in length), again for each of the 546 indexes.
- fit_summer.R: models were fit to the soil moisture average for the months of June-September for each of the five years, again for each of the 546 indexes.

Visualizing main results

Preliminary visualization of results was done in a series of RMarkdown notebooks. All the notebooks follow the same general format, plotting model performance (observed-predicted correlation) across different combinations of time period and characteristics of the TWI indexes being compared. The indexes are grouped by SWI versus TWI, DEM filter used, flow algorithm, and any other parameters that varied. The notebooks show the model performance metrics with and without the soil depth covariate, and with and without spatially blocked cross-validation. Crossing those two factors, there are four values of model performance for each combination of time period and TWI index.

- performance_plots_bymonth.Rmd: using the results from the models fit to each month of data separately, prediction performance was averaged by month across the five years of data to show within-year trends.
- performance_plots_byyear.Rmd: using the results from the models fit to each month of data separately, prediction performance was averaged by year to show trends across multiple years.
- performance_plots_dry_periods.Rmd: prediction performance for the models fit to the previously identified dry periods.
- performance_plots_summer.Rmd: prediction performance for the models fit to the June-September moisture averages.

Additional analyses

Some additional analyses were done that may not be published in the final manuscript but are included here for completeness.

- 2019dryperiod.Rmd: analysis, done separately for each day, of a specific dry period in 2019.
- alldryperiodsbyday.Rmd: analysis, done separately for each day, of the same dry periods discussed above.
- best_indices.R: after fitting models, this script was used to quickly identify some of the best-performing indexes for closer scrutiny.
- wateryearfigs.R: exploratory figures showing the median and quantile interval of VWC for sensors in low and high TWI locations for each water year.

Resources in this dataset:

Resource Title: Digital elevation model of study region. File Name: SourceDEM.zip. Resource Description: .zip archive containing digital elevation model files for the study region. See dataset description for more details.

Resource Title: twi-moisture-0.1: archived git repository containing all other necessary data and code. File Name: twi-moisture-0.1.zip. Resource Description: .zip archive containing all data and code, other than the digital elevation model archived as a separate file. This file was generated by a GitHub release made on 2022-10-11 of the git repository hosted at https://github.com/qdread/twi-moisture (private repository). See dataset description and the README file contained within this archive for more details.
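The simplest case of the model-fitting step described above (no depth covariate, no cross-validation) is just a linear regression of averaged VWC on a TWI index, scored by observed-predicted correlation. The repository's actual scripts are in R; this is a hedged Python sketch on synthetic data, not the study's code.

```python
import numpy as np

def twi_performance(twi, vwc):
    """Fit a linear regression of soil moisture (VWC) on a TWI index
    and return the observed-predicted Pearson correlation, the
    performance metric used above. Predictions come from the model
    fit to all the data (the no-cross-validation case)."""
    slope, intercept = np.polyfit(twi, vwc, deg=1)
    predicted = slope * twi + intercept
    return np.corrcoef(vwc, predicted)[0, 1]

# Synthetic example: moisture increases with TWI, plus noise.
rng = np.random.default_rng(42)
twi = rng.uniform(5, 15, size=100)
vwc = 0.02 * twi + 0.1 + rng.normal(0, 0.01, size=100)
r = twi_performance(twi, vwc)
```

In the study this calculation is repeated for each of the 546 indexes and each averaging period, with and without soil depth as a covariate and with and without spatially blocked cross-validation.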
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
This dataset contains data from California resident tax returns filed with California adjusted gross income and self-assessed tax listed by zip code. This dataset contains data for taxable years 1992 to the most recent tax year available.
https://creativecommons.org/publicdomain/zero/1.0/
This dataset is a cleaned version of the Dallas Police Department’s public crime data, sourced from the Dallas Police Crime Analytics Dashboard. It contains detailed information about crime incidents in Dallas from 2022 to January 2025. The data represents RMS (Records Management System) Incidents reported by the Dallas Police Department, reflecting crimes as reported to law enforcement authorities.
The dataset includes a range of crime classifications and related incident details based on preliminary information provided by the reporting parties.
AutoTrain Dataset for project: goodreads_without_bookid
Dataset Description
This dataset has been automatically processed by AutoTrain for project goodreads_without_bookid.
Languages
The BCP-47 code for the dataset's language is en.
Dataset Structure
Data Instances
A sample from this dataset looks as follows: [ { "target": 5, "text": "This book was absolutely ADORABLE!!!!!!!!!!! It was an awesome light and FUN read. I loved… See the full description on the dataset page: https://huggingface.co/datasets/fernanda-dionello/autotrain-data-goodreads_without_bookid.
Housing code enforcement activities, including inspections and violations.
This dataset contains the ICD-10 code lists used to test the sensitivity and specificity of the Clinical Practice Research Datalink (CPRD) medical code lists for dementia subtypes. The provided code lists are used to define dementia subtypes in linked data from the Hospital Episode Statistics (HES) inpatient dataset and the Office for National Statistics (ONS) death registry, which are then used as the 'gold standard' for comparison against dementia subtypes defined using the CPRD medical code lists. The CPRD medical code lists used in this comparison are available here: Venexia Walker, Neil Davies, Patrick Kehoe, Richard Martin (2017): CPRD codes: neurodegenerative diseases and commonly prescribed drugs. https://doi.org/10.5523/bris.1plm8il42rmlo2a2fqwslwckm2
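The sensitivity/specificity comparison described above reduces to counting agreements against the gold standard. A generic sketch (not the study's actual code; the cohort below is invented):

```python
def sensitivity_specificity(gold, test):
    """Compare a test classification (e.g. a CPRD-derived dementia
    subtype flag) against a gold standard (e.g. the HES/ONS-derived
    flag), both given as one boolean per patient."""
    tp = sum(g and t for g, t in zip(gold, test))          # true positives
    tn = sum(not g and not t for g, t in zip(gold, test))  # true negatives
    fn = sum(g and not t for g, t in zip(gold, test))      # false negatives
    fp = sum(not g and t for g, t in zip(gold, test))      # false positives
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    return sensitivity, specificity

# Hypothetical cohort of 8 patients.
gold = [True, True, True, True, False, False, False, False]
test = [True, True, True, False, False, False, False, True]
sens, spec = sensitivity_specificity(gold, test)
```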
AutoTrain Dataset for project: Rusynpannonianpure
Dataset Description
This dataset has been automatically processed by AutoTrain for project Rusynpannonianpure.
Languages
The BCP-47 code for the dataset's language is en2es.
Dataset Structure
Data Instances
A sample from this dataset looks as follows: [ { "source": ""I came to the region to meet with the leaders of the parties and discuss the progress in normalizin[...]"… See the full description on the dataset page: https://huggingface.co/datasets/Tritkoman/autotrain-data-Rusynpannonianpure.
This repository provides the data and code necessary to reproduce the manuscript "Peering into the world of wild passerines with 3D-SOCS: synchronized video capture for posture estimation". This repository also contains sample datasets for running the code, and bounding box and keypoint annotations.

Collection of large behavioral datasets on wild animals in natural habitats is vital in ecology and evolution studies. Recent progress in machine learning and computer vision, combined with inexpensive microcomputers, has unlocked a new frontier of fine-scale markerless measurements. Here, we leverage these advancements to develop a 3D Synchronized Outdoor Camera System (3D-SOCS): an inexpensive, mobile and automated method for collecting behavioral data on wild animals using synchronized video frames from Raspberry Pi controlled cameras. Accuracy tests demonstrate 3D-SOCS' markerless tracking can estimate postures with a 3 mm tolerance. To illustrate its research potential, we place 3D-SOCS in the field and conduct a stimulus presentation experiment. We estimate 3D postures and trajectories for multiple individuals of different bird species, and use these data to characterize the visual field configuration of wild great tits (Parus major), a model species in behavioral ecology. We find their optic axes at approximately ±60° azimuth and −5° elevation. Furthermore, birds exhibit functional lateralization in their use of the right eye with a conspecific stimulus, and show individual differences in lateralization. We also show that birds' convex hulls predict body weight, highlighting 3D-SOCS' potential for non-invasive population monitoring. 3D-SOCS is a first-of-its-kind camera system for research on wild animals, presenting exciting potential to measure fine-scale behavior and morphology in wild birds.
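The optic-axis angles reported above (roughly ±60° azimuth, −5° elevation) are spherical coordinates of an estimated gaze direction. A minimal sketch of that conversion; the axis convention here (x forward, y left, z up) is an assumption for illustration, not necessarily the paper's.

```python
import math

def azimuth_elevation(x, y, z):
    """Convert a 3D gaze direction to azimuth (degrees in the
    horizontal plane, measured from straight ahead) and elevation
    (degrees above the horizontal). Convention: x forward, y left,
    z up -- an assumed frame, not the paper's published one."""
    azimuth = math.degrees(math.atan2(y, x))
    norm = math.sqrt(x * x + y * y + z * z)
    elevation = math.degrees(math.asin(z / norm))
    return azimuth, elevation

# A direction 60 degrees left of straight ahead, pointing slightly down.
az, el = azimuth_elevation(0.5, math.sqrt(3) / 2, -0.1)
```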
Subscribers can find out export and import data of 23 countries by HS code or product’s name. This demo is helpful for market analysis.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Go Linter Evaluation Dataset
This is a publicly available dataset for 'An empirical evaluation of Golang static code analysis tools for real-world issues.' Please refer to the data according to the names of the spreadsheets.
Authors: Jianwei Wu, James Clause
Collected Survey Data:
- This Excel file contains the collected survey data for the empirical study in detail.
R Scripts and Raw Data:
- These scripts are used for data analysis and processing.
- This is the initial data collected from surveys or other sources before any processing or analysis.
Surveys for External Participants:
- This Excel file contains survey data collected for the evaluation of Go linters.
- This folder contains the surveys sent to external participants for collecting their feedback or data.
Recruitment Letter.pdf:
- This PDF contains an example of the recruitment letter sent to potential survey participants, inviting them to take part in the study.
Outputs from Existing Go Linters and Summarized Categories.xlsx:
- This Excel file contains outputs from various Go linters and categorized summaries of these outputs. It helps in comparing the performance and features of different linters.
Selection of Go Linters.xlsx:
- This Excel file lists the Go linters selected for evaluation, along with criteria or reasons for their selection.
UD IRB Exempt Letter.pdf:
- This PDF contains the Institutional Review Board (IRB) exemption letter from the University of Delaware (UD), indicating that the study involving human participants was exempt from full review.
Survey Template.pdf:
- This PDF contains an example of the survey sent to the participants.
govet issues.pdf:
- This PDF contains a list of reported issues about govet, collected from various pull requests.
Approved linters:
- staticcheck
- gofmt
- govet
- revive
- gosec
- deadcode
- errcheck
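For reference, the approved linters above could be run together via a golangci-lint configuration along these lines. This is a hedged sketch: the study evaluated the tools individually, and deadcode has since been deprecated in golangci-lint.

```yaml
# .golangci.yml -- enable only the approved linters (sketch, not the study's setup)
linters:
  disable-all: true
  enable:
    - staticcheck
    - gofmt
    - govet
    - revive
    - gosec
    - deadcode
    - errcheck
```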
Find the updated imports data of 84798910 HS code with product name, description, pricing, Indian import port, and importers in India.
A self-hosted location dataset containing all administrative divisions, cities, and zip codes for Isle of Man. All geospatial data is updated weekly to maintain the highest data quality, including coverage of complex regions within the country.
Use cases for the Global Zip Code Database (Geospatial data) - Address capture and validation - Map and visualization - Reporting and Business Intelligence (BI) - Master Data Management - Logistics and Supply Chain Management - Sales and Marketing
Data export methodology Our location data packages are offered in variable formats, including .csv. All geospatial data are optimized for seamless integration with popular systems like Esri ArcGIS, Snowflake, QGIS, and more.
Product Features - Fully and accurately geocoded - Administrative areas with a level range of 0-4 - Multi-language support including address names in local and foreign languages - Comprehensive city definitions across countries
For additional insights, you can combine the map data with: - UNLOCODE and IATA codes - Time zones and Daylight Saving Times
Why do companies choose our location databases - Enterprise-grade service - Reduce integration time and cost by 30% - Weekly updates for the highest quality
Note: Custom geospatial data packages are available. Please submit a request via the above contact button for more details.
A self-hosted location dataset containing all administrative divisions, cities, and zip codes for Montenegro. All geospatial data is updated weekly to maintain the highest data quality, including coverage of complex regions within the country.
Use cases for the Global Zip Code Database (Geospatial data) - Address capture and validation - Map and visualization - Reporting and Business Intelligence (BI) - Master Data Management - Logistics and Supply Chain Management - Sales and Marketing
Data export methodology Our location data packages are offered in variable formats, including .csv. All geospatial data are optimized for seamless integration with popular systems like Esri ArcGIS, Snowflake, QGIS, and more.
Product Features - Fully and accurately geocoded - Administrative areas with a level range of 0-4 - Multi-language support including address names in local and foreign languages - Comprehensive city definitions across countries
For additional insights, you can combine the map data with: - UNLOCODE and IATA codes - Time zones and Daylight Saving Times
Why do companies choose our location databases - Enterprise-grade service - Reduce integration time and cost by 30% - Weekly updates for the highest quality
Note: Custom geospatial data packages are available. Please submit a request via the above contact button for more details.