100+ datasets found
1. Distance Calculation Dataset

    • universe.roboflow.com
    zip
    Updated Mar 2, 2023
    + more versions
    Cite
    jatin-rane (2023). Distance Calculation Dataset [Dataset]. https://universe.roboflow.com/jatin-rane/distance-calculation/model/3
Available download formats: zip
    Dataset updated
    Mar 2, 2023
    Dataset authored and provided by
    jatin-rane
    License

CC0 1.0 Universal Public Domain Dedication (https://creativecommons.org/publicdomain/zero/1.0/)
    License information was derived automatically

    Variables measured
    Vehicles Bounding Boxes
    Description

    Distance Calculation

    ## Overview
    
    Distance Calculation is a dataset for object detection tasks - it contains Vehicles annotations for 4,056 images.
    
    ## Getting Started
    
    You can download this dataset for use within your own projects, or fork it into a workspace on Roboflow to create your own model.
    
      ## License
    
This dataset is available under the [CC0 1.0 Public Domain license](https://creativecommons.org/publicdomain/zero/1.0/).
    
2. Golf Ball Distance Calculation Dataset

    • universe.roboflow.com
    zip
    Updated Jun 14, 2025
    Cite
    Awais Ahmad (2025). Golf Ball Distance Calculation Dataset [Dataset]. https://universe.roboflow.com/awais-ahmad-dtpcl/golf-ball-distance-calculation/dataset/1
Available download formats: zip
    Dataset updated
    Jun 14, 2025
    Dataset authored and provided by
    Awais Ahmad
    License

Attribution 4.0 (CC BY 4.0) (https://creativecommons.org/licenses/by/4.0/)
    License information was derived automatically

    Variables measured
    Golf Balls Bounding Boxes
    Description

    Golf Ball Distance Calculation

    ## Overview
    
    Golf Ball Distance Calculation is a dataset for object detection tasks - it contains Golf Balls annotations for 318 images.
    
    ## Getting Started
    
    You can download this dataset for use within your own projects, or fork it into a workspace on Roboflow to create your own model.
    
      ## License
    
This dataset is available under the [CC BY 4.0 license](https://creativecommons.org/licenses/by/4.0/).
    
3. Multi-way table showing CHIKV DALY range calculations.

    • datasetcatalog.nlm.nih.gov
    • plos.figshare.com
    Updated Mar 4, 2021
    + more versions
    Cite
    LaBeaud, Angelle Desiree; King, Charles H.; Puntasecca, Christopher J. (2021). Multi-way table showing CHIKV DALY range calculations. [Dataset]. https://datasetcatalog.nlm.nih.gov/dataset?q=0000861976
    Dataset updated
    Mar 4, 2021
    Authors
    LaBeaud, Angelle Desiree; King, Charles H.; Puntasecca, Christopher J.
    Description

    CHIKV, chikungunya virus; DALY, disability-adjusted life year. (XLSX)

4. Dataset for journal article "Relationships Between Chloride Migration, Bulk...

    • datasetcatalog.nlm.nih.gov
    • data.dtu.dk
    Updated Jun 23, 2023
    Cite
    Ranger, Maxime Adrien Auguste (2023). Dataset for journal article "Relationships Between Chloride Migration, Bulk Electrical Conductivity and Formation Factor Across a Wide Range of Blended Cements" [Dataset]. https://datasetcatalog.nlm.nih.gov/dataset?q=0001101176
    Dataset updated
    Jun 23, 2023
    Authors
    Ranger, Maxime Adrien Auguste
    Description

    Supplementary material for the journal article "Relationships Between Chloride Migration, Bulk Electrical Conductivity and Formation Factor Across a Wide Range of Blended Cements". The files contain raw data and calculations. This dataset contains the following files: - Comparison between paste and concrete for bulk electrical conductivity and chloride migration coefficient (Comparison paste and concrete.xlsx) - Chloride migration results, bulk electrical conductivity and pore solution conductivity on paste (Migration_and_conductivities_Paste.xlsx) - Comparison between the SCM reactivity and chloride migration (R3_vs_Dnssm.xlsx)

  5. Braking Distance data

    • kaggle.com
    Updated Jun 11, 2024
    Cite
    Ritu Parna Ghosh (2024). Braking Distance data [Dataset]. https://www.kaggle.com/datasets/rituparnaghosh18/braking-distance-data
Available download formats: Croissant (a format for machine-learning datasets; learn more at mlcommons.org/croissant)
    Dataset updated
    Jun 11, 2024
    Dataset provided by
    Kaggle
    Authors
    Ritu Parna Ghosh
    License

CC0 1.0 Universal Public Domain Dedication (https://creativecommons.org/publicdomain/zero/1.0/)

    Description

This mini dataset has two columns for calculating the braking distance of a car, i.e., the distance a vehicle travels from the point when its brakes are fully applied to when it comes to a complete stop.
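The physics behind such a dataset can be sketched with the textbook kinematic estimate, d = v² / (2µg). This is a minimal illustration, not the dataset's own calculation: the function name, the friction coefficient µ, and the km/h input unit are all assumptions.

```python
# Hedged sketch: the textbook braking-distance estimate d = v^2 / (2 * mu * g).
# mu (tyre-road friction) and the km/h input unit are assumptions, not
# taken from the dataset itself.

def braking_distance(speed_kmh: float, mu: float = 0.7, g: float = 9.81) -> float:
    """Estimated braking distance in metres for a speed given in km/h."""
    v = speed_kmh / 3.6              # km/h -> m/s
    return v ** 2 / (2 * mu * g)     # kinetic energy dissipated by friction
```

At 100 km/h with µ = 0.7 this gives roughly 56 m, the right order of magnitude for dry asphalt.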

6. Dataset for The effects of a number line intervention on calculation skills

    • researchdata.edu.au
    • figshare.mq.edu.au
    Updated May 18, 2023
    Cite
    Saskia Kohnen; Rebecca Bull; Carola Ruiz Hornblas (2023). Dataset for The effects of a number line intervention on calculation skills [Dataset]. http://doi.org/10.25949/22799717.V1
    Dataset updated
    May 18, 2023
    Dataset provided by
    Macquarie University
    Authors
    Saskia Kohnen; Rebecca Bull; Carola Ruiz Hornblas
    Description

    Study information

The sample included in this dataset represents five children who participated in a number line intervention study. Six children were originally included, but one met the exclusion criterion after missing several consecutive sessions, so their data are not included in the dataset.

All participants were attending Year 1 of primary school at an independent school in New South Wales, Australia. To be eligible to participate, children had to present with low mathematics achievement, performing at or below the 25th percentile in the Maths Problem Solving and/or Numerical Operations subtests from the Wechsler Individual Achievement Test III (WIAT III A & NZ, Wechsler, 2016). Children were excluded if, as reported by their parents, they had any other diagnosed disorder such as attention deficit hyperactivity disorder, autism spectrum disorder, intellectual disability, developmental language disorder, cerebral palsy or an uncorrected sensory disorder.

    The study followed a multiple baseline case series design, with a baseline phase, a treatment phase, and a post-treatment phase. The baseline phase varied between two and three measurement points, the treatment phase varied between four and seven measurement points, and all participants had 1 post-treatment measurement point.

    The number of measurement points were distributed across participants as follows:

    Participant 1 – 3 baseline, 6 treatment, 1 post-treatment

    Participant 3 – 2 baseline, 7 treatment, 1 post-treatment

    Participant 5 – 2 baseline, 5 treatment, 1 post-treatment

    Participant 6 – 3 baseline, 4 treatment, 1 post-treatment

    Participant 7 – 2 baseline, 5 treatment, 1 post-treatment

    In each session across all three phases children were assessed in their performance on a number line estimation task, a single-digit computation task, a multi-digit computation task, a dot comparison task and a number comparison task. Furthermore, during the treatment phase, all children completed the intervention task after these assessments. The order of the assessment tasks varied randomly between sessions.


    Measures

Number Line Estimation. Children completed a computerised bounded number line task (0-100). The number line is presented in the middle of the screen, and the target number is presented above the start point of the number line to avoid signalling the midpoint (Dackermann et al., 2018). Target numbers included two non-overlapping sets (trained and untrained) of 30 items each. Untrained items were assessed in all phases of the study. Trained items were assessed independent of the intervention during the baseline and post-treatment phases, and performance on the intervention is used to index performance on the trained set during the treatment phase. Within each set, numbers were equally distributed throughout the number range, with three items within each ten (0-10, 11-20, 21-30, etc.). Target numbers were presented in random order. Participants did not receive performance-based feedback. Accuracy is indexed by percent absolute error (PAE): PAE = (|estimated number − target number| / scale of the number line) × 100.
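The PAE definition above translates directly into a one-line helper; this is a plain transcription of the formula, with the 0-100 number line scale as an assumed default:

```python
def percent_absolute_error(estimate: float, target: float, scale: float = 100.0) -> float:
    """PAE = |estimate - target| / number-line scale * 100."""
    return abs(estimate - target) / scale * 100.0
```

For example, estimating 48 when the target is 50 on a 0-100 line yields a PAE of 2.0.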


    Single-Digit Computation. The task included ten additions with single-digit addends (1-9) and single-digit results (2-9). The order was counterbalanced so that half of the additions present the lowest addend first (e.g., 3 + 5) and half of the additions present the highest addend first (e.g., 6 + 3). This task also included ten subtractions with single-digit minuends (3-9), subtrahends (1-6) and differences (1-6). The items were presented horizontally on the screen accompanied by a sound and participants were required to give a verbal response. Participants did not receive performance-based feedback. Performance on this task was indexed by item-based accuracy.


    Multi-digit computational estimation. The task included eight additions and eight subtractions presented with double-digit numbers and three response options. None of the response options represent the correct result. Participants were asked to select the option that was closest to the correct result. In half of the items the calculation involved two double-digit numbers, and in the other half one double and one single digit number. The distance between the correct response option and the exact result of the calculation was two for half of the trials and three for the other half. The calculation was presented vertically on the screen with the three options shown below. The calculations remained on the screen until participants responded by clicking on one of the options on the screen. Participants did not receive performance-based feedback. Performance on this task is measured by item-based accuracy.


    Dot Comparison and Number Comparison. Both tasks included the same 20 items, which were presented twice, counterbalancing left and right presentation. Magnitudes to be compared were between 5 and 99, with four items for each of the following ratios: .91, .83, .77, .71, .67. Both quantities were presented horizontally side by side, and participants were instructed to press one of two keys (F or J), as quickly as possible, to indicate the largest one. Items were presented in random order and participants did not receive performance-based feedback. In the non-symbolic comparison task (dot comparison) the two sets of dots remained on the screen for a maximum of two seconds (to prevent counting). Overall area and convex hull for both sets of dots is kept constant following Guillaume et al. (2020). In the symbolic comparison task (Arabic numbers), the numbers remained on the screen until a response was given. Performance on both tasks was indexed by accuracy.


    The Number Line Intervention

During the intervention sessions, participants estimated the position of 30 Arabic numbers on a 0-100 bounded number line. As a form of feedback, within each item, the participant’s estimate remained visible, and the correct position of the target number appeared on the number line. When the estimate’s PAE was lower than 2.5, a message appeared on the screen that read “Excellent job”; when PAE was between 2.5 and 5 the message read “Well done, so close!”; and when PAE was higher than 5 the message read “Good try!”. Numbers were presented in random order.
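The feedback rule above maps PAE to one of three messages. A minimal sketch, with the boundary cases (exactly 2.5 and exactly 5) interpreted as shown, since the description does not specify them:

```python
def feedback(pae: float) -> str:
    """Map a percent absolute error to the intervention's feedback message.
    Boundary handling (pae == 2.5, pae == 5) is an assumption."""
    if pae < 2.5:
        return "Excellent job"
    elif pae <= 5:
        return "Well done, so close!"
    else:
        return "Good try!"
```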


    Variables in the dataset

    Age = age in ‘years, months’ at the start of the study

    Sex = female/male/non-binary or third gender/prefer not to say (as reported by parents)

    Math_Problem_Solving_raw = Raw score on the Math Problem Solving subtest from the WIAT III (WIAT III A & NZ, Wechsler, 2016).

    Math_Problem_Solving_Percentile = Percentile equivalent on the Math Problem Solving subtest from the WIAT III (WIAT III A & NZ, Wechsler, 2016).

    Num_Ops_Raw = Raw score on the Numerical Operations subtest from the WIAT III (WIAT III A & NZ, Wechsler, 2016).

Num_Ops_Percentile = Percentile equivalent on the Numerical Operations subtest from the WIAT III (WIAT III A & NZ, Wechsler, 2016).


The remaining variables refer to participants’ performance on the study tasks. Each variable name is composed of three sections. The first refers to the phase and session: for example, Base1 refers to the first measurement point of the baseline phase, Treat1 to the first measurement point of the treatment phase, and post1 to the first measurement point of the post-treatment phase.


    The second part of the variable name refers to the task, as follows:

    DC = dot comparison

    SDC = single-digit computation

    NLE_UT = number line estimation (untrained set)

    NLE_T= number line estimation (trained set)

    CE = multidigit computational estimation

    NC = number comparison

    The final part of the variable name refers to the type of measure being used (i.e., acc = total correct responses and pae = percent absolute error).


    Thus, variable Base2_NC_acc corresponds to accuracy on the number comparison task during the second measurement point of the baseline phase and Treat3_NLE_UT_pae refers to the percent absolute error on the untrained set of the number line task during the third session of the Treatment phase.
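The three-section naming scheme can be unpacked mechanically. The helper below is a hypothetical illustration (the function and the exact regular expression are not part of the dataset) that splits a variable name into phase, session, task, and measure:

```python
import re

def parse_variable(name: str) -> dict:
    """Split a variable like 'Treat3_NLE_UT_pae' into its three sections.
    The pattern is inferred from the naming scheme described in the text."""
    m = re.match(r"(Base|Treat|Post)(\d+)_(.+)_(acc|pae)$", name, flags=re.IGNORECASE)
    if not m:
        raise ValueError(f"unrecognised variable name: {name}")
    phase, session, task, measure = m.groups()
    return {"phase": phase, "session": int(session), "task": task, "measure": measure}
```

For instance, `Base2_NC_acc` parses to phase "Base", session 2, task "NC", measure "acc".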





7. Data from: Predicting Partition Coefficients with a Simple...

    • acs.figshare.com
    • datasetcatalog.nlm.nih.gov
    xlsx
    Updated Jun 1, 2023
    Cite
    Samuel Genheden (2023). Predicting Partition Coefficients with a Simple All-Atom/Coarse-Grained Hybrid Model [Dataset]. http://doi.org/10.1021/acs.jctc.5b00963.s001
Available download formats: xlsx
    Dataset updated
    Jun 1, 2023
    Dataset provided by
    ACS Publications
    Authors
    Samuel Genheden
    License

Attribution-NonCommercial 4.0 (CC BY-NC 4.0) (https://creativecommons.org/licenses/by-nc/4.0/)
    License information was derived automatically

    Description

The solvation free energy is an essential quantity in force field development and in numerous applications. Here, we present the estimation of solvation free energies in polar (water, hexanol, octanol, and nonanol) and in apolar (hexane, octane, and nonane) media. The estimates are produced using molecular dynamics simulations employing a simple all-atom/coarse-grained hybrid model (AA/ELBA) and are therefore very efficient. More than 150 solutes were taken from the Minnesota solvation database and represent small, organic molecules. The mean absolute deviation for the different solvents ranges between 2.0 and 4.1 kJ/mol, and the correlation coefficient ranges between 0.78 and 0.99, indicating that the predictions are accurate. Outliers are identified, and potential avenues for improvements are discussed. Furthermore, partition coefficients between water and the organic solvents were estimated, and the percentage of the predictions that have the correct sign ranges between 74% (for octane) and 92% (for octanol and hexanol). Finally, membrane/water partition coefficients are replaced with hexane/water and octanol/water partition coefficients, and the latter is found to be as accurate as the expensive membrane calculations, indicating a wider application area.
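For readers who want to reproduce partition coefficients from tabulated solvation free energies, the standard thermodynamic relation is log P = (ΔG_water − ΔG_solvent) / (RT ln 10). The sketch below is a generic transcription of that identity, not code from this dataset:

```python
import math

R = 8.314e-3  # gas constant in kJ/(mol*K), matching kJ/mol free energies

def log_p(dg_water: float, dg_solvent: float, temp: float = 298.15) -> float:
    """log10 partition coefficient from solvation free energies (kJ/mol):
    log P = (dG_water - dG_solvent) / (R * T * ln 10)."""
    return (dg_water - dg_solvent) / (R * temp * math.log(10))
```

A solute solvated 10 kJ/mol more favourably in the organic phase than in water has log P ≈ 1.75, i.e. it partitions preferentially into the organic solvent.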

8. Data from: A New Bayesian Approach to Increase Measurement Accuracy Using a...

    • data.niaid.nih.gov
    • zenodo.org
    Updated Feb 25, 2025
    Cite
    Domjan, Peter; Angyal, Viola; Bertalan, Adam; Vingender, Istvan; Dinya, Elek (2025). A New Bayesian Approach to Increase Measurement Accuracy Using a Precision Entropy Indicator [Dataset]. https://data.niaid.nih.gov/resources?id=zenodo_14417120
    Dataset updated
    Feb 25, 2025
    Dataset provided by
    Semmelweis University
    Authors
    Domjan, Peter; Angyal, Viola; Bertalan, Adam; Vingender, Istvan; Dinya, Elek
    License

Attribution 4.0 (CC BY 4.0) (https://creativecommons.org/licenses/by/4.0/)
    License information was derived automatically

    Description

    "We believe that by accounting for the inherent uncertainty in the system during each measurement, the relationship between cause and effect can be assessed more accurately, potentially reducing the duration of research."

    Short description

    This dataset was created as part of a research project investigating the efficiency and learning mechanisms of a Bayesian adaptive search algorithm supported by the Imprecision Entropy Indicator (IEI) as a novel method. It includes detailed statistical results, posterior probability values, and the weighted averages of IEI across multiple simulations aimed at target localization within a defined spatial environment. Control experiments, including random search, random walk, and genetic algorithm-based approaches, were also performed to benchmark the system's performance and validate its reliability.

    The task involved locating a target area centered at (100; 100) within a radius of 10 units (Research_area.png), inside a circular search space with a radius of 100 units. The search process continued until 1,000 successful target hits were achieved.
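The hit condition above (a point landing inside the 10-unit target disc centred at (100; 100)) reduces to a Euclidean distance check. A hypothetical sketch, with all names assumed and not taken from the dataset's source code:

```python
import math

def is_hit(x: float, y: float, cx: float = 100.0, cy: float = 100.0, r: float = 10.0) -> bool:
    """True when the point (x, y) lands inside the target disc of radius r
    centred at (cx, cy) -- i.e. a successful target hit."""
    return math.hypot(x - cx, y - cy) <= r
```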

    To benchmark the algorithm's performance and validate its reliability, control experiments were conducted using alternative search strategies, including random search, random walk, and genetic algorithm-based approaches. These control datasets serve as baselines, enabling comprehensive comparisons of efficiency, randomness, and convergence behavior across search methods, thereby demonstrating the effectiveness of our novel approach.

    Uploaded files

    The first dataset contains the average IEI values, generated by randomly simulating 300 x 1 hits for 10 bins per quadrant (4 quadrants in total) using the Python programming language, and calculating the corresponding IEI values. This resulted in a total of 4 x 10 x 300 x 1 = 12,000 data points. The summary of the IEI values by quadrant and bin is provided in the file results_1_300.csv. The calculation of IEI values for averages is based on likelihood, using an absolute difference-based approach for the likelihood probability computation. IEI_Likelihood_Based_Data.zip

    The weighted IEI average values for likelihood calculation (Bayes formula) are provided in the file Weighted_IEI_Average_08_01_2025.xlsx

This dataset contains the results of a simulated target search experiment using Bayesian posterior updates and Imprecision Entropy Indicators (IEI). Each row represents a hit during the search process, including metrics such as Shannon entropy (H), Gini index (G), average distance, angular deviation, and calculated IEI values. The dataset also includes bin-specific posterior probability updates and likelihood calculations for each iteration. The simulation explores adaptive learning and posterior penalization strategies to optimize search efficiency. Our Bayesian adaptive search system source code (search algorithm, 1,000 target searches): IEI_Self_Learning_08_01_2025.py

This dataset contains the results of 1,000 iterations of a successful target search simulation; each iteration runs until the target is successfully located. The dataset includes three further main outputs: a) Results files (results{iteration_number}.csv): details of each hit during the search process, including entropy measures, Gini index, average distance and angle, Imprecision Entropy Indicators (IEI), coordinates, and the bin number of the hit. b) Posterior updates (Pbin_all_steps_{iter_number}.csv): tracks the posterior probability updates for all bins across multiple steps of the search process. c) Likelihood analysis (likelihood_analysis_{iteration_number}.csv): contains the calculated likelihood values for each bin at every step, based on the difference between the measured IEI and pre-defined IEI bin averages. IEI_Self_Learning_08_01_2025.py

Based on the mentioned Python source code (see point 3, Bayesian adaptive searching method with IEI values), we performed 1,000 successful target searches, and the outputs were saved in the Self_learning_model_test_output.zip file.

Bayesian Search (IEI) from different quadrants. This dataset contains the results of Bayesian adaptive target search simulations, including various outputs that represent the performance and analysis of the search algorithm. The dataset includes: a) Heatmaps (Heatmap_I_Quadrant, Heatmap_II_Quadrant, Heatmap_III_Quadrant, Heatmap_IV_Quadrant): these heatmaps represent the search results and the paths taken from each quadrant during the simulations, indicating how frequently the system selected each bin during the search process. b) Posterior Distributions (All_posteriors, Probability_distribution_posteriors_values, CDF_posteriors_values): generated from posterior values, these files track the posterior probability updates, including cumulative distribution functions (CDF) and probability distributions. c) Macro Summary (summary_csv_macro): this file aggregates metrics and key statistics from the simulation, summarizing the results from the individual results.csv files. d) Heatmap Searching Method Documentation (Bayesian_Heatmap_Searching_Method_05_12_2024): this document visualizes the search algorithm's path, showing how frequently each bin was selected during the 1,000 successful target searches. e) One-Way ANOVA Analysis (Anova_analyze_dataset, One_way_Anova_analysis_results): this includes the database and SPSS calculations used to examine whether the starting quadrant influences the number of search steps required. The analysis was conducted at a 5% significance level, followed by a Games-Howell post hoc test [43] to identify which target-surrounding quadrants differed significantly in terms of the number of search steps. Results were saved in the Self_learning_model_test_results.zip file.

    This dataset contains randomly generated sequences of bin selections (1-40) from a control search algorithm (random search) used to benchmark the performance of Bayesian-based methods. The process iteratively generates random numbers until a stopping condition is met (reaching target bins 1, 11, 21, or 31). This dataset serves as a baseline for analyzing the efficiency, randomness, and convergence of non-adaptive search strategies. The dataset includes the following: a) The Python source code of the random search algorithm. b) A file (summary_random_search.csv) containing the results of 1000 successful target hits. c) A heatmap visualizing the frequency of search steps for each bin, providing insight into the distribution of steps across the bins. Random_search.zip
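The random-search control described above (uniform draws over bins 1-40 until a target bin is reached) can be sketched as follows. The function name and seeding are illustrative, not the dataset's own source code:

```python
import random

TARGET_BINS = {1, 11, 21, 31}  # stopping condition from the dataset description

def random_search(n_bins=40, seed=None):
    """Draw bins uniformly at random until a target bin is hit;
    return the full sequence of visited bins."""
    rng = random.Random(seed)
    path = []
    while True:
        b = rng.randint(1, n_bins)
        path.append(b)
        if b in TARGET_BINS:
            return path
```

Aggregating the lengths of many such paths gives the baseline step-count distribution against which the Bayesian method is compared.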

    This dataset contains the results of a random walk search algorithm, designed as a control mechanism to benchmark adaptive search strategies (Bayesian-based methods). The random walk operates within a defined space of 40 bins, where each bin has a set of neighboring bins. The search begins from a randomly chosen starting bin and proceeds iteratively, moving to a randomly selected neighboring bin, until one of the stopping conditions is met (bins 1, 11, 21, or 31). The dataset provides detailed records of 1,000 random walk iterations, with the following key components: a) Individual Iteration Results: Each iteration's search path is saved in a separate CSV file (random_walk_results_.csv), listing the sequence of steps taken and the corresponding bin at each step. b) Summary File: A combined summary of all iterations is available in random_walk_results_summary.csv, which aggregates the step-by-step data for all 1,000 random walks. c) Heatmap Visualization: A heatmap file is included to illustrate the frequency distribution of steps across bins, highlighting the relative visit frequencies of each bin during the random walks. d) Python Source Code: The Python script used to generate the random walk dataset is provided, allowing reproducibility and customization for further experiments. Random_walk.zip

This dataset contains the results of a genetic search algorithm implemented as a control method to benchmark adaptive Bayesian-based search strategies. The algorithm operates in a 40-bin search space with predefined target bins (1, 11, 21, 31) and evolves solutions through random initialization, selection, crossover, and mutation over 1,000 successful runs. Dataset components: a) Run Results: individual run data is stored in separate files (genetic_algorithm_run_.csv), detailing Generation (the generation number), Fitness (the fitness score of the solution), Steps (the path length in bins), and Solution (the sequence of bins visited). b) Summary File: summary.csv consolidates the best solutions from all runs, including their fitness scores, path lengths, and sequences. c) All Steps File: summary_all_steps.csv records all bins visited during the runs for distribution analysis. d) A heatmap was also generated for the genetic search algorithm, illustrating the frequency of bins chosen during the search process as a representation of the search pathways. Genetic_search_algorithm.zip

    Technical Information

    The dataset files have been compressed into a standard ZIP archive using Total Commander (version 9.50). The ZIP format ensures compatibility across various operating systems and tools.

The XLSX files were created using Microsoft Excel Standard 2019 (Version 1808, Build 10416.20027).

    The Python program was developed using Visual Studio Code (Version 1.96.2, user setup), with the following environment details: Commit fabd6a6b30b49f79a7aba0f2ad9df9b399473380f, built on 2024-12-19. The Electron version is 32.6, and the runtime environment includes Chromium 128.0.6263.186, Node.js 20.18.1, and V8 12.8.374.38-electron.0. The operating system is Windows NT x64 10.0.19045.

    The statistical analysis included in this dataset was partially conducted using IBM SPSS Statistics, Version 29.0.1.0

The CSV files in this dataset were created following European standards, using a semicolon (;) as the delimiter instead of a comma, and are encoded in UTF-8 to ensure compatibility with a wide range of tools.
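Because the CSV files are semicolon-delimited and UTF-8 encoded, readers must pass those options explicitly. A minimal stdlib sketch (the helper name is illustrative):

```python
import csv
import io

def read_semicolon_csv(text: str) -> list:
    """Parse semicolon-delimited CSV text, as produced with European settings."""
    return list(csv.DictReader(io.StringIO(text), delimiter=";"))

# For a file on disk, the equivalent would be:
#   with open("results_1_300.csv", encoding="utf-8", newline="") as f:
#       rows = list(csv.DictReader(f, delimiter=";"))
```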

  9. 🛒 Supermarket Data

    • kaggle.com
    zip
    Updated Jul 19, 2024
    Cite
    mexwell (2024). 🛒 Supermarket Data [Dataset]. https://www.kaggle.com/datasets/mexwell/supermarket-data/versions/1
Available download formats: zip (78,427,538 bytes)
    Dataset updated
    Jul 19, 2024
    Authors
    mexwell
    Description

This is the dataset released as a companion to the paper “Explaining the Product Range Effect in Purchase Data”, presented at the BigData 2013 conference.

• supermarket_distances: three columns. The first column is the customer id, the second is the shop id and the third is the distance between the customer’s house and the shop location. The distance is calculated in meters as a straight line, so it does not take the road graph into account.
• supermarket_prices: two columns. The first column is the product id and the second is its unit price. The price is in euros and is calculated as the average unit price over the time span of the dataset.
• supermarket_purchases: four columns. The first column is the customer id, the second is the product id, the third is the shop id and the fourth is the total number of units of the product that the customer bought in that particular shop. The data is recorded from January 2007 to December 2011.
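A straight-line ("as the crow flies") distance in metres between two locations is typically a great-circle computation from latitude/longitude pairs. The dataset does not state its exact method, so the sketch below is one plausible implementation (the haversine formula), not the authors' code:

```python
import math

def straight_line_distance_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres between two lat/lon points
    (haversine formula, mean Earth radius)."""
    R = 6_371_000  # mean Earth radius, metres
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = p2 - p1
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * R * math.asin(math.sqrt(a))
```

One degree of longitude at the equator comes out to roughly 111 km, a quick sanity check on the constant used.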

    Citation

    Pennacchioli, D., Coscia, M., Rinzivillo, S., Pedreschi, D. and Giannotti, F., Explaining the Product Range Effect in Purchase Data. In BigData, 2013.

Acknowledgement

Photo by Eduardo Soares on Unsplash

10. Data from: U.S. Geological Survey calculated half interpercentile range...

    • catalog.data.gov
    • search.dataone.org
• +1 more
    Updated Nov 26, 2025
    + more versions
    Cite
    U.S. Geological Survey (2025). U.S. Geological Survey calculated half interpercentile range (half of the difference between the 16th and 84th percentiles) of wave-current bottom shear stress in the South Atlantic Bight from May 2010 to May 2011 (SAB_hIPR.shp, polygon shapefile, Geographic, WGS84) [Dataset]. https://catalog.data.gov/dataset/u-s-geological-survey-calculated-half-interpercentile-range-half-of-the-difference-between
    Dataset updated
    Nov 26, 2025
    Dataset provided by
United States Geological Survey (http://www.usgs.gov/)
    Description

    The U.S. Geological Survey has been characterizing the regional variation in shear stress on the sea floor and sediment mobility through statistical descriptors. The purpose of this project is to identify patterns in stress in order to inform habitat delineation or decisions for anthropogenic use of the continental shelf. The statistical characterization spans the continental shelf from the coast to approximately 120 m water depth, at approximately 5 km resolution. Time-series of wave and circulation are created using numerical models, and near-bottom output of steady and oscillatory velocities and an estimate of bottom roughness are used to calculate a time-series of bottom shear stress at 1-hour intervals. Statistical descriptions such as the median and 95th percentile, which are the output included with this database, are then calculated to create a two-dimensional picture of the regional patterns in shear stress. In addition, time-series of stress are compared to critical stress values at select points calculated from observed surface sediment texture data to determine estimates of sea floor mobility.
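The dataset's headline statistic, the half interpercentile range, is half the spread between the 16th and 84th percentiles (roughly one standard deviation for normally distributed data). A small self-contained sketch using linear interpolation between closest ranks; this is one common percentile convention, and the USGS processing may differ:

```python
def half_ipr(values):
    """Half the difference between the 84th and 16th percentiles -- a robust
    spread measure (~1 standard deviation for a normal distribution)."""
    xs = sorted(values)

    def pct(p):
        # Percentile by linear interpolation between closest ranks.
        k = (len(xs) - 1) * p / 100.0
        f = int(k)                       # lower rank
        c = min(f + 1, len(xs) - 1)      # upper rank
        return xs[f] + (xs[c] - xs[f]) * (k - f)

    return (pct(84) - pct(16)) / 2.0
```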

11. GRACEnet: GHG Emissions, C Sequestration and Environmental Benefits

    • kaggle.com
    zip
    Updated Jan 19, 2023
    Cite
    The Devastator (2023). GRACEnet: GHG Emissions, C Sequestration and [Dataset]. https://www.kaggle.com/datasets/thedevastator/gracenet-ghg-emissions-c-sequestration-and-envir
Available download formats: zip (1,943,875 bytes)
    Dataset updated
    Jan 19, 2023
    Authors
    The Devastator
    License

CC0 1.0 Universal Public Domain Dedication (https://creativecommons.org/publicdomain/zero/1.0/)

    Description

    GRACEnet: GHG Emissions, C Sequestration and Environmental Benefits

    Quantifying Climate Change Mitigation and Sustainable Agricultural Practices

    By US Open Data Portal, data.gov [source]

    About this dataset

This Kaggle dataset showcases research undertaken by the GRACEnet program, which aims to better understand and minimize greenhouse gas (GHG) emissions from agro-ecosystems. Through multi-location field studies using standardized protocols, combined with models, producers, and policy makers, GRACEnet seeks to typify existing production practices, maximize C sequestration, minimize net GHG emissions, and meet sustainable production goals. The dataset allows evaluation of the impact of different management systems on factors such as carbon dioxide and nitrous oxide emissions, C sequestration levels, and crop/forest yields, plus additional environmental effects such as air quality. With this data we can begin to see how agricultural policies may be influencing climate change mitigation.


    How to use the dataset

Step 1: Familiarize yourself with the columns in this dataset. In particular, pay attention to:

    • Spreadsheet tab description: brief description of each spreadsheet tab
    • Element or value display name: name of each element or value being measured
    • Description: detailed description
    • Data type: type of data being measured
    • Unit: unit of measurement for the data
    • Calculation: calculation used to determine a value or percentage
    • Format: format required for submitting values
    • Low Value and High Value: range for acceptable entries

Step 2: Familiarize yourself with any additional information related to calculations. Most calculations used accepted best estimates based on standard protocols defined by GRACEnet. Each calculation is described in detail, including post-processing steps such as quality assurance/quality control adjustments and measurement uncertainty assessment, as available sources permit. Relevant calculations were discussed collaboratively among all participating partners, and all terms were rigorously reviewed before any decision was agreed upon. A range was established when several assumptions were needed, or when samples might plausibly fall outside the ranges associated with the standard protocol conditions set at GRACEnet headquarters laboratories due to external factors such as soil type or climate.

Step 3: Determine what types of operations are allowed within each spreadsheet tab (.csv file). On some tabs, operations such as adding an entire row may be permitted while using formulas is not, since non-standard manipulations often introduce errors into an analysis. Add new rows or columns only where it fits your specific analysis. Avoid non-standard changes such as filling blank cells with zeros or deleting rows and columns already removed by the standard filtering process on other tabs; such changes introduce unverified noise that can bias your results later during robustness testing and self-verification.

    Research Ideas

    • Analyzing and comparing the environmental benefits of different agricultural management practices, such as crop yields and carbon sequestration rates.
    • Developing an app or other mobile platform to help farmers find management practices that maximize carbon sequestration and minimize GHG emissions in their area, based on their specific soil condition and climate data.
    • Building an AI-driven model to predict net greenhouse gas emissions and C sequestration from potential weekly/monthly production plans across different regions in the world, based on optimal allocation of resources such as fertilizers, equipment, water etc

    Acknowledgements

    If you use this dataset in your research, please credit the original authors. Data Source

    License

    License: CC0 1.0 Universal (CC0 1.0) - Public Domain Dedication No Copyright - You can copy, modify, distribute and perform the ...

12. Ensemble calculations of "ETR" from EURO-CORDEX data for Europe

    • data-staging.niaid.nih.gov
    • data.niaid.nih.gov
    • +2more
    Updated Jan 31, 2020
    + more versions
    Cite
    Robert Goler (2020). Ensemble calculations of "ETR" from EURO-CORDEX data for Europe [Dataset]. https://data-staging.niaid.nih.gov/resources?id=zenodo_3632591
    Explore at:
    Dataset updated
    Jan 31, 2020
    Dataset provided by
    ZAMG
    Authors
    Robert Goler
    License

    Attribution 4.0 (CC BY 4.0)https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Climate Index: Extreme temperature range (ETR)

    Definition: Greatest daily extreme temperature range (Tmax-Tmin).
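The definition above can be computed directly from paired daily Tmax/Tmin series. A minimal sketch with synthetic values (not taken from the dataset):

```python
import numpy as np

# Daily maximum and minimum near-surface temperatures (degrees C) for a few
# days at a single grid cell (synthetic values for illustration).
tmax = np.array([12.4, 15.1, 9.8, 21.3, 18.0])
tmin = np.array([3.2, 5.0, -1.1, 8.7, 6.5])

# ETR: greatest daily extreme temperature range, max(Tmax - Tmin).
etr = np.max(tmax - tmin)
print(round(float(etr), 2))  # 12.6
```

In the dataset, this maximum is taken over each climate period per grid cell, and the ensemble mean and standard deviation are then computed across the model simulations listed below.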

    Additional information: The dataset is based on an ensemble of EURO-CORDEX model simulations of daily near-surface maximum temperature. All ensemble members are bias-corrected against the gridded daily observational dataset E-OBS.

    Results (ensemble mean and standard deviation) are available for historical (1971-2000) and future (2011-2040, 2041-2070, 2071-2100) climate periods and for the representative concentration pathways RCP2.6, RCP4.5 and RCP8.5.

    The bias-corrected EURO-CORDEX climate model simulations used are:

    CLMcom-CCLM4-8-17/ICHEC-EC-EARTH, CLMcom-CCLM4-8-17/MOHC-HadGEM2-ES

    DMI-HIRHAM5/ICHEC-EC-EARTH

    KNMI-RACMO22E/ICHEC-EC-EARTH, KNMI-RACMO22E/MOHC-HadGEM2-ES

    SMHI-RCA4/ICHEC-EC-EARTH, SMHI-RCA4/MOHC-HadGEM2-ES

13. Data from: Home range and movement metric data of resident and translocated...

    • catalog.data.gov
    • data.usgs.gov
    • +1more
    Updated Oct 2, 2025
    + more versions
    Cite
    U.S. Geological Survey (2025). Home range and movement metric data of resident and translocated Giant gartersnakes (Thamnophis gigas) in Sacramento, CA, USA, in 2018 to 2021 [Dataset]. https://catalog.data.gov/dataset/home-range-and-movement-metric-data-of-resident-and-translocated-giant-gartersnakes-thamno
    Explore at:
    Dataset updated
    Oct 2, 2025
    Dataset provided by
    U.S. Geological Survey
    Area covered
    United States, California, Sacramento
    Description

The dataset consists of two csv files: one for home range and net displacement analysis for adult snakes with at least 20 locations collected during the study period, and one for the analysis of movement metrics for all adult snakes in the study. The home range data contain the calculated 100 percent and 95 percent minimum convex polygon (MCP) and 95 percent adaptive local convex hull (a-LoCoH) home range estimates, three measures of net displacement from the release location of the snake, and other pertinent information about individual snakes (year included in study, id, treatment group, site, snout-to-vent length (SVL)). The movement data contain calculations of the following movement metrics: sinuosity, start-to-end distance, and total distance traversed of seasonal movement paths, as well as the information about individual snakes described above.
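A 100 percent MCP estimate like the ones in this dataset is simply the area of the convex hull of all telemetry fixes. A minimal sketch with hypothetical projected coordinates in metres (not the authors' code, which likely used an R package such as adehabitatHR):

```python
# 100 percent minimum convex polygon (MCP) home range: the area of the
# convex hull of all telemetry fixes.

def convex_hull(points):
    """Andrew's monotone chain; returns hull vertices in order."""
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts
    def cross(o, a, b):
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])
    lower, upper = [], []
    for p in pts:
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    return lower[:-1] + upper[:-1]

def mcp_area(points):
    """Shoelace area of the convex hull (the 100% MCP estimate)."""
    hull = convex_hull(points)
    return 0.5 * abs(sum(x0 * y1 - x1 * y0
                         for (x0, y0), (x1, y1) in zip(hull, hull[1:] + hull[:1])))

# Hypothetical fixes (projected coordinates, metres).
fixes = [(0, 0), (100, 0), (100, 100), (0, 100), (50, 50)]
print(mcp_area(fixes))  # 10000.0 square metres
```

The 95 percent variant drops the most outlying fixes before taking the hull; a-LoCoH instead unions small local hulls, which this sketch does not cover.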

  14. NIST Stopping-Power & Range Tables for Electrons, Protons, and Helium Ions -...

    • catalog.data.gov
    • data.amerigeoss.org
    • +1more
    Updated Sep 30, 2025
    Cite
    National Institute of Standards and Technology (2025). NIST Stopping-Power & Range Tables for Electrons, Protons, and Helium Ions - SRD 124 [Dataset]. https://catalog.data.gov/dataset/nist-stopping-power-range-tables-for-electrons-protons-and-helium-ions-srd-124
    Explore at:
    Dataset updated
    Sep 30, 2025
    Dataset provided by
    National Institute of Standards and Technologyhttp://www.nist.gov/
    Description

    The databases ESTAR, PSTAR, and ASTAR calculate stopping-power and range tables for electrons, protons, or helium ions. Stopping-power and range tables can be calculated for electrons in any user-specified material and for protons and helium ions in 74 materials.

15. Data from: Range size, local abundance and effect inform species...

    • catalog.data.gov
    • agdatacommons.nal.usda.gov
    Updated Dec 2, 2025
    Cite
    Agricultural Research Service (2025). Data from: Range size, local abundance and effect inform species descriptions at scales relevant for local conservation practice [Dataset]. https://catalog.data.gov/dataset/data-from-range-size-local-abundance-and-effect-inform-species-descriptions-at-scales-rele-108d2
    Explore at:
    Dataset updated
    Dec 2, 2025
    Dataset provided by
    Agricultural Research Service
    Description

Understanding species abundances and distributions, especially at local to landscape scales, is critical for land managers and conservationists to prioritize management decisions and informs the effort and expense that may be required. The metrics of range size and local abundance reflect aspects of the biology and ecology of a given species, and together with its per capita (or per unit area) effects on other members of the community comprise a well-accepted theoretical paradigm describing invasive species. Although these metrics are readily calculated from vegetation monitoring data, they have not generally (and effect in particular) been applied to native species. We describe how metrics defining invasions may be more broadly applied to both native and invasive species in vegetation management, supporting their relevance to local scales of species conservation and management. We then use a sample monitoring dataset to compare range size, local abundance and effect as well as summary calculations of landscape penetration (range size × local abundance) and impact (landscape penetration × effect) for native and invasive species in the mixed-grass plant community of western North Dakota, USA. This paper uses these summary statistics to quantify the impact for 13 of 56 commonly encountered species, with statistical support for effects of 6 of the 13 species. Our results agree with knowledge of invasion severity and natural history of native species in the region. We contend that when managers are using invasion metrics in monitoring, extending them to common native species is biologically and ecologically informative, with little additional investment.

    Resources in this dataset:

    • Supporting Data (xlsx). File Name: Espeland-Sylvain-BiodivConserv-2019-raw-data.xlsx. Description: Occurrence data per quadrangle, site, and transect. Species codes and habitat identifiers are defined in a separate sheet.
    • Data Dictionary. File Name: Espeland-Sylvain-BiodivConserv-2019-data-dictionary.csv. Description: Details species and habitat codes for the abundance data collected.
    • Supporting Data (csv). File Name: Espeland-Sylvain-BiodivConserv-2019-raw-data.csv. Description: Occurrence data per quadrangle, site, and transect.
    • Supplementary Table S1.1. File Name: 10531_2019_1701_MOESM1_ESM.docx. Description: Scientific name, common name, life history group, family, status (N = native, I = introduced), percent of plots present, and average cover when present of 56 vascular plant species recorded in 1196 undisturbed plots in federally-managed grasslands of western North Dakota. Life history groups: C3 = cool season perennial grass, C4 = warm season perennial grass, SE = sedge, SH = shrub, PF = perennial forb, BF = biennial forb, APF = annual, biennial, or perennial forb.
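The summary calculations described in the abstract (landscape penetration = range size × local abundance; impact = landscape penetration × effect) can be sketched directly. All species values below are assumed for illustration, not taken from the dataset:

```python
# Sketch of the invasion metrics described above, with assumed example
# values: range size as the fraction of plots occupied, local abundance
# as mean percent cover when present, and effect as a per-unit-area
# coefficient estimated from community data.
species = {
    #                      (range_size, local_abundance, effect) -- hypothetical
    "Bromus_inermis":      (0.42, 18.5, 0.9),
    "Pascopyrum_smithii":  (0.65, 12.0, 0.2),
}

for name, (range_size, abundance, effect) in species.items():
    penetration = range_size * abundance   # landscape penetration
    impact = penetration * effect          # impact
    print(f"{name}: penetration={penetration:.2f}, impact={impact:.2f}")
```

The point of the paper is that these products are as easy to compute for common native species as for invasives once monitoring data are in hand.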

  16. Estimated stand-off distance between ADS-B equipped aircraft and obstacles

    • zenodo.org
    • data.niaid.nih.gov
    • +1more
    jpeg, zip
    Updated Jul 12, 2024
    Cite
    Andrew Weinert; Andrew Weinert (2024). Estimated stand-off distance between ADS-B equipped aircraft and obstacles [Dataset]. http://doi.org/10.5281/zenodo.7741273
    Explore at:
    zip, jpegAvailable download formats
    Dataset updated
    Jul 12, 2024
    Dataset provided by
    Zenodohttp://zenodo.org/
    Authors
    Andrew Weinert; Andrew Weinert
    License

    Attribution-NonCommercial-NoDerivs 4.0 (CC BY-NC-ND 4.0)https://creativecommons.org/licenses/by-nc-nd/4.0/
    License information was derived automatically

    Description

    Summary:

    Estimated stand-off distance between ADS-B equipped aircraft and obstacles. Obstacle information was sourced from the FAA Digital Obstacle File and the FHWA National Bridge Inventory. Aircraft tracks were sourced from processed data curated from the OpenSky Network. Results are presented as histograms organized by aircraft type and distance away from runways.

    Description:

For many aviation safety studies, aircraft behavior is represented using encounter models, which are statistical models of how aircraft behave during close encounters. They are used to provide a realistic representation of the range of encounter flight dynamics where an aircraft collision avoidance system would be likely to alert. These models have historically been limited to interactions between aircraft; they have not represented interactions between obstacles and transponder-equipped aircraft. In response, we calculated the standoff distance between obstacles and ADS-B equipped manned aircraft.

For robustness, MIT LL calculated the standoff distance using two different datasets of manned aircraft tracks and two datasets of obstacles. This approach aligned with the foundational research used to support the ASTM F3442/F3442M-20 well clear criteria of 2000 feet laterally and 250 feet AGL vertically.

Both datasets of processed tracks of ADS-B equipped aircraft were curated from the OpenSky Network. It is likely that rotorcraft were underrepresented in these datasets. There were also no considerations for aircraft equipped only with Mode C or not equipped with any transponder. The first dataset, referred to as the “Monday” dataset, was used to train the v1.3 uncorrelated encounter models and consisted of 104 Mondays across North America. The second dataset, referred to as the “aerodrome” dataset, was used to train the v2.0 and v3.x terminal encounter models and was based on observations within 8 nautical miles of Class B, C, and D aerodromes in the United States for the first 14 days of each month from January 2019 through February 2020. Prior to any processing, the datasets required 714 and 847 gigabytes of storage, respectively. For more details on these datasets, please refer to "Correlated Bayesian Model of Aircraft Encounters in the Terminal Area Given a Straight Takeoff or Landing" and "Benchmarking the Processing of Aircraft Tracks with Triples Mode and Self-Scheduling."

    Two different datasets of obstacles were also considered. First was point obstacles defined by the FAA digital obstacle file (DOF) and consisted of point obstacle structures of antenna, lighthouse, meteorological tower (met), monument, sign, silo, spire (steeple), stack (chimney; industrial smokestack), transmission line tower (t-l tower), tank (water; fuel), tramway, utility pole (telephone pole, or pole of similar height, supporting wires), windmill (wind turbine), and windsock. Each obstacle was represented by a cylinder with the height reported by the DOF and a radius based on the report horizontal accuracy. We did not consider the actual width and height of the structure itself. Additionally, we only considered obstacles at least 50 feet tall and marked as verified in the DOF.

The other obstacle dataset, termed “bridges,” was based on the bridges identified in the FAA DOF and additional information provided by the National Bridge Inventory (NBI). Due to the potential size and extent of bridges, it would not be appropriate to model them as point obstacles; however, the FAA DOF provides only a point location and no information about the size of a bridge. In response, we correlated the FAA DOF with the National Bridge Inventory, which provides the length of many bridges. Instead of sizing the simulated bridge based on horizontal accuracy, as with the point obstacles, each bridge was represented as a circle with a radius of the longest, nearest bridge from the NBI. A circle representation was required because neither the FAA DOF nor the NBI provided sufficient information about orientation to represent bridges as rectangular cuboids. As with the point obstacles, the height of the obstacle was based on the height reported by the FAA DOF. Accordingly, the analysis using the bridge dataset should be viewed as risk averse and conservative: it is possible that a manned aircraft was actually hundreds of feet away from an obstacle while the estimated standoff distance was significantly less. Additionally, because all obstacles are represented with a fixed height, the potentially flat, low-level entrances of a bridge are assumed to have the same height as its tall towers. The attached figure illustrates an example simulated bridge.

It would have been extremely computationally inefficient to calculate the standoff distance for all possible track points. Instead, we define an encounter between an aircraft and an obstacle as an aircraft flying 3069 feet AGL or less coming within 3000 feet laterally of any obstacle in a 60-second time interval. If these criteria were satisfied, then for that 60-second track segment we calculated the standoff distance to all nearby obstacles. Vertical separation was based on the MSL altitude of the track and the maximum MSL height of an obstacle.
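The encounter screen just described can be sketched as follows. The thresholds come from the text; the function names, coordinate conventions, and example values are illustrative, not the authors' implementation:

```python
import math

# Encounter screen: an aircraft at or below 3069 ft AGL that comes within
# 3000 ft laterally of an obstacle triggers a standoff-distance calculation.
LATERAL_THRESHOLD_FT = 3000.0
AGL_THRESHOLD_FT = 3069.0

def lateral_distance_ft(track_xy, obstacle_xy):
    """Euclidean lateral distance between projected positions (feet)."""
    return math.dist(track_xy, obstacle_xy)

def is_encounter(track_agl_ft, track_xy, obstacle_xy):
    """True if the track point satisfies the encounter criteria."""
    return (track_agl_ft <= AGL_THRESHOLD_FT
            and lateral_distance_ft(track_xy, obstacle_xy) <= LATERAL_THRESHOLD_FT)

def standoff_ft(track_msl_ft, track_xy, obstacle_msl_ft, obstacle_xy):
    """Lateral and vertical standoff; vertical separation uses track MSL
    altitude against the maximum MSL height of the obstacle."""
    return (lateral_distance_ft(track_xy, obstacle_xy),
            track_msl_ft - obstacle_msl_ft)

print(is_encounter(500.0, (0.0, 0.0), (2000.0, 1000.0)))  # True
```

In the actual analysis this test is applied per 60-second track segment against all nearby obstacles rather than point by point.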

    For each combination of aircraft track and obstacle datasets, the results were organized seven different ways. Filtering criteria were based on aircraft type and distance away from runways. Runway data was sourced from the FAA runways of the United States, Puerto Rico, and Virgin Islands open dataset. Aircraft type was identified as part of the em-processing-opensky workflow.

    • All: No filter, all observations that satisfied encounter conditions
    • nearRunway: Aircraft within or at 2 nautical miles of a runway
    • awayRunway: Observations more than 2 nautical miles from a runway
    • glider: Observations when aircraft type is a glider
    • fwme: Observations when aircraft type is a fixed-wing multi-engine
    • fwse: Observations when aircraft type is a fixed-wing single engine
    • rotorcraft: Observations when aircraft type is a rotorcraft

    License

    This dataset is licensed under Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International(CC BY-NC-ND 4.0).

    This license requires that reusers give credit to the creator. It allows reusers to copy and distribute the material in any medium or format in unadapted form and for noncommercial purposes only. Only noncommercial use of your work is permitted. Noncommercial means not primarily intended for or directed towards commercial advantage or monetary compensation. Exceptions are given for the not for profit standards organizations of ASTM International and RTCA.

MIT is releasing this dataset in good faith to promote open and transparent research of the low altitude airspace. Given the limitations of the dataset and the need for more research, a more restrictive license was warranted. Namely, the dataset is based only on observations of ADS-B equipped aircraft, which not all aircraft in the airspace are required to employ, and the observations were sourced from a crowdsourced network whose surveillance coverage has not been robustly characterized.

    As more research is conducted and the low altitude airspace is further characterized or regulated, it is expected that a future version of this dataset may have a more permissive license.

    Distribution Statement

    DISTRIBUTION STATEMENT A. Approved for public release. Distribution is unlimited.

© 2021 Massachusetts Institute of Technology.

    Delivered to the U.S. Government with Unlimited Rights, as defined in DFARS Part 252.227-7013 or 7014 (Feb 2014). Notwithstanding any copyright notice, U.S. Government rights in this work are defined by DFARS 252.227-7013 or DFARS 252.227-7014 as detailed above. Use of this work other than as specifically authorized by the U.S. Government may violate any copyrights that exist in this work.

    This material is based upon work supported by the Federal Aviation Administration under Air Force Contract No. FA8702-15-D-0001. Any opinions, findings, conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the Federal Aviation Administration.

    This document is derived from work done for the FAA (and possibly others); it is not the direct product of work done for the FAA. The information provided herein may include content supplied by third parties. Although the data and information contained herein has been produced or processed from sources believed to be reliable, the Federal Aviation Administration makes no warranty, expressed or implied, regarding the accuracy, adequacy, completeness, legality, reliability or usefulness of any information, conclusions or recommendations provided herein. Distribution of the information contained herein does not constitute an endorsement or warranty of the data or information provided herein by the Federal Aviation Administration or the U.S. Department of Transportation. Neither the Federal Aviation Administration nor the U.S. Department of

17. Dataset for "Anion-polarisation–directed short-range-order in antiperovskite...

    • researchdata.bath.ac.uk
    zip
    Updated Apr 12, 2023
    Cite
    Samuel Coles; Alex Squires; Kit McColl; Saiful Islam; Benjamin Morgan (2023). Dataset for "Anion-polarisation–directed short-range-order in antiperovskite Li2FeSO" [Dataset]. http://doi.org/10.15125/BATH-01260
    Explore at:
    zipAvailable download formats
    Dataset updated
    Apr 12, 2023
    Dataset provided by
    University of Bath
    Authors
    Samuel Coles; Alex Squires; Kit McColl; Saiful Islam; Benjamin Morgan
    License

    Attribution 4.0 (CC BY 4.0)https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Dataset funded by
    Royal Societyhttp://royalsociety.org/
    Science and Technology Facilities Councilhttps://stfc.ukri.org/
    European Research Council
    Diamond Light Source
    Faraday Institution
    Engineering and Physical Sciences Research Council
    Description

This dataset contains the DFT calculations used in the paper "Anion-polarisation–directed short-range-order in antiperovskite Li2FeSO". It consists of three constituent parts: cluster expansion training calculations, relaxation of large shells, and Wannierisation analysis of structures of interest. These calculations feed into an additional Python workflow, which is located on GitHub and associated with the DOI 10.5281/zenodo.7828910 (see data link below).

18. Data from: Geothermal Life Cycle Calculator

    • catalog.data.gov
    • data.openei.org
    • +5more
    Updated Jan 20, 2025
    + more versions
    Cite
    Argonne National Laboratory (2025). Geothermal Life Cycle Calculator [Dataset]. https://catalog.data.gov/dataset/geothermal-life-cycle-calculator-0d282
    Explore at:
    Dataset updated
    Jan 20, 2025
    Dataset provided by
    Argonne National Laboratory
    Description

    This calculator is a handy tool for interested parties to estimate two key life cycle metrics, fossil energy consumption (Etot) and greenhouse gas emission (ghgtot) ratios, for geothermal electric power production. It is based solely on data developed by Argonne National Laboratory for DOE's Geothermal Technologies office. The calculator permits the user to explore the impact of a range of key geothermal power production parameters, including plant capacity, lifetime, capacity factor, geothermal technology, well numbers and depths, field exploration, and others on the two metrics just mentioned. Estimates of variations in the results are also available to the user.

19. Definitions and calculations of temperature extremes, based on the Expert...

    • figshare.com
    xls
    Updated Jun 25, 2025
    Cite
    Sally Jahn; Keith Fraser; Katy A. M. Gaythorpe; Caroline M. Wainwright; Neil M. Ferguson (2025). Definitions and calculations of temperature extremes, based on the Expert Team on Climate Change Detection and Indices (ETCCDI), and of bioclimatic variables (BCVs). ETCCDI calculations utilized daily maximum (TX) and minimum (TN) temperature timeseries; therefore, only data sets with daily resolution were selected (excluding Climatic Research Unit Time-Series data set, CRUTS). BCVs were calculated from monthly climatological averages and hence include all data sets. Indices and variables were computed using the global gridded temperature data sets at their native spatial resolutions as well as on a common 0.5∘ grid to account for uncertainties across different data sets. [Dataset]. http://doi.org/10.1371/journal.pclm.0000601.t002
    Explore at:
    xlsAvailable download formats
    Dataset updated
    Jun 25, 2025
    Dataset provided by
    PLOS Climate
    Authors
    Sally Jahn; Keith Fraser; Katy A. M. Gaythorpe; Caroline M. Wainwright; Neil M. Ferguson
    License

    Attribution 4.0 (CC BY 4.0)https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Definitions and calculations of temperature extremes, based on the Expert Team on Climate Change Detection and Indices (ETCCDI), and of bioclimatic variables (BCVs). ETCCDI calculations utilized daily maximum (TX) and minimum (TN) temperature timeseries; therefore, only data sets with daily resolution were selected (excluding Climatic Research Unit Time-Series data set, CRUTS). BCVs were calculated from monthly climatological averages and hence include all data sets. Indices and variables were computed using the global gridded temperature data sets at their native spatial resolutions as well as on a common 0.5∘ grid to account for uncertainties across different data sets.

  20. GLAS/ICESat L1B Global Waveform-based Range Corrections Data (HDF5) V034 -...

    • data.nasa.gov
    Updated Mar 31, 2025
    Cite
    nasa.gov (2025). GLAS/ICESat L1B Global Waveform-based Range Corrections Data (HDF5) V034 - Dataset - NASA Open Data Portal [Dataset]. https://data.nasa.gov/dataset/glas-icesat-l1b-global-waveform-based-range-corrections-data-hdf5-v034
    Explore at:
    Dataset updated
    Mar 31, 2025
    Dataset provided by
    NASAhttp://nasa.gov/
    Description

    GLAH05 Level-1B waveform parameterization data include output parameters from the waveform characterization procedure and other parameters required to calculate surface slope and relief characteristics. GLAH05 contains parameterizations of both the transmitted and received pulses and other characteristics from which elevation and footprint-scale roughness and slope are calculated. The received pulse characterization uses two implementations of the retracking algorithms: one tuned for ice sheets, called the standard parameterization, used to calculate surface elevation for ice sheets, oceans, and sea ice; and another for land (the alternative parameterization). Each data granule has an associated browse product.
