100+ datasets found
  1. Unemployment Insurance Benefit Accuracy Measurement (BAM) Data

    • catalog.data.gov
    • s.cnmilf.com
    Updated Apr 18, 2024
    + more versions
    Cite
    Employment and Training Administration (2024). Unemployment Insurance Benefit Accuracy Measurement (BAM) Data [Dataset]. https://catalog.data.gov/dataset/unemployment-insurance-benefit-accuracy-measurement-bam-data
    Dataset updated
    Apr 18, 2024
    Dataset provided by
    Employment and Training Administration (https://www.dol.gov/agencies/eta)
    Description

    This dataset includes the historical series of sample Unemployment Insurance (UI) data collected through the Benefit Accuracy Measurement (BAM) program. BAM is a statistical survey used to identify and support the resolution of deficiencies in state UI systems, as well as to estimate state UI improper payments for reporting to DOL as required by the Improper Payments Information Act (IPIA) and the Improper Payments Elimination and Recovery Act (IPERA). BAM is also used to identify the root causes of improper payments and supports other analyses conducted by DOL to highlight improper payment prevention strategies and measure progress in meeting improper payment reduction targets.
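
    As a rough illustration of the kind of estimate a BAM-style sample supports, the sketch below computes a simple dollar-weighted improper payment rate from a handful of made-up sampled claims; it is a generic ratio for illustration only, not DOL's published estimation methodology.

```python
# Generic illustration (not DOL's estimator): with a BAM-style random sample of paid
# claims, an improper payment rate can be approximated as improper dollars over total
# paid dollars in the sample. All figures below are made up.
sampled_claims = [
    {"paid_usd": 400.0, "improper_usd": 0.0},
    {"paid_usd": 350.0, "improper_usd": 350.0},   # e.g. ineligible claimant
    {"paid_usd": 500.0, "improper_usd": 120.0},   # e.g. partial overpayment
]
total_paid = sum(c["paid_usd"] for c in sampled_claims)
total_improper = sum(c["improper_usd"] for c in sampled_claims)
print(f"estimated improper payment rate: {total_improper / total_paid:.1%}")
```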

  2. Replication data for: A New Multinomial Accuracy Measure for Polling Bias

    • dataverse.harvard.edu
    Updated Oct 1, 2014
    Cite
    Kai Arzheimer; Jocelyn Evans (2014). Replication data for: A New Multinomial Accuracy Measure for Polling Bias [Dataset]. http://doi.org/10.7910/DVN/1V0FCS
    Explore at: Croissant (a format for machine-learning datasets; learn more at mlcommons.org/croissant)
    Dataset updated
    Oct 1, 2014
    Dataset provided by
    Harvard Dataverse
    Authors
    Kai Arzheimer; Jocelyn Evans
    License

    CC0 1.0 Universal Public Domain Dedication (https://creativecommons.org/publicdomain/zero/1.0/)
    License information was derived automatically

    Description

    Replication data for our manuscript "A New Multinomial Accuracy Measure for Polling Bias". We provide data and Stata scripts to replicate our analysis of survey bias in French pre-election polls, as well as code to replicate our simulations of the properties of our measures B and B_w.

  3. The relation of error/accuracy measures and data properties.

    • figshare.com
    xls
    Updated Jun 19, 2023
    Cite
    Jin Li (2023). The relation of error/accuracy measures and data properties. [Dataset]. http://doi.org/10.1371/journal.pone.0183250.t002
    Explore at: xls (available download format)
    Dataset updated
    Jun 19, 2023
    Dataset provided by
    PLOS (http://plos.org/)
    Authors
    Jin Li
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    The relation of error/accuracy measures and data properties.

  4. Success measurement in using AI-driven personalization worldwide 2023

    • statista.com
    Updated Nov 25, 2025
    Cite
    Statista (2025). Success measurement in using AI-driven personalization worldwide 2023 [Dataset]. https://www.statista.com/statistics/1415821/success-measurement-in-using-ai-driven-personalization-worldwide-2023/
    Dataset updated
    Nov 25, 2025
    Dataset authored and provided by
    Statista (http://statista.com/)
    Time period covered
    Mar 8, 2023 - Mar 24, 2023
    Area covered
    Worldwide
    Description

    When asked how their company measured effectiveness in using artificial-intelligence-driven personalization, ** percent of global business leaders identified data accuracy as the foremost criterion. Following closely behind were the speed of real-time data and customer retention or repeat purchases, each mentioned by ** percent of the respondents. Subsequently, ** percent identified time-saving for the business as another indicator of success.

  5. Replication Data for: Measuring precision precisely: A Dictionary-Based...

    • dataverse.harvard.edu
    • search.dataone.org
    Updated Sep 14, 2022
    Cite
    Markus Gastinger; Henning Schmidtke (2022). Replication Data for: Measuring precision precisely: A Dictionary-Based Measure of Imprecision [Dataset]. http://doi.org/10.7910/DVN/2DACNY
    Explore at: Croissant (a format for machine-learning datasets; learn more at mlcommons.org/croissant)
    Dataset updated
    Sep 14, 2022
    Dataset provided by
    Harvard Dataverse
    Authors
    Markus Gastinger; Henning Schmidtke
    License

    CC0 1.0 Universal Public Domain Dedication (https://creativecommons.org/publicdomain/zero/1.0/)
    License information was derived automatically

    Description

    Abstract: How can we measure and explain the precision of international organizations’ (IOs) founding treaties? We define precision by its negative – imprecision – as indeterminate language that intentionally leaves a wide margin of interpretation for actors after agreements enter into force. Compiling a “dictionary of imprecision” from almost 500 scholarly contributions and leveraging the insight from linguists that a single vague word renders the whole sentence vague, we introduce a dictionary-based measure of imprecision (DIMI) that is replicable, applicable to all written documents, and yields a continuous measure bound between zero and one. To demonstrate that DIMI usefully complements existing approaches and advances the study of (im-)precision, we apply it to a sample of 76 IOs. Our descriptive results show high face validity and closely track previous characterizations of these IOs. Finally, we explore patterns in the data, expecting that imprecision in IO treaties increases with the number of states, power asymmetries, and the delegation of authority, while it decreases with the pooling of authority. In a sample of major IOs, we find robust empirical support for the power asymmetries and delegation propositions. Overall, DIMI provides exciting new avenues to study precision in International Relations and beyond. The files uploaded contain the material necessary to replicate the results from the article and online appendix published in: Gastinger, M. and Schmidtke, H. (2022) ‘Measuring precision precisely: A dictionary-based measure of imprecision’, The Review of International Organizations, available at DOI: 10.1007/s11558-022-09476-y. Please let us know if you spot any mistakes or if we may be of any further assistance!
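
    To make the bounded, sentence-level logic concrete, here is a minimal sketch of a dictionary-based imprecision score in the spirit of DIMI: the share of sentences containing at least one term from a vague-word dictionary, which keeps the result between zero and one. The tiny term list and the sentence splitter are placeholders, not the authors' published dictionary or replication code.

```python
# Illustrative dictionary-based imprecision score: the fraction of sentences that
# contain at least one "vague" term, so the score is bounded between 0 and 1.
# The term list below is a placeholder, not the authors' dictionary.
import re

VAGUE_TERMS = {"may", "appropriate", "reasonable", "adequate", "endeavour"}

def imprecision_score(text: str) -> float:
    sentences = [s for s in re.split(r"(?<=[.;!?])\s+", text.strip()) if s]
    if not sentences:
        return 0.0
    def is_vague(sentence: str) -> bool:
        words = set(re.findall(r"[a-z]+", sentence.lower()))
        return bool(words & VAGUE_TERMS)
    return sum(is_vague(s) for s in sentences) / len(sentences)

print(imprecision_score(
    "Members shall notify the depositary within 30 days. "
    "Parties may take appropriate measures where reasonable."
))  # 0.5: one of the two sentences contains a vague term
```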

  6. A Systematic Investigation of Accuracy and Response Time Based Measures Used...

    • plos.figshare.com
    xlsx
    Updated Jun 1, 2023
    Cite
    Julia Felicitas Dietrich; Stefan Huber; Elise Klein; Klaus Willmes; Silvia Pixner; Korbinian Moeller (2023). A Systematic Investigation of Accuracy and Response Time Based Measures Used to Index ANS Acuity [Dataset]. http://doi.org/10.1371/journal.pone.0163076
    Explore at: xlsx (available download format)
    Dataset updated
    Jun 1, 2023
    Dataset provided by
    PLOS (http://plos.org/)
    Authors
    Julia Felicitas Dietrich; Stefan Huber; Elise Klein; Klaus Willmes; Silvia Pixner; Korbinian Moeller
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    The approximate number system (ANS) was proposed to be a building block for later mathematical abilities. Several measures have been used interchangeably to assess ANS acuity. Some of these measures were based on accuracy data, whereas others relied on response time (RT) data or combined accuracy and RT data. Previous studies challenged the view that all these measures can be used interchangeably, because low correlations between some of the measures had been observed. These low correlations might be due to poor reliability of some of the measures, since the majority of these measures are mathematically related. Here we systematically investigated the relationship between common ANS measures while avoiding the potential confound of poor reliability. Our first experiment revealed high correlations between all accuracy based measures supporting the assumption that all of them can be used interchangeably. In contrast, not all RT based measures were highly correlated. Additionally, our results revealed a speed-accuracy trade-off. Thus, accuracy and RT based measures provided conflicting conclusions regarding ANS acuity. Therefore, we investigated in two further experiments which type of measure (accuracy or RT) is more informative about the underlying ANS acuity, depending on participants’ preferences for accuracy or speed. To this end, we manipulated participants’ preferences for accuracy or speed both explicitly using different task instructions and implicitly varying presentation duration. Accuracy based measures were more informative about the underlying ANS acuity than RT based measures. Moreover, the influence of the underlying representations on accuracy data was more pronounced when participants preferred accuracy over speed after the accuracy instruction as well as for long or unlimited presentation durations. Implications regarding the diffusion model as a theoretical framework of dot comparison as well as regarding the relationship between ANS acuity and math performance are discussed.
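
    For readers who want to see what an accuracy-based versus an RT-based index looks like in practice, the sketch below computes two generic examples (overall percent correct and the slope of response time on numerical ratio) from simulated dot-comparison trials. The simulation and the choice of indices are illustrative and are not the study's analysis code.

```python
# Illustrative sketch (not the study's code): one accuracy-based and one RT-based
# index of dot-comparison performance, computed from simulated trial-level data.
import numpy as np

rng = np.random.default_rng(0)
ratio = rng.uniform(1.1, 2.0, size=400)            # larger/smaller dot-count ratio per trial
p_correct = 1 / (1 + np.exp(-4 * (ratio - 1.2)))   # easier (larger-ratio) trials -> more accurate
correct = rng.random(400) < p_correct
rt = 0.9 - 0.25 * (ratio - 1.1) + rng.normal(0, 0.08, 400)  # easier trials -> faster

percent_correct = correct.mean()          # accuracy-based measure
rt_slope = np.polyfit(ratio, rt, 1)[0]    # RT-based measure: ratio-effect slope

print(f"percent correct = {percent_correct:.2f}, "
      f"RT ratio-effect slope = {rt_slope:.3f} s per unit ratio")
```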

  7. National Residential Efficiency Measures Database (REMDB)

    • catalog.data.gov
    • data.openei.org
    • +2more
    Updated Mar 8, 2025
    Cite
    National Renewable Energy Lab - NREL (2025). National Residential Efficiency Measures Database (REMDB) [Dataset]. https://catalog.data.gov/dataset/national-residential-efficiency-measures-database-remdb
    Dataset updated
    Mar 8, 2025
    Dataset provided by
    National Renewable Energy Lab - NREL
    Description

    This project provides a national unified database of residential building retrofit measures and the associated retail prices an end-user might experience. These data are accessible to software programs that evaluate the most cost-effective retrofit measures for improving the energy efficiency of residential buildings, and are used in the consumer-facing website https://remdb.nrel.gov/. This publicly accessible, centralized database of retrofit measures offers the following benefits:
    • Provides information in a standardized format
    • Improves the technical consistency and accuracy of the results of software programs
    • Enables experts and stakeholders to view the retrofit information and provide comments to improve data quality
    • Supports building science R&D
    • Enhances transparency
    This database provides full price estimates for many different retrofit measures. For each measure, the database provides a range of prices, as the data for a measure can vary widely across regions, houses, and contractors. Climate, construction, home features, local economy, maturity of a market, and geographic location are some of the factors that may affect the actual price of these measures. This database is not intended to provide specific cost estimates for a specific project. The cost estimates do not include any rebates or tax incentives that may be available for the measures. Rather, it is meant to help determine which measures may be more cost-effective. The National Renewable Energy Laboratory (NREL) makes every effort to ensure accuracy of the data; however, NREL does not assume any legal liability or responsibility for the accuracy or completeness of the information.
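
    As a hedged illustration of how per-measure price ranges might feed a cost-effectiveness screen, the sketch below ranks a few made-up measures by simple payback. The measure names, column names, and savings figures are hypothetical and do not reflect the actual REMDB schema or prices.

```python
# Minimal sketch: screen retrofit measures by simple payback using a price range per
# measure. Table contents and column names are hypothetical, not the REMDB schema.
import pandas as pd

measures = pd.DataFrame({
    "measure": ["Attic insulation R-38", "Air sealing", "Heat pump water heater"],
    "price_low_usd": [1200, 300, 1500],
    "price_high_usd": [2400, 900, 3200],
    "est_annual_savings_usd": [180, 90, 320],   # hypothetical savings
})
measures["price_mid_usd"] = measures[["price_low_usd", "price_high_usd"]].mean(axis=1)
measures["simple_payback_yr"] = measures["price_mid_usd"] / measures["est_annual_savings_usd"]
print(measures.sort_values("simple_payback_yr")[["measure", "price_mid_usd", "simple_payback_yr"]])
```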

  8. Data from: Evaluation of Peak Picking Quality in LC−MS Metabolomics Data

    • acs.figshare.com
    zip
    Updated Jun 4, 2023
    Cite
    Leonid Brodsky; Arieh Moussaieff; Nir Shahaf; Asaph Aharoni; Ilana Rogachev (2023). Evaluation of Peak Picking Quality in LC−MS Metabolomics Data [Dataset]. http://doi.org/10.1021/ac101216e.s003
    Explore at: zip (available download format)
    Dataset updated
    Jun 4, 2023
    Dataset provided by
    ACS Publications
    Authors
    Leonid Brodsky; Arieh Moussaieff; Nir Shahaf; Asaph Aharoni; Ilana Rogachev
    License

    Attribution-NonCommercial 4.0 (CC BY-NC 4.0): https://creativecommons.org/licenses/by-nc/4.0/
    License information was derived automatically

    Description

    The output of LC−MS metabolomics experiments consists of mass-peak intensities identified through a peak-picking/alignment procedure. Besides imperfections in biological samples and instrumentation, data accuracy is highly dependent on the applied algorithms and their parameters. Consequently, quality control (QC) is essential for further data analysis. Here, we present a QC approach that is based on discrepancies between replicate samples. First, the quantile normalization of per-sample log-signal distributions is applied to each group of biologically homogeneous samples. Next, the overall quality of each replicate group is characterized by the Z-transformed correlation coefficients between samples. This general QC allows a tuning of the procedure’s parameters which minimizes the inter-replicate discrepancies in the generated output. Subsequently, an in-depth QC measure detects local neighborhoods on a template of aligned chromatograms that are enriched by divergences between intensity profiles of replicate samples. These neighborhoods are determined through a segmentation algorithm. The retention time (RT)−m/z positions of the neighborhoods with local divergences are indicative of either: incorrect alignment of chromatographic features, technical problems in the chromatograms, or to a true biological discrepancy between replicates for particular metabolites. We expect this method to aid in the accurate analysis of metabolomics data and in the development of new peak-picking/alignment procedures.
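
    A minimal sketch of the first two QC steps described above, under simplifying assumptions: quantile-normalize per-sample log intensities within one biologically homogeneous replicate group, then summarize inter-replicate agreement with Fisher z-transformed pairwise correlations. This is a generic re-implementation for illustration, not the authors' code.

```python
# Sketch: quantile normalization within a replicate group, then a group-quality
# summary as the mean Fisher z of pairwise inter-replicate correlations.
import numpy as np

def quantile_normalize(log_intensity: np.ndarray) -> np.ndarray:
    """log_intensity: peaks x samples matrix of log-signals for one replicate group."""
    ranks = np.argsort(np.argsort(log_intensity, axis=0), axis=0)
    reference = np.sort(log_intensity, axis=0).mean(axis=1)   # mean sorted distribution
    return reference[ranks]

def replicate_quality(log_intensity: np.ndarray) -> float:
    """Mean Fisher z of pairwise Pearson correlations between replicate samples."""
    corr = np.corrcoef(log_intensity.T)
    upper = corr[np.triu_indices_from(corr, k=1)]
    return float(np.arctanh(np.clip(upper, -0.999999, 0.999999)).mean())

rng = np.random.default_rng(1)
true_profile = rng.normal(10, 2, size=(500, 1))              # 500 peaks
group = true_profile + rng.normal(0, 0.3, size=(500, 4))     # 4 replicate samples
normalized = quantile_normalize(group)
print(f"mean Fisher-z inter-replicate correlation: {replicate_quality(normalized):.2f}")
```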

  9. Fortnite Player Performance

    • kaggle.com
    zip
    Updated Dec 6, 2022
    Cite
    The Devastator (2022). Fortnite Player Performance [Dataset]. https://www.kaggle.com/datasets/thedevastator/unlocking-fortnite-player-performance-with-88-ga
    Explore at: zip (2654 bytes; available download format)
    Dataset updated
    Dec 6, 2022
    Authors
    The Devastator
    Description

    Fortnite Player Performance

    Understanding Player Performance with Granular Data

    By Kristian Reynolds [source]

    About this dataset

    This dataset contains 88 end-game Fortnite statistics, giving a comprehensive look at player performance over the course of 80 games. Discover the time of day, date, mental state and more that contribute to winning strategies! Measure success across eliminations, assists, revives, accuracy percentage, hits scored and headshots landed. Explore distance traveled and materials gathered or used to gauge efficiency while playing. Examine damage taken versus damage dealt to other players and structures alike. Use this data to reveal peak performance trends in Fortnite gameplay.


    How to use the dataset

    This dataset is a great resource for analyzing and tracking the performance of Fortnite players. It contains 88 end game stats that provide insights into player performance, such as eliminations, assists and revives. This dataset can help you gain a better understanding of your own performance or another player’s overall effectiveness in the game.

    • Analyzing Performance: This dataset can be used to analyze your own or other players’ overall performance in Fortnite across multiple games by looking at statistics like eliminations, assists, revives and head shots (by looking at comparisons between different games).
    • Tracking Performance: The dataset also has valuable data that enables you to track any changes in performance over time since it includes data on when the games were played (Date) as well as when they ended (Time of Day). This can be used to measure progress or stagnation in your play over time by comparing different stats like accuracy and distance traveled per game.
    • Improving Performance: By combining this data with other information about gear and character builds, one can look for patterns between successful playstyles across multiple matches, or build an optimal loadout for a particular playstyle and see what works best for the intended approach.

    Research Ideas

    • Using this dataset to develop player performance indicators that can be used to compare players across games. The indicators can measure each player's ability in terms of eliminations, assists, headshot accuracy, and other data points.
    • Establishing correlations between the mental state and performance level of a player by analyzing how their stats vary before and after playing under different mental states.
    • Analyzing the relationship between overall game performance (such as placement) and specific statistics (such as materials gathered or damage taken). This could provide useful insights into which aspects of gameplay matter most for high-level play in Fortnite.

    Acknowledgements

    If you use this dataset in your research, please credit the original authors.

    License

    License: Dataset copyright by authors.
    You are free to:
    • Share: copy and redistribute the material in any medium or format for any purpose, even commercially.
    • Adapt: remix, transform, and build upon the material for any purpose, even commercially.
    You must:
    • Give appropriate credit: provide a link to the license, and indicate if changes were made.
    • ShareAlike: distribute your contributions under the same license as the original.
    • Keep intact: all notices that refer to this license, including copyright notices.

    Columns

    File: Fortnite Statistics.csv

    | Column name | Description |
    |:------------|:------------|
    | Date | Date of the game. (Date) |
    | Time of Day | Time of day the game was played. (Time) |
    | Placed | Player's placement in the game. (Integer) |
    | Mental State | Player's mental state during the game. (String) |
    | Eliminations | Number of eliminations the player achieved. (Integer) |
    | Assists | Number of assists the player achieved. (Integer) |
    | Revives | Number of revives the player achieved. (Integer) |
    | Accuracy | Player's accuracy in the game. (Float) |
    | Hits ... | |
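
    A minimal loading sketch, assuming the CSV name given in the column listing above and the columns described there; whether Accuracy is stored as a fraction or a percentage is not specified, so the aggregation is shown as-is.

```python
# Load the per-game statistics file named above and compare mean accuracy,
# eliminations, and placement by reported mental state.
import pandas as pd

games = pd.read_csv("Fortnite Statistics.csv", parse_dates=["Date"])
summary = (games
           .groupby("Mental State")[["Accuracy", "Eliminations", "Placed"]]
           .mean()
           .sort_values("Accuracy", ascending=False))
print(summary)
```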

  10. Precision Source Measure Unit Report

    • datainsightsmarket.com
    doc, pdf, ppt
    Updated Feb 13, 2025
    Cite
    Data Insights Market (2025). Precision Source Measure Unit Report [Dataset]. https://www.datainsightsmarket.com/reports/precision-source-measure-unit-1441871
    Explore at: ppt, doc, pdf (available download formats)
    Dataset updated
    Feb 13, 2025
    Dataset authored and provided by
    Data Insights Market
    License

    https://www.datainsightsmarket.com/privacy-policy

    Time period covered
    2025 - 2033
    Area covered
    Global
    Variables measured
    Market Size
    Description

    The Precision Source Measure Unit market was valued at USD XXX million in 2024 and is projected to reach USD XXX million by 2033, with an expected CAGR of XX% during the forecast period.

  11. Healthcare Data Quality Tools Market Research Report 2033

    • dataintelo.com
    csv, pdf, pptx
    Updated Sep 30, 2025
    Cite
    Dataintelo (2025). Healthcare Data Quality Tools Market Research Report 2033 [Dataset]. https://dataintelo.com/report/healthcare-data-quality-tools-market
    Explore at: csv, pptx, pdf (available download formats)
    Dataset updated
    Sep 30, 2025
    Dataset authored and provided by
    Dataintelo
    License

    https://dataintelo.com/privacy-and-policy

    Time period covered
    2024 - 2032
    Area covered
    Global
    Description

    Healthcare Data Quality Tools Market Outlook



    According to our latest research, the global healthcare data quality tools market size reached USD 1.82 billion in 2024. The market is expected to exhibit a strong compound annual growth rate (CAGR) of 16.9% from 2025 to 2033, driven by the increasing digitization of healthcare systems, regulatory mandates, and the rising emphasis on data-driven decision-making in healthcare. By 2033, the market is forecasted to achieve a value of USD 7.13 billion. This robust expansion is primarily fueled by the growing need for accurate, complete, and reliable health data to improve patient outcomes, streamline operations, and ensure compliance with evolving healthcare regulations.




    The healthcare data quality tools market is experiencing significant growth due to the surging adoption of electronic health records (EHRs) and the rapid digital transformation within the healthcare sector. As healthcare organizations increasingly transition from paper-based systems to digital platforms, the volume and complexity of healthcare data have grown exponentially. This shift has amplified the need for data quality tools that can cleanse, standardize, and validate large datasets, ensuring that critical clinical and administrative decisions are based on accurate and consistent information. The integration of advanced analytics and artificial intelligence (AI) in healthcare data management further accelerates the demand for robust data quality solutions, enabling organizations to unlock actionable insights from their data assets.




    Another key growth factor for the healthcare data quality tools market is the stringent regulatory environment governing healthcare data management. Regulatory bodies such as HIPAA in the United States and GDPR in Europe have established strict guidelines for data privacy, security, and accuracy, compelling healthcare organizations to invest in tools that ensure compliance. Non-compliance can result in severe penalties and reputational damage, making data quality management a top priority. Additionally, the increasing adoption of value-based care models and the emphasis on population health management require high-quality data to track patient outcomes, measure performance, and optimize resource allocation. This regulatory and operational landscape is driving sustained investments in healthcare data quality tools globally.




    The proliferation of connected medical devices, telemedicine platforms, and health information exchanges has further contributed to the complexity of healthcare data ecosystems. These advancements generate vast amounts of structured and unstructured data from diverse sources, including patient records, imaging systems, wearable devices, and administrative databases. Ensuring the interoperability and consistency of such heterogeneous data is a significant challenge, necessitating advanced data quality tools that can handle multiple data types and formats. As healthcare organizations strive to harness the full potential of big data and predictive analytics, the importance of data quality tools in enabling reliable and actionable insights cannot be overstated.




    From a regional perspective, North America currently dominates the healthcare data quality tools market, accounting for the largest revenue share in 2024. The region’s leadership is attributed to its advanced healthcare IT infrastructure, high adoption of EHRs, and strong regulatory frameworks. However, Asia Pacific is expected to register the fastest growth during the forecast period, supported by increasing healthcare digitization, government initiatives to modernize healthcare systems, and rising investments in health IT. Europe also remains a significant market, driven by stringent data protection regulations and the widespread implementation of digital health initiatives across the region.



    Component Analysis



    The healthcare data quality tools market by component is broadly segmented into software and services. The software segment comprises standalone and integrated solutions designed to automate data cleansing, profiling, integration, enrichment, and monitoring processes within healthcare organizations. These solutions are increasingly incorporating advanced technologies such as artificial intelligence, machine learning, and natural language processing to enhance data accuracy and streamline workflows. The growing need to manage large volumes of healthcare data efficiently and the rising

  12. Good Growth Plan 2014-2019 - Kenya

    • microdata.worldbank.org
    • datacatalog.ihsn.org
    • +1more
    Updated Jan 27, 2023
    + more versions
    Cite
    Syngenta (2023). Good Growth Plan 2014-2019 - Kenya [Dataset]. https://microdata.worldbank.org/index.php/catalog/5635
    Dataset updated
    Jan 27, 2023
    Dataset authored and provided by
    Syngenta
    Time period covered
    2014 - 2019
    Area covered
    Kenya
    Description

    Abstract

    Syngenta is committed to increasing crop productivity and to using limited resources such as land, water and inputs more efficiently. Since 2014, Syngenta has been measuring trends in agricultural input efficiency on a global network of real farms. The Good Growth Plan dataset shows aggregated productivity and resource efficiency indicators by harvest year. The data has been collected from more than 4,000 farms and covers more than 20 different crops in 46 countries. The data (except USA data and for Barley in UK, Germany, Poland, Czech Republic, France and Spain) was collected, consolidated and reported by Kynetec (previously Market Probe), an independent market research agency. It can be used as benchmarks for crop yield and input efficiency.

    Geographic coverage

    National coverage

    Analysis unit

    Agricultural holdings

    Kind of data

    Sample survey data [ssd]

    Sampling procedure

    A. Sample design Farms are grouped in clusters, which represent a crop grown in an area with homogeneous agro-ecological conditions and include comparable types of farms. The sample includes reference and benchmark farms. The reference farms were selected by Syngenta and the benchmark farms were randomly selected by Kynetec within the same cluster.

    B. Sample size Sample sizes for each cluster are determined with the aim to measure statistically significant increases in crop efficiency over time. This is done by Kynetec based on target productivity increases and assumptions regarding the variability of farm metrics in each cluster. The smaller the expected increase, the larger the sample size needed to measure significant differences over time. Variability within clusters is assumed based on public research and expert opinion. In addition, growers are also grouped in clusters as a means of keeping variances under control, as well as distinguishing between growers in terms of crop size, region and technological level. A minimum sample size of 20 interviews per cluster is needed. The minimum number of reference farms is 5 of 20. The optimal number of reference farms is 10 of 20 (balanced sample).

    C. Selection procedure The respondents were picked randomly using a “quota based random sampling” procedure. Growers were first randomly selected and then checked if they complied with the quotas for crops, region, farm size, etc. To avoid clustering a high number of interviews at one sampling point, interviewers were instructed to do a maximum of 5 interviews in one village.

    BF screened from Kenya were selected based on the following criteria:

    (a) Smallholder potato growers
    Location: Gwakiongo, Ol njororok, Wanjohi, Molo
    Background: Open field potatoes; RF: flood or drip irrigation; BF: no irrigation
    • Ploughing with a tractor or manually (e.g. with a hoe)
    • Usage of chemical and/or organic fertilizers
    • Selling the harvest is the main after-harvest activity

    (b) Smallholder tomato growers
    Location: Kitengela
    Background: Open field tomatoes; flood or drip irrigation
    • Ploughing with a tractor or manually (e.g. with a hoe, a slasher)
    • Usage of chemical and/or organic fertilizers
    • Selling the harvest is the main after-harvest activity

    Mode of data collection

    Face-to-face [f2f]

    Research instrument

    Data collection tool for 2019 covered the following information:

    (A) PRE- HARVEST INFORMATION

    PART I: Screening
    PART II: Contact Information
    PART III: Farm Characteristics
      a. Biodiversity conservation
      b. Soil conservation
      c. Soil erosion
      d. Description of growing area
      e. Training on crop cultivation and safety measures
    PART IV: Farming Practices - Before Harvest
      a. Planting and fruit development - Field crops
      b. Planting and fruit development - Tree crops
      c. Planting and fruit development - Sugarcane
      d. Planting and fruit development - Cauliflower
      e. Seed treatment

    (B) HARVEST INFORMATION

    PART V: Farming Practices - After Harvest
      a. Fertilizer usage
      b. Crop protection products
      c. Harvest timing & quality per crop - Field crops
      d. Harvest timing & quality per crop - Tree crops
      e. Harvest timing & quality per crop - Sugarcane
      f. Harvest timing & quality per crop - Banana
      g. After harvest
    PART VI: Other inputs - After Harvest
      a. Input costs
      b. Abiotic stress
      c. Irrigation

    See all questionnaires in external materials tab

    Cleaning operations

    Data processing:

    Kynetec uses SPSS (Statistical Package for the Social Sciences) for data entry, cleaning, analysis, and reporting. After collection, the farm data is entered into a local database, reviewed, and quality-checked by the local Kynetec agency. In the case of missing values or inconsistencies, farmers are re-contacted. In some cases, grower data is verified with local experts (e.g. retailers) to ensure data accuracy and validity. After country-level cleaning, the farm-level data is submitted to the global Kynetec headquarters for processing. In the case of missing values or inconsistencies, the local Kynetec office is re-contacted to clarify and resolve issues.

    Quality assurance Various consistency checks and internal controls are implemented throughout the entire data collection and reporting process in order to ensure unbiased, high quality data.

    • Screening: Each grower is screened and selected by Kynetec based on cluster-specific criteria to ensure a comparable group of growers within each cluster. This helps keep variability low.

    • Evaluation of the questionnaire: The questionnaire aligns with the global objective of the project and is adapted to the local context (e.g. interviewers and growers should understand what is asked). Each year the questionnaire is evaluated based on several criteria, and updated where needed.

    • Briefing of interviewers: Each year, local interviewers, familiar with the local context of farming, are thoroughly briefed to fully comprehend the questionnaire and obtain unbiased, accurate answers from respondents.

    • Cross-validation of the answers:
      o Kynetec captures all growers' responses through a digital data-entry tool. Various logical and consistency checks are automated in this tool (e.g. total crop size in hectares cannot be larger than farm size); a minimal sketch of such a check follows this list.
      o Kynetec cross-validates the answers of the growers in three different ways: 1. within the grower (check if growers respond consistently during the interview); 2. across years (check if growers respond consistently throughout the years); 3. within cluster (compare a grower's responses with those of others in the group).
      o All the above-mentioned inconsistencies are followed up by contacting the growers and asking them to verify their answers. The data is updated after verification. All updates are tracked.

    • Check and discuss evolutions and patterns: Global evolutions are calculated, discussed and reviewed on a monthly basis jointly by Kynetec and Syngenta.

    • Sensitivity analysis: sensitivity analysis is conducted to evaluate the global results in terms of outliers, retention rates and overall statistical robustness. The results of the sensitivity analysis are discussed jointly by Kynetec and Syngenta.

    • It is recommended that users interested in using the administrative level 1 variable in the location dataset use this variable with care and crosscheck it with the postal code variable.
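
    As referenced in the cross-validation item above, here is a minimal sketch of that kind of automated logical check (total crop area cannot exceed farm size) applied to a farm-level table. The column names and values are hypothetical, not Kynetec's actual schema.

```python
# Flag farms whose total reported crop area exceeds the reported farm size,
# mirroring the automated consistency check described above. Hypothetical data.
import pandas as pd

farms = pd.DataFrame({
    "farm_id": [1, 2, 3],
    "farm_size_ha": [12.0, 5.5, 8.0],
    "total_crop_size_ha": [10.0, 6.2, 8.0],   # farm 2 violates the rule
})
violations = farms[farms["total_crop_size_ha"] > farms["farm_size_ha"]]
print(violations[["farm_id", "farm_size_ha", "total_crop_size_ha"]])  # flagged for grower follow-up
```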

    Data appraisal

    Due to the above mentioned checks, irregularities in fertilizer usage data were discovered which had to be corrected:

    For data collection wave 2014, respondents were asked to give a total estimate of the fertilizer NPK rates that were applied in the fields. From 2015 onwards, the questionnaire was redesigned to be more precise and obtain data by individual fertilizer products. The new method of measuring fertilizer inputs leads to more accurate results, but also makes a year-on-year comparison difficult. After evaluating several solutions to this problem, 2014 fertilizer usage (NPK input) was re-estimated by calculating a weighted average of fertilizer usage in the following years.
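
    The re-estimation rule described above amounts to a weighted average over later waves. The sketch below shows the arithmetic with made-up NPK values and weights, since the documentation does not state the weights actually used.

```python
# Re-estimate the 2014 NPK input as a weighted average of later waves.
# Values and weights are illustrative placeholders, not Kynetec's figures.
npk_by_year = {2015: 96.0, 2016: 102.0, 2017: 99.0}   # hypothetical kg NPK/ha
weights = {2015: 0.5, 2016: 0.3, 2017: 0.2}           # hypothetical weights summing to 1

npk_2014_reestimated = sum(npk_by_year[y] * weights[y] for y in npk_by_year)
print(f"re-estimated 2014 NPK input: {npk_2014_reestimated:.1f} kg/ha")
```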

  13. Clean-Room Ad Measurement Market Research Report 2033

    • growthmarketreports.com
    csv, pdf, pptx
    Updated Aug 4, 2025
    Cite
    Growth Market Reports (2025). Clean-Room Ad Measurement Market Research Report 2033 [Dataset]. https://growthmarketreports.com/report/clean-room-ad-measurement-market
    Explore at: pdf, csv, pptx (available download formats)
    Dataset updated
    Aug 4, 2025
    Dataset authored and provided by
    Growth Market Reports
    Time period covered
    2024 - 2032
    Area covered
    Global
    Description

    Clean-Room Ad Measurement Market Outlook



    According to our latest research, the global Clean-Room Ad Measurement market size stood at USD 1.24 billion in 2024, with a robust compound annual growth rate (CAGR) of 18.7% anticipated through the forecast period. By 2033, the market is projected to reach an impressive USD 6.47 billion, reflecting the surging demand for privacy-compliant advertising analytics and the evolving digital marketing landscape. The market’s growth is primarily fueled by stringent data privacy regulations, the phasing out of third-party cookies, and the increasing sophistication of cross-platform marketing strategies.




    One of the pivotal growth drivers for the Clean-Room Ad Measurement market is the global shift towards enhanced data privacy and compliance with regulations such as GDPR in Europe and CCPA in the United States. As digital advertising ecosystems become more complex and fragmented, advertisers and publishers are under mounting pressure to ensure user privacy while still deriving actionable insights from campaign data. Clean-room solutions offer a secure environment where multiple parties can collaborate on aggregated, anonymized data, enabling accurate measurement without exposing personally identifiable information. This capability is especially critical as major browsers and platforms eliminate third-party cookies, forcing marketers to seek new ways to measure campaign performance and audience engagement in a privacy-first world.




    Another significant factor propelling market expansion is the increasing demand for advanced analytics and attribution models. Brands and agencies are investing heavily in cross-platform measurement tools to understand the holistic impact of their campaigns across digital, mobile, and traditional media channels. Clean-room ad measurement platforms facilitate granular audience segmentation, campaign optimization, and multi-touch attribution analysis, empowering marketers to maximize return on advertising spend (ROAS) while complying with privacy mandates. The integration of artificial intelligence and machine learning into clean-room environments further enhances the accuracy and scalability of these insights, driving adoption among large enterprises and small to medium-sized businesses alike.




    Moreover, the proliferation of walled gardens—closed ecosystems operated by major digital platforms such as Google, Facebook, and Amazon—has heightened the need for neutral, interoperable measurement solutions. Clean-room ad measurement bridges the gap between disparate data sources, allowing advertisers, publishers, and agencies to collaborate securely and transparently. This collaborative approach not only improves trust and accountability in ad measurement but also supports the development of industry-wide standards for data sharing and analytics. As the digital advertising sector continues to evolve, the demand for interoperable, privacy-centric measurement solutions is expected to accelerate, further bolstering the growth of the clean-room ad measurement market.




    From a regional perspective, North America remains the dominant market, accounting for the largest share of global revenues in 2024, driven by the presence of leading technology providers, a mature digital advertising ecosystem, and proactive regulatory frameworks. Europe is rapidly catching up, spurred by strict privacy laws and increasing adoption among media and entertainment companies. The Asia Pacific region is poised for the fastest growth, fueled by burgeoning digital economies, rising internet penetration, and the rapid digital transformation of retail, BFSI, and healthcare sectors. Latin America and the Middle East & Africa are also witnessing steady adoption, although market maturity and infrastructure challenges persist. Overall, the global clean-room ad measurement market is characterized by dynamic regional trends, with each geography presenting unique opportunities and challenges for stakeholders.





    Component Analysis



    The C

  14. Precision Values for PSI 90 and Component Measures

    • johnsnowlabs.com
    csv
    Updated Jan 20, 2021
    + more versions
    Cite
    John Snow Labs (2021). Precision Values for PSI 90 and Component Measures [Dataset]. https://www.johnsnowlabs.com/marketplace/precision-values-for-psi-90-and-component-measures/
    Explore at: csv (available download format)
    Dataset updated
    Jan 20, 2021
    Dataset authored and provided by
    John Snow Labs
    Time period covered
    2019 - 2023
    Area covered
    United States
    Description

    This dataset includes the Patient Safety and Adverse Events measure (PSI-90) and the individual patient safety indicators. PSI-90 is a composite surgical complication measure composed of ten patient safety indicators. The measure provides an overview of hospital-level quality as it relates to a set of potentially preventable hospital-related events associated with harmful outcomes for patients. This data set reports values to six-digit precision.

  15. Revenue Generated by Measure ULA

    • catalog.data.gov
    • data.lacity.org
    Updated Nov 8, 2025
    Cite
    data.lacity.org (2025). Revenue Generated by Measure ULA [Dataset]. https://catalog.data.gov/dataset/revenue-generated-by-measure-ula
    Dataset updated
    Nov 8, 2025
    Dataset provided by
    data.lacity.org
    Description

    Disclaimer: PLEASE READ THIS AGREEMENT CAREFULLY BEFORE USING THIS DATA SET. BY USING THIS DATA SET, YOU ARE CONSENTING TO BE OBLIGATED AND BECOME A PARTY TO THIS AGREEMENT. IF YOU DO NOT AGREE TO THE TERMS AND CONDITIONS BELOW YOU SHOULD NOT ACCESS OR USE THIS DATA SET. This data set is presented as a public service that provides Internet accessibility to information provided by the City of Los Angeles and to other City, State, and Federal information. Due to the dynamic nature of the information contained within this data set and the data set’s reliance on information from outside sources, the City of Los Angeles does not guarantee the accuracy or reliability of the information transmitted from this data set. This data set and all materials contained on it are distributed and transmitted on an “as is” and “as available” basis without any warranties of any kind, whether expressed or implied, including without limitation, warranties of title or implied warranties of merchantability or fitness for a particular purpose. The City of Los Angeles is not responsible for any special, indirect, incidental, punitive, or consequential damages that may arise from the use of, or the inability to use the data set and/or materials contained on the data set, or that result from mistakes, omissions, interruptions, deletion of files, errors, defects, delays in operation, or transmission, or any failure of performance, whether the material is provided by the City of Los Angeles or a third-party. The City of Los Angeles reserves the right to modify, update, or alter these Terms and Conditions of use at any time. Your continued use of this Site constitutes your agreement to comply with such modifications. The information provided on this data set, and its links to other related web sites, are provided as a courtesy to our web site visitors only, and are in no manner an endorsement, recommendation, or approval of any person, any product, or any service contained on any other web site.

    Description: Monthly revenue generated by conveyances of real property over $5 million, from when applicable transfer tax collection began on April 1, 2023 to present. Consistent with the ULA ordinance, the property sale value thresholds and their corresponding tax rates will be adjusted annually based on the Bureau of Labor Statistics Chained Consumer Price Index.
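
    The annual adjustment described above can be read as scaling the sale-value thresholds by the change in the Chained CPI. The sketch below shows that arithmetic with hypothetical index values; it is an illustration of proportional indexing, not the City's official calculation.

```python
# Scale the $5 million sale-value threshold by the change in the Chained CPI.
# The index values are hypothetical placeholders, not published figures.
base_threshold = 5_000_000                 # USD, threshold named in the description
cpi_base, cpi_current = 170.0, 176.8       # hypothetical Chained CPI index values

adjusted_threshold = base_threshold * (cpi_current / cpi_base)
print(f"adjusted threshold: ${adjusted_threshold:,.0f}")
```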

  16. Data from: The accuracy of length measurements made using imaging SONAR is...

    • datadryad.org
    zip
    Updated Mar 18, 2024
    Cite
    Iain Parnum; Benjamin Saunders; Melanie Stott; Travis Elsdon; Michael Marnane; Euan Harvey (2024). The accuracy of length measurements made using imaging SONAR is inversely proportional to the beam width [Dataset]. http://doi.org/10.5061/dryad.dfn2z358w
    Explore at: zip (available download format)
    Dataset updated
    Mar 18, 2024
    Dataset provided by
    Dryad
    Authors
    Iain Parnum; Benjamin Saunders; Melanie Stott; Travis Elsdon; Michael Marnane; Euan Harvey
    Time period covered
    Feb 21, 2024
    Description

    The accuracy of length measurements made using imaging SONAR is inversely proportional to the beam width

    Iain M. Parnum1*, Benjamin J. Saunders 2, Melanie Stott2, Travis S Elsdon3,2, Michael J Marnane3, Euan S. Harvey2

    1 Centre for Marine Science and Technology, Curtin University, Bentley, 6102, Western Australia, Australia

    2 School of Molecular and Life Sciences, Curtin University, Bentley, 6102, Western Australia, Australia

    3 Chevron Technical Centre, 250 St Georges Tce, Perth, 6000, Western Australia

    Citation for this dataset: https://doi.org/10.5061/dryad.dfn2z358w

    Abstract

    In this study, Blueprint Oculus imaging SONAR systems with four different (centre) frequencies (750 kHz, 1.2 MHz, 2.1 MHz, and 3 MHz) were used to measure the length of three targets (10.5 cm, 19.5 cm, and 100.5 cm) underwater, at ranges between 1 m and 15.5 m, in order to investigate the effect of beam geometry on measurement error.
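
    As a back-of-the-envelope illustration of why a wider beam degrades length accuracy, the sketch below computes the cross-range footprint R·θ that a beam of angular width θ covers at range R, which sets the scale of smearing at a target's ends. The per-frequency beam widths are hypothetical placeholders, not the Oculus specifications, and the geometry is an illustration rather than the paper's analysis.

```python
# Cross-range footprint of a beam of angular width theta at range R is roughly
# R * theta (theta in radians): wider beams smear target edges over a larger span.
# The beam widths per frequency below are hypothetical, not instrument specs.
import math

beam_width_deg = {750e3: 1.0, 1.2e6: 0.7, 2.1e6: 0.6, 3.0e6: 0.4}
range_m = 10.0

for freq_hz, width_deg in beam_width_deg.items():
    footprint_m = range_m * math.radians(width_deg)
    print(f"{freq_hz/1e6:.2f} MHz: ~{footprint_m*100:.0f} cm footprint at {range_m:.0f} m")
```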

    Description of the data and file structure

    The Bluepr...

  17. Good Growth Plan 2014-2019 - Indonesia

    • microdata.worldbank.org
    • catalog.ihsn.org
    Updated Jan 27, 2023
    + more versions
    Cite
    Syngenta (2023). Good Growth Plan 2014-2019 - Indonesia [Dataset]. https://microdata.worldbank.org/index.php/catalog/5630
    Dataset updated
    Jan 27, 2023
    Dataset authored and provided by
    Syngenta
    Time period covered
    2014 - 2019
    Area covered
    Indonesia
    Description

    Abstract

    Syngenta is committed to increasing crop productivity and to using limited resources such as land, water and inputs more efficiently. Since 2014, Syngenta has been measuring trends in agricultural input efficiency on a global network of real farms. The Good Growth Plan dataset shows aggregated productivity and resource efficiency indicators by harvest year. The data has been collected from more than 4,000 farms and covers more than 20 different crops in 46 countries. The data (except USA data and for Barley in UK, Germany, Poland, Czech Republic, France and Spain) was collected, consolidated and reported by Kynetec (previously Market Probe), an independent market research agency. It can be used as benchmarks for crop yield and input efficiency.

    Geographic coverage

    National coverage

    Analysis unit

    Agricultural holdings

    Kind of data

    Sample survey data [ssd]

    Sampling procedure

    A. Sample design Farms are grouped in clusters, which represent a crop grown in an area with homogeneous agro-ecological conditions and include comparable types of farms. The sample includes reference and benchmark farms. The reference farms were selected by Syngenta and the benchmark farms were randomly selected by Kynetec within the same cluster.

    B. Sample size Sample sizes for each cluster are determined with the aim to measure statistically significant increases in crop efficiency over time. This is done by Kynetec based on target productivity increases and assumptions regarding the variability of farm metrics in each cluster. The smaller the expected increase, the larger the sample size needed to measure significant differences over time. Variability within clusters is assumed based on public research and expert opinion. In addition, growers are also grouped in clusters as a means of keeping variances under control, as well as distinguishing between growers in terms of crop size, region and technological level. A minimum sample size of 20 interviews per cluster is needed. The minimum number of reference farms is 5 of 20. The optimal number of reference farms is 10 of 20 (balanced sample).

    C. Selection procedure The respondents were picked randomly using a “quota based random sampling” procedure. Growers were first randomly selected and then checked if they complied with the quotas for crops, region, farm size, etc. To avoid clustering a high number of interviews at one sampling point, interviewers were instructed to do a maximum of 5 interviews in one village.

    BF screened from Indonesia were selected based on the following criteria: (a) Corn growers in East Java - Location: East Java (Kediri and Probolinggo) and Aceh
    - Innovative (early adopter); Progressive (keen to learn about agronomy and pests; willing to try new technology); Loyal (loyal to technology that can help them)
    - making of technical drain (having irrigation system)
    - marketing network for corn: post-harvest access to market (generally they sell 80% of their harvest)
    - mid-tier (sub-optimal CP/SE use)
    - influenced by fellow farmers and retailers
    - may need longer credit

    (b) Rice growers in West and East Java - Location: West Java (Tasikmalaya), East Java (Kediri), Central Java (Blora, Cilacap, Kebumen), South Lampung
    - The growers are progressive (keen to learn about agronomy and pests; willing to try new technology)
    - Accustomed to using farming equipment and pesticides (keen to learn about agronomy and pests; willing to try new technology) - A long rice cultivating experience in his area (lots of experience in cultivating rice)
    - willing to move forward in order to increase his productivity (same as progressive)
    - have a soil that broad enough for the upcoming project
    - have influence in his group (ability to influence others) - mid-tier (sub-optimal CP/SE use)
    - may need longer credit

    Mode of data collection

    Face-to-face [f2f]

    Research instrument

    Data collection tool for 2019 covered the following information:

    (A) PRE- HARVEST INFORMATION

    PART I: Screening
    PART II: Contact Information
    PART III: Farm Characteristics
      a. Biodiversity conservation
      b. Soil conservation
      c. Soil erosion
      d. Description of growing area
      e. Training on crop cultivation and safety measures
    PART IV: Farming Practices - Before Harvest
      a. Planting and fruit development - Field crops
      b. Planting and fruit development - Tree crops
      c. Planting and fruit development - Sugarcane
      d. Planting and fruit development - Cauliflower
      e. Seed treatment

    (B) HARVEST INFORMATION

    PART V: Farming Practices - After Harvest
      a. Fertilizer usage
      b. Crop protection products
      c. Harvest timing & quality per crop - Field crops
      d. Harvest timing & quality per crop - Tree crops
      e. Harvest timing & quality per crop - Sugarcane
      f. Harvest timing & quality per crop - Banana
      g. After harvest
    PART VI: Other inputs - After Harvest
      a. Input costs
      b. Abiotic stress
      c. Irrigation

    See all questionnaires in external materials tab

    Cleaning operations

    Data processing:

    Kynetec uses SPSS (Statistical Package for the Social Sciences) for data entry, cleaning, analysis, and reporting. After collection, the farm data is entered into a local database, reviewed, and quality-checked by the local Kynetec agency. In the case of missing values or inconsistencies, farmers are re-contacted. In some cases, grower data is verified with local experts (e.g. retailers) to ensure data accuracy and validity. After country-level cleaning, the farm-level data is submitted to the global Kynetec headquarters for processing. In the case of missing values or inconsistencies, the local Kynetec office is re-contacted to clarify and resolve issues.

    Quality assurance Various consistency checks and internal controls are implemented throughout the entire data collection and reporting process in order to ensure unbiased, high quality data.

    • Screening: Each grower is screened and selected by Kynetec based on cluster-specific criteria to ensure a comparable group of growers within each cluster. This helps keep variability low.

    • Evaluation of the questionnaire: The questionnaire aligns with the global objective of the project and is adapted to the local context (e.g. interviewers and growers should understand what is asked). Each year the questionnaire is evaluated based on several criteria, and updated where needed.

    • Briefing of interviewers: Each year, local interviewers, familiar with the local context of farming, are thoroughly briefed to fully comprehend the questionnaire and obtain unbiased, accurate answers from respondents.

    • Cross-validation of the answers:
      o Kynetec captures all growers' responses through a digital data-entry tool. Various logical and consistency checks are automated in this tool (e.g. total crop size in hectares cannot be larger than farm size).
      o Kynetec cross-validates the answers of the growers in three different ways: 1. within the grower (check if growers respond consistently during the interview); 2. across years (check if growers respond consistently throughout the years); 3. within cluster (compare a grower's responses with those of others in the group).
      o All the above-mentioned inconsistencies are followed up by contacting the growers and asking them to verify their answers. The data is updated after verification. All updates are tracked.

    • Check and discuss evolutions and patterns: Global evolutions are calculated, discussed and reviewed on a monthly basis jointly by Kynetec and Syngenta.

    • Sensitivity analysis: sensitivity analysis is conducted to evaluate the global results in terms of outliers, retention rates and overall statistical robustness. The results of the sensitivity analysis are discussed jointly by Kynetec and Syngenta.

    • It is recommended that users interested in using the administrative level 1 variable in the location dataset use this variable with care and crosscheck it with the postal code variable.

    Data appraisal

    Due to the above mentioned checks, irregularities in fertilizer usage data were discovered which had to be corrected:

    For data collection wave 2014, respondents were asked to give a total estimate of the fertilizer NPK rates that were applied in the fields. From 2015 onwards, the questionnaire was redesigned to be more precise and obtain data by individual fertilizer products. The new method of measuring fertilizer inputs leads to more accurate results, but also makes a year-on-year comparison difficult. After evaluating several solutions to this problem, 2014 fertilizer usage (NPK input) was re-estimated by calculating a weighted average of fertilizer usage in the following years.

  18. Data Quality Scorecards Market Research Report 2033

    • growthmarketreports.com
    csv, pdf, pptx
    Updated Oct 4, 2025
    Cite
    Growth Market Reports (2025). Data Quality Scorecards Market Research Report 2033 [Dataset]. https://growthmarketreports.com/report/data-quality-scorecards-market
    Explore at: csv, pptx, pdf (available download formats)
    Dataset updated
    Oct 4, 2025
    Dataset authored and provided by
    Growth Market Reports
    Time period covered
    2024 - 2032
    Area covered
    Global
    Description

    Data Quality Scorecards Market Outlook



    According to our latest research, the global Data Quality Scorecards market size in 2024 stands at USD 1.42 billion, reflecting robust demand across diverse sectors. The market is projected to expand at a CAGR of 14.8% from 2025 to 2033, reaching an estimated USD 4.45 billion by the end of the forecast period. Key growth drivers include the escalating need for reliable data-driven decision-making, stringent regulatory compliance requirements, and the proliferation of digital transformation initiatives across enterprises of all sizes. As per our latest research, organizations are increasingly recognizing the significance of maintaining high data quality standards to fuel analytics, artificial intelligence, and business intelligence capabilities.




    One of the primary growth factors for the Data Quality Scorecards market is the exponential rise in data volumes generated by organizations worldwide. The digital economy has led to a surge in data collection from various sources, including customer interactions, IoT devices, and transactional systems. This data explosion has heightened the complexity of managing and ensuring data accuracy, completeness, and consistency. As a result, businesses are investing in comprehensive data quality management solutions, such as scorecards, to monitor, measure, and improve the quality of their data assets. These tools provide actionable insights, enabling organizations to proactively address data quality issues and maintain data integrity across their operations. The growing reliance on advanced analytics and artificial intelligence further amplifies the demand for high-quality data, making data quality scorecards an indispensable component of modern data management strategies.




    Another significant growth driver is the increasing regulatory scrutiny and compliance requirements imposed on organizations, particularly in industries such as BFSI, healthcare, and government. Regulatory frameworks such as GDPR, HIPAA, and CCPA mandate stringent controls over data accuracy, privacy, and security. Non-compliance can result in severe financial penalties and reputational damage, compelling organizations to adopt robust data quality management practices. Data quality scorecards help organizations monitor compliance by providing real-time visibility into data quality metrics and highlighting areas that require remediation. This proactive approach to compliance not only mitigates regulatory risks but also enhances stakeholder trust and confidence in organizational data assets. The integration of data quality scorecards into enterprise data governance frameworks is becoming a best practice for organizations aiming to achieve continuous compliance and data excellence.




    The rapid adoption of cloud computing and digital transformation initiatives across industries is also fueling the growth of the Data Quality Scorecards market. As organizations migrate their data infrastructure to the cloud and embrace hybrid IT environments, the complexity of managing data quality across disparate systems increases. Cloud-based data quality scorecards offer scalability, flexibility, and ease of deployment, making them an attractive option for organizations seeking to modernize their data management practices. Moreover, the proliferation of self-service analytics and business intelligence tools has democratized data access, necessitating robust data quality monitoring to ensure that decision-makers are working with accurate and reliable information. The convergence of cloud, AI, and data quality management is expected to create new opportunities for innovation and value creation in the market.




    From a regional perspective, North America continues to dominate the Data Quality Scorecards market, driven by the presence of leading technology vendors, high adoption rates of advanced analytics, and stringent regulatory frameworks. However, the Asia Pacific region is expected to witness the fastest growth during the forecast period, fueled by rapid digitalization, increasing investments in IT infrastructure, and growing awareness of data quality management among enterprises. Europe also represents a significant market, characterized by strong regulatory compliance requirements and a mature data management ecosystem. Latin America and the Middle East & Africa are emerging markets, with increasing adoption of data quality solutions in sectors such as BFSI, healthcare, and government. The global market landscape is evolving rapidly, with regional

  19. Good Growth Plan, 2014-2019 - Paraguay

    • microdata.fao.org
    Updated Feb 17, 2021
    + more versions
    Share
    FacebookFacebook
    TwitterTwitter
    Email
    Click to copy link
    Link copied
    Close
    Cite
    Syngenta (2021). Good Growth Plan, 2014-2019 - Paraguay [Dataset]. https://microdata.fao.org/index.php/catalog/1813
    Explore at:
    Dataset updated
    Feb 17, 2021
    Dataset authored and provided by
    Syngenta
    Time period covered
    2014 - 2019
    Area covered
    Paraguay
    Description

    Abstract

    Syngenta is committed to increasing crop productivity and to using limited resources such as land, water and inputs more efficiently. Since 2014, Syngenta has been measuring trends in agricultural input efficiency on a global network of real farms. The Good Growth Plan dataset shows aggregated productivity and resource efficiency indicators by harvest year. The data has been collected from more than 4,000 farms and covers more than 20 different crops in 46 countries. The data (except USA data and for Barley in UK, Germany, Poland, Czech Republic, France and Spain) was collected, consolidated and reported by Kynetec (previously Market Probe), an independent market research agency. It can be used as benchmarks for crop yield and input efficiency.

    Geographic coverage

    National Coverage

    Analysis unit

    Agricultural holdings

    Kind of data

    Sample survey data [ssd]

    Sampling procedure

    A. Sample design Farms are grouped in clusters, which represent a crop grown in an area with homogeneous agro-ecological conditions and include comparable types of farms. The sample includes reference and benchmark farms. The reference farms were selected by Syngenta and the benchmark farms were randomly selected by Kynetec within the same cluster.

    B. Sample size Sample sizes for each cluster are determined with the aim of measuring statistically significant increases in crop efficiency over time. This is done by Kynetec based on target productivity increases and assumptions regarding the variability of farm metrics in each cluster. The smaller the expected increase, the larger the sample size needed to measure significant differences over time. Variability within clusters is assumed based on public research and expert opinion. Growers are also grouped in clusters as a means of keeping variance under control, as well as to distinguish between growers in terms of crop size, region and technological level. A minimum sample size of 20 interviews per cluster is needed. The minimum number of reference farms is 5 of 20; the optimal number is 10 of 20 (a balanced sample).
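
    As an illustration of the sample-size logic described above, the sketch below applies the standard two-sample formula for detecting a mean difference with given power. Kynetec's actual calculation is not documented here, so the expected increases, standard deviation, significance level, and power are assumptions.

```python
# A minimal sketch of the power calculation described above, using the standard
# two-sample formula for detecting a mean difference; the effect sizes and
# variance are assumptions, not Kynetec's published parameters.
from math import ceil
from statistics import NormalDist

def farms_per_cluster(expected_increase: float, std_dev: float,
                      alpha: float = 0.05, power: float = 0.80) -> int:
    """Farms needed per group to detect `expected_increase` with given power."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)
    z_beta = NormalDist().inv_cdf(power)
    n = 2 * ((z_alpha + z_beta) * std_dev / expected_increase) ** 2
    return ceil(n)

# The smaller the expected efficiency increase, the larger the required sample:
print(farms_per_cluster(expected_increase=0.5, std_dev=1.0))  # ~63 farms
print(farms_per_cluster(expected_increase=1.0, std_dev=1.0))  # ~16 farms
```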

    C. Selection procedure The respondents were picked randomly using a “quota based random sampling” procedure. Growers were first randomly selected and then checked for compliance with the quotas for crop, region, farm size, etc. To avoid clustering a high number of interviews at one sampling point, interviewers were instructed to conduct a maximum of 5 interviews per village.
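
    A minimal sketch of such a quota-based random draw with a per-village cap is shown below; the quota keys and grower records are illustrative assumptions, while the cap of 5 interviews per village comes from the description above.

```python
# A minimal sketch of a "quota based random sampling" draw with a per-village
# cap; the quota keys and grower records are illustrative assumptions.
import random

def select_growers(candidates, quotas, max_per_village=5, seed=42):
    """Randomly draw growers that match open quotas, capping interviews per village."""
    rng = random.Random(seed)
    pool = list(candidates)
    rng.shuffle(pool)  # growers are first randomly ordered
    village_counts, selected = {}, []
    for grower in pool:
        profile = (grower["crop"], grower["region"], grower["farm_size"])
        if quotas.get(profile, 0) <= 0:
            continue  # quota for this crop/region/farm-size profile already filled
        if village_counts.get(grower["village"], 0) >= max_per_village:
            continue  # avoid clustering too many interviews at one sampling point
        quotas[profile] -= 1
        village_counts[grower["village"]] = village_counts.get(grower["village"], 0) + 1
        selected.append(grower)
    return selected
```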

    Benchmark farms (BF) screened from Paraguay were selected based on the following criteria:

    (a) Smallholder soybean growers
    - Medium to high technology farms
    - Regions: Hohenau (Itapúa), Edelira (Itapúa), Pirapó (Itapúa), La Paz (Itapúa), Naranjal (Alto Paraná), San Cristóbal (Alto Paraná)
    - Corn and soybean grown in rotation, with corn planted first and soybean second

    (b) Smallholder maize growers
    - Medium to high technology farms
    - Regions: Hohenau (Itapúa), Edelira (Itapúa), Pirapó (Itapúa), La Paz (Itapúa), Naranjal (Alto Paraná), San Cristóbal (Alto Paraná)
    - Corn and soybean grown in rotation, with corn planted first and soybean second

    Mode of data collection

    Face-to-face [f2f]

    Research instrument

    Data collection tool for 2019 covered the following information:

    (A) PRE- HARVEST INFORMATION

    PART I: Screening
    PART II: Contact Information
    PART III: Farm Characteristics
    a. Biodiversity conservation
    b. Soil conservation
    c. Soil erosion
    d. Description of growing area
    e. Training on crop cultivation and safety measures
    PART IV: Farming Practices - Before Harvest
    a. Planting and fruit development - Field crops
    b. Planting and fruit development - Tree crops
    c. Planting and fruit development - Sugarcane
    d. Planting and fruit development - Cauliflower
    e. Seed treatment

    (B) HARVEST INFORMATION

    PART V: Farming Practices - After Harvest
    a. Fertilizer usage
    b. Crop protection products
    c. Harvest timing & quality per crop - Field crops
    d. Harvest timing & quality per crop - Tree crops
    e. Harvest timing & quality per crop - Sugarcane
    f. Harvest timing & quality per crop - Banana
    g. After harvest
    PART VI: Other inputs - After Harvest
    a. Input costs
    b. Abiotic stress
    c. Irrigation

    See all questionnaires in external materials tab

    Cleaning operations

    A. Data processing:

    Kynetec uses SPSS (Statistical Package for the Social Sciences) for data entry, cleaning, analysis, and reporting. After collection, the farm data is entered into a local database, reviewed, and quality-checked by the local Kynetec agency. In the case of missing values or inconsistencies, farmers are re-contacted. In some cases, grower data is verified with local experts (e.g. retailers) to ensure data accuracy and validity. After country-level cleaning, the farm-level data is submitted to the global Kynetec headquarters for processing. In the case of missing values or inconsistencies, the local Kynetec office is re-contacted to clarify and resolve the issues.

    B. Quality assurance: Various consistency checks and internal controls are implemented throughout the data collection and reporting process to ensure unbiased, high-quality data.

    • Screening: Each grower is screened and selected by Kynetec based on cluster-specific criteria to ensure a comparable group of growers within each cluster. This helps keep variability low.

    • Evaluation of the questionnaire: The questionnaire aligns with the global objective of the project and is adapted to the local context (e.g. interviewers and growers should understand what is asked). Each year the questionnaire is evaluated based on several criteria, and updated where needed.

    • Briefing of interviewers: Each year, local interviewers, who are familiar with the local context of farming, are thoroughly briefed on the questionnaire so that they obtain unbiased, accurate answers from respondents.

    • Cross-validation of the answers:

    o Kynetec captures all growers' responses through a digital data-entry tool. Various logical and consistency checks are automated in this tool (e.g. total crop size in hectares cannot be larger than farm size).
    o Kynetec cross-validates the growers' answers in three different ways (a minimal sketch of such checks appears after this list):
      1. Within the grower (check whether growers respond consistently during the interview)
      2. Across years (check whether growers respond consistently across years)
      3. Within the cluster (compare a grower's responses with those of others in the group)

    o All the above-mentioned inconsistencies are followed up by contacting the growers and asking them to verify their answers. The data is updated after verification, and all updates are tracked.

    • Check and discuss trends and patterns: Global trends are calculated, discussed and reviewed monthly, jointly by Kynetec and Syngenta.

    • Sensitivity analysis: sensitivity analysis is conducted to evaluate the global results in terms of outliers, retention rates and overall statistical robustness. The results of the sensitivity analysis are discussed jointly by Kynetec and Syngenta.

    • It is recommended that users interested in using the administrative level 1 variable in the location dataset use this variable with care and crosscheck it with the postal code variable.
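
    The following is a minimal sketch of the kind of automated logical, across-year, and within-cluster checks described in the list above; the field names, thresholds, and flag labels are illustrative assumptions rather than Kynetec's actual rules.

```python
# A minimal sketch of automated consistency checks and cross-validation for
# farm survey responses; column names and thresholds are assumptions.
import pandas as pd

def consistency_flags(responses: pd.DataFrame) -> pd.DataFrame:
    """Flag records that violate simple logical or longitudinal checks."""
    flags = pd.DataFrame(index=responses.index)
    # Logical check within the interview: crop area cannot exceed farm size.
    flags["crop_exceeds_farm"] = responses["crop_area_ha"] > responses["farm_size_ha"]
    # Across-years check: yield changing by more than 50% versus the previous wave.
    flags["yield_jump"] = (
        (responses["yield_t_ha"] - responses["prev_yield_t_ha"]).abs()
        > 0.5 * responses["prev_yield_t_ha"]
    )
    # Within-cluster check: yield more than 3 standard deviations from the cluster mean.
    cluster_mean = responses.groupby("cluster")["yield_t_ha"].transform("mean")
    cluster_std = responses.groupby("cluster")["yield_t_ha"].transform("std")
    flags["cluster_outlier"] = (responses["yield_t_ha"] - cluster_mean).abs() > 3 * cluster_std
    return flags

# Flagged growers would then be re-contacted and their answers verified,
# with all updates tracked, as described above.
```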

    Data appraisal

    Due to the above-mentioned checks, irregularities in the fertilizer usage data were discovered and had to be corrected:

    For the 2014 data collection wave, respondents were asked to give a total estimate of the fertilizer NPK rates applied in the fields. From 2015 onwards, the questionnaire was redesigned to be more precise and to collect data by individual fertilizer product. The new method of measuring fertilizer inputs yields more accurate results, but it also makes year-on-year comparison difficult. After evaluating several solutions to this problem, 2014 fertilizer usage (NPK input) was re-estimated as a weighted average of fertilizer usage in the following years.
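
    A minimal sketch of such a weighted-average re-estimation is shown below; the weights and example NPK rates are illustrative assumptions, since the documentation does not specify the actual weighting scheme used.

```python
# A minimal sketch of re-estimating the 2014 NPK input as a weighted average of
# the later, product-level waves; weights and rates are illustrative assumptions.

def reestimate_2014_npk(npk_by_year, weights):
    """Weighted average of post-redesign NPK rates, used as the 2014 estimate."""
    total_weight = sum(weights[year] for year in npk_by_year)
    return sum(npk_by_year[year] * weights[year] for year in npk_by_year) / total_weight

# Example: later waves weighted towards the years closest to 2014.
observed = {2015: 118.0, 2016: 122.0, 2017: 126.0}   # kg NPK per hectare (illustrative)
weights = {2015: 0.5, 2016: 0.3, 2017: 0.2}
print(round(reestimate_2014_npk(observed, weights), 1))  # 120.8
```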

  20. Measuring the Fear of Crime with Greater Accuracy, 2002

    • datacatalogue.ukdataservice.ac.uk
    Updated May 15, 2003
    Share
    FacebookFacebook
    TwitterTwitter
    Email
    Click to copy link
    Link copied
    Close
    Cite
    Farrall, S., Keele University, Department of Criminology (2003). Measuring the Fear of Crime with Greater Accuracy, 2002 [Dataset]. http://doi.org/10.5255/UKDA-SN-4665-1
    Explore at:
    Dataset updated
    May 15, 2003
    Dataset provided by
    UK Data Servicehttps://ukdataservice.ac.uk/
    Authors
    Farrall, S., Keele University, Department of Criminology
    Area covered
    United Kingdom
    Description

    This research project was an attempt to improve upon the current measures of the fear of crime via the design of new survey questions. The fear of crime is an increasingly important measure of citizens' quality of life. The Department of Transport, Local Government and the Regions has adopted the fear of crime as a 'Best Value Performance Indicator' and many police services and local community safety partnerships aim to reduce the fear of crime, so need reliable measures of fear. Concerns remain, however, as to the most accurate way of measuring fear of crime in surveys of citizens and residents in local areas. A range of methodological issues have been identified by previous research which cumulatively raise the possibility that the fear of crime has been significantly misrepresented. Many commentators suspect that the fear of crime is being exaggerated by survey research, and this project aimed to develop questions that would redress that. The questions developed were piloted and tested in a survey of British citizens, which revealed that far fewer people frequently experienced crime-related anxieties than had previously been thought to be the case.

    A later study on the fear of crime by the same Principal Investigator, based on British Crime Survey data and titled Experience and Expression in the Fear of Crime, 2003-2004, is held at the UK Data Archive under SN 5822.
