100+ datasets found
  1. Confidence Interval Examples

    • figshare.com
    application/cdfv2
    Updated Jun 28, 2016
    Cite
    Emily Rollinson (2016). Confidence Interval Examples [Dataset]. http://doi.org/10.6084/m9.figshare.3466364.v2
    Explore at:
    application/cdfv2 (available download formats)
    Dataset updated
    Jun 28, 2016
    Dataset provided by
    Figshare (http://figshare.com/)
    Authors
    Emily Rollinson
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Examples demonstrating how confidence intervals change depending on the level of confidence (90% versus 95% versus 99%) and on the size of the sample (CI for n=20 versus n=10 versus n=2). Developed for BIO211 (Statistics and Data Analysis: A Conceptual Approach) at Stony Brook University in Fall 2015.
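
    For illustration (the dataset itself is a CDF file, not code), a minimal sketch of the effect described above, using SciPy: the t-based interval widens as the confidence level rises and narrows as the sample grows.

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    population = rng.normal(loc=50, scale=10, size=100_000)

    for n in (20, 10, 2):
        sample = rng.choice(population, size=n, replace=False)
        mean, sem = sample.mean(), stats.sem(sample)
        for conf in (0.90, 0.95, 0.99):
            # t-interval for the mean with n - 1 degrees of freedom
            lo, hi = stats.t.interval(conf, df=n - 1, loc=mean, scale=sem)
            print(f"n={n:2d}, {conf:.0%} CI width: {hi - lo:7.2f}")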

  2. EMS - Response Interval Performance by Fiscal Year

    • catalog.data.gov
    • data.austintexas.gov
    • +3 more
    Updated Oct 25, 2025
    Cite
    data.austintexas.gov (2025). EMS - Response Interval Performance by Fiscal Year [Dataset]. https://catalog.data.gov/dataset/ems-response-interval-performance-by-fiscal-year
    Explore at:
    Dataset updated
    Oct 25, 2025
    Dataset provided by
    data.austintexas.gov
    Description

    This table shows overall ATCEMS response interval performance for entire fiscal years. Data in the table is broken out by incident response priority and service area (City of Austin or Travis County).

  3. Data from: A Statistical Inference Course Based on p-Values

    • figshare.com
    • tandf.figshare.com
    txt
    Updated May 30, 2023
    Cite
    Ryan Martin (2023). A Statistical Inference Course Based on p-Values [Dataset]. http://doi.org/10.6084/m9.figshare.3494549.v2
    Explore at:
    txt (available download formats)
    Dataset updated
    May 30, 2023
    Dataset provided by
    Taylor & Francis
    Authors
    Ryan Martin
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Introductory statistical inference texts and courses treat the point estimation, hypothesis testing, and interval estimation problems separately, with primary emphasis on large-sample approximations. Here, I present an alternative approach to teaching this course, built around p-values, emphasizing provably valid inference for all sample sizes. Details about computation and marginalization are also provided, with several illustrative examples, along with a course outline. Supplementary materials for this article are available online.
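
    The article's own constructions are not reproduced here, but the duality it builds on can be sketched in a few lines: a 95% interval is the set of parameter values that a test does not reject at the 5% level. Below, an exact binomial test is inverted over a grid; the counts are made-up placeholders.

    import numpy as np
    from scipy import stats

    k, n, alpha = 7, 20, 0.05  # 7 successes in 20 trials

    grid = np.linspace(0.001, 0.999, 999)
    pvals = np.array([stats.binomtest(k, n, p).pvalue for p in grid])
    accepted = grid[pvals > alpha]  # parameter values not rejected at level alpha
    print(f"95% interval by test inversion: ({accepted.min():.3f}, {accepted.max():.3f})")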

  4. Data from: Nonparametric inference for interval data using kernel methods

    • tandf.figshare.com
    png
    Updated Aug 31, 2023
    Cite
    Hoyoung Park; Ji Meng Loh; Woncheol Jang (2023). Nonparametric inference for interval data using kernel methods [Dataset]. http://doi.org/10.6084/m9.figshare.21806966.v1
    Explore at:
    png (available download formats)
    Dataset updated
    Aug 31, 2023
    Dataset provided by
    Taylor & Francis
    Authors
    Hoyoung Park; Ji Meng Loh; Woncheol Jang
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Symbolic data have become increasingly popular in the era of big data. In this paper, we consider density estimation and regression for interval-valued data, a special type of symbolic data, common in astronomy and official statistics. We propose kernel estimators with adaptive bandwidths to account for variability of each interval. Specifically, we derive cross-validation bandwidth selectors for density estimation and extend the Nadaraya–Watson estimator for regression with interval data. We assess the performance of the proposed methods in comparison with existing kernel methods by extensive simulation studies and real data analysis.
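
    As a baseline sketch only (the paper's adaptive-bandwidth extension to interval-valued data is not reproduced here), the standard Nadaraya-Watson estimator for point-valued data:

    import numpy as np

    def nadaraya_watson(x_grid, x, y, h):
        """Gaussian-kernel estimate of E[Y | X = x] on x_grid, bandwidth h."""
        # Weight each observation by its kernel distance to the grid point.
        w = np.exp(-0.5 * ((x_grid[:, None] - x[None, :]) / h) ** 2)
        return (w * y).sum(axis=1) / w.sum(axis=1)

    rng = np.random.default_rng(1)
    x = rng.uniform(0, 2 * np.pi, 200)
    y = np.sin(x) + rng.normal(scale=0.3, size=x.size)
    grid = np.linspace(0, 2 * np.pi, 5)
    print(np.round(nadaraya_watson(grid, x, y, h=0.3), 2))  # roughly sin(grid)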

  5. Descriptive statistics for the interval-scaled predictors and dependent...

    • datasetcatalog.nlm.nih.gov
    Updated Aug 17, 2017
    Cite
    Linek, Stephanie B.; Fecher, Benedikt; Friesike, Sascha; Hebing, Marcel (2017). Descriptive statistics for the interval-scaled predictors and dependent variables. [Dataset]. https://datasetcatalog.nlm.nih.gov/dataset?q=0001782823
    Explore at:
    Dataset updated
    Aug 17, 2017
    Authors
    Linek, Stephanie B.; Fecher, Benedikt; Friesike, Sascha; Hebing, Marcel
    Description

    Descriptive statistics for the interval-scaled predictors and dependent variables.

  6. Interval Data Validation and Estimation Tools Market Research Report 2033

    • researchintelo.com
    csv, pdf, pptx
    Updated Oct 1, 2025
    Cite
    Research Intelo (2025). Interval Data Validation and Estimation Tools Market Research Report 2033 [Dataset]. https://researchintelo.com/report/interval-data-validation-and-estimation-tools-market
    Explore at:
    pdf, pptx, csv (available download formats)
    Dataset updated
    Oct 1, 2025
    Dataset authored and provided by
    Research Intelo
    License

    https://researchintelo.com/privacy-and-policy

    Time period covered
    2024 - 2033
    Area covered
    Global
    Description

    Interval Data Validation and Estimation Tools Market Outlook



    According to our latest research, the Global Interval Data Validation and Estimation Tools market size was valued at $1.42 billion in 2024 and is projected to reach $4.98 billion by 2033, expanding at a robust CAGR of 14.7% during the forecast period of 2025–2033. The primary factor fueling this significant growth is the increasing demand for high-quality, reliable data across industries, driven by the proliferation of big data analytics, regulatory compliance requirements, and the digital transformation of core business processes. As organizations continue to digitize their operations, the need for advanced interval data validation and estimation tools that can ensure data accuracy, integrity, and actionable insights has never been more critical.



    Regional Outlook



    North America currently dominates the global interval data validation and estimation tools market, accounting for the largest share of global revenue in 2024. The region’s leadership can be attributed to its mature IT infrastructure, high adoption rates of advanced analytics, and a strong regulatory environment that prioritizes data integrity and compliance. Major industries such as BFSI, healthcare, and IT & telecommunications in the United States and Canada are heavily investing in sophisticated data validation and estimation solutions to mitigate risks associated with inaccurate or incomplete data. Furthermore, the presence of leading technology vendors and an innovation-driven business ecosystem have accelerated the deployment of both on-premises and cloud-based solutions, solidifying North America’s market dominance.



    In contrast, the Asia Pacific region is emerging as the fastest-growing market, projected to register the highest CAGR of 17.2% during the forecast period. This rapid growth is fueled by substantial investments in digital infrastructure, expanding IT and telecom sectors, and increasing regulatory scrutiny regarding data management in countries such as China, India, and Japan. Governments and enterprises in Asia Pacific are actively adopting interval data validation and estimation tools to enhance data-driven decision-making, improve operational efficiency, and comply with evolving data privacy laws. The influx of global technology providers, coupled with the rise of local solution developers, is further catalyzing market expansion in this region.



    Meanwhile, emerging economies in Latin America, the Middle East, and Africa are gradually embracing interval data validation and estimation tools, albeit at a slower pace due to challenges such as limited digital infrastructure, budget constraints, and varying regulatory frameworks. However, growing awareness about the importance of data quality for business competitiveness and increasing investments in digital transformation are expected to drive adoption over the coming years. Localized solutions tailored to address specific regulatory and operational requirements are gaining traction, particularly in sectors like government, healthcare, and retail, where data accuracy is increasingly critical.



    Report Scope







    Report Title: Interval Data Validation and Estimation Tools Market Research Report 2033
    By Component: Software, Services
    By Deployment Mode: On-Premises, Cloud-Based
    By Application: Data Quality Assessment, Statistical Analysis, Forecasting, Risk Management, Compliance, Others
    By End-User: BFSI, Healthcare, Manufacturing, IT and Telecommunications, Government, Retail, Others
    Regions Covered: North America, Europe, Asia Pacific, Latin America and Middle East & Africa
  7. Inter-Visit-Interval Descriptive Statistics for Each Group.

    • datasetcatalog.nlm.nih.gov
    • plos.figshare.com
    Updated Oct 4, 2012
    Cite
    Sokolowski, Michel B. C.; Craig, David Philip Arthur; Gibson, B.; Grice, James W.; Abramson, Charles I.; Varnon, Chris A. (2012). Inter-Visit-Interval Descriptive Statistics for Each Group. [Dataset]. https://datasetcatalog.nlm.nih.gov/dataset?q=0001128392
    Explore at:
    Dataset updated
    Oct 4, 2012
    Authors
    Sokolowski, Michel B. C.; Craig, David Philip Arthur; Gibson, B.; Grice, James W.; Abramson, Charles I.; Varnon, Chris A.
    Description

    Inter-Visit-Interval Descriptive Statistics for Each Group.

  8. Interval Data Validation And Estimation Tools Market Research Report 2033

    • dataintelo.com
    csv, pdf, pptx
    Updated Sep 30, 2025
    Cite
    Dataintelo (2025). Interval Data Validation And Estimation Tools Market Research Report 2033 [Dataset]. https://dataintelo.com/report/interval-data-validation-and-estimation-tools-market
    Explore at:
    pdf, pptx, csv (available download formats)
    Dataset updated
    Sep 30, 2025
    Dataset authored and provided by
    Dataintelo
    License

    https://dataintelo.com/privacy-and-policy

    Time period covered
    2024 - 2032
    Area covered
    Global
    Description

    Interval Data Validation and Estimation Tools Market Outlook



    According to our latest research, the global Interval Data Validation and Estimation Tools market size reached USD 1.38 billion in 2024, reflecting robust adoption across diverse industries. The market is expected to expand at a CAGR of 12.4% from 2025 to 2033, reaching a forecasted market size of USD 4.01 billion by 2033. This growth is fueled by the increasing need for accurate and reliable data validation and estimation solutions, particularly as organizations worldwide embrace digital transformation and advanced analytics to drive business intelligence and regulatory compliance.




    A key growth factor for the Interval Data Validation and Estimation Tools market is the escalating volume and complexity of data generated by enterprises. As organizations across sectors such as BFSI, healthcare, IT, and manufacturing accelerate their digital initiatives, the influx of interval-based data from IoT devices, transactional systems, and operational technologies has surged. This trend necessitates robust validation and estimation tools to ensure data integrity, minimize errors, and improve decision-making. The adoption of advanced analytics and artificial intelligence within these tools further enhances their capability to identify anomalies, estimate missing values, and maintain high-quality datasets, thereby supporting organizations in achieving regulatory compliance and operational excellence.




    Another significant driver is the growing emphasis on regulatory compliance and risk management. Industries such as banking, financial services, insurance, and healthcare are subject to stringent data governance and reporting requirements. Interval Data Validation and Estimation Tools play a pivotal role in ensuring that organizations adhere to these regulations by providing accurate, validated, and auditable data records. The integration of these tools into enterprise workflows helps mitigate operational risks, reduce the likelihood of costly data breaches, and streamline audit processes. As regulatory frameworks evolve and data privacy concerns intensify, the demand for sophisticated validation and estimation solutions is anticipated to rise steadily, further propelling market growth.




    The proliferation of cloud computing and the shift toward cloud-based deployment models also significantly contribute to market expansion. Cloud-based Interval Data Validation and Estimation Tools offer scalability, flexibility, and cost-efficiency, making them attractive to organizations of all sizes. These solutions enable seamless integration with existing data infrastructure, facilitate real-time data validation, and support remote access for distributed teams. Additionally, advancements in cloud security and the availability of managed services have addressed many of the concerns associated with cloud adoption, encouraging more enterprises to transition from on-premises to cloud-based solutions. This paradigm shift is expected to open new avenues for market players and drive sustained growth over the forecast period.




    From a regional perspective, North America currently dominates the Interval Data Validation and Estimation Tools market, accounting for the largest share in 2024. This leadership is attributed to the region's advanced technology landscape, high adoption rates of digital solutions, and a strong presence of major market players. Europe follows closely, driven by robust regulatory frameworks and increasing investment in data analytics. Meanwhile, the Asia Pacific region is witnessing the fastest growth, fueled by rapid digitalization, expanding industrial sectors, and government initiatives promoting data-driven decision-making. Emerging markets in Latin America and the Middle East & Africa are also showing promising potential, supported by growing awareness of data quality and the need for efficient risk management tools.



    Component Analysis



    The Interval Data Validation and Estimation Tools market is segmented by component into software and services, each playing a critical role in the overall ecosystem. The software segment comprises standalone solutions and integrated platforms designed to automate the validation and estimation of interval data. These tools leverage advanced algorithms, artificial intelligence, and machine learning models to detect inconsistencies, fill data gaps, and generate reliable estimates. The increasing sophistication of software solutions, including

  9. Interval Data Validation and Estimation Tools Market Research Report 2033

    • growthmarketreports.com
    csv, pdf, pptx
    Updated Oct 3, 2025
    Cite
    Growth Market Reports (2025). Interval Data Validation and Estimation Tools Market Research Report 2033 [Dataset]. https://growthmarketreports.com/report/interval-data-validation-and-estimation-tools-market
    Explore at:
    csv, pdf, pptx (available download formats)
    Dataset updated
    Oct 3, 2025
    Dataset authored and provided by
    Growth Market Reports
    Time period covered
    2024 - 2032
    Area covered
    Global
    Description

    Interval Data Validation and Estimation Tools Market Outlook




    According to our latest research, the global Interval Data Validation and Estimation Tools market size reached USD 1.46 billion in 2024. With a robust compound annual growth rate (CAGR) of 11.2% projected over the forecast period, the market is expected to reach USD 3.73 billion by 2033. This growth is primarily driven by the rising demand for advanced data quality assurance and analytics solutions across sectors such as BFSI, healthcare, manufacturing, and IT & telecommunications. As organizations increasingly rely on accurate interval data for operational efficiency and regulatory compliance, the adoption of validation and estimation tools continues to surge.




    A key factor propelling the growth of the Interval Data Validation and Estimation Tools market is the exponential rise in data generation from connected devices, IoT sensors, and digital platforms. Businesses today are inundated with massive volumes of interval data, which, if not validated and accurately estimated, can lead to significant operational inefficiencies and decision-making errors. These tools play a crucial role in ensuring the integrity, accuracy, and completeness of interval data, thereby enabling organizations to derive actionable insights and maintain competitive advantage. Furthermore, the growing emphasis on automation and digital transformation initiatives is pushing enterprises to invest in sophisticated data validation and estimation solutions, further accelerating market growth.




    Another major growth driver is the increasing stringency of regulatory requirements across industries, particularly in sectors such as BFSI, healthcare, and utilities. Regulations related to data governance, privacy, and reporting demand organizations to maintain high standards of data quality and compliance. Interval Data Validation and Estimation Tools help organizations adhere to these regulatory mandates by providing automated checks, anomaly detection, and robust audit trails. The integration of artificial intelligence and machine learning into these tools is further enhancing their capabilities, enabling real-time data validation and predictive estimation, which is critical in fast-paced business environments.




    Additionally, the surge in cloud adoption and the proliferation of cloud-based data management platforms are significantly contributing to the market’s expansion. Cloud-based deployment models offer scalability, flexibility, and cost-efficiency, making advanced validation and estimation tools accessible to small and medium-sized enterprises as well as large organizations. The ability to seamlessly integrate with existing data architectures and third-party applications is also a key factor driving the adoption of both on-premises and cloud-based solutions. As data ecosystems become increasingly complex and distributed, the demand for interval data validation and estimation tools is expected to witness sustained growth through 2033.




    From a regional perspective, North America currently holds the largest share of the Interval Data Validation and Estimation Tools market, driven by early technology adoption, a strong focus on data-driven decision-making, and a mature regulatory landscape. However, Asia Pacific is anticipated to register the fastest CAGR of 13.5% during the forecast period, fueled by rapid digitalization, expanding industrialization, and increasing investments in smart infrastructure. Europe and Latin America are also witnessing steady growth, supported by government initiatives and the rising importance of data quality management in emerging economies. The Middle East & Africa region, though comparatively nascent, is expected to demonstrate significant potential as digital transformation initiatives gain momentum.





    Component Analysis




    The Interval Data Validation and Estimation Tools market by component is broadly segmented into Software and Services.

  10. Clustering Interval Time Series by Elizabeth Ann Maharaj, Paulo Teles, Paula...

    • bridges.monash.edu
    • researchdata.edu.au
    • +1 more
    pdf
    Updated Jan 10, 2019
    Cite
    Elizabeth Ann Maharaj; Paulo Teles; Paula Brito (2019). Clustering Interval Time Series by Elizabeth Ann Maharaj, Paulo Teles, Paula Brito [Dataset]. http://doi.org/10.26180/5c372a47334a6
    Explore at:
    pdf (available download formats)
    Dataset updated
    Jan 10, 2019
    Dataset provided by
    Monash University
    Authors
    Elizabeth Ann Maharaj; Paulo Teles; Paula Brito
    License

    Public Domain Mark 1.0: https://creativecommons.org/publicdomain/mark/1.0/
    License information was derived automatically

    Description

    Supplementary Material:
    Data files
    Figures A1-A8: Simulations Boxplots
    Figures B1-B16: Application Dendrograms
    Software: R and Matlab

  11. FCpy: Feldman-Cousins Confidence Interval Calculator

    • catalog.data.gov
    • s.cnmilf.com
    Updated Dec 3, 2022
    Cite
    National Institute of Standards and Technology (2022). FCpy: Feldman-Cousins Confidence Interval Calculator [Dataset]. https://catalog.data.gov/dataset/fcpy-feldman-cousins-confidence-interval-calculator
    Explore at:
    Dataset updated
    Dec 3, 2022
    Dataset provided by
    National Institute of Standards and Technology (http://www.nist.gov/)
    Description

    Python scripts and Python+Qt graphical user interface for calculating Feldman-Cousins confidence intervals for low-count Poisson processes in the presence of a known background and for Gaussian processes with a physical lower limit of 0.
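
    FCpy's own code and interface are not shown on this page; the sketch below is the textbook Feldman-Cousins construction for a Poisson signal with known background, using likelihood-ratio ordering over a grid of signal means, so results are approximate to the grid resolution.

    import numpy as np
    from scipy import stats

    def fc_interval(n_obs, b, alpha=0.10, mu_max=15.0, n_max=50, steps=301):
        """Approximate Feldman-Cousins interval for the Poisson signal mean."""
        lo, hi = None, None
        ns = np.arange(n_max + 1)
        for mu in np.linspace(0.0, mu_max, steps):
            p = stats.poisson.pmf(ns, mu + b)
            mu_best = np.maximum(ns - b, 0.0)          # best physically allowed signal
            p_best = stats.poisson.pmf(ns, mu_best + b)
            order = np.argsort(-(p / p_best))          # likelihood-ratio ordering
            accepted, cov = set(), 0.0
            for i in order:                            # fill the acceptance region
                accepted.add(int(ns[i]))
                cov += p[i]
                if cov >= 1 - alpha:
                    break
            if n_obs in accepted:                      # this mu is not excluded
                lo = mu if lo is None else lo
                hi = mu
        return lo, hi

    print(fc_interval(n_obs=4, b=3.0))  # 90% CL interval for the signal mean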

  12. Interval Data Analytics Market Research Report 2033

    • growthmarketreports.com
    csv, pdf, pptx
    Updated Oct 6, 2025
    Cite
    Growth Market Reports (2025). Interval Data Analytics Market Research Report 2033 [Dataset]. https://growthmarketreports.com/report/interval-data-analytics-market
    Explore at:
    pdf, csv, pptx (available download formats)
    Dataset updated
    Oct 6, 2025
    Dataset authored and provided by
    Growth Market Reports
    Time period covered
    2024 - 2032
    Area covered
    Global
    Description

    Interval Data Analytics Market Outlook



    According to our latest research, the global Interval Data Analytics market size reached USD 3.42 billion in 2024, demonstrating robust growth across key verticals. The market is expected to advance at a CAGR of 13.8% from 2025 to 2033, leading to a projected value of USD 10.13 billion by 2033. This impressive expansion is primarily driven by rising demand for advanced analytics solutions capable of processing time-stamped and interval-based data, especially as organizations seek to optimize operations, enhance predictive capabilities, and comply with evolving regulatory requirements.



    One of the most significant growth factors propelling the Interval Data Analytics market is the exponential increase in data generation from IoT devices, smart meters, and connected infrastructure across industries. Organizations in sectors such as utilities, manufacturing, and healthcare are increasingly reliant on interval data for resource optimization, real-time monitoring, and predictive maintenance. The ability of interval data analytics to handle vast amounts of granular, time-series data enables businesses to uncover actionable insights, reduce operational costs, and improve asset utilization. Additionally, the growing adoption of smart grids and intelligent energy management systems further amplifies the need for sophisticated interval data analytics solutions that support real-time decision-making and regulatory compliance.



    Another pivotal driver for the Interval Data Analytics market is the rapid digital transformation and integration of artificial intelligence (AI) and machine learning (ML) technologies into analytics platforms. These advancements allow for more accurate forecasting, anomaly detection, and automated response mechanisms, which are critical in sectors like finance, healthcare, and telecommunications. As organizations continue to prioritize data-driven strategies, the demand for interval data analytics tools that can seamlessly integrate with existing IT ecosystems and provide scalable, cloud-based solutions is accelerating. Furthermore, the shift towards cloud computing and the proliferation of big data platforms are making it easier for enterprises of all sizes to deploy and scale interval data analytics capabilities, thus broadening the market's reach and potential.



    Regulatory pressures and the increasing need for transparency and accountability in data handling are also fueling the growth of the Interval Data Analytics market. Industries such as banking and financial services, healthcare, and energy are subject to stringent compliance requirements that necessitate precise monitoring and reporting of interval data. The ability of interval data analytics platforms to provide auditable, time-stamped records and support regulatory reporting is becoming a critical differentiator for vendors in this space. Moreover, as data privacy laws evolve and enforcement intensifies, organizations are investing in analytics solutions that offer robust security features, data lineage tracking, and comprehensive audit trails, further boosting market adoption.



    From a regional perspective, North America continues to lead the Interval Data Analytics market, driven by early technology adoption, a strong presence of leading analytics vendors, and substantial investments in digital infrastructure. However, the Asia Pacific region is rapidly emerging as a key growth engine, fueled by large-scale digitalization initiatives, expanding industrial automation, and increasing penetration of IoT devices. Europe also represents a significant market, underpinned by regulatory mandates and a mature industrial base. Latin America and the Middle East & Africa, while currently smaller in market share, are witnessing accelerated adoption as organizations in these regions recognize the value of interval data analytics in enhancing operational efficiency and competitiveness.





    Component Analysis



    The Interval Data Analytics market is segmented by component into software and services, each playing a distinct role in

  13. Estimating Confidence Intervals for 2020 Census Statistics Using Approximate...

    • registry.opendata.aws
    Updated Sep 17, 2024
    Cite
    United States Census Bureau (2024). Estimating Confidence Intervals for 2020 Census Statistics Using Approximate Monte Carlo Simulation (2020 Census Production Run) [Dataset]. https://registry.opendata.aws/census-2020-amc-mdf-replicates/
    Explore at:
    Dataset updated
    Sep 17, 2024
    Dataset provided by
    United States Census Bureau (http://census.gov/)
    License

    CC0 1.0 Universal Public Domain Dedication: https://creativecommons.org/publicdomain/zero/1.0/
    License information was derived automatically

    Description

    The 2020 Census Production Settings Demographic and Housing Characteristics (DHC) Approximate Monte Carlo (AMC) method seed Privacy Protected Microdata File (PPMF0) and PPMF replicates (PPMF1, PPMF2, ..., PPMF50) are a set of microdata files intended for use in estimating the magnitude of error(s) introduced by the 2020 Census Disclosure Avoidance System (DAS) into the 2020 Census Redistricting Data Summary File (P.L. 94-171), the Demographic and Housing Characteristics File, and the Demographic Profile.

    The PPMF0 was the source of the publicly released, official 2020 Census data products referenced above, and was created by executing the 2020 DAS TopDown Algorithm (TDA) using the confidential 2020 Census Edited File (CEF) as the initial input; the official location for the PPMF0 is on the United States Census Bureau FTP server, but we also include a copy of it here for convenience. The replicates were then created by executing the 2020 DAS TDA repeatedly with the PPMF0 as its initial input.

    Inspired by analogy to the use of bootstrap methods in non-private contexts, U.S. Census Bureau (USCB) researchers explored whether simple calculations based on comparing each PPMFi to the PPMF0 could be used to reliably estimate the scale of errors introduced by the 2020 DAS, and generally found this approach worked well.

    The PPMF0 and PPMFi files contained here are provided so that external researchers can estimate properties of DAS-introduced error without privileged access to internal USCB-curated data sets; further information on the estimation methodology can be found in Ashmead et al. (2024).
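
    A hedged sketch of the kind of comparison described above, with hypothetical file paths and a hypothetical schema (the real PPMF files have their own layout): compute a statistic on PPMF0 and on each replicate, then summarize the replicate-versus-seed deviations.

    import numpy as np
    import pandas as pd

    def block_population(path):
        # Hypothetical schema: one row per person, "TABBLK" = block geography.
        df = pd.read_parquet(path, columns=["TABBLK"])
        return df.groupby("TABBLK").size()

    seed = block_population("ppmf0.parquet")
    diffs = [block_population(f"ppmf{i}.parquet")
             .reindex(seed.index, fill_value=0) - seed
             for i in range(1, 51)]

    # Per-block root-mean-square deviation across the 50 replicates:
    rmse = np.sqrt(sum(d ** 2 for d in diffs) / len(diffs))
    print(rmse.describe())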

    The 2020 DHC AMC seed PPMF0 and PPMF replicates have been cleared for public dissemination by the USCB Disclosure Review Board (CBDRB-FY22-DSEP-004). The PPMF0 and PPMF replicates contain all Person and Units attributes necessary to produce the 2020 Census Redistricting Data Summary File (P.L. 94-171), the Demographic and Housing Characteristics File, and the Demographic Profile for both the United States and Puerto Rico, and include geographic detail down to the Census Block level. They do not include attributes specific to either the Detailed DHC-A or Detailed DHC-B products; in particular, data on Major Race (e.g., White Alone) is included, but data on Detailed Race (e.g., Cambodian) is not included in the PPMF0 and replicates.

  14. Data Inter-training interval

    • datasetcatalog.nlm.nih.gov
    • figshare.com
    Updated Feb 3, 2015
    Cite
    Romkema, Sietske (2015). Data Inter-training interval [Dataset]. https://datasetcatalog.nlm.nih.gov/dataset?q=0001879728
    Explore at:
    Dataset updated
    Feb 3, 2015
    Authors
    Romkema, Sietske
    Description

    These data show the results of four tests: one pretest and three posttests. The dataset consists of three variables, with each task performed three times (three trials): the movement times (the time it took to perform three different functional tasks), the duration of the maximal hand opening during one of these tasks, and the deviation of the grip force control in a task where a handle needed to be grasped with the correct amount of force.

  15. Melodic Intervals Size Statistics for the most commonly occurring intervals....

    • plos.figshare.com
    xls
    Updated Jun 3, 2023
    Cite
    Shui' er Han; Janani Sundararajan; Daniel Liu Bowling; Jessica Lake; Dale Purves (2023). Melodic Intervals Size Statistics for the most commonly occurring intervals. (Independent – samples t-tests). [Dataset]. http://doi.org/10.1371/journal.pone.0020160.t001
    Explore at:
    xls (available download formats)
    Dataset updated
    Jun 3, 2023
    Dataset provided by
    PLOS (http://plos.org/)
    Authors
    Shui' er Han; Janani Sundararajan; Daniel Liu Bowling; Jessica Lake; Dale Purves
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Statistics for the comparisons of the most commonly occurring melodic interval sizes in tone and non-tone language music databases; n1 and n2 refer to the sample sizes of tone and non-tone language music databases. (All comparisons were made with the two-tailed independent samples t-test, α-level adjusted using the Bonferroni method).
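
    A minimal sketch of the reported procedure with made-up numbers (not the melodic-interval measurements): a two-tailed independent-samples t-test judged against a Bonferroni-adjusted alpha level.

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(2)
    tone = rng.normal(2.1, 0.8, size=30)      # interval sizes, tone-language corpus
    non_tone = rng.normal(2.4, 0.8, size=35)  # interval sizes, non-tone corpus

    n_comparisons = 12                        # one test per common interval size
    alpha_adj = 0.05 / n_comparisons          # Bonferroni adjustment

    t, p = stats.ttest_ind(tone, non_tone)    # two-tailed by default
    print(f"t = {t:.2f}, p = {p:.4f}, significant: {p < alpha_adj}")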

  16. Wind Generation Time Interval Exploration Data

    • data.ca.gov
    • data.cnra.ca.gov
    • +3 more
    Updated Jan 19, 2024
    Cite
    California Energy Commission (2024). Wind Generation Time Interval Exploration Data [Dataset]. https://data.ca.gov/dataset/wind-generation-time-interval-exploration-data
    Explore at:
    zip, gpkg, gdb, arcgis geoservices rest api, kml, geojson, csv, html, xlsx, txt (available download formats)
    Dataset updated
    Jan 19, 2024
    Dataset authored and provided by
    California Energy Commission (http://www.energy.ca.gov/)
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    This is the data set behind the Wind Generation Interactive Query Tool created by the CEC. The visualization tool interactively displays wind generation over different time intervals in three-dimensional space. The viewer can look across the state to understand generation patterns of regions with concentrations of wind power plants. The tool aids in understanding high and low periods of generation. Operation of the electric grid requires that generation and demand are balanced in each period.



    The height and color of columns at wind generation areas are scaled and shaded to represent capacity factors (CFs) of the areas in a specific time interval. Capacity factor is the ratio of the energy produced to the amount of energy that could ideally have been produced in the same period using the rated nameplate capacity. Due to natural variations in wind speeds, higher factors tend to be seen over short time periods, with lower factors over longer periods. The capacity used is the reported nameplate capacity from the Quarterly Fuel and Energy Report, CEC-1304A. CFs are based on wind plants in service in the wind generation areas.
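
    A minimal sketch of the capacity-factor definition above (illustrative numbers only):

    def capacity_factor(energy_mwh, nameplate_mw, hours):
        """CF = energy produced / (nameplate capacity x hours in the period)."""
        return energy_mwh / (nameplate_mw * hours)

    # E.g., a 100 MW wind area producing 26,280 MWh over a 30-day month:
    print(f"{capacity_factor(26_280, 100, 30 * 24):.1%}")  # -> 36.5%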

    Renewable energy resources like wind facilities vary in size and geographic distribution within each state. Resource planning, land use constraints, climate zones, and weather patterns limit availability of these resources and where they can be developed. National, state, and local policies also set limits on energy generation and use. An example of resource planning in California is the Desert Renewable Energy Conservation Plan.

    By exploring the visualization, a viewer can gain a three-dimensional understanding of temporal variation in generation CFs, along with how the wind generation areas compare to one another. The viewer can observe that areas peak in generation in different periods. The large range in CFs is also visible.



  17. Number of affiliates of foreign companies by size interval (employed)

    • ine.es
    csv, html, json +4
    Updated Aug 24, 2015
    Cite
    INE - Instituto Nacional de Estadística (2015). Number of affiliates of foreign companies by size interval (employed) [Dataset]. https://ine.es/jaxi/Tabla.htm?path=/t37/p227/p01/a2013/l0/&file=01002.px&L=1
    Explore at:
    html, json, xls, text/pc-axis, csv, txt, xlsx (available download formats)
    Dataset updated
    Aug 24, 2015
    Dataset provided by
    National Statistics Institute (http://www.ine.es/)
    Authors
    INE - Instituto Nacional de Estadística
    License

    https://www.ine.es/aviso_legal

    Variables measured
    Value/percentage, Size interval (employed)
    Description

    Statistics on Affiliates of Foreign Companies in Spain: Number of affiliates of foreign companies by size interval (employed). National.

  18. Winkler Interval score metric

    • kaggle.com
    Updated Dec 7, 2023
    Cite
    Carl McBride Ellis (2023). Winkler Interval score metric [Dataset]. https://www.kaggle.com/datasets/carlmcbrideellis/winkler-interval-score-metric
    Explore at:
    Croissant (a format for machine-learning datasets; see mlcommons.org/croissant)
    Dataset updated
    Dec 7, 2023
    Dataset provided by
    Kaggle (http://kaggle.com/)
    Authors
    Carl McBride Ellis
    License

    Apache License, v2.0: https://www.apache.org/licenses/LICENSE-2.0
    License information was derived automatically

    Description

    Model performance evaluation: The Mean Winkler Interval score (MWIS)

    We can assess the overall performance of a regression model that produces prediction intervals by using the mean Winkler Interval score [1,2,3], which, for an individual interval, is given by:

    \[
    W_\alpha(l, u, y) =
    \begin{cases}
    (u - l) + \frac{2}{\alpha}(l - y), & y < l \\
    (u - l), & l \le y \le u \\
    (u - l) + \frac{2}{\alpha}(y - u), & y > u
    \end{cases}
    \]

    where \(y\) is the true value, \(u\) is the upper prediction interval bound, \(l\) is the lower prediction interval bound, and \(\alpha\) is (1 - coverage). For example, for 90% coverage, \(\alpha = 0.1\). Note that the Winkler Interval score constitutes a proper scoring rule [2,3].
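
    The source of the MWIS_metric module is not shown on this page; the following is a hedged, self-contained implementation of the score as defined above.

    import numpy as np

    def mean_winkler_interval_score(y_true, lower, upper, alpha):
        """Mean Winkler interval score and empirical coverage."""
        y, l, u = (np.asarray(a, dtype=float) for a in (y_true, lower, upper))
        score = ((u - l)
                 + (2.0 / alpha) * (l - y) * (y < l)    # penalty below the interval
                 + (2.0 / alpha) * (y - u) * (y > u))   # penalty above the interval
        coverage = np.mean((y >= l) & (y <= u))
        return score.mean(), coverage

    mwis, cov = mean_winkler_interval_score([3.0, 9.5], [2.0, 4.0], [8.0, 9.0], alpha=0.1)
    print(round(mwis, 3), f"{cov:.0%}")  # only the second interval misses its target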

    Python code: Usage example

    Attach this dataset to a notebook, then:

    import sys
    sys.path.append('/kaggle/input/winkler-interval-score-metric/')
    import MWIS_metric

    help(MWIS_metric.score)

    # predictions: a DataFrame holding the true values and the interval bounds
    MWIS, coverage = MWIS_metric.score(predictions["y_true"],
                                       predictions["lower"],
                                       predictions["upper"],
                                       alpha)
    print("Local MWI score:", round(MWIS, 3))
    print(f"Predictions coverage: {round(coverage * 100, 1)}%")
    
  19. Data from: On the variety of methods for calculating confidence intervals by...

    • search.dataone.org
    • datadryad.org
    Updated Jul 6, 2025
    Cite
    Marie-Therese Puth; Markus Neuhäuser; Graeme D. Ruxton (2025). On the variety of methods for calculating confidence intervals by bootstrapping [Dataset]. http://doi.org/10.5061/dryad.r390f
    Explore at:
    Dataset updated
    Jul 6, 2025
    Dataset provided by
    Dryad Digital Repository
    Authors
    Marie-Therese Puth; Markus Neuhäuser; Graeme D. Ruxton
    Time period covered
    Apr 20, 2016
    Description
    1. Researchers often want to place a confidence interval around estimated parameter values calculated from a sample. This is commonly implemented by bootstrapping.
    2. Here we demonstrate that authors of recent papers frequently do not specify the method they have used and that different methods can produce markedly different confidence intervals for the same sample and parameter estimate.
    3. We encourage authors to be more explicit about the method they use (and the number of bootstrap resamples used).
    4. We recommend the bias-corrected and accelerated method as giving generally good performance, although researchers should be warned that coverage of bootstrap confidence intervals is characteristically less than the specified nominal level, and confidence interval evaluation by any method can be unreliable for small samples in some situations.
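
    To illustrate point 2 (different methods, markedly different intervals for the same sample), a sketch using scipy.stats.bootstrap on a small, skewed sample:

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(3)
    sample = rng.lognormal(size=25)  # small, skewed sample

    for method in ("percentile", "basic", "BCa"):
        res = stats.bootstrap((sample,), np.mean, n_resamples=9999,
                              confidence_level=0.95, method=method,
                              random_state=rng)
        ci = res.confidence_interval
        print(f"{method:>10}: ({ci.low:.3f}, {ci.high:.3f})")
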
  20. Data from: Additive Hazards Regression Analysis of Massive Interval-Censored...

    • tandf.figshare.com
    • datasetcatalog.nlm.nih.gov
    pdf
    Updated May 12, 2025
    Cite
    Peiyao Huang; Shuwei Li; Xinyuan Song (2025). Additive Hazards Regression Analysis of Massive Interval-Censored Data via Data Splitting [Dataset]. http://doi.org/10.6084/m9.figshare.27103243.v1
    Explore at:
    pdf (available download formats)
    Dataset updated
    May 12, 2025
    Dataset provided by
    Taylor & Francis
    Authors
    Peiyao Huang; Shuwei Li; Xinyuan Song
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    With the rapid development of data acquisition and storage, massive datasets with large sample sizes are emerging ever more often, making more advanced statistical tools urgently needed. To accommodate such volume in the analysis, a variety of methods have been proposed for complete or right-censored survival data. However, existing big data methodology has not attended to interval-censored outcomes, which are ubiquitous in cross-sectional or periodic follow-up studies. In this work, we propose an easily implemented divide-and-combine approach for analyzing massive interval-censored survival data under the additive hazards model. We establish the asymptotic properties of the proposed estimator, including consistency and asymptotic normality. In addition, the divide-and-combine estimator is shown to be asymptotically equivalent to the full-data-based estimator obtained from analyzing all data together. Simulation studies suggest that, relative to the full-data-based approach, the proposed divide-and-combine approach has a desirable advantage in terms of computation time, making it more applicable to large-scale data analysis. An application to a set of interval-censored data also demonstrates the practical utility of the proposed method.
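
    A hedged sketch of the generic divide-and-combine recipe the abstract describes, with ordinary least squares standing in for the paper's additive hazards estimator (which is not reproduced here):

    import numpy as np

    def divide_and_combine(X, y, n_blocks, fit):
        """Fit each block separately, then average the block estimates."""
        blocks = zip(np.array_split(X, n_blocks), np.array_split(y, n_blocks))
        return np.mean([fit(Xb, yb) for Xb, yb in blocks], axis=0)

    ols = lambda Xb, yb: np.linalg.lstsq(Xb, yb, rcond=None)[0]

    rng = np.random.default_rng(4)
    X = rng.normal(size=(100_000, 3))
    y = X @ np.array([1.0, -2.0, 0.5]) + rng.normal(size=100_000)

    print(np.round(divide_and_combine(X, y, n_blocks=10, fit=ols), 3))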
