100+ datasets found
  1. Variance Analysis Project

    • kaggle.com
    zip
    Updated Jul 9, 2024
    Cite
    Sanjana Murthy (2024). Variance Analysis Project [Dataset]. https://www.kaggle.com/datasets/sanjanamurthy392/variance-analysis-in-excel
    Explore at:
    Available download formats: zip (40666 bytes)
    Dataset updated
    Jul 9, 2024
    Authors
    Sanjana Murthy
    License

    Attribution-NonCommercial 4.0 (CC BY-NC 4.0), https://creativecommons.org/licenses/by-nc/4.0/
    License information was derived automatically

    Description

    About this dataset:

    Domain: Finance
    Project: Variance Analysis
    Datasets: Budget vs Actuals
    Dataset Type: Excel Data
    Dataset Size: 482 records

    KPIs:
    1. Total Income
    2. Total Expenses
    3. Total Savings
    4. Budget vs Actual Income
    5. Actual Expenses Breakdown

    Process:
    1. Understanding the problem
    2. Data collection
    3. Exploring and analyzing the data
    4. Interpreting the results

    The workbook includes a dynamic dashboard, data validation, INDEX/MATCH, SUMIFS, conditional formatting, IF conditions, a column chart, and a pie chart.
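
    As an illustration of the SUMIFS-style rollup behind these KPIs, here is a minimal pandas sketch; the table and column names are invented for the example, not the workbook's actual schema.

    ```python
    import pandas as pd

    # Toy budget-vs-actuals table; columns are assumptions, not the real schema.
    data = pd.DataFrame({
        "category": ["Income", "Income", "Expense", "Expense", "Expense"],
        "item":     ["Salary", "Interest", "Rent", "Food", "Travel"],
        "budget":   [5000, 200, 1500, 600, 300],
        "actual":   [5000, 180, 1500, 720, 250],
    })

    # Pandas equivalent of Excel's SUMIFS: sum "actual" where "category" matches.
    total_income   = data.loc[data["category"] == "Income", "actual"].sum()
    total_expenses = data.loc[data["category"] == "Expense", "actual"].sum()
    total_savings  = total_income - total_expenses   # KPI 3 above
    print(total_income, total_expenses, total_savings)
    ```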

  2. Development Standard Variance

    • data.wu.ac.at
    • data.montgomerycountymd.gov
    • +2 more
    csv, json, xml
    Updated Mar 21, 2017
    Cite
    Montgomery County, MD (2017). Development Standard Variance [Dataset]. https://data.wu.ac.at/schema/data_montgomerycountymd_gov/YmE3eS1hN2l5
    Explore at:
    Available download formats: csv, json, xml
    Dataset updated
    Mar 21, 2017
    Dataset provided by
    Montgomery County, MD
    License

    U.S. Government Works, https://www.usa.gov/government-works
    License information was derived automatically

    Description

    A variance is required when an applicant has submitted a proposed project to the Department of Permitting Services and it is determined that the construction, alteration, or extension does not conform to the development standards (in the zoning ordinance) for the zone in which the subject property is located. A variance may be required in any zone and applies to accessory structures as well as primary buildings or dwellings. Update Frequency: Daily

  3. DEMANDE Dataset

    • zenodo.org
    • researchdiscovery.drexel.edu
    zip
    Updated Apr 13, 2023
    Cite
    Joseph A. Gallego-Mejia; Fabio A. Gonzalez (2023). DEMANDE Dataset [Dataset]. http://doi.org/10.5281/zenodo.7822851
    Explore at:
    Available download formats: zip
    Dataset updated
    Apr 13, 2023
    Dataset provided by
    Zenodo, http://zenodo.org/
    Authors
    Joseph A. Gallego-Mejia; Fabio A. Gonzalez
    License

    Attribution 4.0 (CC BY 4.0), https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    This dataset contains the features and probabilities of ten different functions. Each dataset is saved as NumPy arrays.

    • The data set Arc corresponds to a two-dimensional random sample drawn from a random vector $$X=(X_1,X_2)$$ with probability density function given by $$f(x_1,x_2)=\mathcal{N}(x_2|0,4)\,\mathcal{N}(x_1|0.25x_2^2,1)$$, where $$\mathcal{N}(u|\mu,\sigma^2)$$ denotes the density function of a normal distribution with mean $$\mu$$ and variance $$\sigma^2$$. Papamakarios (2017) used this data set to evaluate his neural density estimation methods.
    • The data set Potential 1 corresponds to a two-dimensional random sample drawn from a random vector $$X=(X_1,X_2)$$ with probability density function given by $$f(x_1,x_2)=\frac{1}{2}\left(\frac{\|x\|-2}{0.4}\right)^2 - \ln\left(\exp\left\{-\frac{1}{2}\left[\frac{x_1-2}{0.6}\right]^2\right\}+\exp\left\{-\frac{1}{2}\left[\frac{x_1+2}{0.6}\right]^2\right\}\right)$$, with a normalizing constant of approximately 6.52 calculated by Monte Carlo integration.
    • The data set Potential 2 corresponds to a two-dimensional random sample drawn from a random vector $$X=(X_1,X_2)$$ with probability density function given by $$f(x_1,x_2)=\frac{1}{2}\left[\frac{x_2-w_1(x)}{0.4}\right]^2$$, where $$w_1(x)=\sin\left(\frac{2\pi x_1}{4}\right)$$, with a normalizing constant of approximately 8 calculated by Monte Carlo integration.
    • The data set Potential 3 corresponds to a two-dimensional random sample drawn from a random vector $$X=(X_1,X_2)$$ with probability density function given by $$f(x_1,x_2)=-\ln\left(\exp\left\{-\frac{1}{2}\left[\frac{x_2-w_1(x)}{0.35}\right]^2\right\}+\exp\left\{-\frac{1}{2}\left[\frac{x_2-w_1(x)+w_2(x)}{0.35}\right]^2\right\}\right)$$, where $$w_1(x)=\sin\left(\frac{2\pi x_1}{4}\right)$$ and $$w_2(x)=3\exp\left\{-\frac{1}{2}\left[\frac{x_1-1}{0.6}\right]^2\right\}$$, with a normalizing constant of approximately 13.9 calculated by Monte Carlo integration.
    • The data set Potential 4 corresponds to a two-dimensional random sample drawn from a random vector $$X=(X_1,X_2)$$ with probability density function given by $$f(x_1,x_2)=-\ln\left(\exp\left\{-\frac{1}{2}\left[\frac{x_2-w_1(x)}{0.4}\right]^2\right\}+\exp\left\{-\frac{1}{2}\left[\frac{x_2-w_1(x)+w_3(x)}{0.35}\right]^2\right\}\right)$$, where $$w_1(x)=\sin\left(\frac{2\pi x_1}{4}\right)$$, $$w_3(x)=3\sigma\left(\left[\frac{x_1-1}{0.3}\right]^2\right)$$, and $$\sigma(x)=\frac{1}{1+\exp(x)}$$, with a normalizing constant of approximately 13.9 calculated by Monte Carlo integration.
    • The data set 2D mixture corresponds to a two-dimensional random sample drawn from the random vector $$X=(X_1,X_2)$$ with probability density function $$f(x)=\frac{1}{2}\mathcal{N}(x|\mu_1,\Sigma_1)+\frac{1}{2}\mathcal{N}(x|\mu_2,\Sigma_2)$$, with means $$\mu_1=[1,-1]^T$$ and $$\mu_2=[-2,2]^T$$ and covariance matrices $$\Sigma_1=\left[\begin{array}{cc} 1 & 0 \\ 0 & 2 \end{array}\right]$$ and $$\Sigma_2=\left[\begin{array}{cc} 2 & 0 \\ 0 & 1 \end{array}\right]$$.
    • The data set 10D mixture corresponds to a 10-dimensional random sample drawn from the random vector $$X=(X_1,\cdots,X_{10})$$ with a mixture of four diagonal normal probability density functions $$\mathcal{N}(X_i|\mu_i,\sigma_i)$$, where each $$\mu_i$$ is drawn uniformly in the interval $$[-0.5,0.5]$$ and each $$\sigma_i$$ is drawn uniformly in the interval $$[-0.01,0.5]$$. Each of the four components has the same probability $$1/4$$ of being drawn.
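
    For concreteness, here is a minimal NumPy sketch of how one might draw the Arc sample described above, using the factorization of its density into $$\mathcal{N}(x_2|0,4)$$ and $$\mathcal{N}(x_1|0.25x_2^2,1)$$; the function name is illustrative and not part of the dataset's own code.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def sample_arc(n):
        """Draw n points from the Arc density: x2 ~ N(0, 4), x1 | x2 ~ N(0.25*x2**2, 1)."""
        x2 = rng.normal(loc=0.0, scale=2.0, size=n)    # variance 4 -> standard deviation 2
        x1 = rng.normal(loc=0.25 * x2**2, scale=1.0)   # conditional mean depends on x2
        return np.column_stack([x1, x2])

    samples = sample_arc(10_000)
    print(samples.mean(axis=0))   # x1 mean ~ 0.25 * E[x2^2] = 1, x2 mean ~ 0
    ```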

  4. FP&A Variance Analysis Model

    • kaggle.com
    zip
    Updated Jun 27, 2025
    Cite
    Ameer Nassar (2025). FP&A Variance Analysis Model [Dataset]. https://www.kaggle.com/datasets/ameernassar/fp-and-a-variance-analysis
    Explore at:
    Available download formats: zip (921888 bytes)
    Dataset updated
    Jun 27, 2025
    Authors
    Ameer Nassar
    License

    Attribution 4.0 (CC BY 4.0), https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    This project analyzes monthly budget vs actual financial performance across 7 departments using FP&A best practices. Key metrics include:

    • Total Budget: $28,221,717
    • Total Actual: $28,171,672
    • Total Variance: -$50,045 (0.18% under budget)

    The analysis was done in Python with Pandas and Seaborn. It includes:

    • Variance bar charts by department
    • Average deviation per department
    • Monthly forecast error trends
    • Visualizations of budget vs actual using box plots and line charts

    The dataset is fully synthetic, created manually to showcase real-world finance analytics.
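
    A minimal pandas sketch of the underlying variance calculation, assuming a hypothetical schema (the dataset's actual column names may differ):

    ```python
    import pandas as pd

    # Hypothetical monthly budget-vs-actual rows; names are assumptions.
    df = pd.DataFrame({
        "department": ["Sales", "Sales", "IT", "IT"],
        "month":      ["Jan", "Feb", "Jan", "Feb"],
        "budget":     [100_000, 110_000, 80_000, 82_000],
        "actual":     [98_500, 112_300, 79_000, 84_100],
    })

    df["variance"] = df["actual"] - df["budget"]             # negative = under budget
    df["variance_pct"] = 100 * df["variance"] / df["budget"]

    print(df.groupby("department")[["budget", "actual", "variance"]].sum())
    ```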

  5. Proportion of variance explained by each factor.

    • datasetcatalog.nlm.nih.gov
    • plos.figshare.com
    Updated May 5, 2022
    Cite
    Palme, Rupert; von Kortzfleisch, Vanessa Tabea; Würbel, Hanno; Rosso, Marianna; Meyer, Neele; Sachser, Norbert; Karp, Natasha A.; Touma, Chadi; Ambrée, Oliver; Richter, S. Helene; Novak, Janja; Kaiser, Sylvia (2022). Proportion of variance explained by each factor. [Dataset]. https://datasetcatalog.nlm.nih.gov/dataset?q=0000376198
    Explore at:
    Dataset updated
    May 5, 2022
    Authors
    Palme, Rupert; von Kortzfleisch, Vanessa Tabea; Würbel, Hanno; Rosso, Marianna; Meyer, Neele; Sachser, Norbert; Karp, Natasha A.; Touma, Chadi; Ambrée, Oliver; Richter, S. Helene; Novak, Janja; Kaiser, Sylvia
    Description

    Presented are point estimates from the variance-component analysis on the full data set for all 10 selected outcome measures. For the random factors in the LMM, 95% confidence intervals (CI95) of the point estimates are also presented in square brackets. Please note: confidence intervals were limited to the maximum possible range (i.e., [0,1]). LMM, linear mixed model. (XLSX)
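
    To make the quantity concrete: the proportion of variance explained by a random factor is that factor's variance component divided by the total variance. Below is a hedged sketch using statsmodels' MixedLM on simulated data; the factor ("lab") and outcome are invented, not the study's actual variables.

    ```python
    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    # Simulate a random "lab" effect plus residual noise.
    rng = np.random.default_rng(1)
    labs = np.repeat(np.arange(12), 20)
    lab_effect = rng.normal(0.0, 0.5, 12)[labs]
    df = pd.DataFrame({"lab": labs, "y": lab_effect + rng.normal(0.0, 1.0, labs.size)})

    fit = smf.mixedlm("y ~ 1", df, groups=df["lab"]).fit()
    var_lab = float(fit.cov_re.iloc[0, 0])   # between-lab variance component
    var_res = fit.scale                      # residual variance
    print("proportion explained by lab:", var_lab / (var_lab + var_res))
    ```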

  6. Dataset for: Robust versus consistent variance estimators in marginal structural Cox models

    • wiley.figshare.com
    pdf
    Updated May 30, 2023
    Cite
    Dirk Enders; Susanne Engel; Roland Linder; Iris Pigeot (2023). Dataset for: Robust versus consistent variance estimators in marginal structural Cox models [Dataset]. http://doi.org/10.6084/m9.figshare.6203456.v1
    Explore at:
    Available download formats: pdf
    Dataset updated
    May 30, 2023
    Dataset provided by
    Wiley, https://www.wiley.com/
    Authors
    Dirk Enders; Susanne Engel; Roland Linder; Iris Pigeot
    License

    CC0 1.0 Universal Public Domain Dedication, https://creativecommons.org/publicdomain/zero/1.0/
    License information was derived automatically

    Description

    In survival analyses, inverse-probability-of-treatment (IPT) and inverse-probability-of-censoring (IPC) weighted estimators of parameters in marginal structural Cox models (Cox MSMs) are often used to estimate treatment effects in the presence of time-dependent confounding and censoring. In most applications, a robust variance estimator of the IPT and IPC weighted estimator is calculated, leading to conservative confidence intervals; this estimator assumes that the weights are known rather than estimated from the data. Although a consistent estimator of the asymptotic variance of the IPT and IPC weighted estimator is generally available, applications, and thus information on the performance of the consistent estimator, are lacking. Reasons might be a cumbersome implementation in statistical software, further complicated by missing details on the variance formula. In this paper, we therefore provide a detailed derivation of the variance of the asymptotic distribution of the IPT and IPC weighted estimator and explicitly state the terms necessary to calculate a consistent estimator of this variance. We compare the performance of the robust and the consistent variance estimators in an application based on routine health care data and in a simulation study. The simulation reveals no substantial differences between the two estimators in medium and large data sets with no unmeasured confounding, but the consistent variance estimator performs poorly in small samples or under unmeasured confounding if the number of confounders is large. We thus conclude that the robust estimator is more appropriate for all practical purposes.
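
    For orientation, here is a minimal sketch of the "robust" side of the comparison: a weighted Cox fit with a sandwich (robust) variance, using the lifelines library on simulated data. The weights are stand-ins for estimated IPT weights, and nothing here reproduces the paper's consistent estimator.

    ```python
    import numpy as np
    import pandas as pd
    from lifelines import CoxPHFitter

    rng = np.random.default_rng(2)
    n = 500
    treat = rng.binomial(1, 0.5, n)
    T = rng.exponential(scale=np.where(treat == 1, 12.0, 10.0))   # survival times
    E = rng.binomial(1, 0.8, n)                                   # event indicator
    w = rng.uniform(0.5, 2.0, n)                                  # stand-in IPT weights

    df = pd.DataFrame({"T": T, "E": E, "treat": treat, "w": w})
    cph = CoxPHFitter()
    cph.fit(df, duration_col="T", event_col="E", weights_col="w", robust=True)
    print(cph.summary[["coef", "se(coef)"]])
    ```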

  7. NIST Statistical Reference Datasets - SRD 140

    • datasets.ai
    • gimi9.com
    • +4 more
    Updated Mar 11, 2021
    Cite
    National Institute of Standards and Technology (2021). NIST Statistical Reference Datasets - SRD 140 [Dataset]. https://datasets.ai/datasets/nist-statistical-reference-datasets-srd-140-df30c
    Explore at:
    Available download formats
    Dataset updated
    Mar 11, 2021
    Dataset authored and provided by
    National Institute of Standards and Technology, http://www.nist.gov/
    Description

    The purpose of this project is to improve the accuracy of statistical software by providing reference datasets with certified computational results that enable the objective evaluation of statistical software. Currently, datasets and certified values are provided for assessing the accuracy of software for univariate statistics, linear regression, nonlinear regression, and analysis of variance.

    The collection includes both generated and 'real-world' data of varying levels of difficulty. Generated datasets are designed to challenge specific computations; these include the classic Wampler datasets for testing linear regression algorithms and the Simon & Lesage datasets for testing analysis of variance algorithms. Real-world data include challenging datasets such as the Longley data for linear regression, and more benign datasets such as the Daniel & Wood data for nonlinear regression. Certified values are 'best-available' solutions; the certification procedure is described in the web pages for each statistical method.

    Datasets are ordered by level of difficulty (lower, average, and higher). Strictly speaking, the level of difficulty of a dataset depends on the algorithm; these levels are merely rough guidance for the user. Producing correct results on all datasets of higher difficulty does not imply that your software will pass all datasets of average or even lower difficulty. Similarly, producing correct results for all datasets in this collection does not imply that your software will do the same for your particular dataset. It does, however, provide some degree of assurance, in the sense that your package provides correct results for datasets known to yield incorrect results for some software. The Statistical Reference Datasets project is also supported by the Standard Reference Data Program.
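
    A tiny example of the kind of failure these reference datasets are designed to expose: a naive one-pass variance formula loses all precision on data with a large mean and small spread, while a subtract-the-mean computation does not. The data below are simulated, not an actual StRD dataset or certified value.

    ```python
    import numpy as np

    rng = np.random.default_rng(3)
    x = 1e9 + rng.normal(0.0, 1.0, 1_000)   # large mean, unit spread

    n = x.size
    naive = (np.sum(x**2) - np.sum(x)**2 / n) / (n - 1)   # catastrophic cancellation
    stable = x.var(ddof=1)                                # subtracts the mean first
    print(f"naive: {naive:.6f}   stable: {stable:.6f}")   # naive is garbage here
    ```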

  8. Dataset of books series that contain Advanced analysis of variance

    • workwithdata.com
    Updated Nov 25, 2024
    Cite
    Work With Data (2024). Dataset of books series that contain Advanced analysis of variance [Dataset]. https://www.workwithdata.com/datasets/book-series?f=1&fcol0=j0-book&fop0=%3D&fval0=Advanced+analysis+of+variance&j=1&j0=books
    Explore at:
    Dataset updated
    Nov 25, 2024
    Dataset authored and provided by
    Work With Data
    License

    Attribution 4.0 (CC BY 4.0), https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    This dataset is about book series. It has 1 row and is filtered to the series whose books include Advanced analysis of variance. It features 10 columns including number of authors, number of books, earliest publication date, and latest publication date.

  9. DPS - Sign Variance Permit

    • catalog.data.gov
    • data.montgomerycountymd.gov
    • +2 more
    Updated Nov 29, 2025
    Cite
    data.montgomerycountymd.gov (2025). DPS - Sign Variance Permit [Dataset]. https://catalog.data.gov/dataset/dps-sign-variance-permit
    Explore at:
    Dataset updated
    Nov 29, 2025
    Dataset provided by
    data.montgomerycountymd.gov
    Description

    A sign variance is required when a proposed sign does not conform to the requirements of the zoning ordinance pertaining to its size, height, or location. DPS processes the sign variance application and the Sign Review Board provides an approval decision. Update Frequency: Daily

  10. Spatial Statistical Data Fusion (SSDF) Level 3: CONUS Near-Surface Vapor Pressure Deficit from Aqua AIRS, V2 (SNDRAQIL3SSDFCVPD)

    • data.nasa.gov
    Updated Apr 1, 2025
    Cite
    nasa.gov (2025). Spatial Statistical Data Fusion (SSDF) Level 3: CONUS Near-Surface Vapor Pressure Deficit from Aqua AIRS, V2 (SNDRAQIL3SSDFCVPD) - Dataset - NASA Open Data Portal [Dataset]. https://data.nasa.gov/dataset/spatial-statistical-data-fusion-ssdf-level-3-conus-near-surface-vapor-pressure-deficit-fro-659eb
    Explore at:
    Dataset updated
    Apr 1, 2025
    Dataset provided by
    NASA, http://nasa.gov/
    Description

    This data set provides an estimate of the near-surface vapor pressure deficit. The Spatial Statistical Data Fusion (SSDF) surface continental United States (CONUS) products fuse data from the Atmospheric InfraRed Sounder (AIRS) instrument on the EOS-Aqua spacecraft with data from the Cross-track Infrared and Microwave Sounding Suite (CrIMSS) instruments on the Suomi-NPP spacecraft. The CrIMSS instrument suite consists of the Cross-track Infrared Sounder (CrIS) infrared sounder and the Advanced Technology Microwave Sounder (ATMS) microwave sounder. These are all daily products on a ¼ x ¼ degree latitude/longitude grid covering CONUS.

    The SSDF algorithm infers a value for each grid point based on nearby and distant values of the input Level-2 datasets and estimates of the variance of those values, with lower variances given higher weight. Fusing two (or more) remote sensing datasets that estimate the same physical state involves four major steps: (1) filtering input data; (2) matching the remote sensing datasets to an in situ dataset, taken as a truth estimate; (3) using these matchups to characterize the input datasets via estimation of their bias and variance relative to the truth estimate; (4) performing the spatial statistical data fusion. We note that SSDF can also be performed on a single remote sensing input dataset. The SSDF algorithm only ingests the bias-corrected estimates, their latitudes and longitudes, and their estimated variances; the algorithm is agnostic as to which dataset or datasets those estimates, latitudes, longitudes, and variances originated from.
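
    The weighting principle is ordinary inverse-variance weighting. A minimal sketch (ignoring the spatial-covariance modeling that SSDF adds on top) looks like this, with made-up stand-ins for two bias-corrected Level-2 estimates of one grid point:

    ```python
    import numpy as np

    est = np.array([2.10, 1.95])   # two estimates of the same grid point
    var = np.array([0.04, 0.09])   # their estimated variances

    w = 1.0 / var                          # lower variance -> higher weight
    fused = np.sum(w * est) / np.sum(w)    # fused estimate
    fused_var = 1.0 / np.sum(w)            # variance of the fused estimate
    print(fused, fused_var)
    ```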

  11. Data from: Two tests of variance homogeneity for clustered data where group size is informative

    • tandf.figshare.com
    text/x-tex
    Updated Jan 26, 2025
    Cite
    Mary Gregg; Adam Creuziger; Somnath Datta; Douglas Lorenz (2025). Two tests of variance homogeneity for clustered data where group size is informative [Dataset]. http://doi.org/10.6084/m9.figshare.27956711.v1
    Explore at:
    Available download formats: text/x-tex
    Dataset updated
    Jan 26, 2025
    Dataset provided by
    Taylor & Francis, https://taylorandfrancis.com/
    Authors
    Mary Gregg; Adam Creuziger; Somnath Datta; Douglas Lorenz
    License

    Attribution 4.0 (CC BY 4.0), https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    To evaluate variance homogeneity among groups for clustered data, Iachine et al. (Robust tests for the equality of variances for clustered data. J Stat Comput Simul 2010;80(4):365–377) introduced an extension of the well-known Levene test. However, this method does not account for informative cluster size (ICS) or informative within-cluster group size (IWCGS), which can occur in clustered data when cluster and group sizes are random variables. This article introduces two tests of variance homogeneity that are appropriate for data with ICS and IWCGS, one extending the Levene-style transformation method and one based on a direct comparison of estimates of variance. We demonstrate the properties of our tests in a detailed simulation study and show that they are resistant to the potentially biasing effects of ICS and IWCGS. We illustrate the use of these tests by applying them to a data set of x-ray diffraction measurements collected from a specimen of duplex steel.
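
    For reference, the classic (non-clustered) Levene procedure that these methods extend is available in SciPy. The sketch below ignores clustering, ICS, and IWCGS entirely and is shown only as the baseline, on simulated data.

    ```python
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(4)
    group_a = rng.normal(0.0, 1.0, 80)
    group_b = rng.normal(0.0, 1.5, 80)   # larger spread in group b

    stat, p = stats.levene(group_a, group_b, center="median")
    print(f"W = {stat:.3f}, p = {p:.4f}")
    ```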

  12. A Comparison of Variance Estimation Methods for Regression Analyses with the Mental Health Surveillance Study Clinical Sample

    • data.virginia.gov
    • gimi9.com
    • +1 more
    html
    Updated Sep 6, 2025
    Cite
    Substance Abuse and Mental Health Services Administration (2025). A Comparison of Variance Estimation Methods for Regression Analyses with the Mental Health Surveillance Study Clinical Sample [Dataset]. https://data.virginia.gov/dataset/a-comparison-of-variance-estimation-methods-for-regression-analyses-with-the-mental-health-surv
    Explore at:
    Available download formats: html
    Dataset updated
    Sep 6, 2025
    Dataset provided by
    Substance Abuse and Mental Health Services Administration, https://www.samhsa.gov/
    Description

    The purpose of this report is to compare alternative methods for producing standard errors (SEs) for regression models fit to the Mental Health Surveillance Study (MHSS) clinical sample, with the goal of producing more accurate and potentially smaller SEs.
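
    As a generic illustration of comparing SE methods for one regression coefficient (analytic versus bootstrap), not the report's survey-weighted methodology:

    ```python
    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(5)
    n = 300
    x = rng.normal(size=n)
    y = 0.5 * x + rng.normal(size=n)
    X = sm.add_constant(x)

    analytic_se = sm.OLS(y, X).fit().bse[1]   # model-based SE of the slope

    boot = []
    for _ in range(500):                      # nonparametric bootstrap over rows
        idx = rng.integers(0, n, n)
        boot.append(sm.OLS(y[idx], X[idx]).fit().params[1])
    print(f"analytic SE: {analytic_se:.4f}  bootstrap SE: {np.std(boot, ddof=1):.4f}")
    ```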

  13. 5.10 Revenue Forecast Variance (detail)

    • catalog.data.gov
    • open.tempe.gov
    • +7 more
    Updated Aug 11, 2025
    Cite
    City of Tempe (2025). 5.10 Revenue Forecast Variance (detail) [Dataset]. https://catalog.data.gov/dataset/5-10-revenue-forecast-variance-detail-803f4
    Explore at:
    Dataset updated
    Aug 11, 2025
    Dataset provided by
    City of Tempe
    Description

    Tracking local taxes and intergovernmental revenue and evaluating the credibility of revenue forecasts greatly assists with sound financial planning; it gives policymakers the ability to make informed decisions, build a fiscally responsible budget, and support the City's priority to maintain financial stability and vitality. This page provides data for the Revenue Forecast performance measure. The performance measure dashboard is available at 5.10 Revenue Forecast Variance.

    Additional Information
    Source: PeopleSoft 400 Report, Excel
    Contact: Benicia Benson
    Contact E-Mail: Benicia_Benson@tempe.gov
    Data Source Type: Tabular
    Preparation Method: Metrics are based on actual revenue collected for local taxes and intergovernmental revenue in the City's PeopleSoft 400 Report. Total local taxes include city sales tax, sales tax rebate, sales tax penalty and interest, sales tax to be rebated, temporary PLT tax, sales tax interest, refund, and temporary PLT tax to be rebated. Total intergovernmental revenue includes State Sales Tax, State Income Tax, and State Auto Lieu Tax. Many of the estimates are provided by the League of Arizona Cities and Towns. Another principal source is the City's participation as a sponsor of the Forecasting Project developed by the University of Arizona Eller College of Management and Economic Research Center in Tucson, AZ.
    Publish Frequency: Annually, based on a fiscal year
    Publish Method: Manually retrieved and calculated
    Data Dictionary

  14. Additional file 1 of The multivariate analysis of variance as a powerful approach for circular data

    • springernature.figshare.com
    • datasetcatalog.nlm.nih.gov
    zip
    Updated Jun 15, 2023
    Cite
    Lukas Landler; Graeme D. Ruxton; E. Pascal Malkemper (2023). Additional file 1 of The multivariate analysis of variance as a powerful approach for circular data [Dataset]. http://doi.org/10.6084/m9.figshare.19670032.v1
    Explore at:
    Available download formats: zip
    Dataset updated
    Jun 15, 2023
    Dataset provided by
    Figshare, http://figshare.com/
    Authors
    Lukas Landler; Graeme D. Ruxton; E. Pascal Malkemper
    License

    Attribution 4.0 (CC BY 4.0), https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Additional file 1. R-code and example data to perform the statistical tests described in the manuscript.
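
    The additional file provides R code; as a hedged Python analogue of one common way to run a MANOVA on circular data, one can embed each angle as its cosine and sine and test group differences multivariately. Data and names below are invented.

    ```python
    import numpy as np
    import pandas as pd
    from statsmodels.multivariate.manova import MANOVA

    rng = np.random.default_rng(6)
    theta = np.concatenate([rng.vonmises(0.0, 2.0, 50),    # group a, mean direction 0
                            rng.vonmises(1.0, 2.0, 50)])   # group b, mean direction 1
    df = pd.DataFrame({
        "c": np.cos(theta),
        "s": np.sin(theta),
        "group": ["a"] * 50 + ["b"] * 50,
    })

    print(MANOVA.from_formula("c + s ~ group", data=df).mv_test())
    ```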

  15. 5.10 Revenue Forecast Variance (dashboard - history and target)

    • catalog.data.gov
    • data.tempe.gov
    • +1more
    Updated Mar 24, 2023
    Cite
    City of Tempe (2023). 5.10 Revenue Forecast Variance (dashboard - history and target) [Dataset]. https://catalog.data.gov/dataset/5-10-revenue-forecast-variance-dashboard-history-and-target-b0c88
    Explore at:
    Dataset updated
    Mar 24, 2023
    Dataset provided by
    City of Tempe
    Description

    This operations dashboard shows historic and current data related to this performance measure. The performance measure page is available at 5.10 Revenue Forecast Variance. Data Dictionary

  16. Data from: Bootstrap-based inference for multiple mean-variance changepoint models

    • tandf.figshare.com
    • datasetcatalog.nlm.nih.gov
    text/x-tex
    Updated Jan 2, 2025
    Cite
    Yang Li; Mixia Wu; Wenxin Ding (2025). Bootstrap-based inference for multiple mean-variance changepoint models [Dataset]. http://doi.org/10.6084/m9.figshare.27934521.v1
    Explore at:
    Available download formats: text/x-tex
    Dataset updated
    Jan 2, 2025
    Dataset provided by
    Taylor & Francis
    Authors
    Yang Li; Mixia Wu; Wenxin Ding
    License

    Attribution 4.0 (CC BY 4.0)https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Identifying multiple change points in the mean and/or variance is crucial across various fields, including finance and quality control. We introduce a novel technique that detects change points for the mean and/or variance of a noisy sequence and constructs confidence intervals for both the mean and variance of the sequence. This method integrates the weighted bootstrap with the Sequential Binary Segmentation (SBS) algorithm. Not only does our technique pinpoint the location and number of change points, but it also determines the type of change for each estimated point, specifying whether the change occurred in the mean, variance, or both. Our simulations show that our method outperforms other approaches in most scenarios, clearly demonstrating its superiority. Finally, we apply our technique to three datasets, including DNA copy number variation, stock volume, and traffic flow data, further validating its practical utility and wide-ranging applicability.
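
    As a rough illustration of the segmentation step only (not the authors' weighted-bootstrap inference), the sketch below scans a sequence for the single most likely mean changepoint with a CUSUM-style statistic; SBS applies such scans recursively, and the paper's method also handles variance changes.

    ```python
    import numpy as np

    def best_split(x):
        """Return the split index maximizing a two-sample mean-difference statistic."""
        n = x.size
        stats = []
        for k in range(2, n - 1):
            left, right = x[:k], x[k:]
            se = np.sqrt(left.var(ddof=1) / k + right.var(ddof=1) / (n - k))
            stats.append(abs(left.mean() - right.mean()) / se)
        return int(np.argmax(stats)) + 2   # candidate changepoint index

    rng = np.random.default_rng(7)
    x = np.concatenate([rng.normal(0.0, 1.0, 100), rng.normal(1.5, 1.0, 100)])
    print(best_split(x))   # should land near 100
    ```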

  17. Low Variance Dataset

    • kaggle.com
    zip
    Updated Jun 17, 2021
    Cite
    Ezgi Turalı (2021). Low Variance Dataset [Dataset]. https://www.kaggle.com/ezgitural/low-variance-dataset
    Explore at:
    Available download formats: zip (422949 bytes)
    Dataset updated
    Jun 17, 2021
    Authors
    Ezgi Turalı
    Description

    Context

    I needed a low-variance dataset for my project to make a point. I could not find one here, so I got hold of one, and there you go!

  18. Financial Dataset [expenses]: Budget VS Actual

    • kaggle.com
    zip
    Updated Feb 1, 2024
    Cite
    Sahar Jamal (2024). Financial Dataset [expenses]: Budget VS Actual [Dataset]. https://www.kaggle.com/datasets/saharsyed/financial-dataset-expenses-budget-vs-actual/code
    Explore at:
    Available download formats: zip (11937 bytes)
    Dataset updated
    Feb 1, 2024
    Authors
    Sahar Jamal
    License

    CC0 1.0 Universal Public Domain Dedication, https://creativecommons.org/publicdomain/zero/1.0/

    Description

    This is a small budget-vs-actual expenses dataset covering 12 months.

  19. Subfunctions for calculating variance.

    • datasetcatalog.nlm.nih.gov
    • plos.figshare.com
    Updated May 16, 2024
    Cite
    Jia, Xiaoyan; Zhang, Qinghui; Zhang, Meilin; Ding, Yang; Li, Junqiu; Jin, Yiting (2024). Subfunctions for calculating variance. [Dataset]. https://datasetcatalog.nlm.nih.gov/dataset?q=0001274866
    Explore at:
    Dataset updated
    May 16, 2024
    Authors
    Jia, Xiaoyan; Zhang, Qinghui; Zhang, Meilin; Ding, Yang; Li, Junqiu; Jin, Yiting
    Description

    The analysis of critical states during fracture of wood materials is crucial for wood building safety monitoring, wood processing, and related applications. In this paper, beech and camphor pine are selected as the research objects, and the acoustic emission signals during the fracture process of the specimens are analyzed by three-point bending load experiments. On the one hand, the critical state interval of a complex acoustic emission signal system is determined by selecting characteristic parameters in the natural time domain. On the other hand, an improved method of b_value analysis in the natural time domain is proposed based on the characteristics of the acoustic emission signal. The K-value, which marks the beginning of the critical state of a complex acoustic emission signal system, is further defined by the improved b_value method in the natural time domain. For beech, the analysis of critical state time based on characteristic parameters can predict the "collapse" time 8.01 s in advance; for camphor pine, 3.74 s in advance. The K-value can be identified at least 3 s in advance of the system "collapse" time for beech and 4 s in advance for camphor pine. The results show that, compared with traditional time-domain acoustic emission signal analysis, natural-time-domain analysis can uncover more usable feature information to characterize the state of the signal. Both the characteristic parameters and the Natural_Time_b_value analysis in the natural time domain can effectively characterize the time when the complex acoustic emission signal system enters the critical state. Critical state analysis can provide new ideas for wood health monitoring and complex signal processing.
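
    As a hedged illustration of natural-time quantities like those used here (not the paper's exact subfunctions), the sketch below computes the standard natural-time variance $$\kappa_1 = \langle\chi^2\rangle - \langle\chi\rangle^2$$, where events are indexed by natural time $$\chi_k = k/N$$ and weighted by normalized energies $$p_k$$.

    ```python
    import numpy as np

    energies = np.array([1.0, 0.5, 2.0, 1.5, 3.0, 4.5])   # AE event energies (made up)
    N = energies.size
    chi = np.arange(1, N + 1) / N      # natural time of each event
    p = energies / energies.sum()      # normalized energy weights

    kappa1 = np.sum(p * chi**2) - np.sum(p * chi) ** 2
    print(kappa1)
    ```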

  20. Information Technology - Project Schedule and Budget Variance

    • data.mesaaz.gov
    • citydata.mesaaz.gov
    csv, xlsx, xml
    Updated Jul 11, 2024
    Cite
    Information Technology (2024). Information Technology - Project Schedule and Budget Variance [Dataset]. https://data.mesaaz.gov/Information-Technology/Information-Technology-Project-Schedule-and-Budget/24pf-5hpg
    Explore at:
    Available download formats: csv, xlsx, xml
    Dataset updated
    Jul 11, 2024
    Dataset authored and provided by
    Information Technology
    Description

    New dataset replacing https://citydata.mesaaz.gov/Information-Technology/Information-Technology-Project-Schedule-and-Budget/spka-r4fd.

    This data set lists projects currently in progress and managed by the Department of Information Technology. Projects are noted if they are in an implementation phase, which makes their schedule and budget adherence applicable. Budget status (within, at, or under budget to date) is listed as determined by the project manager. Schedule status is determined from the project start date, the project manager's original go-live estimate, the current go-live estimate, and/or the actual go-live date.
