A variance is required when an applicant has submitted a proposed project to the Department of Permitting Services and it is determined that the construction, alteration, or extension does not conform to the development standards (in the zoning ordinance) for the zone in which the subject property is located. A variance may be required in any zone and applies to accessory structures as well as primary buildings or dwellings. Update Frequency: Daily
Attribution 4.0 (CC BY 4.0) https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Variances are not waivers or exceptions. They are specifically listed in the code and are alternative ways to meet the intent of the code when hardships exist. Staff and developers coordinate daily to comply with the code (preliminary meetings, technical reviews, staff review of potential variance requests by the applicant). Common variances include: trees; zoning; signs; utility infrastructure (sewer, gas, water, electric); storm water and water quality (detention and water quality ponds or structures); topography (elevation, slopes, grading requirements); FHA requirements for minimum/maximum slopes for housing; and infill development.
Attribution 4.0 (CC BY 4.0) https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Social science commonly studies relationships among variables using survey questions. Answers to these questions contain some degree of measurement error, distorting the relationships of interest. Such distortions can be removed by standard statistical methods, provided the question's measurement error variance is known. However, acquiring this information routinely requires additional experimentation, which is infeasible in practice. We use three decades' worth of survey experiments combined with machine learning methods to show that survey measurement error variance can be predicted from the way a question was asked. By predicting experimentally obtained estimates of survey measurement error variance from question characteristics, we enable researchers to estimate the extent of measurement error in a survey question without requiring additional data collection. Our results suggest that only some commonly accepted best practices in survey design have a noticeable impact on study quality, and that predicting measurement error variance is a useful approach to removing this impact in future social surveys. This repository accompanies the full paper and allows users to reproduce all results.
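As a rough illustration of the approach described above (not the paper's actual pipeline; the question features and target values below are invented placeholders), a regressor can be trained to map question characteristics to experimentally estimated error variances:

```python
# A minimal sketch, assuming hypothetical question features; the real
# study uses three decades of survey experiments, not simulated data.
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(42)
n = 500
# Hypothetical coded survey-design characteristics of each question.
X = pd.DataFrame({
    "n_answer_categories": rng.integers(2, 11, n),
    "has_dont_know":       rng.integers(0, 2, n),
    "labels_all_points":   rng.integers(0, 2, n),
    "question_length":     rng.integers(5, 40, n),
})
# Hypothetical experimentally estimated measurement error variances.
y = rng.uniform(0.05, 0.6, n)

model = RandomForestRegressor(n_estimators=200, random_state=0)
print(cross_val_score(model, X, y, cv=5, scoring="r2").mean())
```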
Attribution 4.0 (CC BY 4.0) https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
This repository contains the dataset for the manuscript
Pierzyna, M., et al. "Intercomparison of flux, gradient, and variance-based optical turbulence (Cn2) parameterizations." Applied Optics, 2024. https://doi.org/10.1364/AO.519942
The data is organized in the following structure:
met_cn2_*_10m.nc
: netCDF files containing Cn2 estimated from meteorological data obtained at the CESAR site using the flux-based and gradient-based methods at the 10 m level.
wrf_cn2_*.nc
: netCDF files containing Cn2 estimated from WRF model output using the variance-based method (80 m) and the flux-, gradient-, and variance-based methods (10 m).
wrf_meteo_*.nc
: netCDF files containing a cross-section of the CESAR site extracted from WRF model output. This data serves as input for the wrf_cn2_*.nc files.
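The netCDF files can be inspected programmatically, for example with xarray (a sketch; the file name is a placeholder for one of the patterns above, and the commented variable name is an assumption to be checked against the file's actual contents):

```python
# A minimal loading sketch; "met_cn2_example_10m.nc" stands in for any
# of the met_cn2_*_10m.nc files described above.
import xarray as xr

ds = xr.open_dataset("met_cn2_example_10m.nc")
print(ds)                  # dimensions, coordinates, and variables
# cn2 = ds["cn2_flux"]     # hypothetical name for a flux-based Cn2 variable
```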
Attribution 4.0 (CC BY 4.0) https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
This dataset is about book series. It has 1 row and is filtered where the book series is Advanced analysis of variance. It features 10 columns including number of authors, number of books, earliest publication date, and latest publication date.
CC0 1.0 Universal Public Domain Dedication https://creativecommons.org/publicdomain/zero/1.0/
License information was derived automatically
In survival analyses, inverse-probability-of-treatment (IPT) and inverse-probability-of-censoring (IPC) weighted estimators of parameters in marginal structural Cox models (Cox MSMs) are often used to estimate treatment effects in the presence of time-dependent confounding and censoring. In most applications, a robust variance estimator of the IPT and IPC weighted estimator is calculated leading to conservative confidence intervals. This estimator assumes that the weights are known rather than estimated from the data. Although a consistent estimator of the asymptotic variance of the IPT and IPC weighted estimator is generally available, applications and thus information on the performance of the consistent estimator are lacking. Reasons might be a cumbersome implementation in statistical software, which is further complicated by missing details on the variance formula. In this paper, we therefore provide a detailed derivation of the variance of the asymptotic distribution of the IPT and IPC weighted estimator and explicitly state the necessary terms to calculate a consistent estimator of this variance. We compare the performance of the robust and the consistent variance estimator in an application based on routine health care data and in a simulation study. The simulation reveals no substantial differences between the two estimators in medium and large data sets with no unmeasured confounding, but the consistent variance estimator performs poorly in small samples or under unmeasured confounding, if the number of confounders is large. We thus conclude that the robust estimator is more appropriate for all practical purposes.
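As an illustration of how the robust variance arises in practice, a weighted Cox model can be fit with a sandwich variance using the Python lifelines package (a generic sketch, not the paper's consistent estimator; the weight column here is a placeholder, whereas in an MSM analysis it would hold estimated IPT and IPC weights):

```python
# A minimal sketch of a weighted Cox fit with robust (sandwich) standard
# errors; the "w" weight column is a hypothetical placeholder.
from lifelines import CoxPHFitter
from lifelines.datasets import load_rossi

df = load_rossi()
df["w"] = 1.0  # replace with estimated IPT * IPC weights in a real MSM fit

cph = CoxPHFitter()
cph.fit(df, duration_col="week", event_col="arrest",
        weights_col="w", robust=True)  # robust=True -> sandwich variance
cph.print_summary()
```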
The purpose of this project is to improve the accuracy of statistical software by providing reference datasets with certified computational results that enable the objective evaluation of statistical software. Currently, datasets and certified values are provided for assessing the accuracy of software for univariate statistics, linear regression, nonlinear regression, and analysis of variance. The collection includes both generated and 'real-world' data of varying levels of difficulty. Generated datasets are designed to challenge specific computations. These include the classic Wampler datasets for testing linear regression algorithms and the Simon & Lesage datasets for testing analysis of variance algorithms. Real-world data include challenging datasets such as the Longley data for linear regression, and more benign datasets such as the Daniel & Wood data for nonlinear regression. Certified values are 'best-available' solutions. The certification procedure is described in the web pages for each statistical method. Datasets are ordered by level of difficulty (lower, average, and higher). Strictly speaking, the level of difficulty of a dataset depends on the algorithm; these levels are merely provided as rough guidance for the user. Producing correct results on all datasets of higher difficulty does not imply that your software will pass all datasets of average or even lower difficulty. Similarly, producing correct results for all datasets in this collection does not imply that your software will do the same for your particular dataset. It will, however, provide some degree of assurance, in the sense that your package provides correct results for datasets known to yield incorrect results for some software. The Statistical Reference Datasets project is also supported by the Standard Reference Data Program.
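One of the real-world datasets mentioned above, the Longley data, ships with statsmodels, so a package's linear regression output can be checked against the certified values (a sketch; the certified values themselves must be taken from the StRD web pages rather than hard-coded here):

```python
# A minimal accuracy check on the Longley data; compare the printed
# coefficients digit-by-digit against the NIST StRD certified values.
import statsmodels.api as sm

data = sm.datasets.longley.load_pandas()
X = sm.add_constant(data.exog)   # Longley is notoriously ill-conditioned
fit = sm.OLS(data.endog, X).fit()
print(fit.params)
```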
Attribution 4.0 (CC BY 4.0) https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
This dataset contains the data used in the paper: "The Impact of Altitude Training on NCAA Division I Female Swimmers’ Performance" being submitted to the International Journal of Performance Analysis in Sport.
A sign variance is required when a proposed sign does not conform to the requirements of the zoning ordinance pertaining to its size, height, or location. DPS processes the sign variance application, and the Sign Review Board provides the approval decision. Update Frequency: Daily
Tracking local taxes and intergovernmental revenue and evaluating the credibility of revenue forecasts greatly assists sound financial planning; it allows policymakers to make informed decisions, build a fiscally responsible budget, and support the City's priority to maintain financial stability and vitality. This page provides data for the Revenue Forecast performance measure. The performance measure dashboard is available at 5.10 Revenue Forecast Variance.
Additional Information
Source: PeopleSoft 400 Report, Excel
Contact: Benicia Benson
Contact E-Mail: Benicia_Benson@tempe.gov
Data Source Type: Tabular
Preparation Method: Metrics are based on actual revenue collected for local taxes and intergovernmental revenue in the City's PeopleSoft 400 Report. Total local taxes include city sales tax, sales tax rebate, sales tax penalty and interest, sales tax to be rebated, temporary PLT tax, sales tax interest, refund, and temporary PLT tax to be rebated. Total intergovernmental revenue includes State Sales Tax, State Income Tax, and State Auto Lieu Tax. Many of the estimates are provided by the League of Arizona Cities and Towns. Another principal source is the City's participation as a sponsor of the Forecasting Project developed by the University of Arizona Eller College of Management and Economic Research Center in Tucson, AZ.
Publish Frequency: Annually, based on a fiscal year
Publish Method: Manually retrieved and calculated
Data Dictionary
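A forecast variance metric of this kind is typically the percentage difference between actual and forecast revenue; a sketch with hypothetical column names and illustrative figures:

```python
# A minimal sketch; column names and dollar values are invented
# placeholders, not fields from the published table.
import pandas as pd

df = pd.DataFrame({
    "fiscal_year": [2021, 2022, 2023],
    "forecast":    [100.0, 105.0, 112.0],   # $M, illustrative
    "actual":      [103.5, 104.0, 118.2],
})
df["variance_pct"] = 100 * (df["actual"] - df["forecast"]) / df["forecast"]
print(df)
```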
Attribution 4.0 (CC BY 4.0) https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Identifying multiple change points in the mean and/or variance is crucial across various fields, including finance and quality control. We introduce a novel technique that detects change points in the mean and/or variance of a noisy sequence and constructs confidence intervals for both. The method integrates the weighted bootstrap with the Sequential Binary Segmentation (SBS) algorithm. Not only does our technique pinpoint the location and number of change points, it also determines the type of each estimated change, specifying whether it occurred in the mean, the variance, or both. Our simulations show that our method outperforms other approaches in most scenarios. Finally, we apply our technique to three datasets, covering DNA copy number variation, stock volume, and traffic flow, further validating its practical utility and wide-ranging applicability.
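Setting the paper's weighted-bootstrap machinery aside, the binary segmentation idea can be illustrated with a plain Gaussian-likelihood cost that responds to both mean and variance changes (a toy sketch, not the authors' implementation; it omits their bootstrap, confidence intervals, and change-type classification):

```python
# A minimal binary segmentation sketch with a Gaussian cost; splitting
# reduces n*log(segment variance), so both mean and variance shifts
# produce a likelihood gain.
import numpy as np

def gauss_cost(x):
    # Twice the negative Gaussian log-likelihood, up to constants.
    return len(x) * np.log(max(x.var(), 1e-12))

def best_split(x, min_size=10):
    # Return (gain, index) for the best single split, or None.
    n, full, best = len(x), gauss_cost(x), None
    for k in range(min_size, n - min_size + 1):
        gain = full - gauss_cost(x[:k]) - gauss_cost(x[k:])
        if best is None or gain > best[0]:
            best = (gain, k)
    return best

def binary_segmentation(x, penalty=20.0, min_size=10, offset=0):
    # Recursively split while the likelihood gain exceeds the penalty.
    split = best_split(x, min_size)
    if split is None or split[0] <= penalty:
        return []
    _, k = split
    return (binary_segmentation(x[:k], penalty, min_size, offset)
            + [offset + k]
            + binary_segmentation(x[k:], penalty, min_size, offset + k))

rng = np.random.default_rng(0)
x = np.concatenate([rng.normal(0, 1, 200),   # baseline
                    rng.normal(3, 1, 200),   # mean shift
                    rng.normal(3, 4, 200)])  # variance shift
print(binary_segmentation(x))  # expected near [200, 400]
```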
The purpose of this report is to compare alternative methods for producing standard errors (SEs) for regression models fit to the MHSS clinical sample, with the goal of producing more accurate and potentially smaller SEs.
Attribution 4.0 (CC BY 4.0) https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Collinearity discovery through diagnostic tools is an important analysis step when performing linear regression. Despite their widespread use, collinearity indices such as the variance inflation factor and the condition number have limitations and may not be effective in some applications. In this article, we contribute to the study of conventional collinearity indices through theoretical and empirical work. We present mcvis, a new framework that uses resampling techniques to repeatedly learn from these conventional collinearity indices to better understand the causes of collinearity. Our framework is made available in R through the mcvis package, which includes new collinearity measures and visualizations, in particular a bipartite plot that informs on the degree and structure of collinearity. Supplementary materials for this article are available online.
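mcvis itself is an R package, but the conventional indices it builds on are straightforward to compute elsewhere; for example, in Python on simulated collinear data (a sketch of the diagnostics the abstract discusses, not of mcvis):

```python
# A minimal sketch: variance inflation factors and the condition number
# on simulated data where x2 is nearly collinear with x1.
import numpy as np
import statsmodels.api as sm
from statsmodels.stats.outliers_influence import variance_inflation_factor

rng = np.random.default_rng(0)
n = 200
x1 = rng.normal(size=n)
x2 = x1 + rng.normal(scale=0.05, size=n)   # nearly collinear with x1
x3 = rng.normal(size=n)
X = sm.add_constant(np.column_stack([x1, x2, x3]))

for i in range(1, X.shape[1]):             # skip the intercept column
    print(f"VIF x{i}: {variance_inflation_factor(X, i):.1f}")
print("condition number:", np.linalg.cond(X))
```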
This is the history of all FMS-reported budget and spend by budget line item and reported Year-Month. Each row is a snapshot of the data recorded in each reported Year-Month. The earliest snapshot represents the 'original budget' of a project. The 'budget variance' is the difference between the 'total budget' of each row and that of its prior reported Year-Month. This dataset is part of the Capital Projects Dashboard.
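The budget variance described above is a first difference within each budget line; a sketch with hypothetical column names:

```python
# A minimal sketch; "budget_line", "year_month", and "total_budget" are
# hypothetical stand-ins for the dataset's actual field names.
import pandas as pd

df = pd.DataFrame({
    "budget_line":  ["A", "A", "A", "B", "B"],
    "year_month":   ["2023-01", "2023-02", "2023-03", "2023-01", "2023-02"],
    "total_budget": [100.0, 100.0, 120.0, 50.0, 45.0],
}).sort_values(["budget_line", "year_month"])

# Difference vs. the prior reported Year-Month within each budget line;
# the earliest snapshot (NaN diff) is the original budget.
df["budget_variance"] = df.groupby("budget_line")["total_budget"].diff()
print(df)
```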
Attribution 4.0 (CC BY 4.0) https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Analysis of variance.
Attribution 4.0 (CC BY 4.0) https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Dataset for: Bedding scale correlation on Mars in western Arabia Terra
A.M. Annex et al.
Data Product Overview
This repository contains all source data for the publication. Below is a description of each general data product type, the software that can load the data, and a list of the file names along with a short description of each data product.
HiRISE Digital Elevation Models (DEMs).
HiRISE DEMs produced using the Ames Stereo Pipeline are GeoTIFFs ending with ‘*X_0_DEM-adj.tif’, where the “X” placeholder denotes the spatial resolution of the data product in meters. GeoTIFF files can be read by free GIS software such as QGIS.
HiRISE map-projected imagery (DRGs).
Map-projected HiRISE images produced using the Ames Stereo Pipeline are GeoTIFFs ending with ‘*0_Y_DRG-cog.tif’, where the “Y” placeholder denotes the spatial resolution of the data product in centimeters. GeoTIFF files can be read by free GIS software such as QGIS. The DRG files are formatted as cloud-optimized GeoTIFFs (COGs) for better compression and ease of use.
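These GeoTIFFs can also be read programmatically, for instance with rasterio (a sketch; the file name is a placeholder matching the patterns above):

```python
# A minimal sketch; the file name is a placeholder for one of the
# '*X_0_DEM-adj.tif' or '*0_Y_DRG-cog.tif' products described above.
import rasterio

with rasterio.open("example_1_0_DEM-adj.tif") as src:
    print(src.crs, src.res)    # coordinate system and pixel size
    dem = src.read(1)          # first band as a numpy array
```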
3D Topography files (.ply).
Triangular mesh versions of the HiRISE/CTX topography data used for 3D figures, in “.ply” format. Meshes are greatly geometrically simplified from the source files. Topography files can be loaded in a variety of open-source tools such as ParaView and MeshLab. Textures can be applied using embedded texture coordinates.
3D Geological Model outputs (.vtk)
VTK 3D file format files of model output over the spatial domain of each study site. VTK files can be loaded by ParaView open source software. The “block” files contain the model evaluation over a regular grid over the model extent. The “surfaces” files contain just the bedding surfaces as interpolated from the “block” files using the marching cubes algorithm.
Geological Model geologic maps (geologic_map.tif).
Geologic maps from geological models are standard geotiffs readable by conventional GIS software. The maximum value for each geologic map is the “no-data” value for the map. Geologic maps are calculated at a lower resolution than the topography data for storage efficiency.
Beds Geopackage File (.gpkg).
Geopackage vector data file containing all mapped layers and associated metadata, including dip-corrected bed thickness as well as WKB-encoded 3D linestrings representing the sampled topography data to which the bedding orientations were fit. Geopackage files can be read using GIS software such as QGIS and ArcGIS, as well as the OGR/GDAL suite. A full description of each column in the file is provided below; a minimal loading sketch follows the table.
Column | Type | Description |
---|---|---|
uuid | String | unique identifier |
stratum_order | Real | 0-indexed bed order |
section | Real | section number |
layer_id | Real | bed number/index |
layer_id_bk | Real | unused backup bed number/index |
source_raster | String | dem file path used |
raster | String | dem file name |
gsd | Real | ground sampling distance for dem |
wkn | String | well known name for dem |
rtype | String | raster type |
minx | Real | minimum x position of trace in dem crs |
miny | Real | minimum y position of trace in dem crs |
maxx | Real | maximum x position of trace in dem crs |
maxy | Real | maximum y position of trace in dem crs |
method | String | internal interpolation method |
sl | Real | slope in degrees |
az | Real | azimuth in degrees |
error | Real | maximum error ellipse angle |
stdr | Real | standard deviation of the residuals |
semr | Real | standard error of the residuals |
X | Real | mean x position in CRS |
Y | Real | mean y position in CRS |
Z | Real | mean z position in CRS |
b1 | Real | plane coefficient 1 |
b2 | Real | plane coefficient 2 |
b3 | Real | plane coefficient 3 |
b1_se | Real | standard error plane coefficient 1 |
b2_se | Real | standard error plane coefficient 2 |
b3_se | Real | standard error plane coefficient 3 |
b1_ci_low | Real | plane coefficient 1 95% confidence interval low |
b1_ci_high | Real | plane coefficient 1 95% confidence interval high |
b2_ci_low | Real | plane coefficient 2 95% confidence interval low |
b2_ci_high | Real | plane coefficient 2 95% confidence interval high |
b3_ci_low | Real | plane coefficient 3 95% confidence interval low |
b3_ci_high | Real | plane coefficient 3 95% confidence interval high |
pca_ev_1 | Real | pca explained variance ratio pc 1 |
pca_ev_2 | Real | pca explained variance ratio pc 2 |
pca_ev_3 | Real | pca explained variance ratio pc 3 |
condition_number | Real | condition number for regression |
n | Integer64 | number of data points used in regression |
rls | Integer(Boolean) | unused flag |
demeaned_regressions | Integer(Boolean) | centering indicator |
meansl | Real | mean section slope |
meanaz | Real | mean section azimuth |
angular_error | Real | angular error for section |
mB_1 | Real | mean plane coefficient 1 for section |
mB_2 | Real | mean plane coefficient 2 for section |
mB_3 | Real | mean plane coefficient 3 for section |
R | Real | mean plane normal orientation vector magnitude |
num_valid | Integer64 | number of valid planes in section |
meanc | Real | mean stratigraphic position |
medianc | Real | median stratigraphic position |
stdc | Real | standard deviation of stratigraphic index |
stec | Real | standard error of stratigraphic index |
was_monotonic_increasing_layer_id | Integer(Boolean) | monotonic layer_id after projection to stratigraphic index |
was_monotonic_increasing_meanc | Integer(Boolean) | monotonic meanc after projection to stratigraphic index |
was_monotonic_increasing_z | Integer(Boolean) | monotonic z increasing after projection to stratigraphic index |
meanc_l3sigma_std | Real | lower 3-sigma meanc standard deviation |
meanc_u3sigma_std | Real | upper 3-sigma meanc standard deviation |
meanc_l2sigma_sem | Real | lower 2-sigma meanc standard error |
meanc_u2sigma_sem | Real | upper 2-sigma meanc standard error |
thickness | Real | difference in meanc |
thickness_fromz | Real | difference in Z value |
dip_cor | Real | dip correction |
dc_thick | Real | thickness after dip correction |
dc_thick_fromz | Real | z thickness after dip correction |
dc_thick_dev | Integer(Boolean) | dc_thick <= total mean dc_thick |
dc_thick_fromz_dev | Integer(Boolean) | dc_thick_fromz <= total mean dc_thick_fromz |
thickness_fromz_dev | Integer(Boolean) | thickness_fromz <= total mean thickness_fromz |
dc_thick_dev_bg | Integer(Boolean) | dc_thick <= section mean dc_thick |
dc_thick_fromz_dev_bg | Integer(Boolean) | dc_thick_fromz <= section mean dc_thick_fromz |
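The geopackage can be loaded with geopandas, and the WKB-encoded 3D linestrings decoded with shapely (a sketch; the file name and the WKB column name are placeholders, while the printed columns come from the table above):

```python
# A minimal sketch; "beds.gpkg" and the WKB column name "trace_wkb" are
# placeholders, not names confirmed by the repository.
import geopandas as gpd
from shapely import wkb

gdf = gpd.read_file("beds.gpkg")
print(gdf[["layer_id", "sl", "az", "dc_thick"]].head())

# Decode a hypothetical WKB-encoded 3D linestring column, if present:
# line3d = wkb.loads(bytes(gdf.loc[0, "trace_wkb"]))
```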
Attribution 4.0 (CC BY 4.0) https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
This dataset is about books. It has 1 row and is filtered where the book is Variance in Arabic manuscripts : Arabic didactic poems from the eleventh to the seventeenth centuries : analysis of textual variance and its control in the manuscripts. It features 7 columns including author, publication date, language, and book publisher.
New dataset replacing https://citydata.mesaaz.gov/Information-Technology/Information-Technology-Project-Schedule-and-Budget/spka-r4fd.
This dataset lists projects currently in process and managed by the Department of Information Technology. Projects are noted if they are in an implementation phase, which makes their schedule and budget adherence applicable. Budget adherence is listed and determined by the project manager, indicating whether the project is within budget or at or under budget to date. Schedule adherence is determined from the project start date, the project manager's original go-live estimate, and the current go-live estimate and/or actual go-live date.
List of all after-hours variances issued in DOB NOW
Data underlying Fig 3A. Columns include bootstrap, model component, and the estimated variance. (CSV)