This dataset contains information about the percent variance between the actual and budgeted revenue (SD23 measure GTW.A.8). The City of Austin has numerous revenue sources, including charges for services/goods, taxes, and more. This measure helps provide insight into whether the City is receiving as much revenue as anticipated. For each revenue type and year, this dataset provides the budgeted revenue, actual revenue, and percent variance. This data comes from the City of Austin's Open Budget (Revenue Budget) application. View more details and insights related to this dataset on the story page: https://data.austintexas.gov/stories/s/Percent-Variance-Between-Actual-and-Budgeted-Reven/wmvj-b5er/
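For orientation, the minimal sketch below shows a percent-variance calculation, assuming the conventional definition (actual minus budgeted, divided by budgeted, times 100); the figures are illustrative and are not drawn from the dataset.

```python
# Illustrative only: assumed formula (actual - budgeted) / budgeted * 100,
# with made-up revenue figures, not values from the Austin dataset.
budgeted = 1_250_000.0
actual = 1_187_500.0
percent_variance = (actual - budgeted) / budgeted * 100
print(f"{percent_variance:.1f}%")   # -> -5.0%
```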
ISEE-1 Weimer propagated solar wind data and linearly interpolated time delay, cosine angle, and goodness information of the propagated data at 1 min resolution. This data set consists of solar wind data that have first been propagated to a position just outside of the nominal bow shock (about 17, 0, 0 Re) and then linearly interpolated to 1 min resolution using the interp1.m function in MATLAB. The input for this data set is 1 min resolution processed solar wind data constructed by Dr. J.M. Weygand. The method of propagation is similar to the minimum variance technique and is outlined in Weimer et al. [2003, 2004]. The basic method is to find the minimum variance direction of the magnetic field in the plane orthogonal to the mean magnetic field direction. This minimum variance direction is then dotted with the difference between the final and original position vectors, and the result is divided by the minimum variance direction dotted with the solar wind velocity vector, which gives the propagation time. This method does not work well for shocks or for minimum variance directions tilted more than 70 degrees from the Sun-Earth line. This data set was originally constructed by Dr. J.M. Weygand for Prof. R.L. McPherron, who was the principal investigator of two National Science Foundation studies: GEM Grant ATM 02-1798 and Space Weather Grant ATM 02-08501. These data were primarily used in superposed epoch studies. References: Weimer, D. R. (2004), Correction to "Predicting interplanetary magnetic field (IMF) propagation delay times using the minimum variance technique," J. Geophys. Res., 109, A12104, doi:10.1029/2004JA010691. Weimer, D. R., D. M. Ober, N. C. Maynard, M. R. Collier, D. J. McComas, N. F. Ness, C. W. Smith, and J. Watermann (2003), Predicting interplanetary magnetic field (IMF) propagation delay times using the minimum variance technique, J. Geophys. Res., 108, 1026, doi:10.1029/2002JA009405.
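For reference, here is a minimal sketch of the propagation-delay calculation described above, written in Python with NumPy rather than the original MATLAB code; the function name and inputs are illustrative and are not part of the dataset.

```python
# Illustrative sketch, not the original Weimer/Weygand code: estimate the
# minimum variance direction of the IMF in the plane orthogonal to the mean
# field, then form delay = n . (r_target - r_source) / (n . v_sw).
import numpy as np

def propagation_delay(B, r_source, r_target, v_sw):
    """B: (N, 3) array of IMF samples [nT]; positions [km]; v_sw: (3,) [km/s].
    Returns the propagation time in seconds."""
    b_mean = B.mean(axis=0)
    b_hat = b_mean / np.linalg.norm(b_mean)
    # Orthonormal basis (e1, e2) for the plane orthogonal to the mean field.
    helper = np.array([1.0, 0.0, 0.0]) if abs(b_hat[0]) < 0.9 else np.array([0.0, 1.0, 0.0])
    e1 = np.cross(b_hat, helper)
    e1 /= np.linalg.norm(e1)
    e2 = np.cross(b_hat, e1)
    # Covariance of the field components within that plane; the eigenvector
    # with the smallest eigenvalue is the in-plane minimum variance direction.
    comps = np.column_stack((B @ e1, B @ e2))
    eigvals, eigvecs = np.linalg.eigh(np.cov(comps, rowvar=False))
    w = eigvecs[:, 0]                       # eigenvalues sorted ascending
    n_mv = w[0] * e1 + w[1] * e2            # back to 3-D coordinates
    return float(n_mv @ (r_target - r_source)) / float(n_mv @ v_sw)
```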
Attribution 4.0 (CC BY 4.0) https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
R Core Team. (2016). R: A language and environment for statistical computing. R Foundation for Statistical Computing.
Supplement to Occipital and left temporal instantaneous amplitude and frequency oscillations correlated with access and phenomenal consciousness (https://philpapers.org/rec/PEROAL-2).
Occipital and left temporal instantaneous amplitude and frequency oscillations correlated with access and phenomenal consciousness moves from the features of the ERP characterized in Occipital and Left Temporal EEG Correlates of Phenomenal Consciousness (Pereira, 2015, https://doi.org/10.1016/b978-0-12-802508-6.00018-1, https://philpapers.org/rec/PEROAL) towards the instantaneous amplitude and frequency of event-related changes correlated with a contrast in access and in phenomenology.
Occipital and left temporal instantaneous amplitude and frequency oscillations correlated with access and phenomenal consciousness proceeds as follows (an illustrative sketch of these steps appears after the list of sections).
In the first section, empirical mode decomposition (EMD) with post-processing ensemble empirical mode decomposition (postEEMD; Xie, G., Guo, Y., Tong, S., and Ma, L., 2014. Calculate excess mortality during heatwaves using Hilbert-Huang transform algorithm. BMC Medical Research Methodology, 14, 35) and the Hilbert-Huang transform (HHT) are applied.
In the second section, the variance inflation factor (VIF) is calculated.
In the third section, partial least squares regression (PLSR) is used to find the minimal root mean squared error of prediction (RMSEP).
In the last section, partial least squares regression (PLSR) is used to compute the significance multivariate correlation (sMC) statistic.
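The sketch below illustrates, with standard Python tools (SciPy, statsmodels, scikit-learn) and synthetic data, the kinds of computations named in the sections above: instantaneous amplitude and frequency from the Hilbert transform of an already extracted intrinsic mode function, variance inflation factors, and selection of PLSR components by minimal cross-validated RMSEP. It is illustrative only and is not the study's original code.

```python
# Illustrative sketch with synthetic placeholder data.
import numpy as np
from scipy.signal import hilbert
from statsmodels.stats.outliers_influence import variance_inflation_factor
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_predict

fs = 250.0                                  # assumed sampling rate, Hz
t = np.arange(0, 2, 1 / fs)
imf = np.sin(2 * np.pi * 10 * t) * (1 + 0.3 * np.sin(2 * np.pi * t))

# Hilbert transform of an (already extracted) intrinsic mode function gives
# instantaneous amplitude and instantaneous frequency.
analytic = hilbert(imf)
inst_amp = np.abs(analytic)
inst_freq = np.diff(np.unwrap(np.angle(analytic))) * fs / (2 * np.pi)

# Variance inflation factors for the columns of a predictor matrix X.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 4))
vif = [variance_inflation_factor(X, j) for j in range(X.shape[1])]

# PLSR: choose the number of components by the minimal cross-validated RMSEP.
y = X @ np.array([1.0, 0.5, 0.0, -0.5]) + rng.normal(scale=0.1, size=100)
rmsep = []
for n in range(1, 5):
    pred = cross_val_predict(PLSRegression(n_components=n), X, y, cv=5)
    rmsep.append(np.sqrt(np.mean((y - pred.ravel()) ** 2)))
best_n = int(np.argmin(rmsep)) + 1
```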
CC0 1.0 Universal Public Domain Dedication https://creativecommons.org/publicdomain/zero/1.0/
License information was derived automatically
In power analysis for multivariable Cox regression models, variance of the estimated log-hazard ratio for the treatment effect is usually approximated by inverting the expected null information matrix. Because in many typical power analysis settings assumed true values of the hazard ratios are not necessarily close to unity, the accuracy of this approximation is not theoretically guaranteed. To address this problem, the null variance expression in power calculations can be replaced with one of alternative expressions derived under the assumed true value of the hazard ratio for the treatment effect. This approach is explored analytically and by simulations in the present paper. We consider several alternative variance expressions, and compare their performance to that of the traditional null variance expression. Theoretical analysis and simulations demonstrate that while the null variance expression performs well in many non-null settings, it can also be very inaccurate, substantially underestimating or overestimating the true variance in a wide range of realistic scenarios, particularly those where the numbers of treated and control subjects are very different and the true hazard ratio is not close to one. The alternative variance expressions have much better theoretical properties, confirmed in simulations. The most accurate of these expressions has a relatively simple form - it is the sum of inverse expected event counts under treatment and under control scaled up by a variance inflation factor.
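As an illustration of the final point, a minimal sketch of a power calculation built on that variance form is given below; the hazard ratio, expected event counts, and variance inflation factor are placeholder values, and the normal-approximation power formula is a standard choice rather than anything prescribed by the paper.

```python
# Illustrative sketch: Var(log HR) ~ (1/E[D1] + 1/E[D0]) * VIF, plugged into a
# standard normal-approximation power formula for a two-sided Wald test.
from math import log, sqrt
from scipy.stats import norm

def cox_power(hr, events_treated, events_control, vif=1.0, alpha=0.05):
    """Approximate power for testing the treatment effect in a Cox model."""
    var = (1.0 / events_treated + 1.0 / events_control) * vif
    z_alpha = norm.ppf(1.0 - alpha / 2.0)
    return norm.cdf(abs(log(hr)) / sqrt(var) - z_alpha)

# Placeholder numbers only: HR = 0.6, 80 vs 200 expected events, VIF = 1.2.
print(round(cox_power(0.6, 80, 200, vif=1.2), 3))
```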
Attribution 4.0 (CC BY 4.0) https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
The archive contains datasets, run scripts, time series and plotting scripts used when preparing the paper: P. Zmijewski, P. Dziekan and H. Pawlowska "Modeling Collision-Coalescence in Particle Microphysics: Numerical Convergence of Mean and Variance of Precipitation in Cloud Simulations Using University of Warsaw Lagrangian Cloud Model (UWLCM) 2.1 " submitted to Geoscientific Model Development in March 2023.
Attribution 4.0 (CC BY 4.0) https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Additional file 1. QTL detected using the whole dataset to determine the genetic architecture of egg production and egg quality traits. This file gives the position of all the QTL detected using the whole dataset, with the top SNP corresponding to the SNP with the highest effect in the QTL region. Each QTL is defined by the first (left) and last (right) SNPs that are significant at the 1 % chromosome-wise level. Var (%) is the percentage of variance explained by the top SNP in the analysis with the whole dataset. Var LE (%) is the percentage of variance explained by the top SNP in the analysis with data for the low-energy diet only. Var HE (%) is the percentage of variance explained by the top SNP in the analysis with data for the high-energy diet only. Var 50 (%) is the percentage of variance explained by the top SNP in the analysis with data for egg collection at 50 weeks only. Var 70 (%) is the percentage of variance explained by the top SNP in the analysis with data for egg collection at 70 weeks only. Z Diet is the Z-test statistic used to compare the two estimates calculated from the data for the LE and HE diets. Z Age is the Z-test statistic used to compare the two estimates calculated from the data for egg collection at 50 and 70 weeks of age. The difference was significant when P
We report the results of the first XMM-Newton systematic "excess variance" study of all radio-quiet, X-ray unobscured AGN. The entire sample consists of 161 sources observed by XMM-Newton for more than 10 ks in pointed observations, which is the largest sample used so far to study AGN X-ray variability on time scales shorter than a day. Recently it has been suggested that the same engine might be at work in the core of every black hole (BH) accreting object. In this hypothesis, the same variability should be observed in all AGN, once rescaled by the BH mass (MBH) and accretion rate (dm/dt). We systematically compute the excess variance for all AGN, on different time scales (10, 20, 40 and 80 ks) and in different energy bands (0.3-0.7, 0.7-2 and 2-10 keV). Cone search capability for table J/A+A/542/A83/AGNs (list of all the excess variances computed, in the 2-10 keV band, with 10, 20, 40 and 80 ks intervals).
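For reference, the sketch below computes the normalized excess variance in the form commonly used for AGN light curves (sample variance minus the mean squared measurement error, normalized by the squared mean rate); it is an illustrative implementation, not the catalogue's pipeline.

```python
# Illustrative sketch of a normalized excess variance calculation.
import numpy as np

def excess_variance(rate, rate_err):
    """rate, rate_err: 1-D arrays of count rates and their 1-sigma errors."""
    mu = rate.mean()
    s2 = rate.var(ddof=1)                  # sample variance of the light curve
    mse = np.mean(rate_err ** 2)           # mean squared measurement error
    return (s2 - mse) / mu ** 2
```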
ISEE-3 Weimer propagated solar wind data and linearly interpolated time delay, cosine angle, and goodness information of the propagated data at 1 min resolution. This data set consists of solar wind data that have first been propagated to a position just outside of the nominal bow shock (about 17, 0, 0 Re) and then linearly interpolated to 1 min resolution using the interp1.m function in MATLAB. The input for this data set is 1 min resolution processed solar wind data constructed by Dr. J.M. Weygand. The method of propagation is similar to the minimum variance technique and is outlined in Weimer et al. [2003, 2004]. The basic method is to find the minimum variance direction of the magnetic field in the plane orthogonal to the mean magnetic field direction. This minimum variance direction is then dotted with the difference between the final and original position vectors, and the result is divided by the minimum variance direction dotted with the solar wind velocity vector, which gives the propagation time. This method does not work well for shocks or for minimum variance directions tilted more than 70 degrees from the Sun-Earth line. This data set was originally constructed by Dr. J.M. Weygand for Prof. R.L. McPherron, who was the principal investigator of two National Science Foundation studies: GEM Grant ATM 02-1798 and Space Weather Grant ATM 02-08501. These data were primarily used in superposed epoch studies. References: Weimer, D. R. (2004), Correction to "Predicting interplanetary magnetic field (IMF) propagation delay times using the minimum variance technique," J. Geophys. Res., 109, A12104, doi:10.1029/2004JA010691. Weimer, D. R., D. M. Ober, N. C. Maynard, M. R. Collier, D. J. McComas, N. F. Ness, C. W. Smith, and J. Watermann (2003), Predicting interplanetary magnetic field (IMF) propagation delay times using the minimum variance technique, J. Geophys. Res., 108, 1026, doi:10.1029/2002JA009405.
Environmental processes resolved at a sufficiently small scale in space and time inevitably display nonstationary behavior. Such processes are both challenging to model and computationally expensive when the data size is large. Instead of modeling the global nonstationarity explicitly, local models can be applied to disjoint regions of the domain. The choice of the size of these regions is dictated by a bias-variance trade-off: large regions will have smaller variance and larger bias, whereas small regions will have higher variance and smaller bias. From both the modeling and the computational point of view, small regions are preferable to better accommodate the nonstationarity. However, in practice, large regions are necessary to control the variance. We propose a novel Bayesian three-step approach that allows for smaller regions without the increase in variance that would otherwise follow. We are able to propagate the uncertainty from one step to the next without the issues caused by reusing the data. The improvement in inference also results in improved prediction, as our simulated example shows. We illustrate this new approach on a dataset of simulated high-resolution wind speed data over Saudi Arabia. Supplemental files for this article are available online.
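The toy sketch below illustrates the bias-variance trade-off described above with disjoint local models on a one-dimensional nonstationary signal; it is not the paper's three-step Bayesian method, and all data are synthetic.

```python
# Illustrative sketch: fit an independent local mean in each disjoint block.
# Very small blocks are dominated by variance, very large ones by bias.
import numpy as np

rng = np.random.default_rng(1)
n = 2000
x = np.linspace(0, 10, n)
signal = np.sin(x) * x / 5.0               # slowly varying, nonstationary mean
y = signal + rng.normal(scale=0.5, size=n)

def blockwise_fit(y, block_size):
    """Local mean fitted independently in each disjoint block of the domain."""
    fit = np.empty_like(y)
    for start in range(0, len(y), block_size):
        sl = slice(start, start + block_size)
        fit[sl] = y[sl].mean()
    return fit

for block_size in (5, 25, 100, 400):
    mse = np.mean((blockwise_fit(y, block_size) - signal) ** 2)
    print(block_size, round(mse, 4))       # MSE is smallest at a middle size
```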
Entropy production is the hallmark of nonequilibrium physics, quantifying irreversibility, dissipation, and the efficiency of energy transduction processes. Despite many efforts, its measurement at the nanoscale remains challenging. We introduce a variance sum rule for displacement and force variances that permits us to measure the entropy production rate in nonequilibrium steady states. We first illustrate it for directly measurable forces, such as an active Brownian particle in an optical trap. Data for this analysis can be found in repository (1) described below. We then apply the variance sum rule to flickering experiments in human red blood cells (repositories (2-4)). We find that the entropy production rate is spatially heterogeneous with a finite correlation length (in particular, data in repository (4)) and that its average value agrees with calorimetry measurements. The dataset is composed of 4 repositories:
1) SwitchingTrap.zip, containing data from optical-tweezer experiments used in Figs. 2 and 3 of the main paper; all data are three-column files featuring time (s), position (nm), and force (pN).
2) OpticalStretching.zip, containing data from optical-tweezer experiments shown in Fig. 4a of the main paper; all data are two-column files featuring time (s) and position (nm) traces.
3) OpticalSensing.zip, containing data from optical-tweezer experiments shown in Fig. 4b of the main paper; all data are one-column files featuring position (m) traces, sampling frequency 25 kHz.
4) OpticalMicroscopy.rar, containing data from optical-microscopy experiments shown in Fig. 4c of the main paper; all data are one-column files featuring position (nm) traces, sampling frequency 2 kHz.
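A minimal sketch for working with the three-column files in repository (1) is given below; the file name is hypothetical, and the quantities computed (displacement and force-increment variances at a given lag) are simply the ingredients that enter a variance sum rule, not the authors' full analysis.

```python
# Illustrative loader for a three-column SwitchingTrap file: time (s),
# position (nm), force (pN). Computes displacement and force-increment
# variances at a chosen lag.
import numpy as np

def increment_variances(path, lag):
    """lag: number of samples separating the two points of each increment."""
    t, x, f = np.loadtxt(path, unpack=True)
    dt = t[1] - t[0]
    var_dx = np.var(x[lag:] - x[:-lag])    # displacement variance at lag*dt
    var_df = np.var(f[lag:] - f[:-lag])    # force-increment variance at lag*dt
    return lag * dt, var_dx, var_df

# Hypothetical file name inside SwitchingTrap.zip; adjust to the actual files.
# tau, vdx, vdf = increment_variances("trace_01.txt", lag=10)
```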
CC0 1.0 Universal Public Domain Dedication https://creativecommons.org/publicdomain/zero/1.0/
License information was derived automatically
The provided dataset contains results from Monte Carlo simulations related to variance swaps. The data is organized into multiple sheets, each focusing on different parameters and scenarios.
Figure 1 (Monte Carlo simulations): this section presents the results of Monte Carlo simulations for both discretely-sampled and continuously-sampled variance swaps. The values are reported for different sample sizes (N=12 to N=322), showing how the estimated variance swap values converge as the number of samples increases. Sample 1 and Sample 2 represent two different sets of simulation results, each showing the impact of varying sample sizes on the variance swap values.
Figure 2: κθ (kappa theta) explores the impact of different values of κθ on the variance swap values; θ̃ (theta tilde) examines the effect of varying θ̃; σθ (sigma theta) analyzes the influence of σθ; θ₀ (theta zero) investigates the impact of different initial volatility levels (θ₀).
Sheet 3: λ (lambda) studies the effect of varying λ on the variance swap values; η (eta) examines the influence of η; ν (nu) analyzes the impact of ν; δ (delta) investigates the effect of varying δ.
Overall, the dataset provides a comprehensive analysis of how different parameters and sampling methods affect the valuation of variance swaps, offering insights into the sensitivity and convergence behavior of these financial instruments under various conditions.
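The sketch below reproduces, in miniature, the qualitative behaviour described for Figure 1: a Monte Carlo estimate of discretely-sampled realized variance approaching the continuously-sampled value as the number of sampling dates grows. It uses a plain constant-volatility model rather than the stochastic-volatility model (κθ, θ̃, σθ, θ₀, λ, η, ν, δ) behind the dataset, so it is illustrative only.

```python
# Illustrative sketch under constant volatility (GBM): the continuously-sampled
# fair value of realized variance is sigma**2, and the discretely-sampled MC
# estimate approaches it as the number of observation dates n_obs grows.
import numpy as np

def mc_variance_swap(sigma=0.2, r=0.05, T=1.0, n_obs=12, n_paths=50_000, seed=0):
    """MC estimate of annualized realized variance with n_obs sampling dates."""
    rng = np.random.default_rng(seed)
    dt = T / n_obs
    z = rng.standard_normal((n_paths, n_obs))
    log_ret = (r - 0.5 * sigma**2) * dt + sigma * np.sqrt(dt) * z
    realized_var = np.sum(log_ret**2, axis=1) / T
    return realized_var.mean()

for n_obs in (12, 52, 252):
    # Compare with the continuously-sampled value sigma**2 = 0.04.
    print(n_obs, round(mc_variance_swap(n_obs=n_obs), 6))
```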
Raw phenotypic data: dryad_univariate.txt
Line means (line means for the raw phenotypic data): dryad_line_means.txt
Attribution 4.0 (CC BY 4.0) https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Purity results on all the data sets.
The dataset originates from 10-year mid-range values (2013-2022) of sea temperature at the surface, in the water masses (at 5 m, 15 m, 30 m, 50 m, 100 m, 150 m, 200 m and 250 m intervals), as well as at the seabed. With the current model layout, the distance from the seabed ranges from a few cm in shallow water up to 1.5 m where the total depth is 100 m or more. Temperatures are given in degrees Celsius. The data set is available as WMS and WCS services, as well as for download via the Institute of Marine Research's Geoserver https://kart.hi.no/data – select Layer preview and search for the data set for multiple download options. The coastal model Norkyst (version 3) is a calculation model that simulates e.g. current, salinity and temperature with 800 meter spatial resolution, at several vertical levels and with high resolution in time for the entire Norwegian coast, based on the model system ROMS (Regional Ocean Modeling System, http://myroms.org). NorKyst is being developed by the Institute of Marine Research in collaboration with the Norwegian Meteorological Institute. https://imr.brage.unit.no/imr-xmlui/handle/11250/116053
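As a usage hint, the snippet below shows how a WMS service of this kind might be queried with OWSLib to list available layers; the endpoint URL is hypothetical and should be replaced with the service address published on the Geoserver page linked above.

```python
# Illustrative sketch: list WMS layers with OWSLib. The endpoint URL below is
# hypothetical; take the real address from https://kart.hi.no/data.
from owslib.wms import WebMapService

wms = WebMapService("https://kart.hi.no/geoserver/wms", version="1.3.0")  # hypothetical URL
for name, layer in wms.contents.items():
    print(name, "-", layer.title)
```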
Attribution 4.0 (CC BY 4.0) https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
NMI results on all the data sets.
Supplementary File1_phenotypes: the txt file contains the phenotypes assessed in our study for all mice under control (CTRL) or enriched conditions. The file is a comma-delimited txt file; each mouse is one line. All abbreviations and the phenotypes are explained in the article.
CC0 1.0 Universal Public Domain Dedication https://creativecommons.org/publicdomain/zero/1.0/
License information was derived automatically
This provides replication code and data for the paper "Randomization Inference with Rainfall Data: Using Historical Weather Patterns for Variance Estimation."
This dataset contains the following:
1. ANOVA table (variate: protein content)
2. Table of effects
3. Table of means
4. Standard errors of differences of means
5. Tukey's 95% confidence intervals
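A minimal sketch of producing this kind of output with statsmodels is shown below; the treatment groups and protein-content values are made up, since the actual data live in the dataset itself.

```python
# Illustrative sketch with made-up data: one-way ANOVA for protein content,
# a table of group means, and Tukey 95% confidence intervals.
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf
from statsmodels.stats.multicomp import pairwise_tukeyhsd

df = pd.DataFrame({
    "treatment": ["A"] * 5 + ["B"] * 5 + ["C"] * 5,
    "protein":   [11.2, 10.8, 11.5, 11.0, 11.3,
                  12.1, 12.4, 11.9, 12.2, 12.0,
                  10.1, 10.4, 10.0, 10.3, 10.2],
})

model = smf.ols("protein ~ C(treatment)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))                     # ANOVA table
print(df.groupby("treatment")["protein"].mean())           # table of means
print(pairwise_tukeyhsd(df["protein"], df["treatment"]))   # Tukey 95% CIs
```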
ACE Weimer propagated solar wind data and linearly interpolated time delay, cosine angle, and goodness information of the propagated data at 1 min resolution. This data set consists of solar wind data that have first been propagated to a position just outside of the nominal bow shock (about 17, 0, 0 Re) and then linearly interpolated to 1 min resolution using the interp1.m function in MATLAB. The input for this data set is 1 min resolution processed solar wind data constructed by Dr. J.M. Weygand. The method of propagation is similar to the minimum variance technique and is outlined in Weimer et al. [2003, 2004]. The basic method is to find the minimum variance direction of the magnetic field in the plane orthogonal to the mean magnetic field direction. This minimum variance direction is then dotted with the difference between the final and original position vectors, and the result is divided by the minimum variance direction dotted with the solar wind velocity vector, which gives the propagation time. This method does not work well for shocks or for minimum variance directions tilted more than 70 degrees from the Sun-Earth line. This data set was originally constructed by Dr. J.M. Weygand for Prof. R.L. McPherron, who was the principal investigator of two National Science Foundation studies: GEM Grant ATM 02-1798 and Space Weather Grant ATM 02-08501. These data were primarily used in superposed epoch studies. References: Weimer, D. R. (2004), Correction to "Predicting interplanetary magnetic field (IMF) propagation delay times using the minimum variance technique," J. Geophys. Res., 109, A12104, doi:10.1029/2004JA010691. Weimer, D. R., D. M. Ober, N. C. Maynard, M. R. Collier, D. J. McComas, N. F. Ness, C. W. Smith, and J. Watermann (2003), Predicting interplanetary magnetic field (IMF) propagation delay times using the minimum variance technique, J. Geophys. Res., 108, 1026, doi:10.1029/2002JA009405.
http://guides.library.uq.edu.au/deposit_your_data/terms_and_conditions
Each file contains a 60x3385 data matrix of log10 expression measurements, scaled to unit variance within traits. NOTE: This dataset has been superseded by a more up-to-date version. View the current version here: https://doi.org/10.48610/a3c5652