CC0 1.0 Universal Public Domain Dedication (https://creativecommons.org/publicdomain/zero/1.0/)
License information was derived automatically
In survival analyses, inverse-probability-of-treatment (IPT) and inverse-probability-of-censoring (IPC) weighted estimators of parameters in marginal structural Cox models (Cox MSMs) are often used to estimate treatment effects in the presence of time-dependent confounding and censoring. In most applications, a robust variance estimator of the IPT and IPC weighted estimator is calculated, leading to conservative confidence intervals. This estimator assumes that the weights are known rather than estimated from the data. Although a consistent estimator of the asymptotic variance of the IPT and IPC weighted estimator is generally available, applications and thus information on the performance of the consistent estimator are lacking. Reasons might be a cumbersome implementation in statistical software, which is further complicated by missing details on the variance formula. In this paper, we therefore provide a detailed derivation of the variance of the asymptotic distribution of the IPT and IPC weighted estimator and explicitly state the necessary terms to calculate a consistent estimator of this variance. We compare the performance of the robust and the consistent variance estimator in an application based on routine health care data and in a simulation study. The simulation reveals no substantial differences between the two estimators in medium and large data sets with no unmeasured confounding, but the consistent variance estimator performs poorly in small samples or under unmeasured confounding if the number of confounders is large. We thus conclude that the robust estimator is more appropriate for all practical purposes.
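For orientation, the robust variance referred to above is what standard software reports when a weighted Cox model is fitted with a clustering term; the following is a minimal R sketch with toy data and placeholder weights, not the authors' analysis.

```r
# Minimal sketch: weighted Cox model with the robust (sandwich) variance
# that treats the IPT/IPC weights as known, i.e. the conservative estimator.
library(survival)

set.seed(1)
n <- 200
d <- data.frame(id    = 1:n,
                time  = rexp(n, rate = 0.1),
                event = rbinom(n, 1, 0.7),
                treat = rbinom(n, 1, 0.5),
                sw    = runif(n, 0.5, 2))   # placeholder stabilized weights

fit <- coxph(Surv(time, event) ~ treat + cluster(id),
             weights = sw, data = d)
summary(fit)   # the "robust se" column is the sandwich standard error
```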
CC0 1.0 Universal Public Domain Dedication (https://creativecommons.org/publicdomain/zero/1.0/)
License information was derived automatically
Script to calculate the variance partitioning method and the hierarchical partitioning method at regional and local scales.
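A rough sketch of what such a script might do, using the commonly used varpart() (vegan) and hier.part() (hier.part) functions with made-up regional and local predictor tables; the original script's variables and structure are not shown here.

```r
# Illustrative only: partition the variance of a response between
# regional-scale and local-scale predictor sets.
library(vegan)      # varpart()
library(hier.part)  # hier.part()

set.seed(1)
y        <- rnorm(60)
regional <- data.frame(climate = rnorm(60), elevation = rnorm(60))
local    <- data.frame(soil_ph = rnorm(60), cover     = rnorm(60))

# Variance partitioning: unique and shared fractions of the two scales
varpart(y, regional, local)

# Hierarchical partitioning: independent contribution of each predictor
hier.part(y, cbind(regional, local), gof = "Rsqu", barplot = FALSE)
```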
Attribution 4.0 (CC BY 4.0) (https://creativecommons.org/licenses/by/4.0/)
License information was derived automatically
R Core Team. (2016). R: A language and environment for statistical computing. R Foundation for Statistical Computing.
Supplement to Occipital and left temporal instantaneous amplitude and frequency oscillations correlated with access and phenomenal consciousness (https://philpapers.org/rec/PEROAL-2).
Occipital and left temporal instantaneous amplitude and frequency oscillations correlated with access and phenomenal consciousness move from the features of the ERP characterized in Occipital and Left Temporal EEG Correlates of Phenomenal Consciousness (Pereira, 2015, https://doi.org/10.1016/b978-0-12-802508-6.00018-1, https://philpapers.org/rec/PEROAL) towards the instantaneous amplitude and frequency of event-related changes correlated with a contrast in access and in phenomenology.
Occipital and left temporal instantaneous amplitude and frequency oscillations correlated with access and phenomenal consciousness proceed as follows.
In the first section, empirical mode decomposition (EMD) with post-processing, ensemble empirical mode decomposition (postEEMD; Xie, G., Guo, Y., Tong, S., and Ma, L., 2014. Calculate excess mortality during heatwaves using Hilbert-Huang transform algorithm. BMC Medical Research Methodology, 14, 35), and the Hilbert-Huang transform (HHT) are applied.
In the second section, the variance inflation factor (VIF) is calculated.
In the third section, partial least squares regression (PLSR) is fitted and the minimal root mean squared error of prediction (RMSEP) is determined.
In the last section, the significance multivariate correlation (sMC) statistic of the partial least squares regression (PLSR) is computed.
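A brief sketch of the second and third sections in R, using generic tools (car::vif and the pls package) on made-up predictors rather than the supplement's exact code; the sMC statistic of the last section (available in, e.g., the plsVarSel package) is omitted.

```r
library(car)   # vif()
library(pls)   # plsr(), RMSEP()

set.seed(1)
dat <- data.frame(y = rnorm(100),
                  matrix(rnorm(100 * 5), ncol = 5,
                         dimnames = list(NULL, paste0("imf", 1:5))))

# Second section: variance inflation factors of the predictors
vif(lm(y ~ ., data = dat))

# Third section: PLSR with cross-validation; inspect RMSEP per component
fit <- plsr(y ~ ., data = dat, ncomp = 4, validation = "CV")
RMSEP(fit)   # minimal root mean squared error of prediction
```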
CC0 1.0 Universal Public Domain Dedication (https://creativecommons.org/publicdomain/zero/1.0/)
License information was derived automatically
A noniterative sample size procedure is proposed for a general hypothesis test based on the t distribution by modifying and extending Guenther’s (1981) approach for the one sample and two sample t tests. The generalized procedure is employed to determine the sample size for treatment comparisons using the analysis of covariance (ANCOVA) and the mixed effects model for repeated measures (MMRM) in randomized clinical trials. The sample size is calculated by adding a few simple correction terms to the sample size from the normal approximation to account for the nonnormality of the t statistic and lower order variance terms, which are functions of the covariates in the model. But it does not require specifying the covariate distribution. The noniterative procedure is suitable for superiority tests, noninferiority tests and a special case of the tests for equivalence or bioequivalence, and generally yields the exact or nearly exact sample size estimate after rounding to an integer. The method for calculating the exact power of the two sample t test with unequal variance in superiority trials is extended to equivalence trials. We also derive accurate power formulae for ANCOVA and MMRM, and the formula for ANCOVA is exact for normally distributed covariates. Numerical examples demonstrate the accuracy of the proposed methods particularly in small samples.
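To illustrate the underlying idea only (not the paper's generalized ANCOVA/MMRM procedure), here is a small R sketch of Guenther's (1981) noniterative corrections for the one- and two-sample t tests, which add a simple term to the normal-approximation sample size; the correction terms shown are the commonly cited ones and are an assumption of this sketch.

```r
# delta = standardized effect size, alpha = one-sided level, power = 1 - beta
guenther_n <- function(delta, alpha = 0.025, power = 0.9, two_sample = TRUE) {
  za <- qnorm(1 - alpha)
  zb <- qnorm(power)
  if (two_sample) {
    n_normal <- 2 * (za + zb)^2 / delta^2   # per group, normal approximation
    ceiling(n_normal + za^2 / 4)            # correction for the t statistic
  } else {
    n_normal <- (za + zb)^2 / delta^2
    ceiling(n_normal + za^2 / 2)
  }
}

guenther_n(delta = 0.5)   # per-group sample size for a two-sample t test
```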
I needed a low-variance dataset for my project to make a point. I could not find one here, so I got it somehow, and there you go!
Raw data. The data include both infected and non-infected hosts. Note that, except for the analysis of susceptibility, only infected individuals were included in the analyses. These data are a subset of a larger dataset published previously in Vale PF, Wilson AJ, Best A, Boots M, Little TJ. (2011) Epidemiological, evolutionary, and coevolutionary implications of context-dependent parasitism. The American Naturalist 177:510-521.
The OCTO-Twin Study aims to investigate the etiology of individual differences among twin pairs aged 80 and older across a range of domains, including health and functional capacity, cognitive functioning, psychological well-being, personality, and personal control. Twin pairs were drawn from the Swedish Twin Registry. At the first wave, the twins had to be born in 1913 or earlier and both partners in the pair had to agree to participate. At baseline in 1991-94, 351 twin pairs (149 monozygotic and 202 like-sex dizygotic pairs) were investigated (mean age 83.6 years; 67% female). The two-year longitudinal follow-ups were conducted on all twins who were alive and agreed to participate. Data have been collected at five waves over a total of eight years.
In wave 5, 43 twin pairs participated, with a total of 222 individuals. Refer to the description of wave 1 (the baseline) and the individual datasets in the NEAR portal for more details on variable groups and individual variables.
We provide the generated dataset used for supervised machine learning in the related article. The data are in tabular format and contain all principal components and ground truth labels per tissue type. Tissue type codes used are: C1 for kidney, C2 for skin, and C3 for colon. 'PC' stands for principal component. For feature extraction specifications, please see the original design in the related article. Features have been extracted independently for each tissue type.
This dataset has been meticulously curated to assist investment analysts, like you, in performing mean-variance optimization for constructing efficient portfolios. The dataset contains historical financial data for a selection of assets, enabling the calculation of risk and return characteristics necessary for portfolio optimization. The goal is to help you determine the most effective allocation of assets to achieve optimal risk-return trade-offs.
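As an illustration of the intended use, here is a minimal R sketch of the mean-variance workflow on simulated returns (the asset names and numbers are made up): estimate expected returns and the covariance matrix, then compute the fully invested minimum-variance weights.

```r
set.seed(1)
returns <- matrix(rnorm(250 * 4, mean = 0.0004, sd = 0.01), ncol = 4,
                  dimnames = list(NULL, c("A", "B", "C", "D")))

mu    <- colMeans(returns)   # expected returns
sigma <- cov(returns)        # covariance (risk) matrix

# Unconstrained, fully invested minimum-variance portfolio: w = S^-1 1 / (1' S^-1 1)
w <- solve(sigma, rep(1, 4))
w <- w / sum(w)

w                                               # asset weights (sum to 1)
c(exp_return = sum(w * mu),
  risk       = sqrt(drop(t(w) %*% sigma %*% w)))
```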
Supplementary File1_phenotypes: the txt file contains the phenotypes assessed in our study for all mice under control (CTRL) or enriched conditions. The file is a comma-delimited txt file; each mouse is one line. All abbreviations and the phenotypes are explained in the article.
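The comma-delimited layout described above can be read directly in R; a one-line sketch, assuming a header row and a hypothetical file name:

```r
pheno <- read.csv("Supplementary_File1_phenotypes.txt")  # one row per mouse
str(pheno)   # phenotypes plus the CTRL / enriched condition column
```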
CC0 1.0 Universal Public Domain Dedication (https://creativecommons.org/publicdomain/zero/1.0/)
License information was derived automatically
Theories demand much of data, often more than a single data collection can provide. For example, many important research questions are set in the past and must rely on data collected at that time and for other purposes. As a result, we often find that the data lack crucial variables. Another common problem arises when we wish to estimate the relationship between variables that are measured in different data sets. A variation of this occurs with a split half sample design in which one or more important variables appear on the "wrong" half. Finally, we may need panel data but have only cross sections available. In each of these cases our ability to estimate the theoretically determined equation is limited by the data that are available. In many cases there is simply no solution, and theory must await new opportunities for testing. Under certain circumstances, however, we may still be able to estimate relationships between variables even though they are not measured on the same set of observations. This technique, which I call two-stage auxiliary instrumental variables (2SAIV), provides some new leverage on such problems and offers the opportunity to test hypotheses that were previously out of reach. This article develops the 2SAIV estimator, proves its consistency and derives its asymptotic variance. A set of simulations illustrates the performance of the estimator in finite samples and several applications are sketched out.
Data Package. This is the data package to accompany Senior et al. 2016: Meta-analysis of variance: an illustration comparing the effects of two dietary interventions on variability in weight. The zip file contains three R files. 1) "Analysis.R" is the script to perform the analysis and is heavily commented. 2) "Data.Objects.Rdata" is an R data file containing all of the objects necessary to replicate the analysis (see comments in "Analysis.R"). 3) "EffectSizeFunctions.R" contains various functions to calculate effect sizes and is called in "Analysis.R".
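A plausible way to run the package, inferred from the file descriptions above (the commented script itself documents the exact steps):

```r
load("Data.Objects.Rdata")        # data objects needed for the analysis
source("EffectSizeFunctions.R")   # effect-size helper functions
source("Analysis.R")              # heavily commented analysis script
```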
IMP-8 Weimer propagated solar wind data and linearly interpolated time delay, cosine angle, and goodness information of the propagated data at 1 min resolution. This data set consists of propagated solar wind data that have first been propagated to a position just outside of the nominal bow shock (about 17, 0, 0 Re) and then linearly interpolated to 1 min resolution using the interp1.m function in MATLAB. The input for this data set is 1 min resolution processed solar wind data constructed by Dr. J.M. Weygand. The method of propagation is similar to the minimum variance technique and is outlined in Weimer et al. [2003; 2004]. The basic method is to find the minimum variance direction of the magnetic field in the plane orthogonal to the mean magnetic field direction. This minimum variance direction is then dotted with the difference between the final position vector and the original position vector, and that quantity is divided by the minimum variance direction dotted with the solar wind velocity vector, which gives the propagation time. The method does not work well for shocks or for minimum variance directions tilted more than 70 degrees from the Sun-Earth line. This data set was originally constructed by Dr. J.M. Weygand for Prof. R.L. McPherron, who was the principal investigator of two National Science Foundation studies: GEM Grant ATM 02-1798 and Space Weather Grant ATM 02-08501. These data were primarily used in superposed epoch studies. References: Weimer, D. R. (2004), Correction to "Predicting interplanetary magnetic field (IMF) propagation delay times using the minimum variance technique," J. Geophys. Res., 109, A12104, doi:10.1029/2004JA010691. Weimer, D. R., D. M. Ober, N. C. Maynard, M. R. Collier, D. J. McComas, N. F. Ness, C. W. Smith, and J. Watermann (2003), Predicting interplanetary magnetic field (IMF) propagation delay times using the minimum variance technique, J. Geophys. Res., 108, 1026, doi:10.1029/2002JA009405.
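For readers who want to experiment with the idea, here is a rough R sketch of the delay calculation described above (the production data were generated with Weimer's method in MATLAB; the variable names and toy numbers below are illustrative only):

```r
# B: n x 3 matrix of IMF samples (nT); positions in km; v_sw in km/s
weimer_delay <- function(B, r_sc, r_target, v_sw) {
  cross <- function(a, b) c(a[2] * b[3] - a[3] * b[2],
                            a[3] * b[1] - a[1] * b[3],
                            a[1] * b[2] - a[2] * b[1])
  b_hat <- colMeans(B)
  b_hat <- b_hat / sqrt(sum(b_hat^2))

  # Orthonormal basis (e1, e2) of the plane orthogonal to the mean field
  ref <- if (abs(b_hat[3]) < 0.9) c(0, 0, 1) else c(1, 0, 0)
  e1  <- cross(b_hat, ref); e1 <- e1 / sqrt(sum(e1^2))
  e2  <- cross(b_hat, e1)

  # Minimum variance direction of the field within that plane
  B2    <- cbind(B %*% e1, B %*% e2)
  ev    <- eigen(cov(B2))
  w     <- ev$vectors[, which.min(ev$values)]
  n_min <- w[1] * e1 + w[2] * e2

  # Propagation time (s): n . (r_target - r_sc) / (n . v_sw)
  sum(n_min * (r_target - r_sc)) / sum(n_min * v_sw)
}

set.seed(1)
B  <- matrix(rnorm(300, sd = 2), ncol = 3)   # toy 1-min field samples
re <- 6371                                    # km per Earth radius
weimer_delay(B, r_sc = c(220, 10, 0) * re, r_target = c(17, 0, 0) * re,
             v_sw = c(-400, 0, 0))
```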
Submerged macrophytes are important foundation species that can strongly influence the structure and functioning of aquatic ecosystems, but little is known about the temporal variation and the timescales of these effects (i.e. from hourly and daily to monthly).
Here, we conducted an outdoor experiment in replicated mesocosms (1000 L) where we manipulated the presence and absence of macrophytes to investigate the temporal variability of their ecosystem effects. We measured several parameters (chlorophyll-a, phycocyanin, dissolved organic matter [DOM], and oxygen) with high-resolution sensors (15 min intervals) over several months (94 days from spring to fall), and modelled metabolic rates of each replicate ecosystem in a Bayesian framework. We also implemented a simple model to explore competitive interactions between phytoplankton and macrophytes as a driver of variability in chlorophyll-a.
Over the entire experiment, macrophytes had a positive effect on mean DOM concentra...
The analysis of critical states during the fracture of wood materials is crucial for wood building safety monitoring, wood processing, etc. In this paper, beech and camphor pine are selected as the research objects, and the acoustic emission signals during the fracture process of the specimens are analyzed by three-point bending load experiments. On the one hand, the critical state interval of a complex acoustic emission signal system is determined by selecting characteristic parameters in the natural time domain. On the other hand, an improved method of b_value analysis in the natural time domain is proposed based on the characteristics of the acoustic emission signal. The K-value, which marks the beginning of the critical state of a complex acoustic emission signal system, is further defined by the improved b_value method in the natural time domain. For beech, the analysis of critical state time based on characteristic parameters can predict the "collapse" time 8.01 s in advance, while for camphor pine it is 3.74 s in advance. The K-value can be determined at least 3 s in advance of the system "crash" time for beech and 4 s in advance for camphor pine. The results show that, compared with traditional time-domain acoustic emission signal analysis, natural time-domain analysis can uncover more useful feature information to characterize the state of the signal. Both the characteristic parameters and the Natural_Time_b_value analysis in the natural time domain can effectively characterize the time at which the complex acoustic emission signal system enters the critical state. Critical state analysis can provide new ideas for wood health monitoring, complex signal processing, etc.
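For context, a small R sketch of the conventional natural-time variance (kappa_1) and the maximum-likelihood Gutenberg-Richter b-value on toy acoustic-emission data; the paper's improved natural-time b_value and K-value are its own contributions and are not reproduced here.

```r
natural_time_kappa1 <- function(Q) {
  N   <- length(Q)
  chi <- seq_len(N) / N              # natural time of the k-th event
  p   <- Q / sum(Q)                  # normalized event energy
  sum(p * chi^2) - sum(p * chi)^2    # variance of natural time, kappa_1
}

b_value <- function(M, Mmin = min(M)) {
  log10(exp(1)) / (mean(M) - Mmin)   # Aki/Utsu maximum-likelihood estimate
}

set.seed(1)
Q <- rexp(500)             # toy AE event energies
M <- runif(500, 1, 5)      # toy AE magnitudes
c(kappa1 = natural_time_kappa1(Q), b = b_value(M))
```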
ACE Weimer propagated solar wind data and linearly interpolated time delay, cosine angle, and goodness information of the propagated data at 1 min resolution. This data set consists of propagated solar wind data that have first been propagated to a position just outside of the nominal bow shock (about 17, 0, 0 Re) and then linearly interpolated to 1 min resolution using the interp1.m function in MATLAB. The input for this data set is 1 min resolution processed solar wind data constructed by Dr. J.M. Weygand. The method of propagation is similar to the minimum variance technique and is outlined in Weimer et al. [2003; 2004]. The basic method is to find the minimum variance direction of the magnetic field in the plane orthogonal to the mean magnetic field direction. This minimum variance direction is then dotted with the difference between the final position vector and the original position vector, and that quantity is divided by the minimum variance direction dotted with the solar wind velocity vector, which gives the propagation time. The method does not work well for shocks or for minimum variance directions tilted more than 70 degrees from the Sun-Earth line. This data set was originally constructed by Dr. J.M. Weygand for Prof. R.L. McPherron, who was the principal investigator of two National Science Foundation studies: GEM Grant ATM 02-1798 and Space Weather Grant ATM 02-08501. These data were primarily used in superposed epoch studies. References: Weimer, D. R. (2004), Correction to "Predicting interplanetary magnetic field (IMF) propagation delay times using the minimum variance technique," J. Geophys. Res., 109, A12104, doi:10.1029/2004JA010691. Weimer, D. R., D. M. Ober, N. C. Maynard, M. R. Collier, D. J. McComas, N. F. Ness, C. W. Smith, and J. Watermann (2003), Predicting interplanetary magnetic field (IMF) propagation delay times using the minimum variance technique, J. Geophys. Res., 108, 1026, doi:10.1029/2002JA009405.
ISEE-1 Weimer propagated solar wind data and linearly interpolated time delay, cosine angle, and goodness information of the propagated data at 1 min resolution. This data set consists of propagated solar wind data that have first been propagated to a position just outside of the nominal bow shock (about 17, 0, 0 Re) and then linearly interpolated to 1 min resolution using the interp1.m function in MATLAB. The input for this data set is 1 min resolution processed solar wind data constructed by Dr. J.M. Weygand. The method of propagation is similar to the minimum variance technique and is outlined in Weimer et al. [2003; 2004]. The basic method is to find the minimum variance direction of the magnetic field in the plane orthogonal to the mean magnetic field direction. This minimum variance direction is then dotted with the difference between the final position vector and the original position vector, and that quantity is divided by the minimum variance direction dotted with the solar wind velocity vector, which gives the propagation time. The method does not work well for shocks or for minimum variance directions tilted more than 70 degrees from the Sun-Earth line. This data set was originally constructed by Dr. J.M. Weygand for Prof. R.L. McPherron, who was the principal investigator of two National Science Foundation studies: GEM Grant ATM 02-1798 and Space Weather Grant ATM 02-08501. These data were primarily used in superposed epoch studies. References: Weimer, D. R. (2004), Correction to "Predicting interplanetary magnetic field (IMF) propagation delay times using the minimum variance technique," J. Geophys. Res., 109, A12104, doi:10.1029/2004JA010691. Weimer, D. R., D. M. Ober, N. C. Maynard, M. R. Collier, D. J. McComas, N. F. Ness, C. W. Smith, and J. Watermann (2003), Predicting interplanetary magnetic field (IMF) propagation delay times using the minimum variance technique, J. Geophys. Res., 108, 1026, doi:10.1029/2002JA009405.
Geotail Weimer propagated solar wind data, linearly interpolated so that the measurements fall on the minute, at 60 s resolution, from CPI data in GSE coordinates. This data set consists of propagated solar wind data that have first been propagated to a position just outside of the nominal bow shock (about 17, 0, 0 Re) and then linearly interpolated to 1 min resolution using the interp1.m function in MATLAB. The input for this data set is 1 min resolution processed solar wind data constructed by Dr. J.M. Weygand. The method of propagation is similar to the minimum variance technique and is outlined in Weimer et al. [2003; 2004]. The basic method is to find the minimum variance direction of the magnetic field in the plane orthogonal to the mean magnetic field direction. This minimum variance direction is then dotted with the difference between the final position vector and the original position vector, and that quantity is divided by the minimum variance direction dotted with the solar wind velocity vector, which gives the propagation time. The method does not work well for shocks or for minimum variance directions tilted more than 70 degrees from the Sun-Earth line. This data set was originally constructed by Dr. J.M. Weygand for Prof. R.L. McPherron, who was the principal investigator of two National Science Foundation studies: GEM Grant ATM 02-1798 and Space Weather Grant ATM 02-08501. These data were primarily used in superposed epoch studies. References: Weimer, D. R. (2004), Correction to "Predicting interplanetary magnetic field (IMF) propagation delay times using the minimum variance technique," J. Geophys. Res., 109, A12104, doi:10.1029/2004JA010691. Weimer, D. R., D. M. Ober, N. C. Maynard, M. R. Collier, D. J. McComas, N. F. Ness, C. W. Smith, and J. Watermann (2003), Predicting interplanetary magnetic field (IMF) propagation delay times using the minimum variance technique, J. Geophys. Res., 108, 1026, doi:10.1029/2002JA009405.
This dataset contains the following:
1. ANOVA table (variate: protein content)
2. Table of effects
3. Table of means
4. Standard errors of differences of means
5. Tukey's 95% confidence intervals
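Tables of this kind are typically produced along the following lines in R; this is a sketch with a made-up balanced one-way layout, not the original analysis.

```r
set.seed(1)
dat <- data.frame(treatment = gl(4, 10, labels = paste0("T", 1:4)),
                  protein   = rnorm(40, mean = rep(c(10, 11, 12, 12.5), each = 10)))

fit <- aov(protein ~ treatment, data = dat)
summary(fit)                                    # 1. ANOVA table (variate: protein)
model.tables(fit, type = "effects", se = TRUE)  # 2. table of effects (with SEs)
model.tables(fit, type = "means")               # 3. table of means
sqrt(2 * deviance(fit) / df.residual(fit) / 10) # 4. SED for balanced groups of 10
TukeyHSD(fit, conf.level = 0.95)                # 5. Tukey's 95% confidence intervals
```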
CC0 1.0 Universal Public Domain Dedication (https://creativecommons.org/publicdomain/zero/1.0/)
License information was derived automatically
In power analysis for multivariable Cox regression models, variance of the estimated log-hazard ratio for the treatment effect is usually approximated by inverting the expected null information matrix. Because in many typical power analysis settings assumed true values of the hazard ratios are not necessarily close to unity, the accuracy of this approximation is not theoretically guaranteed. To address this problem, the null variance expression in power calculations can be replaced with one of alternative expressions derived under the assumed true value of the hazard ratio for the treatment effect. This approach is explored analytically and by simulations in the present paper. We consider several alternative variance expressions, and compare their performance to that of the traditional null variance expression. Theoretical analysis and simulations demonstrate that while the null variance expression performs well in many non-null settings, it can also be very inaccurate, substantially underestimating or overestimating the true variance in a wide range of realistic scenarios, particularly those where the numbers of treated and control subjects are very different and the true hazard ratio is not close to one. The alternative variance expressions have much better theoretical properties, confirmed in simulations. The most accurate of these expressions has a relatively simple form - it is the sum of inverse expected event counts under treatment and under control scaled up by a variance inflation factor.
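The simple alternative expression described in the last sentence lends itself to a direct power calculation; here is a hedged R sketch in which the precise definition of the variance inflation factor follows the paper and is taken as a given input.

```r
# Var(log HR) ~ (1 / E[D1] + 1 / E[D0]) * VIF, with D1, D0 the expected
# event counts under treatment and control.
power_cox <- function(hr, d1, d0, vif = 1, alpha = 0.05) {
  v <- (1 / d1 + 1 / d0) * vif                      # variance of log HR
  pnorm(abs(log(hr)) / sqrt(v) - qnorm(1 - alpha / 2))
}

power_cox(hr = 0.7, d1 = 120, d0 = 160, vif = 1.1)  # example values
```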