https://www.gesis.org/en/institute/data-usage-terms
Stata module that implements Potter's (1990) weight distribution approach to trim extreme sampling weights. The basic idea is that the sampling weights are assumed to follow a beta distribution. The parameters of the distribution are estimated from the moments of the observed sampling weights and the resulting quantiles are used as cut-off points for extreme sampling weights. The process is repeated a specified number of times (10 by default) or until no sampling weights are more extreme than the specified quantiles.
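As a rough illustration of the iterative trimming loop described above, here is a minimal Python sketch (not the Stata module itself). The scaling of the weights into (0, 1) and the method-of-moments fit of the beta parameters are assumptions about one reasonable implementation; the module's exact parameterization and options may differ.

```python
import numpy as np
from scipy import stats

def trim_weights(weights, quantile=0.99, max_iter=10):
    """Iteratively trim extreme sampling weights at a beta-distribution
    quantile, in the spirit of Potter (1990). Illustrative sketch only."""
    w = np.asarray(weights, dtype=float)
    total = w.sum()
    for _ in range(max_iter):
        # Scale weights into (0, 1) so a beta distribution can be fitted
        # (assumption; the module may use a different parameterization).
        p = w / w.sum()
        m, v = p.mean(), p.var()
        common = m * (1.0 - m) / v - 1.0
        a, b = m * common, (1.0 - m) * common   # method-of-moments estimates
        # Map the fitted quantile back to the original weight scale.
        cutoff = stats.beta.ppf(quantile, a, b) * w.sum()
        if not np.any(w > cutoff):
            break                               # no weights exceed the cut-off
        w = np.minimum(w, cutoff)
        w *= total / w.sum()                    # preserve the original weight total
    return w
```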
This is an auto-generated index table corresponding to a folder of files in this dataset with the same name. This table can be used to extract a subset of files based on their metadata, which can then be used for further analysis. You can view the contents of specific files by navigating to the "cells" tab and clicking on an individual file_id.
CC0 1.0 Universal Public Domain Dedication: https://creativecommons.org/publicdomain/zero/1.0/
License information was derived automatically
Replication materials for the forthcoming publication entitled "Worth Weighting? How to Think About and Use Weights in Survey Experiments."
This report describes the person-level sampling weight calibration procedures used on the 2012 National Survey on Drug Use and Health (NSDUH). It covers the practical aspects of implementing the generalized exponential model (GEM) for the NSDUH.
Weighting for individual scale items.
https://spdx.org/licenses/CC0-1.0.html
Perceptual decisions are thought to be mediated by a mechanism of sequential sampling and integration of noisy evidence whose temporal weighting profile affects the decision quality. To examine temporal weighting, participants were presented with two brightness-fluctuating disks for 1, 2, or 3 seconds and were requested to choose the overall brighter disk at the end of each trial. By employing a signal-perturbation method, which deploys across trials a set of systematically controlled temporal dispersions of the same overall signal, we were able to quantify the participants’ temporal weighting profile. Results indicate that, for intervals of 1 or 2 sec, participants exhibit a primacy bias. However, for longer stimuli (3 sec) the temporal weighting profile is non-monotonic, with concurrent primacy and recency, which is inconsistent with the predictions of previously suggested computational models of perceptual decision-making (drift-diffusion and Ornstein-Uhlenbeck processes). We propose a novel, dynamic variant of the leaky-competing accumulator model as a potential account for this finding, and we discuss potential neural mechanisms.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
## Overview
Competition Weights is a dataset for object detection tasks - it contains All annotations for 5,176 images.
## Getting Started
You can download this dataset for use within your own projects, or fork it into a workspace on Roboflow to create your own model.
## License
This dataset is available under the [CC BY 4.0 license](https://creativecommons.org/licenses/by/4.0/).
CC0 1.0 Universal Public Domain Dedication: https://creativecommons.org/publicdomain/zero/1.0/
License information was derived automatically
Inverse Distance Weighting (IDW) is a spatial interpolation technique used to estimate values at unsampled locations based on known values at nearby points. The method assumes that points closer to the location of interest have a greater influence on the predicted value than those farther away. IDW calculates the predicted value by taking a weighted average of the known values, where the weights are inversely proportional to the distances between the known points and the prediction location, raised to a power parameter. This power parameter controls the rate at which the influence of the known points decreases with distance, with higher values giving more weight to closer points. IDW is widely used in fields such as geostatistics, meteorology, and environmental science to interpolate spatial data like rainfall, temperature, and pollution levels.
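A minimal Python sketch of the weighted average just described; the function name, arguments, and default power of 2 are illustrative rather than taken from any particular GIS package.

```python
import numpy as np

def idw_predict(known_xy, known_values, query_xy, power=2.0):
    """Inverse Distance Weighting: predict values at query points as a
    weighted average of known values, with weights proportional to
    1 / distance**power (closer points get larger weights)."""
    known_xy = np.asarray(known_xy, dtype=float)
    known_values = np.asarray(known_values, dtype=float)
    preds = []
    for q in np.atleast_2d(np.asarray(query_xy, dtype=float)):
        d = np.linalg.norm(known_xy - q, axis=1)
        if np.any(d == 0):                      # query coincides with a sample point
            preds.append(known_values[d == 0][0])
            continue
        w = 1.0 / d ** power                    # inverse-distance weights
        preds.append(np.sum(w * known_values) / np.sum(w))
    return np.array(preds)

# Example: interpolate rainfall at (2, 2) from three gauges.
# idw_predict([[0, 0], [4, 0], [0, 4]], [10.0, 20.0, 30.0], [[2, 2]])
```

Raising the power parameter (e.g. from 2 to 4) makes the prediction more local, since distant points are down-weighted more aggressively.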
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Precipitation estimation at a global scale is essential for global water cycle simulation and water resources management. Precipitation estimates from gauge-based, satellite-retrieval, and reanalysis datasets have heterogeneous uncertainties across different areas of the global land surface. Here, 13 monthly precipitation datasets and 11 daily precipitation datasets are analyzed to examine the relative uncertainty of each individual dataset based on the developed generalized three-cornered hat (TCH) method. The generalized TCH method can be used to evaluate the uncertainty of multiple (>3) precipitation products in an iterative optimization process. A weighting scheme is designed to merge the individual precipitation datasets into a new weighted precipitation product using the inverse error variance-covariance matrix of the TCH-estimated uncertainty. The weighted precipitation is then validated using gauge data and shows the minimal uncertainty among all the individual products. The merged results indicate the superiority of the weighted precipitation, with substantially reduced random errors relative to individual datasets and to a state-of-the-art multi-satellite merged product, i.e. the Integrated Multi-satellitE Retrievals for Global precipitation measurement (IMERG), over the validated areas. The weighted dataset can largely reproduce the interannual and seasonal variations of regional precipitation. The TCH-based merging results outperform two other mean-based merging methods at both monthly and daily scales. Overall, the merging scheme based on the generalized TCH method is effective in producing a new precipitation dataset that integrates information from multiple products for hydrometeorological applications.
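The core of the merging step, as described, is a weighting by the inverse error variance-covariance matrix. Below is a minimal sketch under the assumption that the TCH step has already produced a K x K error covariance matrix S (estimating S is not shown); the weight vector w = S^-1 1 / (1^T S^-1 1) is one standard way to realize such a weighting, and the paper's actual scheme may include additional steps.

```python
import numpy as np

def merge_products(X, err_cov):
    """Merge K precipitation products into one weighted series.

    X       : array of shape (T, K), one column per product
    err_cov : (K, K) error variance-covariance matrix, e.g. from a
              generalized three-cornered hat analysis (not computed here)

    The weights sum to one and down-weight noisy, mutually correlated
    products. Illustrative sketch only."""
    S_inv = np.linalg.inv(np.asarray(err_cov, dtype=float))
    ones = np.ones(S_inv.shape[0])
    w = S_inv @ ones / (ones @ S_inv @ ones)   # inverse-covariance weights
    return np.asarray(X, dtype=float) @ w      # weighted merged series
```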
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
The anthropometric datasets presented here are virtual datasets. The unweighted virtual dataset was generated using a synthesis and subsequent validation algorithm (Ackermann et al., 2023). The underlying original dataset used in the algorithm was collected within a regional epidemiological public health study in northeastern Germany (SHIP, see Völzke et al., 2022). Important details regarding the collection of the anthropometric dataset within SHIP (e.g., sampling strategy, measurement methodology, and quality assurance process) are discussed extensively in the study by Bonin et al. (2022). To approximate nationally representative values for the German working-age population, the virtual dataset was weighted with reference data from the first survey wave of the Study on the Health of Adults in Germany (DEGS1, see Scheidt-Nave et al., 2012). Two different algorithms were used for the weighting procedure: (1) iterative proportional fitting (IPF), which is described in more detail in the publication by Bonin et al. (2022), and (2) a nearest neighbor approach (1NN), which is presented in the study by Kumar and Parkinson (2018). Weighting coefficients were calculated for both algorithms, and it is left to the practitioner to decide which coefficients to use in practice. Therefore, the weighted virtual dataset has two additional columns containing the calculated weighting coefficients from IPF ("WeightCoef_IPF") or 1NN ("WeightCoef_1NN"). Unfortunately, due to the sparse data at the distribution edges of SHIP compared to DEGS1, values below the 5th and above the 95th percentiles should be interpreted with caution. In addition, the following characteristics describe the weighted and unweighted virtual datasets: According to ISO 15535, values for "BMI" are in [kg/m2], values for "Body mass" are in [kg], and values for all other measures are in [mm]. Anthropometric measures correspond to measures defined in ISO 7250-1. Offset values were calculated for seven anthropometric measures because there were systematic differences in the measurement methodology between SHIP and ISO 7250-1 regarding the definition of two bony landmarks: the acromion and the olecranon. Since these seven measures rely on one of these bony landmarks, and it was not possible to modify the SHIP methodology regarding landmark definitions, offsets had to be calculated to obtain ISO-compliant values. In the presented datasets, two columns exist for these seven measures. One column contains the measured values with the landmarking definitions from SHIP, and the other column (marked with the suffix "_offs") contains the calculated ISO-compliant values (for more information concerning the offset values see Bonin et al., 2022). The sample size is N = 5000 for the male and female subsets. The original SHIP dataset has a sample size of N = 1152 (women) and N = 1161 (men). Because of this discrepancy between the original SHIP dataset and the virtual datasets, users may get a false sense of confidence when using the virtual data, which should be kept in mind. In order to get the best possible representation of the original dataset, a virtual sample size of N = 5000 is advantageous and has been confirmed in pre-tests with varying sample sizes, but it must be kept in mind that the statistical properties of the virtual data are based on an original dataset with a much smaller sample size.
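For readers unfamiliar with the first of the two weighting algorithms, a minimal Python sketch of iterative proportional fitting (raking) follows. It is a generic illustration of the technique, not the procedure of Bonin et al. (2022); the function name, arguments, and convergence rule are assumptions.

```python
import numpy as np

def ipf_rake(base_weights, categories, target_totals, max_iter=100, tol=1e-8):
    """Generic iterative proportional fitting (raking) sketch.

    base_weights  : initial case weights, shape (n,)
    categories    : dict {variable: array of category labels, shape (n,)}
    target_totals : dict {variable: {category: reference total}}

    Rescales the weights until the weighted totals in every category match
    the reference margins (e.g. population margins from a reference survey)."""
    w = np.asarray(base_weights, dtype=float).copy()
    for _ in range(max_iter):
        max_change = 0.0
        for var, labels in categories.items():
            labels = np.asarray(labels)
            for cat, target in target_totals[var].items():
                mask = labels == cat
                current = w[mask].sum()
                if current > 0:
                    factor = target / current
                    w[mask] *= factor                       # adjust this margin
                    max_change = max(max_change, abs(factor - 1.0))
        if max_change < tol:                                # all margins matched
            break
    return w
```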
Learn about the techniques used to create weights for the 2022 National Survey on Drug Use and Health (NSDUH) at the person level. The report reviews the generalized exponential model (GEM) used in weighting, discusses potential predictor variables, and details the practical steps used to implement GEM. It also details the weight calibrations, presents the evaluation measures of the calibrations, and includes a sensitivity analysis. Chapters:
- Introduces the survey and the remainder of the report.
- Reviews the impact of multimode data collection on weighting.
- Briefly describes the generalized exponential model.
- Describes the predictor variables for the model calibration.
- Defines extreme weights.
- Discusses control totals for poststratification adjustments.
- Discusses weight calibration at the dwelling unit level.
- Discusses weight calibration at the person level.
- Presents the evaluation measures of calibrated weights and a sensitivity analysis of selected prevalence estimates.
- Explains the break-off analysis weights.
- Appendices include technical details about the model and the evaluations that were performed.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
The numbers and averages of the most important alleles (fragments) selected by different attribute weighting algorithms.
The "https://addhealth.cpc.unc.edu/" Target="_blank">National Longitudinal Study of Adolescent to Adult Health (Add Health) is a longitudinal study of a nationally representative sample of adolescents in grades seven through 12 in the United States. The Add Health cohort has been followed into adulthood (ages 31-42). Add Health combines longitudinal survey data on respondents' social, economic, psychological and physical well-being with contextual data on the family, neighborhood, community, school, friendships, peer groups, and romantic relationships, providing unique opportunities to study how social environments and behaviors in adolescence are linked to health and achievement outcomes in young adulthood. The fifth wave of data collection includes social and environmental data and continues to include biological data, like the fourth wave. This data file collects information on weights for Wave V.
For more complete information on the Add Health studies, please refer to the study's documentation at https://addhealth.cpc.unc.edu/documentation/.
Files for the fidelity weighting publication. These files were used to create the analyses and plots associated with the code and article on fidelity weighting. For code and more information, go to https://github.com/sanrou/fidelityweighting. The files consist of 41 subject folders containing forward operator, inverse operator, source parcellation identity, and source fidelity files. The file format is .npy (NumPy Python file). Funding by the Academy of Finland (SA 266402 and 303933 to S.P.; SA 253130 and 256472 to J.M.P.).
This statistic shows the wholesale sales of home use free weights in the United States from 2007 to 2023. In 2023, wholesale sales of these consumer products reached over 560 million U.S. dollars, an 11.2 percent increase from the previous year.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Geometric means and standard deviations of the weight factors for the three different sound types separately for the two different directions of level change.
https://dbk.gesis.org/dbksearch/sdesc2.asp?no=7467
Media use related to crime. Weighting of criminal offenses. Perception of safety.
https://spdx.org/licenses/CC0-1.0.html
Phylogenomic analyses routinely estimate species trees using methods that account for gene tree discordance. However, the most scalable species tree inference methods, which summarize independently inferred gene trees to obtain a species tree, are sensitive to hard-to-avoid errors introduced in the gene tree estimation step. This dilemma has created much debate on the merits of concatenation versus summary methods and practical obstacles to using summary methods more widely and to the exclusion of concatenation. The most successful attempt at making summary methods resilient to noisy gene trees has been contracting low-support branches from the gene trees. Unfortunately, this approach requires arbitrary thresholds and poses new challenges. Here, we introduce threshold-free weighting schemes for quartet-based species tree inference, the approach used in the popular method ASTRAL. By reducing the impact of quartets with low support or long terminal branches (or both), weighting provides stronger theoretical guarantees and better empirical performance than the unweighted ASTRAL. Our simulations show that weighting improves accuracy across many conditions and reduces the gap with concatenation in conditions with low gene tree discordance and high noise. On empirical data, weighting improves congruence with concatenation and increases support. Together, our results show that weighting, enabled by a new optimization algorithm we introduce, improves the utility of summary methods and can reduce the incongruence often observed across analytical pipelines. Methods:
- Data were generated using simulations for three of the four archives; see the README and the paper for details.
- The dogs dataset comes from a previous publication (https://doi.org/10.1016/j.cub.2018.08.041).
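As a toy illustration of the general idea of support-based quartet weighting (this is not the weighted ASTRAL algorithm or its optimization; the string representation of quartets and the scoring function are purely illustrative), a candidate species tree can be scored by summing the support of the gene-tree quartets it agrees with, so poorly supported quartets contribute less:

```python
def weighted_quartet_score(candidate_quartets, gene_tree_quartets):
    """Toy support-weighted quartet score (illustrative only).

    candidate_quartets : set of quartet topologies induced by the candidate
                         species tree, e.g. {"AB|CD", "AC|BE"}
    gene_tree_quartets : iterable of (topology, support) pairs collected from
                         the gene trees, e.g. ("AB|CD", 0.95)

    Agreements are weighted by support, so low-support (likely erroneous)
    gene-tree quartets have less influence on the score."""
    score = 0.0
    for topology, support in gene_tree_quartets:
        if topology in candidate_quartets:
            score += support
    return score
```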
As of June 2020, the information technology sector increased its weight to **** percent within the global economy and was the riskiest sector for financial investors according to Standard & Poor's index sector weightings. Within the I.T. sector index are companies like Apple Inc., Microsoft Corporation, Amazon.com Inc. and Facebook.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Linguistic variables for rating of alternatives.