Attribution 4.0 (CC BY 4.0) https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Importance (weight) of variables influencing grizzly bear abundance in northwestern Montana, USA, in 2004. Only candidate variables for abundance, not detection, are shown. Weights for variables that were in the model ≥50% of iterations are in bold. Data include only cells with both types of sampling. HT = Hair Trap, BR = Bear Rub. See Graves et al. (In Review) for more details on specific variables. We did not include further details to maintain focus on the influence of different detection methods. ¹Experts assigned a value of 1–10 to ownership categories based on efforts to protect bears, including 1) attractant storage management, 2) enforcement of food storage regulations, and 3) road density and use management. Glacier National Park = 10, US Forest Service = 7, other public land = 3, and private = 1.
CC0 1.0 Universal Public Domain Dedication https://creativecommons.org/publicdomain/zero/1.0/
The fish market dataset is a collection of data related to various species of fish and their characteristics. This dataset is designed for polynomial regression analysis and contains several columns with specific information. Here's a description of each column in the dataset:
Species: This column represents the species of the fish. It is a categorical variable that assigns each fish to one of seven species, such as "Perch," "Bream," "Roach," "Pike," "Smelt," "Parkki," and "Whitefish." Although categorical, species serves as a predictor in the polynomial regression analysis, where the aim is to predict the fish's weight from its other attributes.
Weight: This column represents the weight of the fish. It is a numerical variable that is typically measured in grams. The weight is the dependent variable we want to predict using polynomial regression.
Length1: This column represents the first measurement of the fish's length. It is a numerical variable, typically measured in centimetres.
Length2: This column represents the second measurement of the fish's length. It is another numerical variable, typically measured in centimetres.
Length3: This column represents the third measurement of the fish's length. Similar to the previous two columns, it is a numerical variable, usually measured in centimetres.
Height: This column represents the height of the fish. It is a numerical variable, typically measured in centimetres.
Width: This column represents the width of the fish. Like the other numerical variables, it is also typically measured in centimetres.
The dataset is structured in such a way that each row corresponds to a single fish with its species and various physical measurements (lengths, height, and width). The goal of using polynomial regression on this dataset would be to build a predictive model that can estimate the weight of a fish based on its species and the provided physical measurements. Polynomial regression allows for modelling more complex relationships between the independent variables (lengths, height, and width) and the dependent variable (weight), which may be particularly useful if there are non-linear patterns in the data.
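A minimal sketch of such a polynomial fit, using NumPy's `polyfit` on made-up length/weight pairs (the values are illustrative stand-ins, not taken from the actual dataset):

```python
import numpy as np

# Hypothetical Length1 (cm) and Weight (g) measurements for six fish.
length = np.array([23.2, 24.0, 26.3, 29.0, 30.0, 33.5])
weight = np.array([242.0, 290.0, 340.0, 430.0, 450.0, 610.0])

# np.polyfit solves the least-squares problem for the requested degree;
# degree 2 lets the model capture a non-linear length-weight relationship.
coeffs = np.polyfit(length, weight, deg=2)
predict = np.poly1d(coeffs)

# Predict the weight of a fish 28 cm long.
estimated = predict(28.0)
```

In a full analysis one would fit on all three length columns plus height and width (and encode species), but the least-squares machinery is the same.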
Attribution 4.0 (CC BY 4.0) https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
These files include the code and data repository for GWV. doi: https://doi.org/10.1016/j.jag.2025.104750. If you have further interest in communicating with us, you can send an email to: humingxing@seu.edu.cn
Total model weight of variable inclusion for different exponents on body mass.
Learn about the techniques used to create weights for the 2022 National Survey on Drug Use and Health (NSDUH) at the pair and questionnaire dwelling unit (QDU) levels. NSDUH is designed so that some of the sampled households have both an adult and a youth respondent who are paired. Because of this, NSDUH allows for estimating characteristics at the person level, pair level, or QDU level. This report describes pair selection probabilities, the generalized exponential model (including predictor variables used), and the multiple weight components that are used for pair or QDU levels of analysis. An evaluation of the calibration weights is also included.
Chapters:
- Introduces the report.
- Discusses the probability of selection for pairs and QDUs.
- Briefly describes the generalized exponential model.
- Describes the predictor variables for the model calibration.
- Defines extreme weights.
- Discusses weight calibrations.
- Evaluates the calibration weights.
- Appendices include technical details about the model and the evaluations that were performed.
Learn about the techniques used to create weights for the 2021 National Survey on Drug Use and Health (NSDUH) at the person level. The report reviews the generalized exponential model (GEM) used in weighting, discusses potential predictor variables, and details the practical steps used to implement GEM. The report also details the weight calibrations and presents the evaluation measures of the calibrations, as well as a sensitivity analysis.
Chapters:
- Introduces the survey and the remainder of the report.
- Reviews the impact of multimode data collection on weighting.
- Briefly describes the generalized exponential model.
- Describes the predictor variables for the model calibration.
- Defines extreme weights.
- Discusses control totals for poststratification adjustments.
- Discusses weight calibration at the dwelling unit level.
- Discusses weight calibration at the person level.
- Presents the evaluation measures of calibrated weights and a sensitivity analysis of selected prevalence estimates.
- Explains the break-off analysis weights.
- Explains the alternative analysis weights.
- Appendices include technical details about the model and the evaluations that were performed.
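The weight-calibration idea behind the GEM can be illustrated with its simplest special case, a poststratification ratio adjustment that scales weights so they reproduce known control totals. The respondents, cells, and totals below are hypothetical:

```python
# Base design weights for four hypothetical respondents, each assigned
# to a poststratification cell, plus hypothetical control totals
# (e.g., population counts from census data).
base_weights = {"r1": 100.0, "r2": 120.0, "r3": 80.0, "r4": 150.0}
cell_of = {"r1": "18-25", "r2": "18-25", "r3": "26+", "r4": "26+"}
control_totals = {"18-25": 250.0, "26+": 200.0}

# Sum the base weights within each cell.
cell_sums = {}
for r, w in base_weights.items():
    cell_sums[cell_of[r]] = cell_sums.get(cell_of[r], 0.0) + w

# Ratio adjustment: scale every weight in a cell by (control / weighted sum)
# so the calibrated weights hit the control totals exactly.
calibrated = {r: w * control_totals[cell_of[r]] / cell_sums[cell_of[r]]
              for r, w in base_weights.items()}
```

The GEM generalizes this by handling many calibration constraints at once while bounding extreme weight adjustments.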
Subset of sociodemographic and weight-related variables on the Project EAT surveys.
Apache License, v2.0 https://www.apache.org/licenses/LICENSE-2.0
License information was derived automatically
This dataset provides a detailed overview of gym members' exercise routines, physical attributes, and fitness metrics. It contains 973 samples of gym data, including key performance indicators such as heart rate, calories burned, and workout duration. Each entry also includes demographic data and experience levels, allowing for comprehensive analysis of fitness patterns, athlete progression, and health trends.
Key Features:
This dataset is ideal for data scientists, health researchers, and fitness enthusiasts interested in studying exercise habits, modeling fitness progression, or analyzing the relationship between demographic and physiological data. With a wide range of variables, it offers insights into how different factors affect workout intensity, endurance, and overall health.
Attribution 4.0 (CC BY 4.0) https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
The National Health and Nutrition Examination Survey (NHANES) provides data with considerable potential for studying the health and environmental exposures of the non-institutionalized US population. However, because NHANES data are plagued with multiple inconsistencies, the data must be processed before new insights can be derived through large-scale analyses. We therefore developed a set of curated and unified datasets by merging 614 separate files and harmonizing unrestricted data across NHANES III (1988-1994) and Continuous NHANES (1999-2018), totaling 135,310 participants and 5,078 variables. The variables convey:
- demographics (281 variables),
- dietary consumption (324 variables),
- physiological functions (1,040 variables),
- occupation (61 variables),
- questionnaires (1,444 variables, e.g., physical activity, medical conditions, diabetes, reproductive health, blood pressure and cholesterol, early childhood),
- medications (29 variables),
- mortality information linked from the National Death Index (15 variables),
- survey weights (857 variables),
- environmental exposure biomarker measurements (598 variables), and
- chemical comments indicating which measurements are below or above the lower limit of detection (505 variables).
csv Data Record: The curated NHANES datasets and the data dictionaries include 23 .csv files and 1 Excel file. The curated NHANES datasets comprise 20 .csv-formatted files, two for each module, with one as the uncleaned version and the other as the cleaned version.
The modules are labeled as follows: 1) mortality, 2) dietary, 3) demographics, 4) response, 5) medications, 6) questionnaire, 7) chemicals, 8) occupation, 9) weights, and 10) comments.
- "dictionary_nhanes.csv" is a dictionary that lists the variable name, description, module, category, units, CAS number, comment use, chemical family, chemical family shortened, number of measurements, and cycles available for all 5,078 variables in NHANES.
- "dictionary_harmonized_categories.csv" contains the harmonized categories for the categorical variables.
- "dictionary_drug_codes.csv" contains the dictionary of descriptors for the drug codes.
- "nhanes_inconsistencies_documentation.xlsx" is an Excel file that contains the cleaning documentation, which records all the inconsistencies for all affected variables to help curate each of the NHANES modules.
R Data Record: For researchers who want to conduct their analysis in the R programming language, the cleaned NHANES modules and the data dictionaries can be downloaded as a .zip file that includes an .RData file and an .R file.
- "w - nhanes_1988_2018.RData" contains all the aforementioned datasets as R data objects. We make available all R scripts for the customized functions that were written to curate the data.
- "m - nhanes_1988_2018.R" shows how we used the customized functions (i.e., our pipeline) to curate the original NHANES data.
Example starter code: The set of starter code to help users conduct exposome analyses consists of four R Markdown files (.Rmd). We recommend going through the tutorials in order.
- "example_0 - merge_datasets_together.Rmd" demonstrates how to merge the curated NHANES datasets together.
- "example_1 - account_for_nhanes_design.Rmd" demonstrates how to conduct a linear regression model, a survey-weighted regression model, a Cox proportional hazard model, and a survey-weighted Cox proportional hazard model.
- "example_2 - calculate_summary_statistics.Rmd" demonstrates how to calculate summary statistics for one variable and for multiple variables, with and without accounting for the NHANES sampling design.
- "example_3 - run_multiple_regressions.Rmd" demonstrates how to run multiple regression models with and without adjusting for the sampling design.
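As a rough illustration of what a survey-weighted regression (the topic of example_1) does differently from an ordinary one, here is weighted least squares via the normal equations, with made-up data and weights; the design-based variance estimation that survey software adds on top is omitted:

```python
import numpy as np

# Hypothetical design matrix (intercept + one predictor), outcome,
# and survey weights for five respondents.
X = np.column_stack([np.ones(5), [1.0, 2.0, 3.0, 4.0, 5.0]])
y = np.array([2.1, 3.9, 6.2, 8.1, 9.8])
w = np.array([1.0, 2.0, 1.0, 2.0, 1.0])

# Weighted least squares: solve (X'WX) beta = X'Wy, so respondents with
# larger survey weights pull the fit toward themselves.
W = np.diag(w)
beta = np.linalg.solve(X.T @ W @ X, X.T @ W @ y)
```

The point estimates match what R's survey package would return; the difference in a real survey analysis lies in the standard errors, which must account for the sampling design.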
CC0 1.0 Universal Public Domain Dedication https://creativecommons.org/publicdomain/zero/1.0/
License information was derived automatically
This dataset contains simulated datasets, empirical data, and R scripts described in the paper: "Li, Q. and Kou, X. (2021) WiBB: An integrated method for quantifying the relative importance of predictive variables. Ecography (DOI: 10.1111/ecog.05651)".
A fundamental goal of scientific research is to identify the underlying variables that govern crucial processes of a system. Here we proposed a new index, WiBB, which integrates the merits of several existing methods: a model-weighting method from information theory (Wi), a standardized regression coefficient method measured by β* (B), and a bootstrap resampling technique (B). We applied WiBB to simulated datasets with known correlation structures, for both linear models (LM) and generalized linear models (GLM), to evaluate its performance. We also applied two other methods, the relative sum of weights (SWi) and the standardized beta (β*), to compare their performance with the WiBB method in ranking predictor importance under various scenarios. We further applied it to an empirical dataset in the plant genus Mimulus to select bioclimatic predictors of species' presence across the landscape. Results on the simulated datasets showed that the WiBB method outperformed the β* and SWi methods in scenarios with small and large sample sizes, respectively, and that the bootstrap resampling technique significantly improved the discriminant ability. When testing WiBB on the empirical dataset with GLM, it sensibly identified four important predictors with high credibility out of six candidates in modeling the geographical distributions of 71 Mimulus species. This integrated index has great advantages in evaluating predictor importance, and hence in reducing the dimensionality of data, without losing interpretive power. The simplicity of calculating the new metric, compared with more sophisticated statistical procedures, makes it a handy addition to the statistical toolbox.
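The information-theoretic model-weighting component (Wi) can be illustrated with standard Akaike weights, which convert AIC scores into relative model weights; the AIC values below are hypothetical:

```python
import math

# Hypothetical AIC scores for three candidate models.
aic = [100.0, 102.0, 110.0]

# Akaike weights: rescale each model's AIC difference from the best model
# into a relative likelihood, then normalize so the weights sum to 1.
delta = [a - min(aic) for a in aic]
rel_likelihood = [math.exp(-0.5 * d) for d in delta]
weights = [r / sum(rel_likelihood) for r in rel_likelihood]
```

Summing such weights over every model that contains a given predictor yields the relative sum of weights (SWi) mentioned above.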
Abstract copyright UK Data Service and data collection copyright owner.
Background: The Labour Force Survey (LFS) is a unique source of information using international definitions of employment, unemployment and economic inactivity, together with a wide range of related topics such as occupation, training, hours of work and personal characteristics of household members aged 16 years and over. It is used to inform social, economic and employment policy. The Annual Population Survey, also held at the UK Data Archive, is derived from the LFS.
The LFS was first conducted biennially from 1973 to 1983, then annually between 1984 and 1991, comprising a quarterly survey conducted throughout the year and a 'boost' survey in the spring quarter. From 1992 it moved to a quarterly cycle with a sample size approximately equivalent to that of the previous annual data. Northern Ireland was also included in the survey from December 1994. Further information on the background to the QLFS may be found in the documentation.
The UK Data Service also holds a Secure Access version of the QLFS (see below); household datasets; two-quarter and five-quarter longitudinal datasets; LFS datasets compiled for Eurostat; and some additional annual Northern Ireland datasets.
LFS documentation: The documentation available from the Archive to accompany LFS datasets largely consists of the latest version of each user guide volume alongside the appropriate questionnaire for the year concerned (the latest questionnaire available covers July-September 2022). Volumes are updated periodically, so users are advised to check the latest documents on the ONS Labour Force Survey - User Guidance pages before commencing analysis. This is especially important for users of older QLFS studies, where information and guidance in the user guide documents may have changed over time.
LFS response to COVID-19: From April 2020 to May 2022, additional non-calendar-quarter LFS microdata were made available to cover the pandemic period.
The first additional microdata to be released covered February to April 2020, and the final non-calendar dataset covered March to May 2022. Publication then returned to calendar quarters only. Within the additional non-calendar COVID-19 quarters, the pseudonymised variables Casenop and Hserialp may contain a significant number of missing cases (set as -9). These variables may not be available in full for the additional COVID-19 datasets until the next standard calendar quarter is produced. The income weight variable, PIWT, is not available in the non-calendar quarters, although the person weight (PWT) is included. Please consult the documentation for full details.
Occupation data for 2021 and 2022 data files: The ONS has identified an issue with the collection of some occupational data in the 2021 and 2022 data files in a number of their surveys. While they estimate any impacts will be small overall, this will affect the accuracy of the breakdowns of some detailed (four-digit Standard Occupational Classification (SOC)) occupations, and data derived from them. Further information can be found in the ONS article published on 11 July 2023: Revision of miscoded occupational data in the ONS Labour Force Survey, UK: January 2021 to September 2022.
2024 reweighting: In February 2024, reweighted person-level data from July-September 2022 onwards were released. Up to July-September 2023, only the person weight was updated (PWT23); the income weight remains at 2022 (PIWT22). The 2023 income weight (PIWT23) was included from the October-December 2023 quarter. Users are encouraged to read the ONS methodological note of 5 February, Impact of reweighting on Labour Force Survey key indicators: 2024, which includes important information on the 2024 reweighting exercise.
End User Licence and Secure Access QLFS data: Two versions of the QLFS are available from the UKDS. One is available under the standard End User Licence (EUL) agreement, and the other is a Secure Access version.
The EUL version includes country and Government Office Region geography, 3-digit Standard Occupational Classification (SOC) and 3-digit industry group for main, second and last job (from July-September 2015, 4-digit industry class is available for main job only). The Secure Access version contains more detailed variables relating to:
- age: single year of age, year and month of birth, age completed full-time education and age obtained highest qualification, age of oldest dependent child and age of youngest dependent child
- family unit and household: including a number of variables concerning the number of dependent children in the family according to their ages, relationship to head of household and relationship to head of family
- nationality and country of origin
- finer detail geography: including county, unitary/local authority, place of work, Nomenclature of Territorial Units for Statistics 2 (NUTS2) and NUTS3 regions, whether lives and works in same local authority district, and other categories
- health: including main health problem, and current and past health problems
- education and apprenticeship: including numbers and subjects of various qualifications and variables concerning apprenticeships
- industry: including industry, industry class and industry group for main, second and last job, and industry made redundant from
- occupation: including 5-digit industry subclass and 4-digit SOC for main, second and last job, and job made redundant from
- system variables: including week number when interview took place and number of households at address
- other additional detailed variables may also be included
The Secure Access datasets (SNs 6727 and 7674) have more restrictive access conditions than those made available under the standard EUL. Prospective users will need to gain ONS Accredited Researcher status, complete an extra application form and demonstrate to the data owners exactly why they need access to the additional variables.
Users are strongly advised to first obtain the standard EUL version of the data to see if it is sufficient for their research requirements.
Latest edition information: For the third edition (April 2024), the variable OMCONT was added to the data.
Main Topics: The QLFS questionnaire comprises a 'core' of questions which are included in every survey, together with some 'non-core' questions which vary from quarter to quarter. The questionnaire can be split into two main parts. The first part contains questions on the respondent's household, family structure, basic housing information and demographic details of household members. The second part contains questions covering economic activity, education and health, and may also include a few questions asked on behalf of other government departments (for example the Department for Work and Pensions and the Home Office). Until 1997, the questions on health covered mainly problems which affected the respondent's work; from that quarter onwards, the questions cover all health problems. Detailed questions on income have also been included in each quarter since 1993. The basic questionnaire is revised each year, and a new version published, along with a transitional version that details changes from the previous year's questionnaire. Four sampling frames are used; see the documentation for details.
analyze the current population survey (cps) annual social and economic supplement (asec) with r. the annual march cps-asec has been supplying the statistics for the census bureau's report on income, poverty, and health insurance coverage since 1948. wow. the us census bureau and the bureau of labor statistics (bls) tag-team on this one. until the american community survey (acs) hit the scene in the early aughts (2000s), the current population survey had the largest sample size of all the annual general demographic data sets outside of the decennial census - about two hundred thousand respondents. this provides enough sample to conduct state- and a few large metro area-level analyses. your sample size will vanish if you start investigating subgroups by state - consider pooling multiple years. county-level is a no-no. despite the american community survey's larger size, the cps-asec contains many more variables related to employment, sources of income, and insurance - and can be trended back to harry truman's presidency. aside from questions specifically asked about an annual experience (like income), many of the questions in this march data set should be treated as point-in-time statistics. cps-asec generalizes to the united states non-institutional, non-active-duty military population. the national bureau of economic research (nber) provides sas, spss, and stata importation scripts to create a rectangular file (rectangular data means only person-level records; household- and family-level information gets attached to each person). to import these files into r, the parse.SAScii function uses nber's sas code to determine how to import the fixed-width file, then RSQLite to put everything into a schnazzy database. you can try reading through the nber march 2012 sas importation code yourself, but it's a bit of a proc freak show.
this new github repository contains three scripts:
2005-2012 asec - download all microdata.R
- download the fixed-width file containing household, family, and person records
- import by separating this file into three tables, then merge 'em together at the person-level
- download the fixed-width file containing the person-level replicate weights
- merge the rectangular person-level file with the replicate weights, then store it in a sql database
- create a new variable - one - in the data table
2012 asec - analysis examples.R
- connect to the sql database created by the 'download all microdata' program
- create the complex sample survey object, using the replicate weights
- perform a boatload of analysis examples
replicate census estimates - 2011.R
- connect to the sql database created by the 'download all microdata' program
- create the complex sample survey object, using the replicate weights
- match the sas output shown in the png file below
2011 asec replicate weight sas output.png
- statistic and standard error generated from the replicate-weighted example sas script contained in this census-provided person replicate weights usage instructions document.
click here to view these three scripts. for more detail about the current population survey - annual social and economic supplement (cps-asec), visit: the census bureau's current population survey page, the bureau of labor statistics' current population survey page, and the current population survey's wikipedia article.
notes: interviews are conducted in march about experiences during the previous year. the file labeled 2012 includes information (income, work experience, health insurance) pertaining to 2011. when you use the current population survey to talk about america, subtract a year from the data file name. as of the 2010 file (the interview focusing on america during 2009), the cps-asec contains exciting new medical out-of-pocket spending variables most useful for supplemental (medical spending-adjusted) poverty research.
confidential to sas, spss, stata, sudaan users: why are you still rubbing two sticks together after we've invented the butane lighter? time to transition to r. :D
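the nber sas scripts boil down to column positions for a fixed-width file. a toy python illustration of the same slicing idea (the column layout here is made up, not the real cps-asec layout):

```python
# hypothetical fixed-width layout: id in columns 1-5, age in 6-8,
# person weight in 9-16 (zero-based half-open slices below).
colspecs = [(0, 5), (5, 8), (8, 16)]
names = ["id", "age", "weight"]

def parse_fwf_line(line):
    # slice each field out of the record by position and trim padding
    return {n: line[a:b].strip() for n, (a, b) in zip(names, colspecs)}

record = parse_fwf_line("0000134  1234.56")
```

parse.SAScii does exactly this at scale, reading the positions out of the sas `INPUT` block instead of hard-coding them.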
Attribution 4.0 (CC BY 4.0) https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Partial correlations for psychological and weight history variables (controlling for age, gender and BMI).
Implied weighting, a method for phylogenetic inference that actively seeks to downweight supposed homoplasy, has in recent years begun to be widely utilized in palaeontological datasets. Given the method's purported ability to handle widespread homoplasy/convergence, we investigate the effects of implied weighting on modelled phylogenetic data. We generated 100 character matrices of 55 characters each using a Markov chain morphology model of evolution based on a known phylogenetic tree. Rates of character evolution in these datasets were variable, drawn from a gamma distribution for each character in the matrix. These matrices were then analysed under equal weighting and four settings of implied weights (k = 1, 3, 5, and 10). Our results show that implied weighting is inconsistent in its ability to retrieve a known phylogenetic tree. Equally weighted analyses are found to generally be more conservative, retrieving a higher frequency of polytomies but being l...
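The k settings above control the concave fitting function at the heart of implied weighting: a character's fit is k / (k + e), where e is its extra steps (homoplasy) on the tree and k is the concavity constant. A small sketch (the extra-step counts are hypothetical):

```python
# Goloboff-style implied-weighting fit for a single character:
# fit = k / (k + extra_steps), so a homoplasy-free character scores 1.0
# and the score decays as homoplasy accumulates.
def character_fit(extra_steps, k):
    return k / (k + extra_steps)

# Implied weighting prefers the tree maximizing total fit across characters.
def tree_score(extra_steps_per_char, k):
    return sum(character_fit(e, k) for e in extra_steps_per_char)

# Hypothetical extra-step counts for three characters on one tree:
low_k = tree_score([0, 1, 4], k=1)    # strong downweighting of homoplasy
high_k = tree_score([0, 1, 4], k=10)  # gentler downweighting
```

Lower k punishes homoplastic characters more severely, which is why the choice of concavity constant matters to the results reported above.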
Learn about the techniques used to create weights for the 2022 National Survey on Drug Use and Health (NSDUH) at the person level. The report reviews the generalized exponential model (GEM) used in weighting, discusses potential predictor variables, and details the practical steps used to implement GEM. The report also details the weight calibrations and presents the evaluation measures of the calibrations, as well as a sensitivity analysis.
Chapters:
- Introduces the survey and the remainder of the report.
- Reviews the impact of multimode data collection on weighting.
- Briefly describes the generalized exponential model.
- Describes the predictor variables for the model calibration.
- Defines extreme weights.
- Discusses control totals for poststratification adjustments.
- Discusses weight calibration at the dwelling unit level.
- Discusses weight calibration at the person level.
- Presents the evaluation measures of calibrated weights and a sensitivity analysis of selected prevalence estimates.
- Explains the break-off analysis weights.
- Appendices include technical details about the model and the evaluations that were performed.
Learn about the techniques used to create weights for the 2021 National Survey on Drug Use and Health (NSDUH) at the pair and questionnaire dwelling unit (QDU) levels. NSDUH is designed so that some of the sampled households have both an adult and a youth respondent who are paired. Because of this, NSDUH allows for estimating characteristics at the person level, pair level, or QDU level. This report describes pair selection probabilities, the generalized exponential model (including predictor variables used), and the multiple weight components that are used for pair or QDU levels of analysis. An evaluation of the calibration weights is also included.
Chapters:
- Introduces the report.
- Discusses the probability of selection for pairs and QDUs.
- Briefly describes the generalized exponential model.
- Describes the predictor variables for the model calibration.
- Defines extreme weights.
- Discusses weight calibrations.
- Evaluates the calibration weights.
- Appendices include technical details about the model and the evaluations that were performed.
Attribution 4.0 (CC BY 4.0) https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Background: Overweight/obesity among under-five children is an emerging public health issue of the twenty-first century. Owing to rapid nutritional and epidemiological transitions, non-communicable diseases, premature death, disability, and reproductive disorders have grown in low-income countries. Nevertheless, the issue has received little attention. Therefore, we aimed to explore spatial variations and predictors of overweight/obesity among under-five children in Ethiopia using a geospatial technique.
Methods: A total weighted sample of 3,609 under-five children was included in the study. A cross-sectional study was conducted using a nationally representative sample from the 2019 Ethiopia Mini Demographic and Health Survey data set. ArcGIS version 10.8 was used to explore the spatial variation of obesity. SaTScan version 9.6 software was used to analyze the spatial cluster detection of overweight/obesity. Ordinary least squares and geographically weighted regression analyses were employed to assess the association between the outcome variable and explanatory variables. A p-value of less than 0.05 was used to declare statistical significance.
Results: The spatial distribution of overweight/obesity among under-five children in Ethiopia was clustered (Global Moran's I = 0.27, p-value
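The clustering statistic reported in the Results, Global Moran's I, can be computed directly from a value vector and a spatial weights matrix; the values and the binary adjacency matrix below are hypothetical, not the Ethiopian survey data:

```python
import numpy as np

# Hypothetical prevalence values for five areas and a symmetric binary
# adjacency matrix (1 = areas share a border).
x = np.array([4.0, 6.0, 5.0, 1.0, 2.0])
W = np.array([[0, 1, 1, 0, 0],
              [1, 0, 1, 0, 0],
              [1, 1, 0, 1, 0],
              [0, 0, 1, 0, 1],
              [0, 0, 0, 1, 0]], dtype=float)

# Global Moran's I: (n / sum of weights) * (z' W z) / (z' z),
# where z is the mean-centered value vector. Positive I indicates
# that similar values cluster in space.
z = x - x.mean()
n = len(x)
moran_i = (n / W.sum()) * (z @ W @ z) / (z @ z)
```

A positive I with a small p-value (from a permutation test, omitted here) is what justifies the "clustered" conclusion in the abstract.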
Abstract copyright UK Data Service and data collection copyright owner. The Health Survey Northern Ireland (HSNI) was commissioned by the Department of Health in Northern Ireland, and the Central Survey Unit (CSU) of the Northern Ireland Statistics and Research Agency (NISRA) carried out the survey on its behalf. This survey series has been running on a continuous basis since April 2010, with separate modules for different policy areas included in different financial years. It covers a range of health topics that are important to the lives of people in Northern Ireland. The HSNI replaces the previous Northern Ireland Health and Social Wellbeing Survey (available under SNs 4589, 4590 and 5710). Adult BMI, height and weight measurements, accompanying demographic and derived variables, geography, and a BMI weighting variable are available in separate datasets for each survey year. Further information is available from the Northern Ireland Statistics and Research Agency and the Department of Health (Northern Ireland) survey webpages. Data gathered in the HSNI 2012-2013: variables include measured height and weight, calculated BMI including groupings, age, sex and geography. Main Topics: physical measurements and tests; simple random sample.
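The calculated BMI variable and its groupings follow the standard definition, weight (kg) divided by height (m) squared; a small sketch using the conventional WHO cut-points (the example person is hypothetical, and the survey's exact grouping labels may differ):

```python
# Standard BMI: weight in kilograms divided by height in metres, squared.
def bmi(weight_kg, height_m):
    return weight_kg / height_m ** 2

# Conventional WHO adult groupings (assumed here; the HSNI derived
# variable may use different labels or extra obesity classes).
def bmi_group(b):
    if b < 18.5:
        return "underweight"
    if b < 25.0:
        return "normal"
    if b < 30.0:
        return "overweight"
    return "obese"

example = bmi(80.0, 1.75)  # a hypothetical 80 kg, 1.75 m adult
```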
Attribution 4.0 (CC BY 4.0) https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Obesity
Obesity, which causes physical and mental problems, is a global health problem with serious consequences. Its prevalence is increasing steadily, so new research is needed that examines the factors influencing obesity and how to predict its occurrence from those factors.
Dataset Information
This dataset includes data for the estimation of obesity levels in individuals from Mexico, Peru and Colombia, based on their eating habits and physical condition. The data contain 17 attributes and 2,111 records. The records are labeled with the class variable NObeyesdad (obesity level), which allows classification of the data using the values Insufficient Weight, Normal Weight, Overweight Level I, Overweight Level II, Obesity Type I, Obesity Type II and Obesity Type III. 77% of the data was generated synthetically using the Weka tool and the SMOTE filter; the remaining 23% was collected directly from users through a web platform.
Gender: Feature, Categorical, "Gender"
Age: Feature, Continuous, "Age"
Height: Feature, Continuous
Weight: Feature, Continuous
family_history_with_overweight: Feature, Binary, "Has a family member suffered or suffers from overweight?"
FAVC: Feature, Binary, "Do you eat high caloric food frequently?"
FCVC: Feature, Integer, "Do you usually eat vegetables in your meals?"
NCP: Feature, Continuous, "How many main meals do you have daily?"
CAEC: Feature, Categorical, "Do you eat any food between meals?"
SMOKE: Feature, Binary, "Do you smoke?"
CH2O: Feature, Continuous, "How much water do you drink daily?"
SCC: Feature, Binary, "Do you monitor the calories you eat daily?"
FAF: Feature, Continuous, "How often do you have physical activity?"
TUE: Feature, Integer, "How much time do you use technological devices such as cell phone, videogames, television, computer and others?"
CALC: Feature, Categorical, "How often do you drink alcohol?"
MTRANS: Feature, Categorical, "Which transportation do you usually use?"
NObeyesdad: Target, Categorical, "Obesity level"
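The synthetic 77% of the records was generated with Weka's SMOTE filter. The core SMOTE step, interpolating between a minority-class sample and one of its nearest neighbours, can be sketched as follows (a toy illustration, not Weka's implementation):

```python
import random

# One SMOTE interpolation step: place a synthetic point at a random
# position on the line segment between a minority-class sample and one
# of its nearest neighbours in feature space.
def smote_point(sample, neighbour, rng):
    gap = rng.random()  # uniform in [0, 1)
    return [s + gap * (n - s) for s, n in zip(sample, neighbour)]

# Hypothetical (height m, weight kg) pairs from the same minority class.
rng = random.Random(42)
synthetic = smote_point([1.70, 65.0], [1.80, 72.0], rng)
```

Each synthetic record therefore lies between two real ones, which is why SMOTE-augmented data preserves the overall feature ranges of the collected 23%.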
Attribution 4.0 (CC BY 4.0) https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Multivariate Regression Model Predicting Weight Change among Recent Quitters (n = 654).