The table below lists links to ad hoc statistical analyses on the Taking Part survey that have not been included in our standard publications.
The objective of the survey is to prepare and publish statistical information on the availability of computers in households; Internet access at home; the frequency and purposes of Internet usage; the use of e-commerce and e-government services; computer literacy; ICT safety; and obstacles to ICT and Internet usage. Respondents' demographic and social characteristics are also collected, enabling survey results to be analysed by sex, age, educational attainment and employment status.
Household Individual
Survey population – all residents of Lithuania aged 16–74. Statistical unit – individual aged 16–74. Individuals residing in institutional households (care homes, imprisonment institutions, monasteries, convents, seminaries, etc.) are not surveyed.
Sample survey data [ssd]
A simple random sample of 7,000 individuals is drawn, using Population Register data as the sampling frame. If the individual sampled does not live at the address specified, another individual living at that address whose date of birth is closest to that of the individual sampled is asked to answer the survey questions instead.
Each individual sampled is notified with an official letter, which invites them to participate in the survey and to fill in an e-questionnaire after logging in to the electronic survey system on the website of Statistics Lithuania. If the individual sampled has not filled in the e-questionnaire, they are questioned by an interviewer at home.
In general, data refer to the first quarter of the reference year or the last 12 months. Data are collected with the questionnaire of the annual statistical survey on the use of information technologies in households. The questionnaire is provided as a related document under the Documentation tab.
The Participation Survey started in October 2021 and is the key evidence source on engagement for DCMS. It is a continuous push-to-web household survey of adults aged 16 and over in England.
The Participation Survey provides nationally representative estimates of physical and digital engagement with the arts, heritage, museums & galleries, libraries and archives, as well as engagement with tourism, major events, live sports and digital.
The Participation Survey is only asked of adults in England. Currently there is no harmonised survey or set of questions within the administrations of the UK. Data on participation in cultural sectors for the devolved administrations is available in the Scottish Household Survey (https://www.gov.scot/collections/scottish-household-survey/), the National Survey for Wales (https://gov.wales/national-survey-wales) and the Northern Ireland Continuous Household Survey (https://www.communities-ni.gov.uk/topics/statistics-and-research/culture-and-heritage-statistics).
The pre-release access document above contains a list of ministers and officials who have received privileged early access to this release of Participation Survey data. In line with best practice, the list has been kept to a minimum and those given access for briefing purposes had a maximum of 24 hours. Details on the pre-release access arrangements for this dataset are available in the accompanying material.
Our statistical practice is regulated by the Office for Statistics Regulation (OSR). OSR sets the standards of trustworthiness, quality and value in the Code of Practice for Statistics (https://code.statisticsauthority.gov.uk/the-code/) that all producers of official statistics should adhere to.
You are welcome to contact us directly with any comments about how we meet these standards by emailing evidence@dcms.gov.uk. Alternatively, you can contact OSR by emailing regulation@statistics.gov.uk or via the OSR website.
The responsible statistician for this release is Oliver Maxwell. For enquiries on this release, contact participationsurvey@dcms.gov.uk.
The latest estimates from the 2010/11 Taking Part adult survey produced by DCMS were released on 30 June 2011 according to the arrangements approved by the UK Statistics Authority.
30 June 2011
April 2010 to April 2011
National and Regional level data for England.
Further analysis of the 2010/11 adult dataset and data for child participation will be published on 18 August 2011.
The latest data from the 2010/11 Taking Part survey provides reliable national estimates of adult engagement with sport, libraries, the arts, heritage and museums & galleries. This release also presents analysis on volunteering and digital participation in our sectors and a look at cycling and swimming proficiency in England. The Taking Part survey is a continuous annual survey of adults and children living in private households in England, and carries the National Statistics badge, meaning that it meets the highest standards of statistical quality.
These spreadsheets contain the data and sample sizes for each sector included in the survey:
The previous Taking Part release was published on 31 March 2011 and can be found online.
This release is published in accordance with the Code of Practice for Official Statistics (2009), as produced by the UK Statistics Authority (UKSA) (http://www.statisticsauthority.gov.uk/). The UKSA has the overall objective of promoting and safeguarding the production and publication of official statistics that serve the public good. It monitors and reports on all official statistics, and promotes good practice in this area.
The document below contains a list of Ministers and Officials who have received privileged early access to this release of Taking Part data. In line with best practice, the list has been kept to a minimum and those given access for briefing purposes had a maximum of 24 hours.
The responsible statistician for this release is Neil Wilson. For any queries please contact the Taking Part team on 020 7211 6968 or takingpart@culture.gsi.gov.uk.
The Associated Press is sharing data from the COVID Impact Survey, which provides statistics about physical health, mental health, economic security and social dynamics related to the coronavirus pandemic in the United States.
Conducted by NORC at the University of Chicago for the Data Foundation, the probability-based survey provides estimates for the United States as a whole, as well as in 10 states (California, Colorado, Florida, Louisiana, Minnesota, Missouri, Montana, New York, Oregon and Texas) and eight metropolitan areas (Atlanta, Baltimore, Birmingham, Chicago, Cleveland, Columbus, Phoenix and Pittsburgh).
The survey is designed to allow for an ongoing gauge of public perception, health and economic status to see what is shifting during the pandemic. When multiple sets of data are available, it will allow for the tracking of how issues ranging from COVID-19 symptoms to economic status change over time.
The survey is focused on three core areas of research:
Instead, use our queries linked below or statistical software such as R or SPSS to weight the data.
If you'd like to create a table to see how people nationally or in your state or city feel about a topic in the survey, use the survey questionnaire and codebook to match a question (the variable label) to a variable name. For instance, "How often have you felt lonely in the past 7 days?" is variable "soc5c".
Nationally: Go to this query and enter soc5c as the variable. Hit the blue Run Query button in the upper right hand corner.
Local or State: To find figures for that response in a specific state, go to this query and type in a state name and soc5c as the variable, and then hit the blue Run Query button in the upper right hand corner.
The resulting sentence you could write out of these queries is: "People in some states are less likely to report loneliness than others. For example, 66% of Louisianans report feeling lonely on none of the last seven days, compared with 52% of Californians. Nationally, 60% of people said they hadn't felt lonely."
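For readers working from the comma-delimited files rather than the hosted queries, a minimal weighted tabulation might look like the sketch below. This is an illustration only: the file name follows the embargo naming format described further down, and the variable name "weight" is a placeholder; the actual weight variable should be taken from the codebook.

```python
# Minimal sketch of a weighted frequency table with pandas.
# "soc5c" is the loneliness item named above; "weight" is a placeholder
# for the survey weight variable documented in the codebook.
import pandas as pd

df = pd.read_csv("01_April_30_covid_impact_survey.csv")  # hypothetical file name

# Weighted share of each soc5c response category, in percent.
weighted_shares = (
    df.groupby("soc5c")["weight"].sum() / df["weight"].sum() * 100
).round(1)
print(weighted_shares)
```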
The margin of error for the national and regional surveys is found in the attached methods statement. You will need the margin of error to determine if the comparisons are statistically significant. If the difference is:
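As a rough illustration of that kind of check, the sketch below compares two estimates using their margins of error. The numbers are placeholders, the real margins of error come from the methods statement, and the root-sum-of-squares combination shown here is one common approach for independent estimates, not necessarily the exact rule in the methods statement.

```python
# Rough check of whether two estimates differ by more than their combined
# margin of error. All figures below are placeholders.
import math

est_a, moe_a = 66.0, 4.0   # e.g. one state's estimate and its MOE (placeholder)
est_b, moe_b = 52.0, 4.5   # e.g. another state's estimate and its MOE (placeholder)

combined_moe = math.sqrt(moe_a**2 + moe_b**2)  # assumes independent samples
difference = abs(est_a - est_b)

if difference > combined_moe:
    print(f"Difference of {difference:.1f} points exceeds the combined MOE "
          f"({combined_moe:.1f}); it can be described as statistically significant.")
else:
    print("Difference is within the combined margin of error; "
          "do not describe it as significant.")
```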
The survey data will be provided under embargo in both comma-delimited and statistical formats.
Each set of survey data will be numbered and have the date the embargo lifts in front of it in the format of: 01_April_30_covid_impact_survey. The survey has been organized by the Data Foundation, a non-profit, non-partisan think tank, and is sponsored by the Federal Reserve Bank of Minneapolis and the Packard Foundation. It is conducted by NORC at the University of Chicago, a non-partisan research organization. (NORC is not an abbreviation; it is part of the organization's formal name.)
Data for the national estimates are collected using the AmeriSpeak Panel, NORC’s probability-based panel designed to be representative of the U.S. household population. Interviews are conducted with adults age 18 and over representing the 50 states and the District of Columbia. Panel members are randomly drawn from AmeriSpeak with a target of achieving 2,000 interviews in each survey. Invited panel members may complete the survey online or by telephone with an NORC telephone interviewer.
Once all the study data have been made final, an iterative raking process is used to adjust for any survey nonresponse as well as any noncoverage or under and oversampling resulting from the study specific sample design. Raking variables include age, gender, census division, race/ethnicity, education, and county groupings based on county level counts of the number of COVID-19 deaths. Demographic weighting variables were obtained from the 2020 Current Population Survey. The count of COVID-19 deaths by county was obtained from USA Facts. The weighted data reflect the U.S. population of adults age 18 and over.
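For readers unfamiliar with raking, the sketch below shows the basic iterative proportional fitting idea. It is an illustration of the technique, not NORC's production weighting code; the raking dimensions and target proportions in the usage comment are invented.

```python
# Illustrative sketch of iterative raking (iterative proportional fitting):
# adjust a starting weight so that weighted margins match external targets.
import pandas as pd

def rake(df, weight_col, targets, max_iter=50, tol=1e-6):
    """targets maps column name -> {category: target proportion}; every
    category present in the data must appear in its target dictionary."""
    w = df[weight_col].astype(float).copy()
    for _ in range(max_iter):
        max_change = 0.0
        for col, target in targets.items():
            current = w.groupby(df[col]).sum() / w.sum()   # current weighted margins
            factors = {cat: target[cat] / current[cat] for cat in target}
            adjustment = df[col].map(factors)
            max_change = max(max_change, (adjustment - 1).abs().max())
            w = w * adjustment
        if max_change < tol:            # stop once all margins are matched
            break
    return w

# Hypothetical usage:
# df["weight"] = rake(df, "base_weight",
#                     {"age_group": {"18-34": 0.30, "35-64": 0.50, "65+": 0.20},
#                      "gender": {"female": 0.51, "male": 0.49}})
```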
Data for the regional estimates are collected using a multi-mode address-based sampling (ABS) approach that allows residents of each area to complete the interview via web or with an NORC telephone interviewer. All sampled households are mailed a postcard inviting them to complete the survey either online using a unique PIN or via telephone by calling a toll-free number. Interviews are conducted with adults age 18 and over, with a target of achieving 400 interviews in each region in each survey. Additional details on the survey methodology and the survey questionnaire are attached below or can be found at https://www.covid-impact.org.
Results should be credited to the COVID Impact Survey, conducted by NORC at the University of Chicago for the Data Foundation.
To learn more about AP's data journalism capabilities for publishers, corporations and financial institutions, go here or email kromano@ap.org.
The Taking Part Survey has run since 2005 and is the key evidence source for DCMS. It is a continuous face to face household survey of adults aged 16 and over in England and children aged 5 to 15 years old.
The child Taking Part report can be found here.
The Taking Part Survey provides reliable national estimates of adult engagement with the arts, heritage, museums, libraries, digital and social networking, and of barriers to engagement. It carries the National Statistics badge, meaning that it meets the highest standards of statistical quality. The latest data cover the period April 2019 to March 2020.
Data tables for the Archive, Charitable Giving and Volunteering estimates can be found here:
Fieldwork for the Taking Part survey was terminated before its intended end date due to the COVID-19 coronavirus pandemic. We do not expect that either the pandemic or the reduced fieldwork has affected the accuracy of our estimates. A summary of the analysis of the possible effects of the early termination of fieldwork can be found in the Taking Part Year 15 technical report (https://assets.publishing.service.gov.uk/government/uploads/system/uploads/attachment_data/file/916246/Taking_Part_Technical_Report_2019_20.pdf).
The previous Taking Part release was published on 19 September 2019, covering the period April 2018 to March 2019.
The pre-release access document above contains a list of ministers and officials who have received privileged early access to this release of Taking Part data. In line with best practice, the list has been kept to a minimum and those given access for briefing purposes had a maximum of 24 hours. Details on the pre-release access arrangements for this dataset are available in the accompanying material.
This release is published in accordance with the Code of Practice for Statistics (2018), as produced by the UK Statistics Authority. The Authority has the overall objective of promoting and safeguarding the production and publication of official statistics that serve the public good. It monitors and reports on all official statistics, and promotes good practice in this area.
The responsible statistician for this release is Alistair Rice. For enquiries on this release, contact takingpart@dcms.gov.uk.
Taking Part is a household survey in England that measures engagement with the cultural sectors.
A global survey conducted between November 2022 and January 2023 revealed that seven in ten respondents had taken steps to protect their online identity: 30 percent had enabled multi-factor authentication, and 28 percent said they had changed default settings on devices. On the other hand, 30 percent said they had not done anything to protect their digital identity.
Open Government Licence - Canada 2.0 (https://open.canada.ca/en/open-government-licence-canada)
License information was derived automatically
The public use microdata file (PUMF) from the Canadian Internet Use Survey (CIUS) provides data on the adoption and use of digital technologies and the online behaviors of individuals 15 years of age and older living in the ten provinces of Canada. The survey builds on the previous iteration of the CIUS, last conducted in 2012. While there is some comparability with the 2012 CIUS, the survey was redesigned in 2018 to reflect the rapid pace at which Internet technology has evolved since the previous iteration. The files include information on how individuals use the Internet, smartphones, and social networking websites and apps, including their intensity of use, demand for certain online activities, and interactions through these technologies. They also provide information on the use of online government services, digital skills, online work, and security, privacy and trust as they relate to the Internet.
In 2022, online surveys were by far the most used traditional quantitative methodology in the market research industry worldwide. During the survey, 85 percent of respondents stated that they regularly used online surveys as one of their three most used methods, while a further nine percent stated that they used online surveys only occasionally.
Attribution-NonCommercial 3.0 (CC BY-NC 3.0) (https://creativecommons.org/licenses/by-nc/3.0/)
License information was derived automatically
This item contains all data and statistical code to replicate the analysis presented in the preprint entitled "Using rapid online surveys to assess perceptions during infectious disease outbreaks: a cross-sectional survey on Covid-19 among the general public in the United States and United Kingdom".
Background: Given the extensive time needed to conduct a nationally representative household survey and the commonly low response rate in phone surveys, rapid online surveys may be a promising method to assess and track knowledge and perceptions among the general public during fast-moving infectious disease outbreaks.

Objective: To apply rapid online surveying to determine knowledge and perceptions of coronavirus disease 2019 (Covid-19) among the general public in the United States (US) and the United Kingdom (UK).

Methods: An online questionnaire was administered to 3,000 adults residing in the US and 3,000 adults residing in the UK who had registered with Prolific Academic to participate in online research. Strata by age (18-27, 28-37, 38-47, 48-57, or 58 years and older), sex (male or female), and ethnicity (White, Black or African American, Asian or Asian Indian, Mixed, or “Other”), and all permutations of these strata, were established. The number of participants who could enrol in each of these strata was calculated to reflect the distribution in the US and UK general population. Enrolment into the survey within the strata was on a first-come, first-served basis. Participants completed the questionnaire between February 23 and March 2, 2020.

Results: 2,986 and 2,988 adults residing in the US and the UK, respectively, completed the questionnaire. 64.4% (1,924/2,986) of US and 51.5% (1,540/2,988) of UK participants had a tertiary education degree. 67.5% (2,015/2,986) of US participants had a total household income between $20,000 and $99,999, and 74.4% (2,223/2,988) of UK participants had a total household income between £15,000 and £74,999. US and UK participants’ median estimate for the probability of a fatal disease course among those infected with SARS-CoV-2 was 5.0% (IQR: 2.0% – 15.0%) and 3.0% (IQR: 2.0% – 10.0%), respectively. Participants generally had good knowledge of the main mode of disease transmission and common symptoms of Covid-19. However, a substantial proportion of participants had misconceptions about how to prevent an infection and the recommended care-seeking behavior. For instance, 37.8% (95% CI: 36.1% – 39.6%) of US and 29.7% (95% CI: 28.1% – 31.4%) of UK participants thought that wearing a common surgical mask was ‘highly effective’ in protecting them from acquiring Covid-19. 25.6% (95% CI: 24.1% – 27.2%) of US and 29.6% (95% CI: 28.0% – 31.3%) of UK participants thought it prudent to refrain from eating at Chinese restaurants. Around half (53.8% [95% CI: 52.1% – 55.6%] of US and 39.1% [95% CI: 37.4% – 40.9%] of UK participants) thought that children were at an especially high risk of death when infected with SARS-CoV-2.

Conclusions: The distribution of participants by total household income and education followed approximately that of the general population. The findings from this online survey could guide information campaigns by public health authorities, clinicians, and the media. More broadly, rapid online surveys could be an important tool in tracking the public’s knowledge and misperceptions during rapidly moving infectious disease outbreaks.
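As a small illustration of the quota step described in the Methods (calculating how many participants could enrol in each stratum), the sketch below scales per-stratum population shares to a fixed sample of 3,000. The shares listed are invented placeholders, not figures from the study.

```python
# Sketch of deriving per-stratum enrolment quotas so that a fixed sample
# reflects population shares. The shares below are placeholders only.
TOTAL_N = 3000

population_shares = {          # (age band, sex) -> share of the adult population
    ("18-27", "female"): 0.09,
    ("18-27", "male"): 0.09,
    ("28-37", "female"): 0.10,
    ("28-37", "male"): 0.10,
    # ... remaining strata (including ethnicity) would be listed here
}

quotas = {stratum: round(share * TOTAL_N)
          for stratum, share in population_shares.items()}
for stratum, quota in quotas.items():
    print(stratum, quota)
```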
https://www.icpsr.umich.edu/web/ICPSR/studies/36268/terms
The American Time Use Survey (ATUS) is the Nation's first federally administered, continuous survey on time use in the United States. This multi-year data collection contains information on the amount of time (in minutes) that people spent doing various activities on a given day, including the arts activities, in the years 2003 through 2023. Data collection for the ATUS began in January 2003. Sample cases for the survey are selected monthly, and interviews are conducted continuously throughout the year. In 2023, approximately 9,000 individuals were interviewed. Estimates are released annually. ATUS sample households are chosen from the households that completed their eighth (final) interview for the Current Population Survey (CPS), the nation's monthly household labor force survey. ATUS sample households are selected to ensure that estimates will be nationally representative. One individual age 15 or over is randomly chosen from each sampled household. This "designated person" is interviewed by telephone once about his or her activities on the day before the interview--the "diary day." The ATUS Activity Coding Lexicon is a 3-tiered classification system with 17 first-tier categories. Each of the first-tier categories has two additional levels of detail. Respondents' reported activities are assigned 6-digit activity codes based on this classification system. Additionally, the study provides demographic information--including sex, age, ethnicity, race, education, employment, and children in the household. IMPORTANT: The 2020 ATUS was greatly affected by the coronavirus (COVID-19) pandemic. Data collection was suspended in 2020 from mid-March to mid-May. ATUS data files for 2020 contain all ATUS data collected in 2020--both before and after data collection was suspended. For more information, please visit BLS's ATUS page. The weighting method was changed for 2020 to account for the suspension of data collection in early 2020 due to the COVID-19 pandemic. Respondents from 2020 will have missing values for the replicate weights on this data file. The Pandemic Replicate weights file for 2019-20 contains 160 replicate final weights for each ATUS final weight created using the 2020 weighting method. Chapter 7 of the ATUS User's Guide provides more information about the 2020 weighting method.
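As a small illustration of the coding scheme, the sketch below splits a six-digit activity code into its three tiers, assuming the usual two-digits-per-tier layout of the Activity Coding Lexicon.

```python
# Split an ATUS six-digit activity code into its three tiers
# (first-tier category, second tier, third tier), assuming two digits per tier.
def split_activity_code(code: str) -> tuple[str, str, str]:
    code = code.zfill(6)                # preserve any leading zero, e.g. "010101"
    return code[:2], code[2:4], code[4:6]

tier1, tier2, tier3 = split_activity_code("120303")
print(tier1, tier2, tier3)              # -> 12 03 03
```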
https://www.statcan.gc.ca/en/reference/licence
The 2022 CIUS aims to measure the impact of digital technologies on the lives of Canadians. Information gathered will help to better understand how individuals use the Internet, including intensity of use, demand for online activities and online interactions. The CIUS examines use of online government services, use of social networking websites or apps, smartphone use, digital skills, e-commerce, online work, and security, privacy and trust as they relate to the Internet. The 2022 iteration has been updated to collect data on information sharing online, harmful content online, digital credentials, cryptocurrencies, artificial intelligence and working in the gig economy. The survey builds on the previous iterations of the CIUS conducted in 2018 and 2020.
https://www.icpsr.umich.edu/web/ICPSR/studies/36998/terms
The American Community Survey (ACS) is an ongoing statistical survey that samples a small percentage of the population every year -- giving communities the information they need to plan investments and services. The 5-year public use microdata sample (PUMS) for 2012-2016 is a subset of the 2012-2016 ACS sample. It contains the same sample as the combined PUMS 1-year files for 2012, 2013, 2014, 2015 and 2016. This data collection provides a person-level subset of 133,781 respondents whose occupations were coded as arts-related in the 2011-2015 ACS PUMS. The 2012-2016 PUMS is the seventh 5-year file published by the ACS. This data collection contains five years of data for the population from households and the group quarters (GQ) population. The GQ population and population from households are all weighted to agree with the ACS counts, which are an average over the five-year period (2012-2016). The ACS sample was selected from all counties across the nation. The ACS provides social, housing, and economic characteristics for demographic groups covering a broad spectrum of geographic areas in the United States. For a more detailed list of the variables these categories include, please see the descriptions of variables section.
Attribution 4.0 (CC BY 4.0) (https://creativecommons.org/licenses/by/4.0/)
License information was derived automatically
ABSTRACT In this paper we present the development of a modulated web-based statistical system, hereafter MWStat, which shifts the statistical paradigm of analyzing data into a real-time structure. The MWStat system is useful both for online storage of data and questionnaires and for providing real-time access to results from several statistical methodologies in a customizable fashion. Overall, it can be seen as a useful technical solution that can be applied to a large range of statistical applications that need a scheme for delivering real-time results accessible to anyone with Internet access. We display here the step-by-step instructions for implementing the system. The structure is accessible, built with easily interpretable languages, and it can be strategically applied to online statistical applications. We rely on the relationship between several free tools, namely PHP, R, the MySQL database and an Apache HTTP server, and on the use of software tools such as phpMyAdmin. We expose three didactical examples of the MWStat system on institutional evaluation, statistical quality control and multivariate analysis. The methodology is also illustrated in a real example on institutional evaluation. A MWStat module was specifically built for providing a real-time poll for teacher evaluation at the Federal University of São Carlos (Brazil).
Household Pulse Survey (HPS): HPS is a rapid-response survey of adults ages ≥18 years led by the U.S. Census Bureau, in partnership with seven other federal statistical agencies, to measure household experiences during the COVID-19 pandemic. Detailed information on probability sampling using the U.S. Census Bureau’s Master Address File, questionnaires, response rates, and bias assessment is available on the Census Bureau website (https://www.census.gov/data/experimental-data-products/household-pulse-survey.html). Data from adults ages ≥18 years are collected by a 20-minute online survey from randomly sampled households stratified by state and the top 15 metropolitan statistical areas (MSAs). Data are weighted to represent total persons ages 18 and older living within households and to mitigate possible bias that can result from non-response and an incomplete survey frame. For more information on this survey, see https://www.census.gov/programs-surveys/household-pulse-survey.html. Responses in the Household Pulse Survey are self-reported. Estimates of vaccination coverage may differ from vaccine administration data reported at COVID-19 Vaccinations in the United States (https://covid.cdc.gov/covid-data-tracker/#vaccinations).
Within the frame of PCBS' efforts to provide official Palestinian statistics on the different aspects of life in Palestinian society, and because of the wide spread of computers, the Internet and mobile phones among the Palestinian people and the important role they may play in spreading knowledge and culture and in shaping public opinion, PCBS conducted the Household Survey on Information and Communications Technology, 2014.
The main objective of this survey is to provide statistical data on information and communication technology in Palestine, in addition to providing data on the following:
- Prevalence of computers and access to the Internet.
- Penetration and purposes of technology use.
Palestine (West Bank and Gaza Strip), type of locality (urban, rural, refugee camps) and governorate.
All Palestinian households and individuals whose usual place of residence in Palestine with focus on persons aged 10 years and over in year 2014.
Sample survey data [ssd]
Sampling Frame: The sampling frame consists of a list of enumeration areas adopted in the Population, Housing and Establishments Census of 2007. Each enumeration area has an average size of about 124 households. These were used in the first phase as Preliminary Sampling Units in the process of selecting the survey sample.
Sample Size: The total sample size of the survey was 7,268 households, of which 6,000 responded.
Sample Design: The sample is a stratified clustered systematic random sample. The design comprised three phases:
Phase I: Random sample of 240 enumeration areas.
Phase II: Selection of 25 households from each enumeration area selected in phase one, using systematic random selection.
Phase III: Selection of an individual (aged 10 years or more) in the field from the selected households; Kish tables were used to ensure random selection.
Sample Strata: Distribution of the sample was stratified by:
1- Governorate (16 governorates, J1).
2- Type of locality (urban, rural and camps).
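A minimal sketch of the Phase II and Phase III selection logic is shown below. It assumes an ordered household list for each enumeration area, and it substitutes a simple random draw for the pre-printed Kish tables actually used in the field.

```python
# Illustrative sketch of systematic household selection followed by selection
# of one eligible person per household. random.choice stands in for the
# Kish table lookup used in the actual survey.
import random

def systematic_sample(households, n=25):
    """Systematic random selection of n households from an ordered list
    (assumes the list holds at least n households)."""
    interval = len(households) / n
    start = random.uniform(0, interval)
    return [households[int(start + i * interval)] for i in range(n)]

def select_person(members, min_age=10):
    """Pick one eligible household member (aged min_age or over)."""
    eligible = [m for m in members if m["age"] >= min_age]
    return random.choice(eligible) if eligible else None
```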
Face-to-face [f2f]
The survey questionnaire consists of identification data, quality controls and three main sections: Section I: Data on household members that include identification fields, the characteristics of household members (demographic and social) such as the relationship of individuals to the head of household, sex, date of birth and age.
Section II: Household data include information regarding computer processing, access to the Internet, and possession of various media and computer equipment. This section includes information on topics related to the use of computer and Internet, as well as supervision by households of their children (5-17 years old) while using the computer and Internet, and protective measures taken by the household in the home.
Section III: Data on persons (aged 10 years and over) about computer use, access to the Internet and possession of a mobile phone.
Preparation of Data Entry Program: This stage included preparation of the data entry programs using an ACCESS package and defining data entry control rules to avoid errors, plus validation inquiries to examine the data after it had been captured electronically.
Data Entry: The data entry process started on the 8th of May 2014 and ended on the 23rd of June 2014. The data entry took place at the main PCBS office and in field offices using 28 data clerks.
Editing and Cleaning procedures: Several measures were taken to avoid non-sampling errors. These included editing of questionnaires before data entry to check field errors, using a data entry application that does not allow mistakes during the process of data entry, and then examining the data by using frequency and cross tables. This ensured that data were error free; cleaning and inspection of the anomalous values were conducted to ensure harmony between the different questions on the questionnaire.
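The sketch below illustrates the kinds of entry-control and consistency checks described, written in Python with hypothetical column names and toy records; the survey itself implemented these rules in the ACCESS data entry program.

```python
# Sketch of range checks, consistency checks and frequency/cross tables.
# Column names and records are hypothetical.
import pandas as pd

df = pd.DataFrame({
    "age":            [34, 7, 150, 22],
    "uses_internet":  [1, 0, 1, 0],
    "internet_hours": [10, 0, 5, 3],
})

# Range check: ages outside 0-120 are flagged as entry errors.
bad_age = df[(df["age"] < 0) | (df["age"] > 120)]

# Consistency check: no Internet use reported but positive hours of use.
inconsistent = df[(df["uses_internet"] == 0) & (df["internet_hours"] > 0)]

# Frequency and cross tables help spot anomalous values.
print(df["uses_internet"].value_counts(dropna=False))
print(pd.crosstab(df["uses_internet"], df["age"] >= 10))

print(f"{len(bad_age)} age errors, {len(inconsistent)} inconsistent records")
```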
Response Rates: 79%
There are many aspects to the concept of data quality, ranging from the initial planning of the survey to the dissemination of the results and how well users understand and use the data. There are three components to the quality of statistics: accuracy, comparability, and quality control procedures.
Checks on data accuracy cover many aspects of the survey and include statistical errors due to the use of a sample, non-statistical errors resulting from field workers or survey tools, and response rates and their effect on estimations. This section includes:
Statistical Errors: Data of this survey may be affected by statistical errors due to the use of a sample and not a complete enumeration. Therefore, certain differences can be expected in comparison with the real values obtained through censuses. Variances were calculated for the most important indicators.
Variance calculations revealed that there is no problem in disseminating results nationally or regionally (the West Bank, Gaza Strip), but some indicators show high variance by governorate, as noted in the tables of the main report.
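For readers who want to reproduce this kind of check on the microdata, the sketch below computes a weighted proportion and a simple cluster-based variance, treating enumeration areas as primary sampling units. It illustrates the general idea rather than PCBS's exact variance procedure.

```python
# Simple "ultimate cluster" variance estimate for a weighted proportion,
# with enumeration areas treated as primary sampling units (illustrative only).
from collections import defaultdict

def clustered_variance(records):
    """records: iterable of (psu_id, weight, y) with y in {0, 1}."""
    records = list(records)
    total_w = sum(w for _, w, _ in records)
    p = sum(w * y for _, w, y in records) / total_w

    # Sum the linearized values within each PSU.
    z_by_psu = defaultdict(float)
    for psu, w, y in records:
        z_by_psu[psu] += w * (y - p) / total_w

    m = len(z_by_psu)
    variance = m / (m - 1) * sum(z ** 2 for z in z_by_psu.values())
    return p, variance

# Example with toy data: (enumeration area, weight, indicator)
p, var = clustered_variance([("EA1", 1.2, 1), ("EA1", 1.2, 0),
                             ("EA2", 0.9, 1), ("EA3", 1.1, 0)])
print(f"p = {p:.3f}, standard error = {var ** 0.5:.3f}")
```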
Non-Statistical Errors: Non-statistical errors are possible at all stages of the project, during data collection or processing. These are referred to as non-response errors, response errors, interviewing errors and data entry errors. To avoid errors and reduce their effects, strenuous efforts were made to train the field workers intensively. They were trained on how to carry out the interview, what to discuss and what to avoid, and practical and theoretical training took place during the training course. Training manuals were provided for each section of the questionnaire, along with practical exercises in class and instructions on how to approach respondents to reduce refused cases. Data entry staff were trained on the data entry program, which was tested before starting the data entry process.
The sources of non-statistical errors can be summarized as: 1. Some of the households were not at home and could not be interviewed, and some households refused to be interviewed. 2. In unique cases, errors occurred due to the way the questions were asked by interviewers and respondents misunderstood some of the questions.
https://www.icpsr.umich.edu/web/ICPSR/studies/36361/terms
The National Survey on Drug Use and Health (NSDUH) series (formerly titled National Household Survey on Drug Abuse) primarily measures the prevalence and correlates of drug use in the United States. The surveys are designed to provide quarterly, as well as annual, estimates. Information is provided on the use of illicit drugs, alcohol, and tobacco among members of United States households aged 12 and older. Questions included age at first use as well as lifetime, annual, and past-month usage for the following drug classes: marijuana, cocaine (and crack), hallucinogens, heroin, inhalants, alcohol, tobacco, and nonmedical use of prescription drugs, including pain relievers, tranquilizers, stimulants, and sedatives. The survey covered substance abuse treatment history and perceived need for treatment, and included questions from the Diagnostic and Statistical Manual (DSM) of Mental Disorders that allow diagnostic criteria to be applied. The survey included questions concerning treatment for both substance abuse and mental health-related disorders. Respondents were also asked about personal and family income sources and amounts, health care access and coverage, illegal activities and arrest record, problems resulting from the use of drugs, and needle-sharing. Questions introduced in previous administrations were retained in the 2014 survey, including questions asked only of respondents aged 12 to 17. These "youth experiences" items covered a variety of topics, such as neighborhood environment, illegal activities, drug use by friends, social support, extracurricular activities, exposure to substance abuse prevention and education programs, and perceived adult attitudes toward drug use and activities such as school work. Several measures focused on prevention-related themes in this section. Also retained were questions on mental health and access to care, perceived risk of using drugs, perceived availability of drugs, driving and personal behavior, and cigar smoking. Questions on the tobacco brand used most often were introduced with the 1999 survey. For the 2008 survey, adult mental health questions were added to measure symptoms of psychological distress in the worst period of distress that a person experienced in the past 30 days and suicidal ideation. In 2008, a split-sample design also was included to administer separate sets of questions (WHODAS vs. SDS) to assess impairment due to mental health problems. Beginning with the 2009 NSDUH, however, all of the adults in the sample received only the WHODAS questions. Background information includes gender, race, age, ethnicity, marital status, educational level, job status, veteran status, and current household composition.
CC0 1.0 Universal Public Domain Dedication (https://creativecommons.org/publicdomain/zero/1.0/)
License information was derived automatically
Supporting documentation on code lists, subject definitions, data accuracy, and statistical testing can be found on the American Community Survey website in the Technical Documentation section. Sample size and data quality measures (including coverage rates, allocation rates, and response rates) can be found on the American Community Survey website in the Methodology section. Although the American Community Survey (ACS) produces population, demographic and housing unit estimates, it is the Census Bureau's Population Estimates Program that produces and disseminates the official estimates of the population for the nation, states, counties, cities, and towns and estimates of housing units for states and counties.

Explanation of Symbols:
- An '**' entry in the margin of error column indicates that either no sample observations or too few sample observations were available to compute a standard error and thus the margin of error. A statistical test is not appropriate.
- An '-' entry in the estimate column indicates that either no sample observations or too few sample observations were available to compute an estimate, or a ratio of medians cannot be calculated because one or both of the median estimates falls in the lowest interval or upper interval of an open-ended distribution.
- An '-' following a median estimate means the median falls in the lowest interval of an open-ended distribution.
- An '+' following a median estimate means the median falls in the upper interval of an open-ended distribution.
- An '***' entry in the margin of error column indicates that the median falls in the lowest interval or upper interval of an open-ended distribution. A statistical test is not appropriate.
- An '*****' entry in the margin of error column indicates that the estimate is controlled. A statistical test for sampling variability is not appropriate.
- An 'N' entry in the estimate and margin of error columns indicates that data for this geographic area cannot be displayed because the number of sample cases is too small.
- An '(X)' means that the estimate is not applicable or not available.

Estimates of urban and rural populations, housing units, and characteristics reflect boundaries of urban areas defined based on Census 2010 data. As a result, data for urban and rural areas from the ACS do not necessarily reflect the results of ongoing urbanization. While the 2017 American Community Survey (ACS) data generally reflect the July 2015 Office of Management and Budget (OMB) delineations of metropolitan and micropolitan statistical areas, in certain instances the names, codes, and boundaries of the principal cities shown in ACS tables may differ from the OMB delineations due to differences in the effective dates of the geographic entities.

The category "No computer in household" consists of those who said "No" to all of the following types of computers: desktop or laptop; smartphone; tablet or other portable wireless computer; and some other type of computer. The category "With a broadband Internet subscription" refers to those who said "Yes" to at least one of the following types of Internet subscriptions: broadband such as cable, fiber optic, or DSL; a cellular data plan; satellite; or a fixed wireless subscription. The category "Without an Internet subscription" includes those who accessed the Internet without a subscription and also those with no Internet access at all.

Caution should be used when comparing data for computer and Internet use before and after 2016. Changes in 2016 to the questions involving the wording as well as the response options resulted in changed response patterns in the data. Most noticeable are increases in overall computer ownership or use, the total of Internet subscriptions, satellite subscriptions, and cellular data plans for a smartphone or other mobile device. For more detailed information about these changes, see the 2016 American Community Survey Content Test Report for Computer and Internet Use located at https://www.census.gov/programs-surveys/acs/methodology/content-test.htm or the user note regarding changes in the 2016 questions located at https://www.census.gov/programs-surveys/acs/technical-documentation/user-notes.html.

An Internet "subscription" refers to a type of service that someone pays for to access the Internet, such as a cellular data plan, broadband such as cable, fiber optic or DSL, or another type of service. This will normally refer to a service that someone is billed for directly for Internet alone or sometimes as part of a bundle. Data about computer and Internet use were collected by asking respondents to select "Yes" or "No" to each type of computer and each type of Internet subscription. Therefore, respondents were able to select more than one type of computer and more than one type of Internet subscription. Data are based on a sample and are subject to sampling variability. The degree of uncertainty for an estimate arising from sampling variability is repr...
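As an illustration of how these derived categories could be computed from the underlying yes/no items, the sketch below uses hypothetical column names standing in for the ACS questionnaire items.

```python
# Derive "No computer in household" and "With a broadband Internet
# subscription" flags from yes/no item responses (1 = "Yes", 0 = "No").
# Column names and the toy rows are hypothetical stand-ins for the ACS items.
import pandas as pd

computer_items = ["desktop_laptop", "smartphone", "tablet", "other_computer"]
subscription_items = ["broadband_cable_fiber_dsl", "cellular_data_plan",
                      "satellite", "fixed_wireless"]

df = pd.DataFrame({
    "desktop_laptop": [1, 0], "smartphone": [1, 0],
    "tablet": [0, 0], "other_computer": [0, 0],
    "broadband_cable_fiber_dsl": [1, 0], "cellular_data_plan": [1, 0],
    "satellite": [0, 0], "fixed_wireless": [0, 0],
})

# "No" to every computer type -> no computer in household.
df["no_computer_in_household"] = (df[computer_items] == 0).all(axis=1)
# "Yes" to at least one subscription type -> broadband subscription.
df["with_broadband_subscription"] = (df[subscription_items] == 1).any(axis=1)
print(df[["no_computer_in_household", "with_broadband_subscription"]])
```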
This study surveyed police chiefs and data analysts in order to determine the use of data in police departments. The surveys were sent to 1,379 police agencies serving populations of at least 25,000. The survey sample for this study was selected from the 2000 Law Enforcement Management and Administrative Statistics (LEMAS) survey. All police agencies serving populations of at least 25,000 were selected from the LEMAS database for inclusion. Separate surveys were sent for completion by police chiefs and data analysts. Surveys were used to gather information on data sharing and integration efforts to identify the needs and capacities for data usage in local law enforcement agencies. The police chief surveys focused on five main areas of interest: use of data, personnel response to data collection, the collection and reporting of incident-based data, sharing data, and the providing of statistics to the community and media. Like the police chief surveys, the data analyst surveys focused on five main areas of interest: use of data, agency structures and resources, data for strategies, data sharing and outside assistance, and incident-based data. The final total of police chief surveys included in the study is 790, while 752 data analyst responses are included.
Attribution 4.0 (CC BY 4.0) (https://creativecommons.org/licenses/by/4.0/)
License information was derived automatically
The aim of this survey was to collect feedback about existing training programmes in statistical analysis for postgraduate researchers at the University of Edinburgh, as well as respondents' preferred methods for training, and their requirements for new courses. The survey was circulated via e-mail to research staff and postgraduate researchers across three colleges of the University of Edinburgh: the College of Arts, Humanities and Social Sciences; the College of Science and Engineering; and the College of Medicine and Veterinary Medicine. The survey was conducted on-line using the Bristol Online Survey tool, March through July 2017. 90 responses were received. The Scoping Statistical Analysis Support project, funded by Information Services Innovation Fund, aims to increase visibility and raise the profile of the Research Data Service by: understanding how statistical analysis support is conducted across University of Edinburgh Schools; scoping existing support mechanisms and models for students, researchers and teachers; identifying services and support that would satisfy existing or future demand.