https://spdx.org/licenses/CC0-1.0.html
Professional organizations in STEM (science, technology, engineering, and mathematics) can use demographic data to quantify recruitment and retention (R&R) of underrepresented groups within their memberships. However, variation in the types of demographic data collected can influence the targeting and perceived impacts of R&R efforts - e.g., giving false signals of R&R for some groups. We obtained demographic surveys from 73 U.S.-affiliated STEM organizations, collectively representing 712,000 members and conference attendees. We found large differences in the demographic categories surveyed (e.g., disability status, sexual orientation) and the available response options. These discrepancies indicate a lack of consensus regarding the demographic groups that should be recognized and, for groups that are omitted from surveys, an inability of organizations to prioritize and evaluate R&R initiatives. Aligning inclusive demographic surveys across organizations will provide baseline data that can be used to target and evaluate R&R initiatives to better serve underrepresented groups throughout STEM.

Methods

We surveyed 164 STEM organizations (73 responses, rate = 44.5%) between December 2020 and July 2021 with the goal of understanding what demographic data each organization collects from its constituents (i.e., members and conference attendees) and how the data are used. Organizations were sourced from a list of professional societies affiliated with the American Association for the Advancement of Science (AAAS) (n = 156) or from social media (n = 8). The survey was sent to the elected leadership and management firms for each organization, and follow-up reminders were sent after one month. The responding organizations represented a wide range of fields: 31 life science organizations (157,000 constituents), 5 mathematics organizations (93,000 constituents), 16 physical science organizations (207,000 constituents), 7 technology organizations (124,000 constituents), and 14 multi-disciplinary organizations spanning multiple branches of STEM (131,000 constituents). A list of the responding organizations is available in the Supplementary Materials. Based on the AAAS-affiliated recruitment of the organizations and the similar distribution of constituencies across STEM fields, we conclude that the responding organizations are a representative cross-section of the most prominent STEM organizations in the U.S. Each organization was asked about the demographic information it collects from its constituents, the response rates to its surveys, and how the data were used.

Survey description

The following questions are written as presented to the participating organizations.

Question 1: What is the name of your STEM organization?
Question 2: Does your organization collect demographic data from your membership and/or meeting attendees?
Question 3: When was your organization's most recent demographic survey (approximate year)?
Question 4: We would like to know the categories of demographic information collected by your organization. You may answer this question by either uploading a blank copy of your organization's survey (link provided in the online version of this survey) OR by completing a short series of questions.
Question 5: On the most recent demographic survey or questionnaire, what categories of information were collected? (Please select all that apply)
- Disability status
- Gender identity (e.g., male, female, non-binary)
- Marital/Family status
- Racial and ethnic group
- Religion
- Sex
- Sexual orientation
- Veteran status
- Other (please provide)
Question 6: For each of the categories selected in Question 5, what options were provided for survey participants to select?
Question 7: Did the most recent demographic survey provide a statement about data privacy and confidentiality? If yes, please provide the statement.
Question 8: Did the most recent demographic survey provide a statement about intended data use? If yes, please provide the statement.
Question 9: Who maintains the demographic data collected by your organization? (e.g., contracted third party, organization executives)
Question 10: How has your organization used members' demographic data in the last five years? Examples: monitoring temporal changes in demographic diversity, publishing diversity data products, planning conferences, contributing to third-party researchers.
Question 11: What is the size of your organization (number of members or number of attendees at recent meetings)?
Question 12: What was the response rate (%) for your organization's most recent demographic survey?

*Organizations were also able to upload a copy of their demographics survey instead of responding to Questions 5-8. If so, the uploaded survey was used (by the study authors) to evaluate Questions 5-8.
Pursuant to Local Laws 126, 127, and 128 of 2016, certain demographic data is collected voluntarily and anonymously from persons seeking social services. This data can be used by agencies and the public to better understand the demographic makeup of client populations and to better understand and serve residents of all backgrounds and identities. The data presented here has been collected through either electronic or paper surveys offered at the point of application for services. These surveys are anonymous. Each record represents an anonymized demographic profile of an individual applicant for social services, disaggregated by response option, agency, and program. Response options include information regarding ancestry, race, primary and secondary languages, English proficiency, gender identity, and sexual orientation.

Idiosyncrasies or Limitations: Note that while the dataset contains the total number of individuals who have identified their ancestry or languages spoken, because such data is collected anonymously, there may be instances of a single individual completing multiple voluntary surveys. Additionally, the survey being both voluntary and anonymous has advantages as well as disadvantages: it increases the likelihood of full and honest answers, but since it is not connected to the individual case, it does not directly inform delivery of services to the applicant. The paper and online versions of the survey ask the same questions, but free-form text is handled differently. Free-form text fields are expected to be entered in English, although the form is available in several languages. Surveys are presented in 11 languages.

Paper Surveys
1. Are optional
2. Survey taker is expected to specify the agency that provides the service
3. Survey taker can skip or elect not to answer questions
4. Invalid/unreadable data may be entered for the survey date, or the date may be skipped
5. OCRing of free-form text fields may fail
6. Analytical value of free-form text answers is unclear

Online Survey
1. Are optional
2. Agency is defaulted based on the URL
3. Some questions must be answered
4. Date of survey is automated
https://www.icpsr.umich.edu/web/ICPSR/studies/29646/terms
This data collection is comprised of responses from the March and April installments of the 2008 Current Population Survey (CPS). Both the March and April surveys used two sets of questions, the basic CPS and a separate supplement for each month.

The CPS, administered monthly, is a labor force survey providing current estimates of the economic status and activities of the population of the United States. Specifically, the CPS provides estimates of total employment (both farm and nonfarm), nonfarm self-employed persons, domestics, and unpaid helpers in nonfarm family enterprises, wage and salaried employees, and estimates of total unemployment.

In addition to the basic CPS questions, respondents were asked questions from the March supplement, known as the Annual Social and Economic (ASEC) supplement. The ASEC provides supplemental data on work experience, income, noncash benefits, and migration. Comprehensive work experience information was given on the employment status, occupation, and industry of persons 15 years old and older. Additional data for persons 15 years old and older are available concerning weeks worked and hours per week worked, reason not working full time, total income and income components, and place of residence on March 1, 2007. The March supplement also contains data covering nine noncash income sources: food stamps, school lunch program, employer-provided group health insurance plan, employer-provided pension plan, personal health insurance, Medicaid, Medicare, CHAMPUS or military health care, and energy assistance. Questions covering training and assistance received under welfare reform programs, such as job readiness training, child care services, or job skill training, were also asked in the March supplement.

The April supplement, sponsored by the Department of Health and Human Services, queried respondents on the economic situation of persons and families for the previous year. Moreover, all household members 15 years of age and older who are a biological parent of children in the household with an absent parent were asked detailed questions about child support and alimony. Information regarding child support was collected to determine the size and distribution of the population with children affected by divorce or separation, or other relationship status change. Moreover, the data were collected to better understand the characteristics of persons requiring child support, and to help develop and maintain programs designed to assist in obtaining child support. These data highlight alimony and child support arrangements made at the time of separation or divorce, amount of payments actually received, and value and type of any property settlement.

The April supplement data were matched to March supplement data for households that were in the sample in both March and April 2008. In March 2008, there were 4,522 household members eligible, of which 1,431 required imputation of child support data. When matching the March 2008 and April 2008 data sets, there were 170 eligible people on the March file that did not match to people on the April file. Child support data for these 170 people were imputed. The remaining 1,261 imputed cases were due to nonresponse to the child support questions.

Demographic variables include age, sex, race, Hispanic origin, marital status, veteran status, educational attainment, occupation, and income. Data on employment and income refer to the preceding year, although other demographic data refer to the time at which the survey was administered.
Attribution 4.0 (CC BY 4.0) https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
This file contains the demographic questions.
Attribution 4.0 (CC BY 4.0) https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Data is from responses to demographic questions in the questionnaire on randomization. Older participants (66 years ±16 vs. 61 years ±16, p = 0.02) and Maori (66% vs. 29%, p < 0.001) were less likely to complete the questionnaire; however, there were no differences between randomized groups. The total completion rate was higher for the simplified ICF + booklet (75%) compared to the standard ICFs (64%, p = 0.05) and the short ICF + booklet (62%, p = 0.04).
Includes questions pertaining to:
- race & ethnicity
- gender
- preferred pronouns
- sexual orientation
- age
- tribal affiliation
- disability
- income
- household
- language
- location
- education
- housing status
- transportation
- employment status
Attribution 4.0 (CC BY 4.0) https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Open Science in (Higher) Education – data of the February 2017 survey
This data set contains:
Survey structure
The survey includes 24 questions and its structure can be separated into five major themes: material used in courses (5), OER awareness, usage and development (6), collaborative tools used in courses (2), assessment and participation options (5), and demographics (4). The last two questions are an open text question about general issues on the topics and singular open education experiences, and a request to forward the respondent's e-mail address for further questions. The online survey was created with Limesurvey[1]. Several questions include filters, i.e. these questions were only shown if a participant chose a specific answer beforehand ([n/a] in the Excel file, [.] in SPSS).
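As a hedged illustration of how those filter placeholders appear when working with the published file (the file name below is an assumption, not the actual export name):

```r
# Minimal sketch, assuming an exported file named "survey_feb2017.csv"; the
# "[n/a]" placeholder marks questions hidden by a filter and is read as missing.
responses <- read.csv("survey_feb2017.csv",
                      na.strings = c("[n/a]", ""),
                      stringsAsFactors = FALSE)

# Number of non-missing answers per question, to spot filter-dependent items.
colSums(!is.na(responses))
```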
Demographic questions
Demographic questions asked about the current position, the discipline, birth year, and gender. The classification of research disciplines was adapted to general disciplines at German higher education institutions. As we wanted a broad classification, we summarised several disciplines and came up with the following list, including the option "other" for respondents who did not feel comfortable with the proposed classification:
The current job position classification was also chosen according to common positions in Germany, including positions with teaching responsibility at higher education institutions. Here, we also included the option "other" for respondents who did not feel comfortable with the proposed classification:
We chose a free-text (numerical) field for asking about a respondent's year of birth because we did not want to pre-classify respondents into age intervals. This leaves us options for different analyses of the answers and possible correlations with the respondents' age. A question about the country was left out because the survey was designed for academics in Germany.
Remark on OER question
Data from earlier surveys revealed that academics are often confused about the proper definition of OER[2]. Some seem to understand OER as free resources, or only refer to open source software (Allen & Seaman, 2016, p. 11). Allen and Seaman (2016) decided to give a broad explanation of OER, avoiding details so as not to tempt participants to claim awareness. Thus, there is a danger of introducing a bias when giving an explanation. We decided not to give an explanation, but to keep this question simple. We assume that someone either knows about OER or not. If they had not heard of the term before, they probably do not use OER (at least not consciously) or create them.
Data collection
The target group of the survey was academics at German institutions of higher education, mainly universities and universities of applied sciences. To reach them, we sent the survey to various internal and external institutional mailing lists and via personal contacts. Included lists were discipline-based lists, lists from higher education and higher education didactics communities, as well as lists from open science and OER communities. Additionally, personal e-mails were sent to presidents and contact persons from those communities, and Twitter was used to spread the survey.
The survey was online from February 6th to March 3rd, 2017; e-mails were mainly sent at the beginning and around the middle of this period.
Data cleaning
We received 360 responses, of which Limesurvey counted 208 as complete and 152 as incomplete. Two responses were marked as incomplete but turned out, after checking, to be complete, and we added them to the complete responses. Thus, this data set includes 210 complete responses. Of the remaining 150 incomplete responses, 58 respondents did not answer the first question and 40 respondents discontinued after the first question. The data show a steady decline in answers across the survey; we did not detect any particular survey question with a high dropout rate. We deleted incomplete responses, and they are not included in this data set.
For data privacy reasons, we deleted seven variables automatically assigned by Limesurvey: submitdate, lastpage, startlanguage, startdate, datestamp, ipaddr, refurl. We also deleted answers to question No. 24 (e-mail address).
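A hedged sketch of these cleaning steps (the raw export file name, the completeness flag column, and the column name for question 24 are assumptions; the dropped variable names are those listed above):

```r
# Sketch of the cleaning steps described above; "complete" and "q24" are
# assumed column names, not documented names from the raw export.
raw <- read.csv("limesurvey_export_raw.csv", stringsAsFactors = FALSE)

# Keep only complete responses (210 after reclassifying the two checked cases).
cleaned <- raw[raw$complete == "Y", ]

# Drop the automatically assigned Limesurvey variables and the e-mail question.
drop_vars <- c("submitdate", "lastpage", "startlanguage", "startdate",
               "datestamp", "ipaddr", "refurl", "q24")
cleaned <- cleaned[, !(names(cleaned) %in% drop_vars)]
```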
References
Allen, E., & Seaman, J. (2016). Opening the Textbook: Educational Resources in U.S. Higher Education, 2015-16.
First results of the survey are presented in the poster:
Heck, Tamara, Blümel, Ina, Heller, Lambert, Mazarakis, Athanasios, Peters, Isabella, Scherp, Ansgar, & Weisel, Luzian. (2017). Survey: Open Science in Higher Education. Zenodo. http://doi.org/10.5281/zenodo.400561
Contact:
Open Science in (Higher) Education working group, see http://www.leibniz-science20.de/forschung/projekte/laufende-projekte/open-science-in-higher-education/.
[1] https://www.limesurvey.org
[2] The survey question about the awareness of OER gave a broad explanation, avoiding details to not tempt the participant to claim “aware”.
The primary objective of the 2012 Indonesia Demographic and Health Survey (IDHS) is to provide policymakers and program managers with national- and provincial-level data on representative samples of all women age 15-49 and currently-married men age 15-54.
The 2012 IDHS was specifically designed to meet the following objectives:
• Provide data on fertility, family planning, maternal and child health, adult mortality (including maternal mortality), and awareness of AIDS/STIs to program managers, policymakers, and researchers to help them evaluate and improve existing programs;
• Measure trends in fertility and contraceptive prevalence rates, and analyze factors that affect such changes, such as marital status and patterns, residence, education, breastfeeding habits, and knowledge, use, and availability of contraception;
• Evaluate the achievement of goals previously set by national health programs, with special focus on maternal and child health;
• Assess married men's knowledge of utilization of health services for their family's health, as well as participation in the health care of their families;
• Participate in creating an international database that allows cross-country comparisons that can be used by program managers, policymakers, and researchers in the areas of family planning, fertility, and health in general.
National coverage
Sample survey data [ssd]
Indonesia is divided into 33 provinces. Each province is subdivided into districts (regencies in mostly rural areas and municipalities in urban areas). Districts are subdivided into subdistricts, and each subdistrict is divided into villages. Each village is classified in its entirety as urban or rural.
The 2012 IDHS sample is aimed at providing reliable estimates of key characteristics for women age 15-49 and currently-married men age 15-54 in Indonesia as a whole, in urban and rural areas, and in each of the 33 provinces included in the survey. To achieve this objective, a total of 1,840 census blocks (CBs), 874 in urban areas and 966 in rural areas, were selected from the list of CBs in the selected primary sampling units formed during the 2010 population census.
Because the sample was designed to provide reliable indicators for each province, the number of CBs in each province was not allocated in proportion to the population of the province or its urban-rural classification. Therefore, a final weighting adjustment procedure was done to obtain estimates for all domains. A minimum of 43 CBs per province was imposed in the 2012 IDHS design.
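As a hedged illustration of such a weighting adjustment (a simplified sketch that ignores the within-province PPS selection and later sampling stages; the numbers and column names are invented, not the actual 2012 IDHS weights):

```r
# Hedged sketch: if province p had N_p census blocks on the frame and n_p
# selected blocks, a block-level design weight proportional to N_p / n_p
# compensates for the non-proportional allocation across provinces.
blocks <- data.frame(
  province   = c("A", "A", "B"),
  frame_cbs  = c(5000, 5000, 800),   # N_p: blocks on the frame (invented)
  sample_cbs = c(60, 60, 43)         # n_p: blocks selected (>= 43 per province)
)
blocks$design_weight <- blocks$frame_cbs / blocks$sample_cbs

# Normalise so the weights average to 1 across the sample.
blocks$design_weight <- blocks$design_weight / mean(blocks$design_weight)
blocks
```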
Refer to Appendix B in the final report for details of sample design and implementation.
Face-to-face [f2f]
The 2012 IDHS used four questionnaires: the Household Questionnaire, the Woman’s Questionnaire, the Currently Married Man’s Questionnaire, and the Never-Married Man’s Questionnaire. Because of the change in survey coverage from ever-married women age 15-49 in the 2007 IDHS to all women age 15-49 in the 2012 IDHS, the Woman’s Questionnaire now has questions for never-married women age 15-24. These questions were part of the 2007 Indonesia Young Adult Reproductive Survey questionnaire.
The Household and Woman’s Questionnaires are largely based on standard DHS phase VI questionnaires (March 2011 version). The model questionnaires were adapted for use in Indonesia. Not all questions in the DHS model were adopted in the IDHS. In addition, the response categories were modified to reflect the local situation.
The Household Questionnaire was used to list all the usual members and visitors who spent the previous night in the selected households. Basic information collected on each person listed includes age, sex, education, marital status, and relationship to the head of the household. Information on characteristics of the housing unit, such as the source of drinking water, type of toilet facilities, construction materials used for the floor, roof, and outer walls of the house, and ownership of various durable goods, was also recorded in the Household Questionnaire. These items reflect the household's socioeconomic status and are used to calculate the household wealth index. The main purpose of the Household Questionnaire was to identify women and men who were eligible for an individual interview.
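In the DHS program the wealth index is typically constructed with a principal components analysis of such asset indicators. A hedged sketch of that idea, with invented data rather than the 2012 IDHS variables or the official calculation:

```r
# Hedged sketch of a PCA-based wealth index; the binary asset indicators and
# their prevalences are invented for illustration.
set.seed(1)
assets <- data.frame(
  piped_water    = rbinom(200, 1, 0.6),
  flush_toilet   = rbinom(200, 1, 0.5),
  finished_floor = rbinom(200, 1, 0.4),
  owns_tv        = rbinom(200, 1, 0.7)
)

# The first principal component of the standardised indicators is the score.
pca <- prcomp(assets, center = TRUE, scale. = TRUE)
wealth_score <- pca$x[, 1]

# Rank-based quintiles of the score. Note the sign of a principal component is
# arbitrary, so the direction should be checked against the asset indicators.
wealth_quintile <- ceiling(5 * rank(wealth_score, ties.method = "first") /
                             length(wealth_score))
table(wealth_quintile)
```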
The Woman's Questionnaire was used to collect information from all women age 15-49. These women were asked questions on the following topics:
• Background characteristics (marital status, education, media exposure, etc.)
• Reproductive history and fertility preferences
• Knowledge and use of family planning methods
• Antenatal, delivery, and postnatal care
• Breastfeeding and infant and young children feeding practices
• Childhood mortality
• Vaccinations and childhood illnesses
• Marriage and sexual activity
• Fertility preferences
• Woman's work and husband's background characteristics
• Awareness and behavior regarding HIV-AIDS and other sexually transmitted infections (STIs)
• Sibling mortality, including maternal mortality
• Other health issues
Questions asked to never-married women age 15-24 addressed the following:
• Additional background characteristics
• Knowledge of the human reproduction system
• Attitudes toward marriage and children
• Role of family, school, the community, and exposure to mass media
• Use of tobacco, alcohol, and drugs
• Dating and sexual activity
The Man's Questionnaire was administered to all currently married men age 15-54 living in every third household in the 2012 IDHS sample. This questionnaire includes much of the same information included in the Woman's Questionnaire, but is shorter because it did not contain questions on reproductive history or maternal and child health. Instead, men were asked about their knowledge of and participation in health-care-seeking practices for their children.
The questionnaire for never-married men age 15-24 includes the same questions asked to never-married women age 15-24.
All completed questionnaires, along with the control forms, were returned to the BPS central office in Jakarta for data processing. The questionnaires were logged and edited, and all open-ended questions were coded. Responses were entered into the computer twice for verification, and they were corrected for computer-identified errors. Data processing activities were carried out by a team of 58 data entry operators, 42 data editors, 14 secondary data editors, and 14 data entry supervisors. A computer package program called the Census and Survey Processing System (CSPro), which was specifically designed to process DHS-type survey data, was used in the processing of the 2012 IDHS.
The response rates for both the household and individual interviews in the 2012 IDHS are high. A total of 46,024 households were selected in the sample, of which 44,302 were occupied. Of these households, 43,852 were successfully interviewed, yielding a household response rate of 99 percent.
Refer to Table 1.2 in the final report for more detailed summarized results of the 2012 IDHS fieldwork for both the household and individual interviews, by urban-rural residence.
The estimates from a sample survey are affected by two types of errors: (1) nonsampling errors, and (2) sampling errors. Nonsampling errors are the results of mistakes made in implementing data collection and data processing, such as failure to locate and interview the correct household, misunderstanding of the questions on the part of either the interviewer or the respondent, and data entry errors. Although numerous efforts were made during the implementation of the 2012 Indonesia Demographic and Health Survey (2012 IDHS) to minimize this type of error, nonsampling errors are impossible to avoid and difficult to evaluate statistically.
Sampling errors, on the other hand, can be evaluated statistically. The sample of respondents selected in the 2012 IDHS is only one of many samples that could have been selected from the same population, using the same design and identical size. Each of these samples would yield results that differ somewhat from the results of the actual sample selected. Sampling error is a measure of the variability between all possible samples. Although the degree of variability is not known exactly, it can be estimated from the survey results.
A sampling error is usually measured in terms of the standard error for a particular statistic (mean, percentage, etc.), which is the square root of the variance. The standard error can be used to calculate confidence intervals within which the true value for the population can reasonably be assumed to fall. For example, for any given statistic calculated from a sample survey, the value of that statistic will fall within a range of plus or minus two times the standard error of that statistic in 95 percent of all possible samples of identical size and design.
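A worked illustration of that interval, using hypothetical numbers rather than actual IDHS estimates:

```r
# Hypothetical estimate and standard error (not actual 2012 IDHS results),
# illustrating the "plus or minus two standard errors" interval above.
estimate <- 0.62
se       <- 0.012

c(lower = estimate - 2 * se,
  point = estimate,
  upper = estimate + 2 * se)
#>  lower  point  upper
#>  0.596  0.620  0.644
```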
If the sample of respondents had been selected as a simple random sample, it would have been possible to use straightforward formulas for calculating sampling errors. However, the 2012 IDHS sample is the result of a multi-stage stratified design and, consequently, it was necessary to use more complex formulas. The computer software used to calculate sampling errors for the 2012 IDHS is a SAS program. This program used the Taylor linearization method of variance estimation.
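The SAS program itself is not reproduced here; as a hedged illustration, the same Taylor linearization approach is available in R's survey package. The tiny data frame and variable names below are invented, not the IDHS recode files:

```r
library(survey)

# Invented illustrative data: strata, first-stage census blocks (PSUs),
# sampling weights, and an indicator of interest.
set.seed(7)
women <- data.frame(
  strata        = rep(1:5, each = 40),
  psu           = rep(1:20, each = 10),
  wt            = runif(200, 0.5, 2),
  modern_method = rbinom(200, 1, 0.55)
)

# Declare the multi-stage stratified design at the first-stage level.
idhs_design <- svydesign(ids = ~psu, strata = ~strata, weights = ~wt,
                         data = women, nest = TRUE)

# Estimate and Taylor-linearized standard error for a proportion.
svymean(~modern_method, idhs_design)
```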
https://www.gesis.org/en/institute/data-usage-terms
ALLBUS (GGSS - the German General Social Survey) is a biennial trend survey based on random samples of the German population. Established in 1980, its mission is to monitor attitudes, behavior, and social change in Germany. Each ALLBUS cross-sectional survey consists of one or two main question modules covering changing topics, a range of supplementary questions, and a core module providing detailed demographic information. Data on the interview and the interviewers are provided as well. Key topics generally follow a 10-year replication cycle, while many individual indicators and item batteries are replicated at shorter intervals. The present data set contains socio-demographic variables from the ALLBUS 2021, which were harmonized to the standards developed as part of the KonsortSWD sub-project "Harmonized Variables" (Schneider et al., 2023). While there are already established recommendations for the formulation of socio-demographic questionnaire items (e.g. the "Demographic Standards" by Hoffmeyer-Zlotnik et al., 2016), there were no such standards at the variable level. The KonsortSWD project closes this gap and establishes 32 standard variables for 19 socio-demographic characteristics contained in this dataset.
The Thai Demographic and Health Survey (TDHS) was a nationally representative sample survey conducted from March through June 1988 to collect data on fertility, family planning, and child and maternal health. A total of 9,045 households and 6,775 ever-married women aged 15 to 49 were interviewed. The TDHS was carried out by the Institute of Population Studies (IPS) of Chulalongkorn University with financial support from USAID through the Institute for Resource Development (IRD) at Westinghouse. The Institute of Population Studies was responsible for the overall implementation of the survey, including sample design, preparation of field work, data collection and processing, and analysis of data. IPS made its personnel and office facilities available to the project throughout its duration and served as the headquarters for the survey.
The Thai Demographic and Health Survey (TDHS) was undertaken for the main purpose of providing data concerning fertility, family planning and maternal and child health to program managers and policy makers to facilitate their evaluation and planning of programs, and to population and health researchers to assist in their efforts to document and analyze the demographic and health situation. It is intended to provide information both on topics for which comparable data is not available from previous nationally representative surveys as well as to update trends with respect to a number of indicators available from previous surveys, in particular the Longitudinal Study of Social Economic and Demographic Change in 1969-73, the Survey of Fertility in Thailand in 1975, the National Survey of Family Planning Practices, Fertility and Mortality in 1979, and the three Contraceptive Prevalence Surveys in 1978/79, 1981 and 1984.
National
The population covered by the 1987 THADHS is defined as the universe of all ever-married women in the reproductive ages (i.e., women 15-49). This covered women in private households on the basis of a de facto coverage definition. Visitors and usual residents who were in the household the night before the first visit or before any subsequent visit during the few days the interviewing team was in the area were eligible. Excluded were the small number of married women aged under 15 and women not present in private households.
Sample survey data
SAMPLE SIZE AND ALLOCATION
The objective of the survey was to provide reliable estimates for major domains of the country. This consisted of two overlapping sets of reporting domains: (a) Five regions of the country namely Bangkok, north, northeast, central region (excluding Bangkok), and south; (b) Bangkok versus all provincial urban and all rural areas of the country. These requirements could be met by defining six non-overlapping sampling domains (Bangkok, provincial urban, and rural areas of each of the remaining 4 regions), and allocating approximately equal sample sizes to them. On the basis of past experience, available budget and overall reporting requirement, the target sample size was fixed at 7,000 interviews of ever-married women aged 15-49, expected to be found in around 9,000 households. Table A.I shows the actual number of households as well as eligible women selected and interviewed, by sampling domain (see Table i.I for reporting domains).
THE FRAME AND SAMPLE SELECTION
The frame for selecting the sample was provided by the National Statistical Office of Thailand for urban areas and by the Ministry of the Interior for rural areas. It consisted of information on population size of various levels of administrative and census units, down to blocks in urban areas and villages in rural areas. The frame also included adequate maps and descriptions to identify these units. The extent to which the data were up-to-date as well as the quality of the data varied somewhat in different parts of the frame. Basically, the multi-stage stratified sampling design involved the following procedure. A specified number of sample areas were selected systematically from geographically/administratively ordered lists with probabilities proportional to the best available measure of size (PPS). Within selected areas (blocks or villages) new lists of households were prepared and systematic samples of households were selected. In principle, the sampling interval for the selection of households from lists was determined so as to yield a self-weighting sample of households within each domain. However, in the absence of good measures of population size for all areas, these sampling intervals often required adjustments in the interest of controlling the size of the resulting sample. Variations in selection probabilities introduced due to such adjustment, where required, were compensated for by appropriate weighting of sample cases at the tabulation stage.
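A hedged sketch of the systematic PPS selection step described above, with an invented frame (the real frame came from the National Statistical Office and the Ministry of the Interior):

```r
# Invented frame of areas with a measure of size; systematic PPS selection of
# n_select areas from the geographically ordered list.
set.seed(42)
frame <- data.frame(area = paste0("village_", 1:50),
                    size = sample(200:2000, 50))

n_select <- 10
cum_size <- cumsum(frame$size)
interval <- sum(frame$size) / n_select   # sampling interval
start    <- runif(1, 0, interval)        # random start
targets  <- start + interval * (0:(n_select - 1))

# An area is selected whenever a target point falls within its cumulated size.
selected <- frame[findInterval(targets, cum_size) + 1, ]
selected
```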
SAMPLE OUTCOME
The final sample of households was selected from lists prepared in the sample areas. The time interval between household listing and enumeration was generally very short, except to some extent in Bangkok where the listing itself took more time. In principle, the units of listing were the same as the ultimate units of sampling, namely households. However in a small proportion of cases, the former differed from the latter in several respects, identified at the stage of final enumeration: a) Some units listed actually contained more than one household each b) Some units were "blanks", that is, were demolished or not found to contain any eligible households at the time of enumeration. c) Some units were doubtful cases in as much as the household was reported as "not found" by the interviewer, but may in fact have existed.
Face-to-face
The DHS core questionnaires (Household, Eligible Women Respondent, and Community) were translated into Thai. A number of modifications were made largely to adapt them for use with an ever-married woman sample and to add a number of questions in areas that are of special interest to the Thai investigators but which were not covered in the standard core. Examples of such modifications included adding marital status and educational attainment to the household schedule, elaboration on questions in the individual questionnaire on educational attainment to take account of changes in the educational system during recent years, elaboration on questions on postnuptial residence, and adaptation of the questionnaire to take into account that only ever-married women are being interviewed rather than all women. More generally, attention was given to the wording of questions in Thai to ensure that the intent of the original English-language version was preserved.
a) Household questionnaire
The household questionnaire was used to list every member of the household who usually lives in the household, as well as visitors who slept in the household the night before the interviewer's visit. Information contained in the household questionnaire includes age, sex, marital status, and education for each member (the last two items were asked only of members aged 13 and over). The head of the household or the spouse of the head of the household was the preferred respondent for the household questionnaire. However, if neither was available for interview, any adult member of the household was accepted as the respondent. Information from the household questionnaire was used to identify eligible women for the individual interview. To be eligible, a respondent had to be an ever-married woman aged 15-49 years old who had slept in the household 'the previous night'.
Prior evidence has indicated that when asked about current age, Thais are as likely to report age at next birthday as age at last birthday (the usual demographic definition of age). Since the birth date of each household member was not asked in the household questionnaire, it was not possible to calculate age at last birthday from the birthdate. Therefore a special procedure was followed to ensure that eligible women just under the upper boundary for eligible ages (i.e., 49 years old) were not mistakenly excluded from the eligible woman sample because of an overstated age. Ever-married women whose reported age was between 50 and 52 years old and who slept in the household the night before were also selected for the individual interview. If, based on the birthdate of the woman, it was discovered that these women (or any others being interviewed) were not actually within the eligible age range of 15-49, the interview was terminated and the case disqualified. This attempt recovered 69 eligible women who otherwise would have been missed because their reported age was 50 years old or over.
b) Individual questionnaire
The questionnaire administered to eligible women was based on the DHS Model A Questionnaire for high contraceptive prevalence countries. The individual questionnaire has 8 sections:
- Respondent's background
- Reproduction
- Contraception
- Health and breastfeeding
- Marriage
- Fertility preference
- Husband's background and woman's work
- Heights and weights of children and mothers
The questionnaire was modified to suit the Thai context. As noted above, several questions were added to the standard DHS core questionnaire not only to meet the interest of IPS researchers but also because of their relevance to the current demographic situation in Thailand. The supplemental questions are marked with an asterisk in the individual questionnaire. Questions concerning the following items were added in the individual questionnaire: - Did the respondent ever
Attribution 4.0 (CC BY 4.0) https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Sexual, romantic, and related orientations across all institutions, based on the queered survey (n = 1932).
The Bangladesh Demographic and Health Survey (BDHS) is the first study of this kind conducted in Bangladesh. It provides rapid feedback on key demographic and programmatic indicators to monitor the strengths and weaknesses of the national family planning/MCH program. The wealth of information collected through the 1993-94 BDHS will be of immense value to policymakers and program managers in strengthening future program policies and strategies.
The BDHS is intended to serve as a source of population and health data for policymakers and the research community. In general, the objectives of the BDHS are to:
- assess the overall demographic situation in Bangladesh,
- assist in the evaluation of the population and health programs in Bangladesh, and
- advance survey methodology.
More specifically, the BDHS was designed to:
- provide data on the family planning and fertility behavior of the Bangladesh population to evaluate the national family planning programs,
- measure changes in fertility and contraceptive prevalence and, at the same time, study the factors which affect these changes, such as marriage patterns, urban/rural residence, availability of contraception, breastfeeding patterns, and other socioeconomic factors, and
- examine the basic indicators of maternal and child health in Bangladesh.
National
Sample survey data
Bangladesh is divided into five administrative divisions, 64 districts (zillas), and 489 thanas. In rural areas, thanas are divided into unions and then mauzas, an administrative land unit. Urban areas are divided into wards and then mahallas. The 1993-94 BDHS employed a nationally-representative, two-stage sample. It was selected from the Integrated Multi-Purpose Master Sample (IMPS), newly created by the Bangladesh Bureau of Statistics. The IMPS is based on 1991 census data. Each of the five divisions was stratified into three groups: 1) statistical metropolitan areas (SMAs) 2) municipalities (other urban areas), and 3) rural areas. In rural areas, the primary sampling unit was the mauza, while in urban areas, it was the mahalla. Because the primary sampling units in the IMPS were selected with probability proportional to size from the 1991 census frame, the units for the BDHS were sub-selected from the IMPS with equal probability to make the BDHS selection equivalent to selection with probability proportional to size. A total of 304 primary sampling units were selected for the BDHS (30 in SMAs, 40 in municipalities, and 234 in rural areas), out of the 372 in the IMPS. Fieldwork in three sample points was not possible, so a total of 301 points were covered in the survey.
Since one objective of the BDHS is to provide separate survey estimates for each division as well as for urban and rural areas separately, it was necessary to increase the sampling rate for Barisal Division und for municipalities relative to the other divisions, SMAs, and rural areas. Thus, the BDHS sample is not self-weighting and weighting factors have been applied to the data in this report.
After the selection of the BDHS sample points, field staff were trained by Mitra and Associates and conducted a household listing operation in September and October 1993. A systematic sample of households was then selected from these lists, with an average "take" of 25 households in urban clusters and 37 households in rural clusters. Every second household was identified as selected for the husbands' survey, meaning that, in addition to interviewing all ever-married women age 10-49, interviewers also interviewed the husband of any woman who was successfully interviewed. It was expected that the sample would yield interviews with approximately 10,000 ever-married women age 10-49 and 4,200 of their husbands.
Note: See detailed in APPENDIX A of the survey final report.
Data collected for women 10-49, indicators calculated for women 15-49. A total of 304 primary sampling units were selected, but fieldwork in 3 sample points was not possible.
Face-to-face
Four types of questionnaires were used for the BDHS: a Household Questionnaire, a Women's Questionnaire, a Husbands' Questionnaire, and a Service Availability Questionnaire. The contents of these questionnaires were based on the DHS Model A Questionnaire, which is designed for use in countries with relatively high levels of contraceptive use. Additions and modifications to the model questionnaires were made during a series of meetings with representatives of various organizations, including the Asia Foundation, the Bangladesh Bureau of Statistics, the Cambridge Consulting Corporation, the Family Planning Association of Bangladesh, GTZ, the International Centre for Diarrhoeal Disease Research (ICDDR,B), Pathfinder International, Population Communications Services, the Population Council, the Social Marketing Company, UNFPA, UNICEF, University Research Corporation/Bangladesh, and the World Bank. The questionnaires were developed in English and then translated into and printed in Bangla.
The Household Questionnaire was used to list all the usual members and visitors of selected households. Some basic information was collected on the characteristics of each person listed, including his/her age, sex, education, and relationship to the head of the household. The main purpose of the Household Questionnaire was to identify women and men who were eligible for individual interview. In addition, information was collected about the dwelling itself, such as the source of water, type of toilet facilities, materials used to construct the house, and ownership of various consumer goods.
The Women's Questionnaire was used to collect information from ever-married women age 10-49. These women were asked questions on the following topics:
- Background characteristics (age, education, religion, etc.)
- Reproductive history
- Knowledge and use of family planning methods
- Antenatal and delivery care
- Breastfeeding and weaning practices
- Vaccinations and health of children under age three
- Marriage
- Fertility preferences
- Husband's background and respondent's work
The Husbands' Questionnaire was used to interview the husbands of a subsample of women who were interviewed. The questionnaire included many of the same questions as the Women's Questionnaire, except that it omitted the detailed birth history, as well as the sections on maternal care, breastfeeding and child health.
The Service Availability Questionnaire was used to collect information on the family planning and health services available in and near the sampled areas. It consisted of a set of three questionnaires: one to collect data on characteristics of the community, one for interviewing family welfare visitors, and one for interviewing family planning field workers, whether government or non-government supported. One set of service availability questionnaires was to be completed in each cluster (sample point).
All questionnaires for the BDHS were returned to Dhaka for data processing at Mitra and Associates. The processing operation consisted of office editing, coding of open-ended questions, data entry, and editing of inconsistencies found by the computer programs. One senior staff member, one data processing supervisor, one questionnaire administrator, two office editors, and five data entry operators were responsible for the data processing operation. The data were processed on five microcomputers. The DHS data entry and editing programs were written in ISSA (Integrated System for Survey Analysis). Data processing commenced in early February and was completed by late April 1994.
A total of 9,681 households were selected for the sample, of which 9,174 were successfully interviewed. The shortfall is primarily due to dwellings that were vacant, or in which the inhabitants had left for an extended period at the time they were visited by the interviewing teams. Of the 9,255 households that were occupied, 99 percent were successfully interviewed. In these households, 9,900 women were identified as eligible for the individual interview and interviews were completed for 9,640 or 97 percent of these. In one-half of the households that were selected for inclusion in the husbands' survey, 3,874 eligible husbands were identified, of which 3,284 or 85 percent were interviewed.
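The quoted rates follow directly from those counts, for example:

```r
# Response rates implied by the counts reported above.
round(c(households = 9174 / 9255,   # interviewed / occupied households
        women      = 9640 / 9900,   # completed / eligible women
        husbands   = 3284 / 3874),  # interviewed / eligible husbands
      2)
#> households      women   husbands
#>       0.99       0.97       0.85
```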
The principal reason for non-response among eligible women and men was failure to find them at home despite repeated visits to the household. The refusal rate was very low (less than one-tenth of one percent among women and husbands). Since the main reason for interviewing husbands was to match the information with that from their wives, survey procedures called for interviewers not to interview husbands of women who were not interviewed. Such cases account for about one-third of the non-response among husbands. Where husbands and wives were both interviewed, they were interviewed simultaneously but separately.
Note: See summarized response rates by residence (urban/rural) in Table 1.1 of the survey final report.
The estimates from a sample survey are affected by two types of errors: non-sampling errors and sampling errors. Non-sampling errors are the results of mistakes made in implementing data collection and data processing, such as failure to locate and interview the correct household, misunderstanding of the questions on the part of either the interviewer or the respondent, and data entry errors.
The 1998 Ghana Demographic and Health Survey (GDHS) is the latest in a series of national-level population and health surveys conducted in Ghana and it is part of the worldwide MEASURE DHS+ Project, designed to collect data on fertility, family planning, and maternal and child health.
The primary objective of the 1998 GDHS is to provide current and reliable data on fertility and family planning behaviour, child mortality, children’s nutritional status, and the utilisation of maternal and child health services in Ghana. Additional data on knowledge of HIV/AIDS are also provided. This information is essential for informed policy decisions, planning and monitoring and evaluation of programmes at both the national and local government levels.
The long-term objectives of the survey include strengthening the technical capacity of the Ghana Statistical Service (GSS) to plan, conduct, process, and analyse the results of complex national sample surveys. Moreover, the 1998 GDHS provides comparable data for long-term trend analyses within Ghana, since it is the third in a series of demographic and health surveys implemented by the same organisation, using similar data collection procedures. The GDHS also contributes to the ever-growing international database on demographic and health-related variables.
National
Sample survey data
The major focus of the 1998 GDHS was to provide updated estimates of important population and health indicators including fertility and mortality rates for the country as a whole and for urban and rural areas separately. In addition, the sample was designed to provide estimates of key variables for the ten regions in the country.
The list of Enumeration Areas (EAs) with population and household information from the 1984 Population Census was used as the sampling frame for the survey. The 1998 GDHS is based on a two-stage stratified nationally representative sample of households. At the first stage of sampling, 400 EAs were selected using systematic sampling with probability proportional to size (PPS-Method). The selected EAs comprised 138 in the urban areas and 262 in the rural areas. A complete household listing operation was then carried out in all the selected EAs to provide a sampling frame for the second stage selection of households. At the second stage of sampling, a systematic sample of 15 households per EA was selected in all regions, except in the Northern, Upper West and Upper East Regions. In order to obtain adequate numbers of households to provide reliable estimates of key demographic and health variables in these three regions, the number of households in each selected EA in the Northern, Upper West and Upper East regions was increased to 20. The sample was weighted to adjust for over sampling in the three northern regions (Northern, Upper East and Upper West), in relation to the other regions. Sample weights were used to compensate for the unequal probability of selection between geographically defined strata.
The survey was designed to obtain completed interviews of 4,500 women age 15-49. In addition, all males age 15-59 in every third selected household were interviewed, with a target of 1,500 men. To allow for non-response, a total of 6,375 households nationwide were selected.
Note: See detailed description of sample design in APPENDIX A of the survey report.
Face-to-face
Three types of questionnaires were used in the GDHS: the Household Questionnaire, the Women’s Questionnaire, and the Men’s Questionnaire. These questionnaires were based on model survey instruments developed for the international MEASURE DHS+ programme and were designed to provide information needed by health and family planning programme managers and policy makers. The questionnaires were adapted to the situation in Ghana and a number of questions pertaining to on-going health and family planning programmes were added. These questionnaires were developed in English and translated into five major local languages (Akan, Ga, Ewe, Hausa, and Dagbani).
The Household Questionnaire was used to enumerate all usual members and visitors in a selected household and to collect information on the socio-economic status of the household. The first part of the Household Questionnaire collected information on the relationship to the household head, residence, sex, age, marital status, and education of each usual resident or visitor. This information was used to identify women and men who were eligible for the individual interview. For this purpose, all women age 15-49, and all men age 15-59 in every third household, whether usual residents of a selected household or visitors who slept in a selected household the night before the interview, were deemed eligible and interviewed. The Household Questionnaire also provides basic demographic data for Ghanaian households. The second part of the Household Questionnaire contained questions on the dwelling unit, such as the number of rooms, the flooring material, the source of water and the type of toilet facilities, and on the ownership of a variety of consumer goods.
The Women’s Questionnaire was used to collect information on the following topics: respondent’s background characteristics, reproductive history, contraceptive knowledge and use, antenatal, delivery and postnatal care, infant feeding practices, child immunisation and health, marriage, fertility preferences and attitudes about family planning, husband’s background characteristics, women’s work, knowledge of HIV/AIDS and STDs, as well as anthropometric measurements of children and mothers.
The Men’s Questionnaire collected information on respondent’s background characteristics, reproduction, contraceptive knowledge and use, marriage, fertility preferences and attitudes about family planning, as well as knowledge of HIV/AIDS and STDs.
A total of 6,375 households were selected for the GDHS sample. Of these, 6,055 were occupied. Interviews were completed for 6,003 households, which represent 99 percent of the occupied households. A total of 4,970 eligible women from these households and 1,596 eligible men from every third household were identified for the individual interviews. Interviews were successfully completed for 4,843 women or 97 percent and 1,546 men or 97 percent. The principal reason for nonresponse among individual women and men was the failure of interviewers to find them at home despite repeated callbacks.
Note: See summarized response rates by place of residence in Table 1.1 of the survey report.
The estimates from a sample survey are affected by two types of errors: (1) nonsampling errors, and (2) sampling errors. Nonsampling errors are the results of shortfalls made in implementing data collection and data processing, such as failure to locate and interview the correct household, misunderstanding of the questions on the part of either the interviewer or the respondent, and data entry errors. Although numerous efforts were made during the implementation of the 1998 GDHS to minimize this type of error, nonsampling errors are impossible to avoid and difficult to evaluate statistically.
Sampling errors, on the other hand, can be evaluated statistically. The sample of respondents selected in the 1998 GDHS is only one of many samples that could have been selected from the same population, using the same design and expected size. Each of these samples would yield results that differ somewhat from the results of the actual sample selected. Sampling errors are a measure of the variability between all possible samples. Although the degree of variability is not known exactly, it can be estimated from the survey results.
A sampling error is usually measured in terms of the standard error for a particular statistic (mean, percentage, etc.), which is the square root of the variance. The standard error can be used to calculate confidence intervals within which the true value for the population can reasonably be assumed to fall. For example, for any given statistic calculated from a sample survey, the value of that statistic will fall within a range of plus or minus two times the standard error of that statistic in 95 percent of all possible samples of identical size and design.
If the sample of respondents had been selected as a simple random sample, it would have been possible to use straightforward formulas for calculating sampling errors. However, the 1998 GDHS sample is the result of a two-stage stratified design, and, consequently, it was necessary to use more complex formulae. The computer software used to calculate sampling errors for the 1998 GDHS is the ISSA Sampling Error Module. This module uses the Taylor linearization method of variance estimation for survey estimates that are means or proportions. The Jackknife repeated replication method is used for variance estimation of more complex statistics such as fertility and mortality rates.
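As a hedged illustration of the replication idea, not the ISSA module itself, the same jackknife approach is available in R's survey package; the data and variable names below are invented, not the GDHS recode files:

```r
library(survey)

# Invented illustrative data: strata, enumeration areas (PSUs), weights, and
# the numerator/denominator of a rate-type statistic.
set.seed(3)
women <- data.frame(
  region_urban     = rep(1:10, each = 30),
  ea               = rep(1:60, each = 5),
  wt               = runif(300, 0.5, 2),
  births_last_year = rbinom(300, 1, 0.15),
  exposure_years   = rep(1, 300)
)

gdhs_design <- svydesign(ids = ~ea, strata = ~region_urban, weights = ~wt,
                         data = women, nest = TRUE)

# Stratified jackknife (JKn) replicate weights for complex, ratio-type statistics.
jk_design <- as.svrepdesign(gdhs_design, type = "JKn")
svyratio(~births_last_year, ~exposure_years, jk_design)
```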
Data Quality Tables
- Household age distribution
- Age distribution of eligible and interviewed women
- Age distribution of eligible and interviewed men
- Completeness of reporting
- Births by calendar years
- Reporting of age at death in days
- Reporting of age at death in months
Note: See detailed tables in APPENDIX C of the survey report.
analyze the current population survey (cps) annual social and economic supplement (asec) with r

the annual march cps-asec has been supplying the statistics for the census bureau's report on income, poverty, and health insurance coverage since 1948. wow. the us census bureau and the bureau of labor statistics (bls) tag-team on this one. until the american community survey (acs) hit the scene in the early aughts (2000s), the current population survey had the largest sample size of all the annual general demographic data sets outside of the decennial census - about two hundred thousand respondents. this provides enough sample to conduct state- and a few large metro area-level analyses. your sample size will vanish if you start investigating subgroups by state - consider pooling multiple years. county-level is a no-no.

despite the american community survey's larger size, the cps-asec contains many more variables related to employment, sources of income, and insurance - and can be trended back to harry truman's presidency. aside from questions specifically asked about an annual experience (like income), many of the questions in this march data set should be treated as point-in-time statistics. cps-asec generalizes to the united states non-institutional, non-active-duty military population.

the national bureau of economic research (nber) provides sas, spss, and stata importation scripts to create a rectangular file (rectangular data means only person-level records; household- and family-level information gets attached to each person). to import these files into r, the parse.SAScii function uses nber's sas code to determine how to import the fixed-width file, then RSQLite puts everything into a schnazzy database. you can try reading through the nber march 2012 sas importation code yourself, but it's a bit of a proc freak show.

this new github repository contains three scripts:

2005-2012 asec - download all microdata.R
- download the fixed-width file containing household, family, and person records
- import by separating this file into three tables, then merge 'em together at the person level
- download the fixed-width file containing the person-level replicate weights
- merge the rectangular person-level file with the replicate weights, then store it in a sql database
- create a new variable - one - in the data table

2012 asec - analysis examples.R
- connect to the sql database created by the 'download all microdata' program
- create the complex sample survey object, using the replicate weights
- perform a boatload of analysis examples

replicate census estimates - 2011.R
- connect to the sql database created by the 'download all microdata' program
- create the complex sample survey object, using the replicate weights
- match the sas output shown in the png file below

2011 asec replicate weight sas output.png
- statistic and standard error generated from the replicate-weighted example sas script contained in this census-provided person replicate weights usage instructions document

click here to view these three scripts

for more detail about the current population survey - annual social and economic supplement (cps-asec), visit:
- the census bureau's current population survey page
- the bureau of labor statistics' current population survey page
- the current population survey's wikipedia article

notes: interviews are conducted in march about experiences during the previous year. the file labeled 2012 includes information (income, work experience, health insurance) pertaining to 2011. when you use the current population survey to talk about america, subtract a year from the data file name. as of the 2010 file (the interview focusing on america during 2009), the cps-asec contains exciting new medical out-of-pocket spending variables most useful for supplemental (medical spending-adjusted) poverty research.

confidential to sas, spss, stata, sudaan users: why are you still rubbing two sticks together after we've invented the butane lighter? time to transition to r. :D
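as a rough sketch of what those scripts automate - reading the fixed-width microdata with the nber sas layout, then building a replicate-weighted survey design - here's a minimal r example. the file names are placeholders, and the variable names (marsupwt, the pwwgt replicate weights, ptotval) are assumptions based on the nber extract naming, not something guaranteed by the repository:

```r
# minimal sketch, not the repository's actual scripts. file names are placeholders;
# marsupwt / pwwgt* / ptotval are assumed to follow the nber cps-asec extract naming.
library(SAScii)   # parse.SAScii() / read.SAScii() interpret nber's sas INPUT block
library(survey)   # complex sample survey designs

# read the person-level fixed-width file using the nber sas importation script
asec <- read.SAScii("asec2012.dat", "cpsmar2012.sas")

# (the real scripts also read the household and family records, merge them onto each
#  person, pull in the separate replicate-weight file, and stash everything in a sql
#  database via RSQLite - skipped here for brevity.)

# build the replicate-weight design. cps-asec replicate weights are conventionally
# declared as a fay-type design with rho = 0.5 - treat this as an assumption and
# check the census usage instructions document referenced above.
asec_design <-
    svrepdesign(
        weights = ~marsupwt,         # march supplement person weight
        repweights = "pwwgt[1-9]",   # replicate weight columns (assumed pwwgt1-pwwgt160)
        type = "Fay", rho = 0.5,
        data = asec,
        combined.weights = TRUE
    )

# a design-based estimate and standard error
svymean(~ptotval, asec_design, na.rm = TRUE)   # mean person-level total income
```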
The JPFHS is part of the worldwide Demographic and Health Surveys Program, which is designed to collect data on fertility, family planning, and maternal and child health. The primary objective of the Jordan Population and Family Health Survey (JPFHS) is to provide reliable estimates of demographic parameters such as fertility, mortality, family planning, and fertility preferences, as well as maternal and child health and nutrition, that can be used by program managers and policy makers to evaluate and improve existing programs. In addition, the JPFHS data will be useful to researchers and scholars interested in analyzing demographic trends in Jordan, as well as to those conducting comparative, regional, or cross-national studies.
The content of the 2002 JPFHS was significantly expanded from the 1997 survey to include additional questions on women’s status, reproductive health, and family planning. In addition, all women age 15-49 and children less than five years of age were tested for anemia.
National
Sample survey data
The estimates from a sample survey are affected by two types of errors: 1) nonsampling errors and 2) sampling errors. Nonsampling errors are the result of mistakes made in implementing data collection and data processing, such as failure to locate and interview the correct household, misunderstanding of the questions on the part of either the interviewer or the respondent, and data entry errors. Although numerous efforts were made during the implementation of the 2002 JPFHS to minimize this type of error, nonsampling errors are impossible to avoid and difficult to evaluate statistically.
Sampling errors, on the other hand, can be evaluated statistically. The sample of respondents selected in the 2002 JPFHS is only one of many samples that could have been selected from the same population, using the same design and expected size. Each of these samples would yield results that differ somewhat from the results of the actual sample selected. Sampling errors are a measure of the variability between all possible samples. Although the degree of variability is not known exactly, it can be estimated from the survey results.
A sampling error is usually measured in terms of the standard error for a particular statistic (mean, percentage, etc.), which is the square root of the variance. The standard error can be used to calculate confidence intervals within which the true value for the population can reasonably be assumed to fall. For example, for any given statistic calculated from a sample survey, the value of that statistic will fall within a range of plus or minus two times the standard error of that statistic in 95 percent of all possible samples of identical size and design.
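As a concrete illustration of the plus-or-minus-two-standard-errors rule (with invented numbers, not JPFHS estimates), the approximate 95 percent confidence interval can be computed directly:

```r
# hypothetical estimate and standard error, for illustration only
p  <- 0.562    # an estimated proportion from a survey
se <- 0.009    # its estimated standard error

c(lower = p - 2 * se, estimate = p, upper = p + 2 * se)
#>    lower estimate    upper
#>    0.544    0.562    0.580
```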
If the sample of respondents had been selected as a simple random sample, it would have been possible to use straightforward formulas for calculating sampling errors. However, the 2002 JPFHS sample is the result of a multistage stratified design and, consequently, it was necessary to use more complex formulas. The computer software used to calculate sampling errors for the 2002 JPFHS is the ISSA Sampling Error Module (ISSAS). This module used the Taylor linearization method of variance estimation for survey estimates that are means or proportions. The Jackknife repeated replication method is used for variance estimation of more complex statistics such as fertility and mortality rates.
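The JPFHS sampling errors were produced with the ISSA module, but the same two estimation approaches are available in R's survey package. The sketch below is illustrative only: the data set is synthetic, and the stratum, cluster, weight, and outcome variable names are placeholders rather than JPFHS variables.

```r
library(survey)
set.seed(42)

# a tiny synthetic data set standing in for the women's file (placeholder, not JPFHS data)
women <- data.frame(
  stratum  = rep(1:2, each = 200),
  cluster  = rep(1:20, each = 20),
  wt       = runif(400, 0.5, 2),
  anemic   = rbinom(400, 1, 0.3),
  births   = rpois(400, 1),
  exposure = runif(400, 1, 5)
)

# a stratified, clustered design with sampling weights
des <- svydesign(ids = ~cluster, strata = ~stratum, weights = ~wt,
                 data = women, nest = TRUE)

# Taylor linearization (the svydesign default) for means and proportions
svymean(~anemic, des)

# jackknife repeated replication for more complex statistics, e.g. a rate
# expressed as a ratio estimator
jk <- as.svrepdesign(des)
svyratio(~births, ~exposure, jk)
```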
Note: See detailed description of sample design in APPENDIX B of the survey report.
Face-to-face
The 2002 JPFHS used two questionnaires – namely, the Household Questionnaire and the Individual Questionnaire. Both questionnaires were developed in English and translated into Arabic. The Household Questionnaire was used to list all usual members of the sampled households and to obtain information on each member’s age, sex, educational attainment, relationship to the head of household, and marital status. In addition, questions were included on the socioeconomic characteristics of the household, such as source of water, sanitation facilities, and the availability of durable goods. The Household Questionnaire was also used to identify women who are eligible for the individual interview: ever-married women age 15-49. In addition, all women age 15-49 and children under five years living in the household were measured to determine nutritional status and tested for anemia.
The household and women’s questionnaires were based on the DHS Model “A” Questionnaire, which is designed for use in countries with high contraceptive prevalence. Additions and modifications to the model questionnaire were made in order to provide detailed information specific to Jordan, using experience gained from the 1990 and 1997 Jordan Population and Family Health Surveys. For each ever-married woman age 15 to 49, information on the following topics was collected:
In addition, information on births and pregnancies, contraceptive use and discontinuation, and marriage during the five years prior to the survey was collected using a monthly calendar.
Fieldwork and data processing activities overlapped. After a week of data collection, and after field editing of questionnaires for completeness and consistency, the questionnaires for each cluster were packaged together and sent to the central office in Amman where they were registered and stored. Special teams were formed to carry out office editing and coding of the open-ended questions.
Data entry and verification started after one week of office data processing. The process of data entry, including one hundred percent re-entry, editing and cleaning, was done by using PCs and the CSPro (Census and Survey Processing) computer package, developed specially for such surveys. The CSPro program allows data to be edited while being entered. Data processing operations were completed by the end of October 2002. A data processing specialist from ORC Macro made a trip to Jordan in October and November 2002 to follow up data editing and cleaning and to work on the tabulation of results for the survey preliminary report. The tabulations for the present final report were completed in December 2002.
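The one hundred percent re-entry mentioned above amounts to keying every questionnaire twice and reconciling any differences. The CSPro setup itself is not shown here; the following is only a generic sketch of that verification step, using toy data in place of the real entry files.

```r
# toy stand-ins for the first- and second-pass keyed files (placeholder data)
first_pass  <- data.frame(id = 1:5, age = c(27, 31, 24, 36, 29), parity = c(2, 1, 0, 3, 2))
second_pass <- data.frame(id = 1:5, age = c(27, 31, 42, 36, 29), parity = c(2, 1, 0, 3, 1))

stopifnot(identical(dim(first_pass), dim(second_pass)))

# flag every cell where the two passes disagree, for review against the paper form
which(first_pass != second_pass, arr.ind = TRUE)
#>      row col
#> [1,]   3   2    # age keyed differently on row 3
#> [2,]   5   3    # parity keyed differently on row 5
```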
A total of 7,968 households were selected for the survey from the sampling frame; among those selected households, 7,907 households were found. Of those households, 7,825 (99 percent) were successfully interviewed. In those households, 6,151 eligible women were identified, and complete interviews were obtained with 6,006 of them (98 percent of all eligible women). The overall response rate was 97 percent.
Note: See summarized response rates by place of residence in Table 1.1 of the survey report.
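A quick check of the arithmetic behind those figures: the reported overall rate is consistent with the product of the household and eligible-woman response rates.

```r
households_found       <- 7907
households_interviewed <- 7825
women_eligible         <- 6151
women_interviewed      <- 6006

hh_rate      <- households_interviewed / households_found   # about 99 percent
woman_rate   <- women_interviewed / women_eligible           # about 98 percent
overall_rate <- hh_rate * woman_rate                          # about 97 percent

round(c(household = hh_rate, woman = woman_rate, overall = overall_rate), 3)
#> household     woman   overall
#>     0.990     0.976     0.966
```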
The Gallup Poll Social Series (GPSS) is a set of public opinion surveys designed to monitor U.S. adults' views on numerous social, economic, and political topics. The topics are arranged thematically across 12 surveys. Gallup administers these surveys during the same month every year and includes the survey's core trend questions in the same order each administration. Using this consistent standard allows for unprecedented analysis of changes in trend data that are not susceptible to question order bias and seasonal effects.
Introduced in 2001, the GPSS is the primary method Gallup uses to update several hundred long-term Gallup trend questions, some dating back to the 1930s. The series also includes many newer questions added to address contemporary issues as they emerge.
The dataset currently includes responses collected up to and including 2025.
Gallup conducts one GPSS survey per month, with each devoted to a different topic, as follows:
January: Mood of the Nation
February: World Affairs
March: Environment
April: Economy and Finance
May: Values and Beliefs
June: Minority Rights and Relations (discontinued after 2016)
July: Consumption Habits
August: Work and Education
September: Governance
October: Crime
November: Health
December: Lifestyle (conducted 2001-2008)
The core questions of the surveys differ each month, but several questions assessing the state of the nation are standard on all 12: presidential job approval, congressional job approval, satisfaction with the direction of the U.S., assessment of the U.S. job market, and an open-ended measurement of the nation's "most important problem." Additionally, Gallup includes extensive demographic questions on each survey, allowing for in-depth analysis of trends.
Interviews are conducted with U.S. adults aged 18 and older living in all 50 states and the District of Columbia using a dual-frame design, which includes both landline and cellphone numbers. Gallup samples landline and cellphone numbers using random-digit-dial methods. Gallup purchases samples for this study from Survey Sampling International (SSI). Gallup chooses landline respondents at random within each household based on which member had the next birthday. Each sample of national adults includes a minimum quota of 70% cellphone respondents and 30% landline respondents, with additional minimum quotas by time zone within region. Gallup conducts interviews in Spanish for respondents who are primarily Spanish-speaking.
Gallup interviews a minimum of 1,000 U.S. adults aged 18 and older for each GPSS survey. Samples for the June Minority Rights and Relations survey are significantly larger because Gallup includes oversamples of Blacks and Hispanics to allow for reliable estimates among these key subgroups.
Gallup weights samples to correct for unequal selection probability, nonresponse, and double coverage of landline and cellphone users in the two sampling frames. Gallup also weights its final samples to match the U.S. population according to gender, age, race, Hispanic ethnicity, education, region, population density, and phone status (cellphone only, landline only, both, and cellphone mostly).
Demographic weighting targets are based on the most recent Current Population Survey figures for the aged 18 and older U.S. population. Phone status targets are based on the most recent National Health Interview Survey. Population density targets are based on the most recent U.S. Census.
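As an illustration of this kind of post-stratification weighting, the sketch below rakes a sample to population margins with R's survey package. The data, variable names, and marginal targets are invented placeholders, not Gallup's actual data, weighting targets, or procedure.

```r
library(survey)
set.seed(1)

# a tiny synthetic sample standing in for GPSS respondents (placeholder, not Gallup data);
# base_wt is assumed to already reflect selection probability and dual-frame coverage
gpss <- data.frame(
  gender  = sample(c("male", "female"), 1000, replace = TRUE),
  region  = sample(c("northeast", "midwest", "south", "west"), 1000, replace = TRUE),
  base_wt = runif(1000, 0.5, 2)
)

unweighted <- svydesign(ids = ~1, weights = ~base_wt, data = gpss)

# population margins (illustrative targets; each dimension sums to the same total)
gender_targets <- data.frame(gender = c("male", "female"), Freq = c(487, 513))
region_targets <- data.frame(region = c("northeast", "midwest", "south", "west"),
                             Freq = c(172, 208, 381, 239))

# iteratively adjust the weights until the weighted sample matches both sets of margins
raked <- rake(unweighted,
              sample.margins     = list(~gender, ~region),
              population.margins = list(gender_targets, region_targets))

round(svymean(~factor(gender), raked), 3)   # weighted margins now match the targets
```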
The year appended to each table name represents when the data was last updated. For example, January: Mood of the Nation - 2025 has survey data collected up to and including 2025.
For more information about what survey questions were asked over time, see the Supporting Files.
The Bangladesh Demographic and Health Survey (BDHS) is part of the worldwide Demographic and Health Surveys program, which is designed to collect data on fertility, family planning, and maternal and child health.
The BDHS is intended to serve as a source of population and health data for policymakers and the research community. In general, the objectives of the BDHS are to: - assess the overall demographic situation in Bangladesh, - assist in the evaluation of the population and health programs in Bangladesh, and - advance survey methodology.
More specifically, the objective of the BDHS is to provide up-to-date information on fertility and childhood mortality levels; nuptiality; fertility preferences; awareness, approval, and use of family planning methods; breastfeeding practices; nutrition levels; and maternal and child health. This information is intended to assist policymakers and administrators in evaluating and designing programs and strategies for improving health and family planning services in the country.
National
Sample survey data
Bangladesh is divided into six administrative divisions, 64 districts (zillas), and 490 thanas. In rural areas, thanas are divided into unions and then mauzas, a land administrative unit. Urban areas are divided into wards and then mahallas. The 1996-97 BDHS employed a nationally representative, two-stage sample that was selected from the Integrated Multi-Purpose Master Sample (IMPS) maintained by the Bangladesh Bureau of Statistics. Each division was stratified into three groups: 1) statistical metropolitan areas (SMAs), 2) municipalities (other urban areas), and 3) rural areas. In the rural areas, the primary sampling unit was the mauza, while in urban areas, it was the mahalla. Because the primary sampling units in the IMPS were selected with probability proportional to size from the 1991 Census frame, the units for the BDHS were sub-selected from the IMPS with equal probability so as to retain the overall probability proportional to size. A total of 316 primary sampling units were utilized for the BDHS (30 in SMAs, 42 in municipalities, and 244 in rural areas). In order to highlight changes in survey indicators over time, the 1996-97 BDHS utilized the same sample points (though not necessarily the same households) that were selected for the 1993-94 BDHS, except for 12 additional sample points in the new division of Sylhet. Fieldwork in three sample points was not possible (one in Dhaka Cantonment and two in the Chittagong Hill Tracts), so a total of 313 points were covered.
Since one objective of the BDHS is to provide separate estimates for each division as well as for urban and rural areas separately, it was necessary to increase the sampling rate for Barisal and Sylhet Divisions and for municipalities relative to the other divisions, SMAs and rural areas. Thus, the BDHS sample is not self-weighting and weighting factors have been applied to the data in this report.
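A toy numerical illustration (invented PSU sizes, not BDHS figures) of why selecting PSUs with probability proportional to size and then taking a fixed number of households per PSU yields a constant overall selection probability within a stratum; it is this equal-probability property that the higher sampling rates for Barisal, Sylhet, and the municipalities deliberately depart from, which is why weighting factors are needed.

```r
# invented numbers, purely to illustrate the arithmetic
psu_size       <- c(420, 180, 310)   # households listed in three hypothetical PSUs
n_psu_selected <- 2                  # PSUs drawn with probability proportional to size
hh_per_psu     <- 30                 # fixed household take within each selected PSU

p_psu   <- n_psu_selected * psu_size / sum(psu_size)  # PPS inclusion probability
p_hh    <- hh_per_psu / psu_size                      # within-PSU selection probability
p_total <- p_psu * p_hh                               # overall probability, identical for all

p_total
#> 0.06593407 0.06593407 0.06593407
```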
Mitra and Associates conducted a household listing operation in all the sample points from 15 September to 15 December 1996. A systematic sample of 9,099 households was then selected from these lists. Every second household was selected for the men's survey; in these households, in addition to interviewing all ever-married women age 10-49, interviewers also interviewed all currently married men age 15-59. It was expected that the sample would yield interviews with approximately 10,000 ever-married women age 10-49 and 3,000 currently married men age 15-59.
Note: See the detailed sample design in APPENDIX A of the survey report.
Face-to-face
Four types of questionnaires were used for the BDHS: a Household Questionnaire, a Women's Questionnaire, a Men's Questionnaire and a Community Questionnaire. The contents of these questionnaires were based on the DHS Model A Questionnaire, which is designed for use in countries with relatively high levels of contraceptive use. These model questionnaires were adapted for use in Bangladesh during a series of meetings with a small Technical Task Force that consisted of representatives from NIPORT, Mitra and Associates, USAID/Bangladesh, the International Centre for Diarrhoeal Disease Research, Bangladesh (ICDDR,B), Population Council/Dhaka, and Macro International Inc (see Appendix D for a list of members). Draft questionnaires were then circulated to other interested groups and were reviewed by the BDHS Technical Review Committee (see Appendix D for list of members). The questionnaires were developed in English and then translated into and printed in Bangla (see Appendix E for final version in English).
The Household Questionnaire was used to list all the usual members and visitors in the selected households. Some basic information was collected on the characteristics of each person listed, including his/her age, sex, education, and relationship to the head of the household. The main purpose of the Household Questionnaire was to identify women and men who were eligible for the individual interview. In addition, information was collected about the dwelling itself, such as the source of water, type of toilet facilities, materials used to construct the house, and ownership of various consumer goods.
The Women's Questionnaire was used to collect information from ever-married women age 10-49. These women were asked questions on the following topics: - Background characteristics (age, education, religion, etc.), - Reproductive history, - Knowledge and use of family planning methods, - Antenatal and delivery care, - Breastfeeding and weaning practices, - Vaccinations and health of children under age five, - Marriage, - Fertility preferences, - Husband's background and respondent's work, - Knowledge of AIDS, - Height and weight of children under age five and their mothers.
The Men's Questionnaire was used to interview currently married men age 15-59. It was similar to that for women except that it omitted the sections on reproductive history, antenatal and delivery care, breastfeeding, vaccinations, and height and weight. The Community Questionnaire was completed for each sample point and included questions about the existence in the community of income-generating activities and other development organizations and the availability of health and family planning services.
A total of 9,099 households were selected for the sample, of which 8,682 were successfully interviewed. The shortfall is primarily due to dwellings that were vacant or in which the inhabitants had left for an extended period at the time they were visited by the interviewing teams. Of the 8,762 households occupied, 99 percent were successfully interviewed. In these households, 9,335 women were identified as eligible for the individual interview (i.e., ever-married and age 10-49) and interviews were completed for 9,127 or 98 percent of them. In the half of the households that were selected for inclusion in the men's survey, 3,611 eligible ever-married men age 15-59 were identified, of whom 3,346 or 93 percent were interviewed.
The principal reason for non-response among eligible women and men was the failure to find them at home despite repeated visits to the household. The refusal rate was low.
Note: See summarized response rates by residence (urban/rural) in Table 1.1 of the survey report.
The estimates from a sample survey are affected by two types of errors: (1) non-sampling errors, and (2) sampling errors. Non-sampling errors are the results of mistakes made in implementing data collection and data processing, such as failure to locate and interview the correct household, misunderstanding of the questions on the part of either the interviewer or the respondent, and data entry errors. Although numerous efforts were made during the implementation of the BDHS to minimize this type of error, non-sampling errors are impossible to avoid and difficult to evaluate statistically.
Sampling errors, on the other hand, can be evaluated statistically. The sample of respondents selected in the BDHS is only one of many samples that could have been selected from the same population, using the same design and expected size. Each of these samples would yield results that differ somewhat from the results of the actual sample selected. Sampling errors are a measure of the variability between all possible samples. Although the degree of variability is not known exactly, it can be estimated from the survey results.
A sampling error is usually measured in terms of the standard error for a particular statistic (mean, percentage, etc.), which is the square root of the variance. The standard error can be used to calculate confidence intervals within which the true value for the population can reasonably be assumed to fall. For example, for any given statistic calculated from a sample survey, the value of that statistic will fall within a range of plus or minus two times the standard error of that statistic in 95 percent of all possible samples of identical size and design.
If the sample of respondents had been selected as a simple random sample, it would have been possible to use straightforward formulas for calculating sampling errors. However, the BDHS sample is the result of a two-stage stratified design, and, consequently, it was necessary to use more complex formulae. The computer software used to calculate sampling errors for the BDHS is the ISSA Sampling Error Module. This module used the Taylor linearization method of variance estimation for survey estimates that are means or proportions. The Jackknife repeated replication method is used for variance estimation of more complex statistics such as fertility and mortality rates.
The key objective of every census is to count every person (man, woman, child) resident in the country on census night, and also to collect assorted demographic (sex, age, marital status, citizenship) and socio-economic (education/qualifications; labour force and economic activity) information, as well as data pertinent to household and housing characteristics. This count provides a complete picture of the population make-up in each village and town, of each island and region, thus allowing for an assessment of demographic change over time.
The need for a national census became obvious to the Census Office (Bureau of Statistics) during 1997 when a memo was submitted to government officials proposing the need for a national census in an attempt to update old socio-economic figures. The then Acting Director of the Bureau of Statistics and his predecessor shared a similar view: that the 'heydays' and 'prosperity' were nearing their end. This may not have been apparent, as it took until almost mid-2001 for the current Acting Government Statistician to receive instructions to prepare planning for a national census targeted for 2002. It has been repeatedly said that for adequate planning at the national level, information about the characteristics of the society is required. With such information, potential impacts can be forecast and policies can be designed for the improvement and benefit of society. Without it, the people, national planners and leaders will inevitably face uncertainties.
National coverage: the Population Census covers the whole of Nauru.
The Census covers all individuals living in private and non-private dwellings and institutions.
Census/enumeration data [cen]
There is no sampling for the population census; it is a complete enumeration.
Face-to-face [f2f]
The questionnaire was based on the Pacific Islands Model Population and Housing Census Form and the 1992 census, and comprised two parts: a set of household questions, asked only of the head of household, and an individual questionnaire, administered to each household member. Unlike the previous census, which consisted of a separate household form plus two separate individual forms for Nauruans and non-Nauruans, the 2002 questionnaire consisted of only one form separated into different parts and sections.
The questionnaire cover recorded various identifiers: district name, enumeration area, house number, number of households (family units) residing, total number of residents, gender, and whether siblings of the head of the house were also recorded. The second page, representing a summary page, listed every individual residing within the house. This list was taken by the enumerator on the first visit, on the eve of census night. The first part of the census questionnaire focused on housing-related questions. It was administered only once in each household, with questions usually asked of the household head. The household form asked the same range of questions as those covered in the 1992 census, relating to type of housing, structure of outer walls, water supply sources and storage, toilet and cooking facilities, lighting, construction materials and subsistence-type activities. The second part of the census questionnaire focused on individual questions covering all household members. This section was based on the 1992 questions, with notable differences being the exclusion of income-level questions and the expansion of fertility and mortality questions. As in 1992, a problem emerged during questionnaire design regarding the question of who or what should determine a ‘Nauruan’. Unlike the 1992 census, where the emphasis was on blood ties, the issue of naturalisation and citizenship through the sale of passports seriously complicated matters in 2002. To resolve this issue, it was decided to apply two filtering processes: Stage 1 identified persons with tribal heritage through manual editing, and Stage 2 identified persons of Nauruan nationality and citizenship through designed skips in the questionnaire that were incorporated in the data-processing programming.
The topics of questions for each of the parts include: - Person particulars (name, relationship, sex, ethnicity, religion, educational attainment), - Economic activity, asked of all persons 15 years and above (economic activity, economic inactivity, employment status), - Fertility and mortality, - Labour force activity (production of cash crops, fishing, own-account businesses, handicrafts), - Disability (type and nature of disability), - Household and housing (electricity, water, tenure, lighting, cooking, sanitation, wealth ownership).
Coding, data entry and editing
Coding took longer than expected when the Census Office found that more quality-control checks were required before coding could take place and that a large number of forms still required attention. While these quality-control checks were supposed to have been done by the supervisors in the field, the Census Office decided to review all census forms before commencing the coding. This process took approximately three months before actual data processing could begin. The additional time required to recheck the quality of every census form meant that data processing fell behind schedule. The Census Office had to improvise, with a little pressure from external stakeholders, and coding, in conjunction with data entry, began after recruiting two additional data entry personnel. All four Census Office staff became actively involved with coding, with one staff member alternating between coding and data entry, depending on which process was falling behind schedule. In the end, the whole process took almost two months to complete.
Prior to commencing data entry, the Census Office had to familiarise itself with the data entry processing system. For this purpose, SPC’s Demography/Population Programme was invited to lend assistance. Two office staff were appointed to work with Mr Arthur Jorari, SPC Population Specialist, who began by revising their skills in the data processing software that had been introduced by Dr McMurray. This training attachment took two weeks to complete. Data entry was undertaken using version 2.3 of the US Census Bureau’s census and survey processing software, CSPro 2.3. This version was later updated to CSPro 2.4, and all data were transferred accordingly. Technical assistance for data editing was provided by Mr Jorari over a two-week period. While most edits were completed during this period, it was discovered that some batches of questionnaires had not been entered during the initial data capture, so the batch-edit application had to be regenerated. This process was frequently interrupted by the power outages prevailing at the time, which delayed data processing considerably and also required much longer periods of technical support to the two Nauru data processing staff by phone or email (when available).
Data were compared with administrative records after the census to assess the quality and reliability of the data.
Attribution-NonCommercial 4.0 (CC BY-NC 4.0)https://creativecommons.org/licenses/by-nc/4.0/
License information was derived automatically
The STAMINA study examined the nutritional risks of low-income peri-urban mothers, infants and young children (IYC), and households in Peru during the COVID-19 pandemic. The study was designed to capture information through three repeated cross-sectional surveys at approximately 6-month intervals over an 18-month period, starting in December 2020. The surveys were carried out by telephone in November-December 2020, July-August 2021 and February-April 2022. The third survey took place over a longer period to allow for a household visit after the telephone interview. The study areas were Manchay (Lima) and Huánuco district in the Andean highlands (~1,900 m above sea level). In each study area, we purposively selected the principal health centre and one subsidiary health centre. Peri-urban communities under the jurisdiction of these health centres were then selected to participate. Systematic random sampling was employed with quotas for IYC age (6-11, 12-17 and 18-23 months) to recruit a target sample size of 250 mother-infant pairs for each survey. Data collected included: household socio-demographic characteristics; infant and young child feeding (IYCF) practices; child and maternal qualitative 24-hour dietary recalls and 7-day food frequency questionnaires; household food insecurity experience, measured using the validated Food Insecurity Experience Scale (FIES) survey module (Cafiero, Viviani, & Nord, 2018); and maternal mental health. In addition, the surveys included questions assessing the impact of COVID-19 on households, including changes in employment status, adaptations to finances, sources of financial support, household food insecurity experience, and access to, and uptake of, well-child clinics and vaccination health services. This folder includes the questionnaire for survey 3 in both English and Spanish. The corresponding dataset and dictionary of variables for survey 3 are available at 10.17028/rd.lboro.21741014
Notes: (a) Data are mean (SD), median, or % (n). (b) Not all women answered all demographic questions.
https://spdx.org/licenses/CC0-1.0.htmlhttps://spdx.org/licenses/CC0-1.0.html
Professional organizations in STEM (science, technology, engineering, and mathematics) can use demographic data to quantify recruitment and retention (R&R) of underrepresented groups within their memberships. However, variation in the types of demographic data collected can influence the targeting and perceived impacts of R&R efforts - e.g., giving false signals of R&R for some groups. We obtained demographic surveys from 73 U.S.-affiliated STEM organizations, collectively representing 712,000 members and conference-attendees. We found large differences in the demographic categories surveyed (e.g., disability status, sexual orientation) and the available response options. These discrepancies indicate a lack of consensus regarding the demographic groups that should be recognized and, for groups that are omitted from surveys, an inability of organizations to prioritize and evaluate R&R initiatives. Aligning inclusive demographic surveys across organizations will provide baseline data that can be used to target and evaluate R&R initiatives to better serve underrepresented groups throughout STEM. Methods We surveyed 164 STEM organizations (73 responses, rate = 44.5%) between December 2020 and July 2021 with the goal of understanding what demographic data each organization collects from its constituents (i.e., members and conference-attendees) and how the data are used. Organizations were sourced from a list of professional societies affiliated with the American Association for the Advancement of Science, AAAS, (n = 156) or from social media (n = 8). The survey was sent to the elected leadership and management firms for each organization, and follow-up reminders were sent after one month. The responding organizations represented a wide range of fields: 31 life science organizations (157,000 constituents), 5 mathematics organizations (93,000 constituents), 16 physical science organizations (207,000 constituents), 7 technology organizations (124,000 constituents), and 14 multi-disciplinary organizations spanning multiple branches of STEM (131,000 constituents). A list of the responding organizations is available in the Supplementary Materials. Based on the AAAS-affiliated recruitment of the organizations and the similar distribution of constituencies across STEM fields, we conclude that the responding organizations are a representative cross-section of the most prominent STEM organizations in the U.S. Each organization was asked about the demographic information they collect from their constituents, the response rates to their surveys, and how the data were used. Survey description The following questions are written as presented to the participating organizations. Question 1: What is the name of your STEM organization? Question 2: Does your organization collect demographic data from your membership and/or meeting attendees? Question 3: When was your organization’s most recent demographic survey (approximate year)? Question 4: We would like to know the categories of demographic information collected by your organization. You may answer this question by either uploading a blank copy of your organization’s survey (linked provided in online version of this survey) OR by completing a short series of questions. Question 5: On the most recent demographic survey or questionnaire, what categories of information were collected? (Please select all that apply)
Disability status Gender identity (e.g., male, female, non-binary) Marital/Family status Racial and ethnic group Religion Sex Sexual orientation Veteran status Other (please provide)
Question 6: For each of the categories selected in Question 5, what options were provided for survey participants to select? Question 7: Did the most recent demographic survey provide a statement about data privacy and confidentiality? If yes, please provide the statement. Question 8: Did the most recent demographic survey provide a statement about intended data use? If yes, please provide the statement. Question 9: Who maintains the demographic data collected by your organization? (e.g., contracted third party, organization executives) Question 10: How has your organization used members’ demographic data in the last five years? Examples: monitoring temporal changes in demographic diversity, publishing diversity data products, planning conferences, contributing to third-party researchers. Question 11: What is the size of your organization (number of members or number of attendees at recent meetings)? Question 12: What was the response rate (%) for your organization’s most recent demographic survey? *Organizations were also able to upload a copy of their demographics survey instead of responding to Questions 5-8. If so, the uploaded survey was used (by the study authors) to evaluate Questions 5-8.