Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Open Science in (Higher) Education – data of the February 2017 survey
This data set contains:
Survey structure
The survey includes 24 questions; its structure can be separated into five major themes: material used in courses (5), OER awareness, usage and development (6), collaborative tools used in courses (2), assessment and participation options (5), and demographics (4). The last two questions are an open text question about general issues on the topics and singular open education experiences, and a request to forward the respondent’s e-mail address for further questioning. The online survey was created with Limesurvey[1]. Several questions include filters, i.e. these questions were only shown if a participant chose a specific answer beforehand ([n/a] in the Excel file, [.] in SPSS).
Demographic questions
Demographic questions asked about the current position, the discipline, year of birth and gender. The classification of research disciplines was adapted to general disciplines at German higher education institutions. As we wanted a broad classification, we summarised several disciplines and came up with the following list, including the option “other” for respondents who did not find themselves in the proposed classification:
The current job position classification was likewise chosen according to common positions in Germany, including positions with teaching responsibilities at higher education institutions. Here, too, we included the option “other” for respondents who did not find themselves in the proposed classification:
We chose a free (numerical) text field for the respondent’s year of birth because we did not want to pre-classify respondents into age intervals. This leaves options for different analyses of answers and possible correlations with respondents’ age. A question about the country was left out, as the survey was designed for academics in Germany.
Remark on OER question
Data from earlier surveys revealed that academics are often confused about the proper definition of OER[2]. Some seem to understand OER as any free resources, or refer only to open source software (Allen & Seaman, 2016, p. 11). Allen and Seaman (2016) decided to give a broad explanation of OER, avoiding details so as not to tempt participants to claim awareness. Thus, giving an explanation carries a danger of bias. We decided not to give an explanation but to keep the question simple. We assume that someone either knows about OER or does not. Those who had not heard of the term before probably do not use OER (at least not consciously) or create them.
Data collection
The target group of the survey was academics at German institutions of higher education, mainly universities and universities of applied sciences. To reach them, we sent the survey to diverse institution-internal and external mailing lists and via personal contacts. The lists included discipline-based lists, lists from higher education and higher education didactics communities, as well as lists from open science and OER communities. Additionally, personal e-mails were sent to presidents and contact persons of those communities, and Twitter was used to spread the survey.
The survey was online from February 6 to March 3, 2017; e-mails were mainly sent at the beginning and around mid-term.
Data clearance
We received 360 responses, of which Limesurvey counted 208 as complete and 152 as incomplete. Two responses were marked as incomplete but turned out, on inspection, to be complete, and we added them to the complete responses. Thus, this data set includes 210 complete responses. Of the remaining 150 incomplete responses, 58 respondents did not answer the first question and 40 discontinued after the first question. The data show a steady decline in answers over the course of the survey; we did not detect any particular question with a strikingly high dropout rate. Incomplete responses were deleted and are not included in this data set.
For data privacy reasons, we deleted seven variables automatically assigned by Limesurvey (submitdate, lastpage, startlanguage, startdate, datestamp, ipaddr, refurl). We also deleted the answers to question 24 (e-mail address).
References
Allen, E., & Seaman, J. (2016). Opening the Textbook: Educational Resources in U.S. Higher Education, 2015-16.
First results of the survey are presented in the poster:
Heck, Tamara, Blümel, Ina, Heller, Lambert, Mazarakis, Athanasios, Peters, Isabella, Scherp, Ansgar, & Weisel, Luzian. (2017). Survey: Open Science in Higher Education. Zenodo. http://doi.org/10.5281/zenodo.400561
Contact:
Open Science in (Higher) Education working group, see http://www.leibniz-science20.de/forschung/projekte/laufende-projekte/open-science-in-higher-education/.
[1] https://www.limesurvey.org
[2] The survey question about the awareness of OER gave a broad explanation, avoiding details to not tempt the participant to claim “aware”.
https://spdx.org/licenses/CC0-1.0.html
Professional organizations in STEM (science, technology, engineering, and mathematics) can use demographic data to quantify recruitment and retention (R&R) of underrepresented groups within their memberships. However, variation in the types of demographic data collected can influence the targeting and perceived impacts of R&R efforts (e.g., giving false signals of R&R for some groups). We obtained demographic surveys from 73 U.S.-affiliated STEM organizations, collectively representing 712,000 members and conference attendees. We found large differences in the demographic categories surveyed (e.g., disability status, sexual orientation) and the available response options. These discrepancies indicate a lack of consensus regarding the demographic groups that should be recognized and, for groups that are omitted from surveys, an inability of organizations to prioritize and evaluate R&R initiatives. Aligning inclusive demographic surveys across organizations will provide baseline data that can be used to target and evaluate R&R initiatives to better serve underrepresented groups throughout STEM.
Methods
We surveyed 164 STEM organizations (73 responses; response rate 44.5%) between December 2020 and July 2021 with the goal of understanding what demographic data each organization collects from its constituents (i.e., members and conference attendees) and how the data are used. Organizations were sourced from a list of professional societies affiliated with the American Association for the Advancement of Science (AAAS; n = 156) or from social media (n = 8). The survey was sent to the elected leadership and management firms of each organization, and follow-up reminders were sent after one month.
The responding organizations represented a wide range of fields: 31 life science organizations (157,000 constituents), 5 mathematics organizations (93,000 constituents), 16 physical science organizations (207,000 constituents), 7 technology organizations (124,000 constituents), and 14 multi-disciplinary organizations spanning multiple branches of STEM (131,000 constituents). A list of the responding organizations is available in the Supplementary Materials. Based on the AAAS-affiliated recruitment of the organizations and the similar distribution of constituencies across STEM fields, we conclude that the responding organizations are a representative cross-section of the most prominent STEM organizations in the U.S. Each organization was asked about the demographic information it collects from its constituents, the response rates to its surveys, and how the data were used.
Survey description
The following questions are written as presented to the participating organizations.
Question 1: What is the name of your STEM organization?
Question 2: Does your organization collect demographic data from your membership and/or meeting attendees?
Question 3: When was your organization’s most recent demographic survey (approximate year)?
Question 4: We would like to know the categories of demographic information collected by your organization. You may answer this question either by uploading a blank copy of your organization’s survey (link provided in the online version of this survey) OR by completing a short series of questions.
Question 5: On the most recent demographic survey or questionnaire, what categories of information were collected? (Please select all that apply)
- Disability status
- Gender identity (e.g., male, female, non-binary)
- Marital/Family status
- Racial and ethnic group
- Religion
- Sex
- Sexual orientation
- Veteran status
- Other (please provide)
Question 6: For each of the categories selected in Question 5, what options were provided for survey participants to select?
Question 7: Did the most recent demographic survey provide a statement about data privacy and confidentiality? If yes, please provide the statement.
Question 8: Did the most recent demographic survey provide a statement about intended data use? If yes, please provide the statement.
Question 9: Who maintains the demographic data collected by your organization? (e.g., contracted third party, organization executives)
Question 10: How has your organization used members’ demographic data in the last five years? Examples: monitoring temporal changes in demographic diversity, publishing diversity data products, planning conferences, contributing to third-party researchers.
Question 11: What is the size of your organization (number of members or number of attendees at recent meetings)?
Question 12: What was the response rate (%) for your organization’s most recent demographic survey?
*Organizations were also able to upload a copy of their demographics survey instead of responding to Questions 5-8. If so, the uploaded survey was used (by the study authors) to evaluate Questions 5-8.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
The questions list for questionnaire – Demographics and basic work characteristics of survey respondents
Pursuant to Local Laws 126, 127, and 128 of 2016, certain demographic data are collected voluntarily and anonymously from persons voluntarily seeking social services. These data can be used by agencies and the public to better understand the demographic makeup of client populations and to better understand and serve residents of all backgrounds and identities. The data presented here have been collected through either electronic or paper surveys offered at the point of application for services. These surveys are anonymous. Each record represents an anonymized demographic profile of an individual applicant for social services, disaggregated by response option, agency, and program. Response options include information regarding ancestry, race, primary and secondary languages, English proficiency, gender identity, and sexual orientation.
Idiosyncrasies or Limitations: Note that while the dataset contains the total number of individuals who have identified their ancestry or languages spoken, because such data are collected anonymously, there may be instances of a single individual completing multiple voluntary surveys. Additionally, the survey being both voluntary and anonymous has advantages as well as disadvantages: it increases the likelihood of full and honest answers, but since it is not connected to the individual case, it does not directly inform delivery of services to the applicant. The paper and online versions of the survey ask the same questions, but free-form text is handled differently. Free-form text fields are expected to be entered in English, although the form is available in several languages. Surveys are presented in 11 languages.
Paper surveys:
1. Are optional
2. The survey taker is expected to specify the agency that provides the service
3. The survey taker can skip or elect not to answer questions
4. Invalid/unreadable data may be entered for the survey date, or the date may be skipped
5. OCR of free-form text fields may fail
6. The analytical value of free-form text answers is unclear
Online surveys:
1. Are optional
2. The agency is defaulted based on the URL
3. Some questions must be answered
4. The date of survey is automated
Attribution-NonCommercial 4.0 (CC BY-NC 4.0): https://creativecommons.org/licenses/by-nc/4.0/
License information was derived automatically
The STAMINA study examined the nutritional risks of low-income peri-urban mothers, infants and young children, and households in Peru during the COVID-19 pandemic. The study was designed to capture information through three repeated cross-sectional surveys at approximately 6-month intervals over an 18-month period, starting in December 2020. The surveys were carried out by telephone in November-December 2020, July-August 2021 and February-April 2022. The third survey took place over a longer period to allow for a household visit after the telephone interview. The study areas were Manchay (Lima) and Huánuco district in the Andean highlands (~1900 m above sea level). In each study area, we purposively selected the principal health centre and one subsidiary health centre. Peri-urban communities under the jurisdiction of these health centres were then selected to participate. Systematic random sampling was employed with quotas for IYC age (6-11, 12-17 and 18-23 months) to recruit a target sample size of 250 mother-infant pairs for each survey. Data collected included: household socio-demographic characteristics; infant and young child feeding (IYCF) practices; child and maternal qualitative 24-hour dietary recalls/7-day food frequency questionnaires; household food insecurity experience measured using the validated Food Insecurity Experience Scale (FIES) survey module (Cafiero, Viviani, & Nord, 2018); and maternal mental health. In addition, questions were included that assessed the impact of COVID-19 on households, including changes in employment status, adaptations to finance, sources of financial support, household food insecurity experience, as well as access to, and uptake of, well-child clinics and vaccination health services. This folder includes the dataset and dictionary of variables for survey 1 (English only). The survey questionnaire for survey 1 is available at 10.17028/rd.lboro.16825507.
The 1998 Ghana Demographic and Health Survey (GDHS) is the latest in a series of national-level population and health surveys conducted in Ghana and it is part of the worldwide MEASURE DHS+ Project, designed to collect data on fertility, family planning, and maternal and child health.
The primary objective of the 1998 GDHS is to provide current and reliable data on fertility and family planning behaviour, child mortality, children’s nutritional status, and the utilisation of maternal and child health services in Ghana. Additional data on knowledge of HIV/AIDS are also provided. This information is essential for informed policy decisions, planning and monitoring and evaluation of programmes at both the national and local government levels.
The long-term objectives of the survey include strengthening the technical capacity of the Ghana Statistical Service (GSS) to plan, conduct, process, and analyse the results of complex national sample surveys. Moreover, the 1998 GDHS provides comparable data for long-term trend analyses within Ghana, since it is the third in a series of demographic and health surveys implemented by the same organisation, using similar data collection procedures. The GDHS also contributes to the ever-growing international database on demographic and health-related variables.
National
Sample survey data
The major focus of the 1998 GDHS was to provide updated estimates of important population and health indicators including fertility and mortality rates for the country as a whole and for urban and rural areas separately. In addition, the sample was designed to provide estimates of key variables for the ten regions in the country.
The list of Enumeration Areas (EAs) with population and household information from the 1984 Population Census was used as the sampling frame for the survey. The 1998 GDHS is based on a two-stage stratified nationally representative sample of households. At the first stage of sampling, 400 EAs were selected using systematic sampling with probability proportional to size (PPS method). The selected EAs comprised 138 in urban areas and 262 in rural areas. A complete household listing operation was then carried out in all the selected EAs to provide a sampling frame for the second-stage selection of households. At the second stage of sampling, a systematic sample of 15 households per EA was selected in all regions, except in the Northern, Upper West and Upper East Regions. In order to obtain adequate numbers of households to provide reliable estimates of key demographic and health variables in these three regions, the number of households in each selected EA in the Northern, Upper West and Upper East Regions was increased to 20. The sample was weighted to adjust for oversampling in the three northern regions (Northern, Upper East and Upper West) in relation to the other regions. Sample weights were used to compensate for the unequal probability of selection between geographically defined strata.
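As a sketch of how such design weights arise (all numbers below are hypothetical illustrations, not taken from the GDHS sampling frame), the weight for a household under a two-stage PPS design is the inverse of its overall selection probability:

```python
# Sketch of a two-stage design weight; all inputs are hypothetical.

def household_weight(n_eas_selected, ea_size, total_frame_size,
                     hh_selected_in_ea, hh_listed_in_ea):
    """Inverse of the household's overall selection probability.

    Stage 1: EA selected with probability proportional to size (PPS).
    Stage 2: systematic sample of households within the selected EA.
    """
    p_stage1 = n_eas_selected * ea_size / total_frame_size
    p_stage2 = hh_selected_in_ea / hh_listed_in_ea
    return 1.0 / (p_stage1 * p_stage2)

# Hypothetical EA in a southern region: 15 of 180 listed households sampled.
w_south = household_weight(400, 800, 2_000_000, 15, 180)
# Hypothetical EA in an oversampled northern region: 20 of 180 sampled.
w_north = household_weight(400, 800, 2_000_000, 20, 180)
# Oversampling lowers the weight: each sampled household represents fewer.
assert w_north < w_south
```

This illustrates why the three northern regions, sampled at 20 households per EA instead of 15, need downweighting to restore national representativeness.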
The survey was designed to obtain completed interviews of 4,500 women age 15-49. In addition, all males age 15-59 in every third selected household were interviewed, to obtain a target of 1,500 men. In order to take cognisance of non-response, a total of 6,375 households nation-wide were selected.
Note: See detailed description of sample design in APPENDIX A of the survey report.
Face-to-face
Three types of questionnaires were used in the GDHS: the Household Questionnaire, the Women’s Questionnaire, and the Men’s Questionnaire. These questionnaires were based on model survey instruments developed for the international MEASURE DHS+ programme and were designed to provide information needed by health and family planning programme managers and policy makers. The questionnaires were adapted to the situation in Ghana and a number of questions pertaining to on-going health and family planning programmes were added. These questionnaires were developed in English and translated into five major local languages (Akan, Ga, Ewe, Hausa, and Dagbani).
The Household Questionnaire was used to enumerate all usual members and visitors in a selected household and to collect information on the socio-economic status of the household. The first part of the Household Questionnaire collected information on the relationship to the household head, residence, sex, age, marital status, and education of each usual resident or visitor. This information was used to identify women and men who were eligible for the individual interview. For this purpose, all women age 15-49, and all men age 15-59 in every third household, whether usual residents of a selected household or visitors who slept in a selected household the night before the interview, were deemed eligible and interviewed. The Household Questionnaire also provides basic demographic data for Ghanaian households. The second part of the Household Questionnaire contained questions on the dwelling unit, such as the number of rooms, the flooring material, the source of water and the type of toilet facilities, and on the ownership of a variety of consumer goods.
The Women’s Questionnaire was used to collect information on the following topics: respondent’s background characteristics, reproductive history, contraceptive knowledge and use, antenatal, delivery and postnatal care, infant feeding practices, child immunisation and health, marriage, fertility preferences and attitudes about family planning, husband’s background characteristics, women’s work, knowledge of HIV/AIDS and STDs, as well as anthropometric measurements of children and mothers.
The Men’s Questionnaire collected information on respondent’s background characteristics, reproduction, contraceptive knowledge and use, marriage, fertility preferences and attitudes about family planning, as well as knowledge of HIV/AIDS and STDs.
A total of 6,375 households were selected for the GDHS sample. Of these, 6,055 were occupied. Interviews were completed for 6,003 households, which represent 99 percent of the occupied households. A total of 4,970 eligible women from these households and 1,596 eligible men from every third household were identified for the individual interviews. Interviews were successfully completed for 4,843 women or 97 percent and 1,546 men or 97 percent. The principal reason for nonresponse among individual women and men was the failure of interviewers to find them at home despite repeated callbacks.
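The reported rates follow directly from these counts; a quick arithmetic check using the figures above:

```python
# Response-rate arithmetic from the counts reported in the text.
occupied, hh_done = 6055, 6003
women_eligible, women_done = 4970, 4843
men_eligible, men_done = 1596, 1546

hh_rate = 100 * hh_done / occupied              # household response rate
women_rate = 100 * women_done / women_eligible  # individual women
men_rate = 100 * men_done / men_eligible        # individual men
print(round(hh_rate), round(women_rate), round(men_rate))  # 99 97 97
```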
Note: See summarized response rates by place of residence in Table 1.1 of the survey report.
The estimates from a sample survey are affected by two types of errors: (1) nonsampling errors, and (2) sampling errors. Nonsampling errors result from mistakes made in implementing data collection and data processing, such as failure to locate and interview the correct household, misunderstanding of the questions on the part of either the interviewer or the respondent, and data entry errors. Although numerous efforts were made during the implementation of the 1998 GDHS to minimize this type of error, nonsampling errors are impossible to avoid and difficult to evaluate statistically.
Sampling errors, on the other hand, can be evaluated statistically. The sample of respondents selected in the 1998 GDHS is only one of many samples that could have been selected from the same population, using the same design and expected size. Each of these samples would yield results that differ somewhat from the results of the actual sample selected. Sampling errors are a measure of the variability between all possible samples. Although the degree of variability is not known exactly, it can be estimated from the survey results.
A sampling error is usually measured in terms of the standard error for a particular statistic (mean, percentage, etc.), which is the square root of the variance. The standard error can be used to calculate confidence intervals within which the true value for the population can reasonably be assumed to fall. For example, for any given statistic calculated from a sample survey, the value of that statistic will fall within a range of plus or minus two times the standard error of that statistic in 95 percent of all possible samples of identical size and design.
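The two-standard-error interval described above can be written out for a hypothetical proportion (both numbers below are illustrative, not GDHS estimates):

```python
# 95% confidence interval as estimate +/- 2 standard errors (illustrative).
estimate = 0.40   # hypothetical sample proportion
se = 0.015        # hypothetical standard error under the survey design
low, high = estimate - 2 * se, estimate + 2 * se
print(f"95% CI: ({low:.3f}, {high:.3f})")  # 95% CI: (0.370, 0.430)
```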
If the sample of respondents had been selected as a simple random sample, it would have been possible to use straightforward formulas for calculating sampling errors. However, the 1998 GDHS sample is the result of a two-stage stratified design, and, consequently, it was necessary to use more complex formulae. The computer software used to calculate sampling errors for the 1998 GDHS is the ISSA Sampling Error Module. This module uses the Taylor linearization method of variance estimation for survey estimates that are means or proportions. The Jackknife repeated replication method is used for variance estimation of more complex statistics such as fertility and mortality rates.
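The delete-one-cluster jackknife idea used for rates can be sketched as follows (toy data, not the ISSA module itself): recompute the statistic with each primary sampling unit removed in turn, then measure the spread of those replicates.

```python
# Delete-one-PSU jackknife variance for a ratio estimate (toy data).
# Each cluster contributes (events, exposure); the statistic is a rate.
clusters = [(12, 400), (9, 350), (15, 500), (7, 300), (11, 450)]

def rate(cl):
    events = sum(e for e, _ in cl)
    exposure = sum(x for _, x in cl)
    return events / exposure

theta = rate(clusters)
k = len(clusters)
# Recompute the rate k times, each time leaving one cluster out.
replicates = [rate(clusters[:i] + clusters[i + 1:]) for i in range(k)]
variance = (k - 1) / k * sum((r - theta) ** 2 for r in replicates)
std_error = variance ** 0.5
```

The spread between the leave-one-out replicates and the full-sample estimate reflects how much any single cluster influences the result, which is what the variance of a complex statistic captures.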
Data Quality Tables:
- Household age distribution
- Age distribution of eligible and interviewed women
- Age distribution of eligible and interviewed men
- Completeness of reporting
- Births by calendar years
- Reporting of age at death in days
- Reporting of age at death in months
Note: See detailed tables in APPENDIX C of the survey report.
CC0 1.0 Universal Public Domain Dedication: https://creativecommons.org/publicdomain/zero/1.0/
License information was derived automatically
We sent out the survey via institutional e-mail to 125 M1 students, the entire class except the 5 students (Jonathan, Charles, me, and 2 others) who were involved in setting up the survey. Each survey collected data for 7 days after it was sent out, and the first 13 participants to respond in each round were given an electronic $5 Amazon gift card from funds provided by CUSM's Student Scholars Presentation and Dissemination Initiative committee. The 4 rounds of surveys were sent out on 12/12/22, 1/3/23, 1/17/23, and 1/31/23. All survey round questionnaires were identical and consisted of survey items from two instruments: the Copenhagen Burnout Inventory (CBI) and the Patient Health Questionnaire-9 (PHQ-9). All three sections of the CBI (personal, work-related, and client-related burnout) were used, and the order of the questions was randomized. The order of the PHQ-9 questions was also randomized, but the CBI and PHQ-9 questions were delivered separately. Due to privacy concerns about the stigmatization of mental health, no demographic questions, such as race or age, were included.
https://www.icpsr.umich.edu/web/ICPSR/studies/29646/terms
This data collection comprises responses from the March and April installments of the 2008 Current Population Survey (CPS). Both the March and April surveys used two sets of questions, the basic CPS and a separate supplement for each month. The CPS, administered monthly, is a labor force survey providing current estimates of the economic status and activities of the population of the United States. Specifically, the CPS provides estimates of total employment (both farm and nonfarm), nonfarm self-employed persons, domestics, and unpaid helpers in nonfarm family enterprises, wage and salaried employees, and estimates of total unemployment. In addition to the basic CPS questions, respondents were asked questions from the March supplement, known as the Annual Social and Economic (ASEC) supplement. The ASEC provides supplemental data on work experience, income, noncash benefits, and migration. Comprehensive work experience information was given on the employment status, occupation, and industry of persons 15 years old and older. Additional data for persons 15 years old and older are available concerning weeks worked and hours per week worked, reason not working full time, total income and income components, and place of residence on March 1, 2007. The March supplement also contains data covering nine noncash income sources: food stamps, school lunch program, employer-provided group health insurance plan, employer-provided pension plan, personal health insurance, Medicaid, Medicare, CHAMPUS or military health care, and energy assistance. Questions covering training and assistance received under welfare reform programs, such as job readiness training, child care services, or job skill training, were also asked in the March supplement. The April supplement, sponsored by the Department of Health and Human Services, queried respondents on the economic situation of persons and families for the previous year.
Moreover, all household members 15 years of age and older who are a biological parent of a child in the household with an absent parent were asked detailed questions about child support and alimony. Information regarding child support was collected to determine the size and distribution of the population with children affected by divorce or separation, or other relationship status change. In addition, the data were collected to better understand the characteristics of persons requiring child support, and to help develop and maintain programs designed to assist in obtaining child support. These data highlight alimony and child support arrangements made at the time of separation or divorce, the amount of payments actually received, and the value and type of any property settlement. The April supplement data were matched to March supplement data for households that were in the sample in both March and April 2008. In March 2008, there were 4,522 eligible household members, of whom 1,431 required imputation of child support data. When matching the March 2008 and April 2008 data sets, there were 170 eligible people on the March file who did not match to people on the April file. Child support data for these 170 people were imputed. The remaining 1,261 imputed cases were due to nonresponse to the child support questions. Demographic variables include age, sex, race, Hispanic origin, marital status, veteran status, educational attainment, occupation, and income. Data on employment and income refer to the preceding year, although other demographic data refer to the time at which the survey was administered.
CC0 1.0 Universal Public Domain Dedication: https://creativecommons.org/publicdomain/zero/1.0/
License information was derived automatically
The RECOVER Consortium developed a web-based interactive educational website and application to effectively disseminate oil spill science and research to students, ranging from elementary to collegiate levels, and to the general public. The RECOVER Virtual Lab application allows users to conduct virtual experiments on the impacts of oil on fish physiology, similar to those of RECOVER researchers. By using the Virtual Lab, students, teachers and the general public are able to understand the real-world applications of data, experimental designs, and results generated by RECOVER researchers. Both Virtual Lab lessons utilize data produced by GoMRI scientists, which are made available to students and the public to expand the reach of oil spill science to individuals who may not otherwise have access to it. At the end of each lesson, students complete a demographic questionnaire and answer content-based questions through quizzes developed in Google Forms. These data show that the Virtual Lab has been used in 30 different states and 2 countries outside the U.S., with a total usership of over 1,000 students.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Demographic questions: differences in response.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Sexual, romantic, and related orientations across all institutions, based on the queered survey (n = 1932).
This is the 25th annual survey in this series that explores changes in important values, behaviors, and lifestyle orientations of contemporary American youth. Students are randomly assigned to complete one of six questionnaires, each with a different subset of topical questions but all containing a set of 'core' questions on demographics and drug use. There are about 1,400 variables across the questionnaires. Drugs covered by this survey include tobacco, alcohol, marijuana, hashish, LSD, hallucinogens, amphetamines (stimulants), Ritalin (methylphenidate), quaaludes, barbiturates (tranquilizers), cocaine, crack, and heroin. Other items include attitudes toward religion, parental influences, changing roles for women, educational aspirations, self-esteem, exposure to drug education, and violence and crime -- both in and out of school.
https://borealisdata.ca/api/datasets/:persistentId/versions/7.1/customlicense?persistentId=doi:10.7939/DVN/10004
The Population Research Laboratory (PRL), a member of the Association of Academic Survey Research Organizations (AASRO), seeks to advance the research, education and service goals of the University of Alberta by helping academic researchers and policy makers design and implement applied social science research projects. The PRL specializes in the gathering, analysis, and presentation of data about demographic, social and public issues. The PRL research team provides expert consultation and implementation of quantitative and qualitative research methods, project design, sample design, web-based, paper-based and telephone surveys, field site testing, data analysis and report writing. The PRL follows scientifically rigorous and transparent methods in each phase of a research project. Research Coordinators are members of the American Association for Public Opinion Research (AAPOR) and use best practices when conducting all types of research. The PRL has particular expertise in conducting computer-assisted telephone interviews (referred to as CATI surveys). When conducting telephone surveys, all calls are displayed as being from the "U of A PRL", a procedure that assures recipients that the call is not from a telemarketer, and thus helps increase response rates. The PRL maintains a complement of highly skilled telephone interviewers and supervisors who are thoroughly trained in FOIPP requirements, respondent selection procedures, questionnaire instructions, and neutral probing. A subset of interviewers are specially trained to convince otherwise reluctant respondents to participate in the study, a practice that increases response rates and lowers selection bias. PRL staff monitors data collection on a daily basis to allow any necessary adjustments to the volume and timing of calls and respondent selection criteria. The Population Research Laboratory (PRL) administered the 2012 Alberta Survey B. 
This survey of households across the province of Alberta continues to enable academic researchers, government departments, and non-profit organizations to explore a wide range of topics in a structured research framework and environment. Sponsors' research questions are asked together with demographic questions in a telephone interview of Alberta households. The data consist of information from 1,207 Alberta residents interviewed between June 5, 2012 and June 27, 2012. The response rate, calculated as the number of people who participated in the survey divided by the number selected in the eligible sample, was 27.6% for survey B. The subject areas included in the 2012 Alberta Survey B cover socio-demographic and background variables such as household composition, age, gender, marital status, highest level of education, household income, religion, ethnic background, place of birth, employment status, home ownership, political party support and perceptions of financial status, as well as the topics of public health and injury control, tobacco reduction, activity limitations and personal directives, unions, politics and health.
The Gallup Poll Social Series (GPSS) is a set of public opinion surveys designed to monitor U.S. adults' views on numerous social, economic, and political topics. The topics are arranged thematically across 12 surveys. Gallup administers these surveys during the same month every year and includes the survey's core trend questions in the same order each administration. Using this consistent standard allows for unprecedented analysis of changes in trend data that are not susceptible to question order bias and seasonal effects.
Introduced in 2001, the GPSS is the primary method Gallup uses to update several hundred long-term Gallup trend questions, some dating back to the 1930s. The series also includes many newer questions added to address contemporary issues as they emerge.
The dataset currently includes responses from up to and including 2025.
Gallup conducts one GPSS survey per month, with each devoted to a different topic, as follows:
January: Mood of the Nation
February: World Affairs
March: Environment
April: Economy and Finance
May: Values and Beliefs
June: Minority Rights and Relations (discontinued after 2016)
July: Consumption Habits
August: Work and Education
September: Governance
October: Crime
November: Health
December: Lifestyle (conducted 2001-2008)
The core questions of the surveys differ each month, but several questions assessing the state of the nation are standard on all 12: presidential job approval, congressional job approval, satisfaction with the direction of the U.S., assessment of the U.S. job market, and an open-ended measurement of the nation's "most important problem." Additionally, Gallup includes extensive demographic questions on each survey, allowing for in-depth analysis of trends.
Interviews are conducted with U.S. adults aged 18 and older living in all 50 states and the District of Columbia using a dual-frame design, which includes both landline and cellphone numbers. Gallup samples landline and cellphone numbers using random-digit-dial methods. Gallup purchases samples for this study from Survey Sampling International (SSI). Gallup chooses landline respondents at random within each household based on which member had the next birthday. Each sample of national adults includes a minimum quota of 70% cellphone respondents and 30% landline respondents, with additional minimum quotas by time zone within region. Gallup conducts interviews in Spanish for respondents who are primarily Spanish-speaking.
Gallup interviews a minimum of 1,000 U.S. adults aged 18 and older for each GPSS survey. Samples for the June Minority Rights and Relations survey are significantly larger because Gallup includes oversamples of Blacks and Hispanics to allow for reliable estimates among these key subgroups.
Gallup weights samples to correct for unequal selection probability, nonresponse, and double coverage of landline and cellphone users in the two sampling frames. Gallup also weights its final samples to match the U.S. population according to gender, age, race, Hispanic ethnicity, education, region, population density, and phone status (cellphone only, landline only, both, and cellphone mostly).
Demographic weighting targets are based on the most recent Current Population Survey figures for the aged 18 and older U.S. population. Phone status targets are based on the most recent National Health Interview Survey. Population density targets are based on the most recent U.S. Census.
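The two-step weighting described above (base weights correcting for selection probability and frame overlap, then calibration to population margins) can be sketched with a generic raking (iterative proportional fitting) routine. This is an illustrative sketch on made-up toy data, not Gallup's actual procedure; the function name, categories, and targets are all hypothetical.

```python
import numpy as np

def rake(weights, groups, targets, n_iter=100):
    """Adjust weights so each categorical margin matches its target share.
    groups: one label array per weighting dimension (e.g. gender, region).
    targets: one dict per dimension mapping label -> target proportion.
    Generic raking sketch; not Gallup's implementation."""
    w = np.asarray(weights, dtype=float)
    for _ in range(n_iter):
        for labels, target in zip(groups, targets):
            for lab, share in target.items():
                mask = labels == lab
                # Scale this category so its weighted share hits the target.
                w[mask] *= share * w.sum() / w[mask].sum()
    return w

# Toy sample: 2 men, 3 women, equal starting weights; target a 50/50 split.
sex = np.array(["m", "m", "f", "f", "f"])
w = rake(np.ones(5), [sex], [{"m": 0.5, "f": 0.5}])
```

After raking, the weighted totals of the two groups are equal even though the raw sample is unbalanced; with several dimensions the loop cycles over each margin in turn until all converge.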
The year appended to each table name represents when the data was last updated. For example, January: Mood of the Nation - 2025 has survey data collected up to and including 2025.
For more information about what survey questions were asked over time, see the Supporting Files.
https://www.icpsr.umich.edu/web/ICPSR/studies/2398/terms
The principal purposes of this national longitudinal study of the higher education system in the United States are to describe the characteristics of new college freshmen and to explore the effects of college on students. For each wave of this survey, students complete a questionnaire during freshman orientation or registration containing some 200 items covering information on academic skills and preparation, high school activities and experiences, educational and career plans, majors and careers, student values, financing college, and a variety of demographic questions such as sex, age, parental education and occupation, household income, race, religious preference, and state of birth. Specific questions asked of respondents in the 1968 survey included average grade in secondary school, how many colleges they had applied to for admission, accomplishments during their high school years, highest academic degree they intended to obtain, concerns about financing their education, if they were a twin, source of financing for the first year of school, academic standards and individual ranking at their high schools, size of locality in which they lived when growing up, and what they hoped to accomplish in college. Respondents were also asked to list their probable career occupation, first, second, and least appealing major field of study, and activities they engaged in during their previous year in school. Also elicited were respondents' opinions on the importance of various individuals and events in their decision to enroll in college, assessments of achieving certain goals during their college years, and general attitudes about faculty and other students.
The City of Norfolk is committed to using data to inform decisions and allocate resources. An important source of data is input from residents about their priorities and satisfaction with the services we provide. Norfolk last conducted a citywide survey of residents in 2022.
To provide up-to-date information regarding resident priorities and satisfaction, Norfolk contracted with ETC Institute to conduct a survey of residents. This survey was conducted in May and June 2024; surveys were sent via the U.S. Postal Service, and respondents were given the choice of responding by mail or online. This survey represents a random and statistically valid sample of residents from across the city, including each Ward. ETC Institute monitored responses and followed up to ensure all sections of the city were represented. Additionally, an opportunity was provided for residents not included in the random sample to take the survey and express their views. This dataset includes all random sample survey data including demographic information; it excludes free-form comments to protect privacy. It is grouped by Question Category, Question, Response, Demographic Question, and Demographic Question Response. This dataset will be updated every two years.
https://dataverse.harvard.edu/api/datasets/:persistentId/versions/1.2/customlicense?persistentId=doi:10.7910/DVN/WWJAZA
This is a longitudinal study of the class of 1984 at Harvard and Tufts medical schools. The purpose of the study was to identify and describe experiences of stress in the lives and training of medical students, to determine the nature of the personal and environmental resources that students used to cope with stress, and to assess the effectiveness of these adaptational responses. All entering students at the two schools were invited to participate in the questionnaire phase of the research during orientation week in September, 1980. Data were collected in this first wave from 265 students, approximately 85% of the two first year classes. In addition to the demographic questions, the package of instruments included a self-esteem scale, a locus of control scale, sex-role and social support measures, TATs, Habits of Nervous Tension, Life Conditions Questionnaire, the Multiple Affect Checklist, and depression and anxiety scales. A subsample of 64 students was interviewed during the first year; this group contained equal numbers of men and women and equal numbers of nonminorities and minorities. Additional waves of interview data were collected in the third and fourth years; questionnaire data were again collected in seven subsequent years. The interviews were both semi-structured and open-ended, and covered contextual information, experiences of stress, feelings about oneself, attitudes about medicine and career, and moral development measures. The Murray Research Archive has original record paper data from the first wave of the study, as well as original record paper data from a pilot study. In addition, the Murray Archive holds interview transcripts from the first and third waves of data collection.
The Afrobarometer is a comparative series of public attitude surveys that assess African citizens' attitudes toward democracy and governance, markets, and civil society, among other topics. The surveys have been undertaken at periodic intervals since 1999, and coverage has increased over time. Round 1 (1999-2001) initially covered 7 countries and was later extended to 12. Round 2 (2002-2004) surveyed citizens in 16 countries, Round 3 (2005-2006) in 18, Round 4 (2008) in 20, Round 5 (2011-2013) in 34, Round 6 (2014-2015) in 36, and Round 7 (2016-2018) in 34. Round 8 (2019-2021) covered 34 countries.
National coverage
Individual
The sample universe for Afrobarometer surveys includes all citizens of voting age within the country. In other words, we exclude anyone who is not a citizen and anyone who has not attained this age (usually 18 years) on the day of the survey. Also excluded are areas determined to be either inaccessible or not relevant to the study, such as those experiencing armed conflict or natural disasters, as well as national parks and game reserves. As a matter of practice, we have also excluded people living in institutionalized settings, such as students in dormitories and persons in prisons or nursing homes.
Sample survey data [ssd]
Afrobarometer Sampling Procedure
Afrobarometer uses national probability samples designed to yield a representative cross-section of all citizens of voting age in a given country. The goal is to give every adult citizen an equal and known chance of being selected for an interview. This is achieved by:
• using random selection methods at every stage of sampling;
• sampling at all stages with probability proportionate to population size wherever possible, so that larger (i.e., more populated) geographic units have a proportionally greater probability of being chosen for the sample.
The sampling universe normally includes all citizens age 18 and older. As a standard practice, we exclude people living in institutionalized settings, such as students in dormitories, patients in hospitals, and persons in prisons or nursing homes. Occasionally, we must also exclude people living in areas determined to be inaccessible due to conflict or insecurity. Any such exclusion is noted in the technical information report (TIR) that accompanies each data set.
Sample size and design
Samples usually include either 1,200 or 2,400 cases. A randomly selected sample of n=1,200 cases allows inferences to national adult populations with a margin of sampling error of no more than +/-2.8% at a 95% confidence level. With a sample size of n=2,400, the margin of error decreases to +/-2.0% at the same confidence level.
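These margins of error follow from the standard formula for a proportion under simple random sampling, z * sqrt(p(1-p)/n), evaluated at the most conservative value p = 0.5. A quick check:

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """Half-width of a 95% confidence interval for a proportion,
    assuming simple random sampling and the worst case p = 0.5."""
    return z * math.sqrt(p * (1 - p) / n)

print(round(margin_of_error(1200) * 100, 1))  # 2.8 percentage points
print(round(margin_of_error(2400) * 100, 1))  # 2.0 percentage points
```

Note that this is the textbook formula for simple random samples; clustered designs like Afrobarometer's carry an additional design effect that this simple check ignores.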
The sample design is a clustered, stratified, multi-stage, area probability sample. Specifically, we first stratify the sample according to the main sub-national unit of government (state, province, region, etc.) and by urban or rural location.
Area stratification reduces the likelihood that distinctive ethnic or language groups are left out of the sample. Afrobarometer occasionally oversamples certain populations that are politically significant within a country to ensure that the sub-sample is large enough to be analysed. Any oversampling is noted in the TIR.
Sample stages
Samples are drawn in either four or five stages:
Stage 1: In rural areas only, the first stage is to draw secondary sampling units (SSUs). SSUs are not used in urban areas, and in some countries they are not used in rural areas. See the TIR that accompanies each data set for specific details on the sample in any given country.
Stage 2: We randomly select primary sampling units (PSUs).
Stage 3: We then randomly select sampling start points.
Stage 4: Interviewers then randomly select households.
Stage 5: Within the household, the interviewer randomly selects an individual respondent. Each interviewer alternates in each household between interviewing a man and interviewing a woman to ensure gender balance in the sample.
To keep the costs and logistics of fieldwork within manageable limits, eight interviews are clustered within each selected PSU.
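The probability-proportionate-to-size selection of PSUs described above can be illustrated with a systematic PPS routine. This is a generic sketch on hypothetical district data, not Afrobarometer's actual field procedure; the function name and the districts are invented for illustration.

```python
import random

def pps_systematic(units, k, seed=None):
    """Select k units with probability proportionate to size, using
    systematic sampling along the cumulative size scale.
    units: list of (name, population_size) pairs. Illustrative sketch only."""
    rng = random.Random(seed)
    total = sum(size for _, size in units)
    interval = total / k
    start = rng.uniform(0, interval)  # random start within the first interval
    chosen, cum = [], 0.0
    it = iter(units)
    name, size = next(it)
    for i in range(k):
        target = start + i * interval
        # Advance through the cumulative size scale until the target falls
        # inside the current unit's span.
        while cum + size < target:
            cum += size
            name, size = next(it)
        chosen.append(name)
    return chosen

# Hypothetical districts with population sizes; larger districts are more
# likely to be hit by one of the systematic selection points.
districts = [("A", 5000), ("B", 1500), ("C", 3000), ("D", 500)]
sample = pps_systematic(districts, 2, seed=0)
```

A unit whose size exceeds the sampling interval can be selected more than once, which in practice means drawing multiple clusters from it.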
Benin
- Sample size: 1,200
- Sampling frame: 2013 sampling frame updated from the General Population and Housing Census (RGPH 4)
- Sample design: Nationally representative, random, clustered, stratified, multi-stage area probability sampling
- Stratification: Region, urban-rural distribution
- Stages: PSUs (from strata), start points, households, respondents
- PSU selection: Probability proportional to population size (PPPS)
- Cluster size: 8 households per PSU
- Household selection: Random choice of the starting point, followed by sampling at an interval of 5/10 households
- Respondent selection: Gender quota achieved by alternating interviews between men and women; potential respondents (i.e., household members) of the appropriate gender are listed, then the computer randomly selects the individual
Face-to-face [f2f]
The Round 8 questionnaire has been developed by the Questionnaire Committee after reviewing the findings and feedback obtained in previous Rounds, and securing input on preferred new topics from a host of donors, analysts, and users of the data.
The questionnaire consists of three parts:
1. Part 1 captures the steps for selecting households and respondents and includes the introduction to the respondent (pp. 1-4). This section is filled in by the fieldworker.
2. Part 2 covers the core attitudinal and demographic questions that are asked by the fieldworker and answered by the respondent (Q1-Q100).
3. Part 3 includes contextual questions about the setting and atmosphere of the interview and collects information on the fieldworker. This section is completed by the fieldworker (Q101-Q123).
The response rate was 80%.
The National Health and Nutrition Examination Surveys (NHANES) is a program of studies designed to assess the health and nutritional status of adults and children in the United States. The NHANES combines personal interviews and physical examinations, which focus on different population groups or health topics. These surveys were conducted by the National Center for Health Statistics (NCHS) on a periodic basis from 1971 to 1994. In 1999, the NHANES became a continuous program with a changing focus on a variety of health and nutrition measurements designed to meet current and emerging concerns. The sample for the survey is selected to represent the U.S. population of all ages. Many of the NHANES 2001-2002 questions were also asked in NHANES II (1976-1980), Hispanic HANES (1982-1984), and NHANES III (1988-1994). New questions were added to the survey based on recommendations from survey collaborators, NCHS staff, and other interagency work groups.
In the 2001-2002 wave, the NHANES includes more than 100 datasets. Most have been combined into three datasets for convenience. Each starts with the demographic dataset and includes datasets of a specific type.
1. National Health and Nutrition Examination Survey (NHANES), Demographic & Examination Data, 2001-2002 (The base of the Demographic dataset + all data from medical examinations).
2. National Health and Nutrition Examination Survey (NHANES), Demographic & Laboratory Data, 2001-2002 (The base of the Demographic dataset + all data from medical laboratories).
3. National Health and Nutrition Examination Survey (NHANES), Demographic & Questionnaire Data, 2001-2002 (The base of the Demographic dataset + all data from questionnaires)
Not all files from the 2001-2002 wave are included, for two reasons, both of which relate to the merging variable (SEQN): for a subset of the files, SEQN is not a unique identifier for cases (i.e., some respondents have multiple records), or SEQN is not in the file at all. The following datasets from this wave of the NHANES are not included in these three files and can be found individually on the NHANES website at the CDC (https://www.cdc.gov/nchs/nhanes/index.htm):
Examination: Dietary Interview (Individual Foods File)
Examination: Dual Energy X-ray Absorptiometry (DXX)
Questionnaire: Analgesics Pain Relievers
Questionnaire: Dietary Supplement Use -- Ingredient Information
Questionnaire: Dietary Supplement Use -- Supplement Blend
Questionnaire: Dietary Supplement Use -- Supplement Information
Questionnaire: Drug Information
Questionnaire: Dietary Supplement Use -- Participants Use of Supplement
Questionnaire: Physical Activity Individual Activity File
Questionnaire: Prescription Medications
Variable SEQN is included for merging files within the waves. All data files should be sorted by SEQN.
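For readers working with the files programmatically, a within-wave merge on SEQN might look like the following pandas sketch. The data frames hold made-up toy values; the variable names RIAGENDR and BMXBMI stand in for real NHANES columns purely for illustration.

```python
import pandas as pd

# Toy stand-ins for a demographics file and an examination file,
# both keyed on the respondent identifier SEQN.
demo = pd.DataFrame({"SEQN": [1, 2, 3], "RIAGENDR": [1, 2, 2]})
exam = pd.DataFrame({"SEQN": [2, 3, 4], "BMXBMI": [24.5, 30.1, 22.0]})

# A left merge keeps every demographic record; respondents without an
# examination record get missing values. Sort by SEQN afterwards, as
# the documentation recommends.
merged = demo.merge(exam, on="SEQN", how="left").sort_values("SEQN")
```

The choice of `how="left"` keeps the demographic file as the base, mirroring how the three combined datasets described above are built; an inner merge would instead drop respondents who lack examination data.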
Additional details of the design and content of each survey are available on the NHANES website at the CDC (https://www.cdc.gov/nchs/nhanes/index.htm).
https://www.gesis.org/en/institute/data-usage-terms
ALLBUS (GGSS - the German General Social Survey) is a biennial trend survey based on random samples of the German population. Established in 1980, its mission is to monitor attitudes, behavior, and social change in Germany. Each ALLBUS cross-sectional survey consists of one or two main question modules covering changing topics, a range of supplementary questions, and a core module providing detailed demographic information. Data on the interview and the interviewers are provided as well. Key topics generally follow a 10-year replication cycle; many individual indicators and item batteries are replicated at shorter intervals. The present data set contains socio-demographic variables from the ALLBUS 2021, harmonized to the standards developed in the KonsortSWD sub-project “Harmonized Variables” (Schneider et al., 2023). While there are established recommendations for the formulation of socio-demographic questionnaire items (e.g. the “Demographic Standards” by Hoffmeyer-Zlotnik et al., 2016), there were no such standards at the variable level. The KonsortSWD project closes this gap and establishes 32 standard variables for 19 socio-demographic characteristics contained in this dataset.
Attribution 4.0 (CC BY 4.0)https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Open Science in (Higher) Education – data of the February 2017 survey
This data set contains:
Survey structure
The survey includes 24 questions, and its structure can be separated into five major themes: material used in courses (5), OER awareness, usage and development (6), collaborative tools used in courses (2), assessment and participation options (5), and demographics (4). The last two questions are an open-text question about general issues on the topics and individual open education experiences, and a request to provide the respondent's e-mail address for further questions. The online survey was created with Limesurvey[1]. Several questions include filters, i.e. they were only shown if a participant had chosen a specific answer beforehand ([n/a] in the Excel file, [.] in SPSS).
Demographic questions
Demographic questions asked about the current position, the discipline, year of birth, and gender. The classification of research disciplines was adapted to the general disciplines at German higher education institutions. As we wanted a broad classification, we summarised several disciplines and came up with the following list, including the option “other” for respondents who did not identify with the proposed categories:
The classification of current job positions was likewise chosen according to common positions at German higher education institutions, including positions with teaching responsibility. Here, too, we included the option “other” for respondents who did not identify with the proposed categories:
We chose a free-text (numerical) field for the respondent's year of birth because we did not want to pre-classify respondents into age intervals. This leaves us the option of running different analyses on the answers and exploring possible correlations with respondents' age. A question about the country was left out, as the survey was designed for academics in Germany.
Remark on OER question
Data from earlier surveys revealed that academics are often confused about the proper definition of OER[2]. Some seem to understand OER as free resources, or refer only to open source software (Allen & Seaman, 2016, p. 11). Allen and Seaman (2016) decided to give a broad explanation of OER, avoiding details so as not to tempt participants to claim awareness; thus, there is a danger of introducing bias when giving an explanation. We decided not to give an explanation but to keep this question simple. We assume that someone either knows about OER or not; those who had not heard the term before probably do not use OER (at least not consciously) or create them.
Data collection
The target group of the survey was academics at German institutions of higher education, mainly universities and universities of applied sciences. To reach them, we sent the survey to diverse internal and external institutional mailing lists and via personal contacts. Included were discipline-based lists, lists from higher education and higher education didactics communities, and lists from open science and OER communities. Additionally, personal e-mails were sent to presidents and contact persons from those communities, and Twitter was used to spread the survey.
The survey was online from Feb 6th to March 3rd 2017, e-mails were mainly sent at the beginning and around mid-term.
Data clearance
We received 360 responses, of which Limesurvey counted 208 as complete and 152 as incomplete. Two responses were marked as incomplete but, after checking, turned out to be complete, and we added them to the complete responses. Thus, this data set includes 210 complete responses. Of the 150 remaining incomplete responses, 58 respondents did not answer the first question and 40 discontinued after it. The data show a constant decline in answers over the course of the survey; we did not detect any particular question with a high dropout rate. Incomplete responses were deleted and are not in this data set.
For data privacy reasons, we deleted seven variables automatically assigned by Limesurvey: submitdate, lastpage, startlanguage, startdate, datestamp, ipaddr, and refurl. We also deleted the answers to question No 24 (e-mail address).
References
Allen, E., & Seaman, J. (2016). Opening the Textbook: Educational Resources in U.S. Higher Education, 2015-16.
First results of the survey are presented in the poster:
Heck, Tamara, Blümel, Ina, Heller, Lambert, Mazarakis, Athanasios, Peters, Isabella, Scherp, Ansgar, & Weisel, Luzian. (2017). Survey: Open Science in Higher Education. Zenodo. http://doi.org/10.5281/zenodo.400561
Contact:
Open Science in (Higher) Education working group, see http://www.leibniz-science20.de/forschung/projekte/laufende-projekte/open-science-in-higher-education/.
[1] https://www.limesurvey.org
[2] The survey question about the awareness of OER gave a broad explanation, avoiding details to not tempt the participant to claim “aware”.