Pursuant to Local Laws 126, 127, and 128 of 2016, certain demographic data is collected voluntarily and anonymously from persons seeking social services. This data can be used by agencies and the public to better understand the demographic makeup of client populations and to better understand and serve residents of all backgrounds and identities. The data presented here has been collected through either electronic or paper surveys offered at the point of application for services. These surveys are anonymous. Each record represents an anonymized demographic profile of an individual applicant for social services, disaggregated by response option, agency, and program. Response options include information regarding ancestry, race, primary and secondary languages, English proficiency, gender identity, and sexual orientation.

Idiosyncrasies or Limitations: Note that while the dataset contains the total number of individuals who have identified their ancestry or languages spoken, because such data is collected anonymously, there may be instances of a single individual completing multiple voluntary surveys. Additionally, the survey being both voluntary and anonymous has advantages as well as disadvantages: it increases the likelihood of full and honest answers, but since it is not connected to the individual case, it does not directly inform delivery of services to the applicant. The paper and online versions of the survey ask the same questions, but free-form text is handled differently. Free-form text fields are expected to be entered in English, although the form is available in several languages. Surveys are presented in 11 languages.

Paper Surveys
1. Are optional
2. Survey taker is expected to specify the agency that provides service
3. Survey taker can skip or elect not to answer questions
4. Invalid/unreadable data may be entered for the survey date, or the date may be skipped
5. OCRing of free-form text fields may fail
6. Analytical value of free-form text answers is unclear

Online Surveys
1. Are optional
2. Agency is defaulted based on the URL
3. Some questions must be answered
4. Date of survey is automated
analyze the current population survey (cps) annual social and economic supplement (asec) with r. the annual march cps-asec has been supplying the statistics for the census bureau's report on income, poverty, and health insurance coverage since 1948. wow. the us census bureau and the bureau of labor statistics (bls) tag-team on this one. until the american community survey (acs) hit the scene in the early aughts (2000s), the current population survey had the largest sample size of all the annual general demographic data sets outside of the decennial census - about two hundred thousand respondents. this provides enough sample to conduct state- and a few large metro area-level analyses. your sample size will vanish if you start investigating subgroups by state - consider pooling multiple years. county-level is a no-no. despite the american community survey's larger size, the cps-asec contains many more variables related to employment, sources of income, and insurance - and can be trended back to harry truman's presidency. aside from questions specifically asked about an annual experience (like income), many of the questions in this march data set should be treated as point-in-time statistics. cps-asec generalizes to the united states non-institutional, non-active duty military population. the national bureau of economic research (nber) provides sas, spss, and stata importation scripts to create a rectangular file (rectangular data means only person-level records; household- and family-level information gets attached to each person). to import these files into r, the parse.SAScii function uses nber's sas code to determine how to import the fixed-width file, then RSQLite to put everything into a schnazzy database. you can try reading through the nber march 2012 sas importation code yourself, but it's a bit of a proc freak show.
this new github repository contains three scripts:

2005-2012 asec - download all microdata.R
download the fixed-width file containing household, family, and person records
import by separating this file into three tables, then merge 'em together at the person-level
download the fixed-width file containing the person-level replicate weights
merge the rectangular person-level file with the replicate weights, then store it in a sql database
create a new variable - one - in the data table

2012 asec - analysis examples.R
connect to the sql database created by the 'download all microdata' program
create the complex sample survey object, using the replicate weights
perform a boatload of analysis examples

replicate census estimates - 2011.R
connect to the sql database created by the 'download all microdata' program
create the complex sample survey object, using the replicate weights
match the sas output shown in the png file below

2011 asec replicate weight sas output.png
statistic and standard error generated from the replicate-weighted example sas script contained in this census-provided person replicate weights usage instructions document.

click here to view these three scripts

for more detail about the current population survey - annual social and economic supplement (cps-asec), visit:
the census bureau's current population survey page
the bureau of labor statistics' current population survey page
the current population survey's wikipedia article

notes: interviews are conducted in march about experiences during the previous year. the file labeled 2012 includes information (income, work experience, health insurance) pertaining to 2011. when you use the current population survey to talk about america, subtract a year from the data file name. as of the 2010 file (the interview focusing on america during 2009), the cps-asec contains exciting new medical out-of-pocket spending variables most useful for supplemental (medical spending-adjusted) poverty research.
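the fixed-width import step above can be sketched outside of r, too. here's a hedged python stand-in - the real column positions come from the nber sas importation script for the chosen year, so the three columns and two records below are made up purely for illustration:

```python
import sqlite3

# hypothetical (start, end, name) column specs -- the real layout comes from
# the nber sas importation script for the chosen cps-asec year
COLSPECS = [(0, 2, "hrectype"), (2, 7, "hseq"), (7, 10, "hnumper")]

def parse_line(line):
    """slice one fixed-width record into integer fields"""
    return tuple(int(line[start:end]) for start, end, _ in COLSPECS)

def load_fwf_to_sqlite(lines, db_path=":memory:"):
    """parse fixed-width records and drop them into a sqlite table"""
    conn = sqlite3.connect(db_path)
    cols = ", ".join(name for _, _, name in COLSPECS)
    conn.execute(f"CREATE TABLE asec ({cols})")
    conn.executemany("INSERT INTO asec VALUES (?, ?, ?)",
                     (parse_line(l) for l in lines))
    conn.commit()
    return conn

# two fake records: record type, household sequence, persons in household
sample = ["0100001 03", "0100002 02"]
conn = load_fwf_to_sqlite(sample)
print(conn.execute("SELECT * FROM asec").fetchall())
```

the real scripts additionally split household, family, and person records into separate tables and merge them back at the person level - same slicing idea, many more columns.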
confidential to sas, spss, stata, sudaan users: why are you still rubbing two sticks together after we've invented the butane lighter? time to transition to r. :D
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Context
The dataset tabulates the Lake View population distribution across 18 age groups. It lists the population in each age group along with each group's percentage of the total population of Lake View. The dataset can be utilized to understand the population distribution of Lake View by age. For example, using this dataset, we can identify the largest age group in Lake View.
Key observations
According to the 2021 American Community Survey, the largest age group in Lake View, AR was 50-54 years, with a population of 53 (11.57%). At the same time, the smallest age group in Lake View, AR was 35-39 years, with a population of 7 (1.53%). Source: U.S. Census Bureau American Community Survey (ACS) 2017-2021 5-Year Estimates.
When available, the data consists of estimates from the U.S. Census Bureau American Community Survey (ACS) 2017-2021 5-Year Estimates.
Age groups:
Variables / Data Columns
Good to know
Margin of Error
Data in the dataset are based on estimates and are subject to sampling variability and thus a margin of error. Neilsberg Research recommends using caution when presenting these estimates in your research.
Custom data
If you need custom data for your research project, report, or presentation, you can contact our research staff at research@neilsberg.com to discuss the feasibility of a custom tabulation on a fee-for-service basis.
The Neilsberg Research Team curates, analyzes, and publishes demographic and economic data from a variety of public and proprietary sources, each of which often includes multiple surveys and programs. The large majority of Neilsberg Research aggregated datasets and insights are made available for free download at https://www.neilsberg.com/research/.
This dataset is part of the main dataset for Lake View Population by Age. You can refer to it here.
A random sample of households was invited to participate in this survey. In the dataset, you will find the respondent-level data in each row, with the questions in each column. The numbers represent a scale option from the survey, such as 1=Excellent, 2=Good, 3=Fair, 4=Poor. The question stem, response options, and scale information for each field can be found in the "variable labels" and "value labels" sheets. VERY IMPORTANT NOTE: The scientific survey data were weighted, meaning that the demographic profile of respondents was compared to the demographic profile of adults in Bloomington from US Census data. Statistical adjustments were made to bring the respondent profile into balance with the population profile. This means that some records were given more "weight" and some records were given less weight. The weights that were applied are found in the field "wt". If you do not apply these weights, you will not obtain the same results as can be found in the report delivered to the City of Bloomington. The easiest way to replicate these results is likely to create pivot tables and use the sum of the "wt" field rather than a count of responses.
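As a sketch of that pivot-table advice, the weighted tabulation can be reproduced in a few lines of Python. The column names "q1" and the toy records below are hypothetical; the real field names come from the variable-labels sheet, and only "wt" is named in the dataset description:

```python
from collections import defaultdict

# Hypothetical respondent-level records: "q1" uses 1=Excellent..4=Poor,
# "wt" is the post-stratification weight supplied with the dataset.
responses = [
    {"q1": 1, "wt": 0.8},
    {"q1": 1, "wt": 1.2},
    {"q1": 2, "wt": 1.5},
    {"q1": 4, "wt": 0.5},
]

def weighted_distribution(rows, question):
    """Sum the 'wt' field within each response option -- the pivot-table
    equivalent of using sum of weights instead of a count of responses."""
    totals = defaultdict(float)
    for row in rows:
        totals[row[question]] += row["wt"]
    grand = sum(totals.values())
    return {opt: round(100 * w / grand, 1) for opt, w in sorted(totals.items())}

print(weighted_distribution(responses, "q1"))
```

With these toy weights, option 2 accounts for 37.5% of the weighted total even though it is only 25% of the raw responses, which is exactly why an unweighted count will not match the published report.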
The Pakistan Demographic and Health Survey PDHS 2017-18 was the fourth of its kind in Pakistan, following the 1990-91, 2006-07, and 2012-13 PDHS surveys.
The primary objective of the 2017-18 PDHS is to provide up-to-date estimates of basic demographic and health indicators. The PDHS provides a comprehensive overview of population, maternal, and child health issues in Pakistan. Specifically, the 2017-18 PDHS collected information on:
The information collected through the 2017-18 PDHS is intended to assist policymakers and program managers at the federal and provincial government levels, in the private sector, and at international organisations in evaluating and designing programs and strategies for improving the health of the country’s population. The data also provides information on indicators relevant to the Sustainable Development Goals.
National coverage
The survey covered all de jure household members (usual residents), children age 0-5 years, women age 15-49 years and men age 15-49 years resident in the household.
Sample survey data [ssd]
The sampling frame used for the 2017-18 PDHS is a complete list of enumeration blocks (EBs) created for the Pakistan Population and Housing Census 2017, which was conducted from March to May 2017. The Pakistan Bureau of Statistics (PBS) supported the sample design of the survey and worked in close coordination with NIPS. The 2017-18 PDHS represents the population of Pakistan including Azad Jammu and Kashmir (AJK) and the former Federally Administered Tribal Areas (FATA), which were not included in the 2012-13 PDHS. The results of the 2017-18 PDHS are representative at the national level and for the urban and rural areas separately. The survey estimates are also representative for the four provinces of Punjab, Sindh, Khyber Pakhtunkhwa, and Balochistan; for two regions including AJK and Gilgit Baltistan (GB); for Islamabad Capital Territory (ICT); and for FATA. In total, there are 13 second-level survey domains.
The 2017-18 PDHS followed a stratified two-stage sample design. The stratification was achieved by separating each of the eight regions into urban and rural areas. In total, 16 sampling strata were created. Samples were selected independently in every stratum through a two-stage selection process. Implicit stratification and proportional allocation were achieved at each of the lower administrative levels by sorting the sampling frame within each sampling stratum before sample selection, according to administrative units at different levels, and by using a probability-proportional-to-size selection at the first stage of sampling.
The first stage involved selecting sample points (clusters) consisting of EBs. EBs were drawn with a probability proportional to their size, which is the number of households residing in the EB at the time of the census. A total of 580 clusters were selected.
The second stage involved systematic sampling of households. A household listing operation was undertaken in all of the selected clusters, and a fixed number of 28 households per cluster was selected with an equal probability systematic selection process, for a total sample size of approximately 16,240 households. The household selection was carried out centrally at the NIPS data processing office. The survey teams only interviewed the pre-selected households. To prevent bias, no replacements and no changes to the pre-selected households were allowed at the implementing stages.
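The first-stage cluster selection described above can be sketched in Python. This is an illustrative implementation of probability-proportional-to-size (PPS) systematic selection, not NIPS's actual procedure, and the EB names and household counts are made up:

```python
import random

def pps_systematic(units, sizes, n, seed=0):
    """Select n units with probability proportional to size using systematic
    PPS: lay the sizes end-to-end on a line, pick a random start within the
    first interval, then step through at a fixed interval."""
    total = sum(sizes)
    interval = total / n
    start = random.Random(seed).uniform(0, interval)
    points = [start + i * interval for i in range(n)]
    chosen, cum, idx = [], 0, 0
    for unit, size in zip(units, sizes):
        cum += size
        # A unit is selected once for every sampling point falling inside
        # its cumulative-size segment.
        while idx < len(points) and points[idx] <= cum:
            chosen.append(unit)
            idx += 1
    return chosen

# Hypothetical EBs with household counts from the census frame
ebs = ["EB-%02d" % i for i in range(1, 11)]
households = [120, 80, 200, 150, 90, 60, 180, 110, 140, 70]
print(pps_systematic(ebs, households, n=3))
```

Note that systematic PPS can select a unit twice when its size exceeds the sampling interval; here the interval (400) exceeds every EB size, so the three selections are distinct.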
For further details on sample design, see Appendix A of the final report.
Face-to-face [f2f]
Six questionnaires were used in the 2017-18 PDHS: Household Questionnaire, Woman’s Questionnaire, Man’s Questionnaire, Biomarker Questionnaire, Fieldworker Questionnaire, and the Community Questionnaire. The first five questionnaires, based on The DHS Program’s standard Demographic and Health Survey (DHS-7) questionnaires, were adapted to reflect the population and health issues relevant to Pakistan. The Community Questionnaire was based on the instrument used in the previous rounds of the Pakistan DHS. Comments were solicited from various stakeholders representing government ministries and agencies, nongovernmental organisations, and international donors. The survey protocol was reviewed and approved by the National Bioethics Committee, Pakistan Health Research Council, and ICF Institutional Review Board. After the questionnaires were finalised in English, they were translated into Urdu and Sindhi. The 2017-18 PDHS used paper-based questionnaires for data collection, while computer-assisted field editing (CAFE) was used to edit the questionnaires in the field.
The processing of the 2017-18 PDHS data began simultaneously with the fieldwork. As soon as data collection was completed in each cluster, all electronic data files were transferred via IFSS to the NIPS central office in Islamabad. These data files were registered and checked for inconsistencies, incompleteness, and outliers. The field teams were alerted to any inconsistencies and errors. Secondary editing was carried out in the central office, which involved resolving inconsistencies and coding the open-ended questions. The NIPS data processing manager coordinated the exercise at the central office. The PDHS core team members assisted with the secondary editing. Data entry and editing were carried out using the CSPro software package. The concurrent processing of the data offered a distinct advantage as it maximised the likelihood of the data being error-free and accurate. The secondary editing of the data was completed in the first week of May 2018. The final cleaning of the data set was carried out by The DHS Program data processing specialist and completed on 25 May 2018.
A total of 15,671 households were selected for the survey, of which 15,051 were occupied. The response rates are presented separately for Pakistan, Azad Jammu and Kashmir, and Gilgit Baltistan. Of the 12,338 occupied households in Pakistan, 11,869 households were successfully interviewed, yielding a response rate of 96%. Similarly, the household response rates were 98% in Azad Jammu and Kashmir and 99% in Gilgit Baltistan.
In the interviewed households, 94% of ever-married women age 15-49 in Pakistan, 97% in Azad Jammu and Kashmir, and 94% in Gilgit Baltistan were interviewed. In the subsample of households selected for the male survey, 87% of ever-married men age 15-49 in Pakistan, 94% in Azad Jammu and Kashmir, and 84% in Gilgit Baltistan were successfully interviewed.
Overall, the response rates were lower in urban than in rural areas. The difference is slightly less pronounced for Azad Jammu and Kashmir and Gilgit Baltistan. The response rates for men are lower than those for women, as men are often away from their households for work.
The estimates from a sample survey are affected by two types of errors: nonsampling errors and sampling errors. Nonsampling errors are the results of mistakes made in implementing data collection and data processing, such as failure to locate and interview the correct household, misunderstanding of the questions on the part of either the interviewer or the respondent, and data entry errors. Although numerous efforts were made during the implementation of the 2017-18 Pakistan Demographic and Health Survey (2017-18 PDHS) to minimise this type of error, nonsampling errors are impossible to avoid and difficult to evaluate statistically.
Sampling errors, on the other hand, can be evaluated statistically. The sample of respondents selected in the 2017-18 PDHS is only one of many samples that could have been selected from the same population, using the same design and expected size. Each of these samples would yield results that differ somewhat from the results of the actual sample selected. Sampling errors are a measure of the variability among all possible samples. Although the degree of variability is not known exactly, it can be estimated from the survey results.
Sampling error is usually measured in terms of the standard error for a particular statistic (mean, percentage, etc.), which is the square root of the variance. The standard error can be used to calculate confidence intervals within which the true value for the population can reasonably be assumed to fall. For example, for any given statistic calculated from a sample survey, the value of that statistic will fall within a range of plus or minus two times the standard error of that statistic in 95% of all possible samples of identical size and design.
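As a worked example of the plus-or-minus-two-standard-errors rule (the numbers are illustrative, not PDHS estimates):

```python
import math

# Illustrative numbers only: a design-based variance of 1.44 for an
# estimated percentage of 45.0 from the survey.
variance = 1.44
se = math.sqrt(variance)   # standard error = sqrt(variance) = 1.2
estimate = 45.0

# Roughly 95% of samples of identical size and design would yield a value
# within plus or minus two standard errors of the estimate.
low, high = estimate - 2 * se, estimate + 2 * se
print(f"estimate {estimate}%, 95% interval ({low:.1f}%, {high:.1f}%)")
```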
The Armenia Demographic and Health Survey (ADHS) was a nationally representative sample survey designed to provide information on population and health issues in Armenia. The primary goal of the survey was to develop a single integrated set of demographic and health data, the first such data set pertaining to the population of the Republic of Armenia. In addition to integrating measures of reproductive, child, and adult health, another feature of the DHS survey is that the majority of data are presented at the marz level.The ADHS was conducted by the National Statistical Service and the Ministry of Health of the Republic of Armenia during October through December 2000. ORC Macro provided technical support for the survey through the MEASURE DHS+ project. MEASURE DHS+ is a worldwide project, sponsored by the USAID, with a mandate to assist countries in obtaining information on key population and health indicators. USAID/Armenia provided funding for the survey. The United Nations Children’s Fund (UNICEF)/Armenia provided support through the donation of equipment.The ADHS collected national- and regional-level data on fertility and contraceptive use, maternal and child health, adult health, and AIDS and other sexually transmitted diseases. The survey obtained detailed information on these issues from women of reproductive age and, on certain topics, from men as well. Data are presented by marz wherever sample size permits.The ADHS results are intended to provide the information needed to evaluate existing social programs and to design new strategies for improving the health of and health services for the people of Armenia. The ADHS also contributes to the growing international database on demographic and health-related variables.
The SHDS is a national sample survey designed to provide information on population, birth spacing, reproductive health, nutrition, maternal and child health, child survival, HIV/AIDS and sexually transmitted infections (STIs) in Somalia. The main objective of the SHDS was to provide evidence on the health and demographic characteristics of the Somali population to guide the development of programmes and the formulation of effective policies. This information would also help monitor and evaluate national, sub-national and sector development plans, including the Sustainable Development Goals (SDGs), both by the government and development partners. The target population for the SHDS was women aged 15-49 and children under age 5.
The SHDS 2020 was a nationally representative household survey.
The units of analysis for this survey are households, women aged 15-49, and children aged 0-5.
This sample survey covered women aged 15-49 and children aged 0-5 years.
Sample survey data [ssd]
Sample Design The sample for the SHDS was designed to provide estimates of key indicators for the country as a whole, for each of the eighteen pre-war geographical regions, which are the country's first-level administrative divisions, as well as separately for urban, rural and nomadic areas. With the exception of Banadir region, which is considered fully urban, each region was stratified into urban, rural and nomadic areas, yielding a total of 55 sampling strata. All three strata of Lower Shabelle and Middle Juba regions, as well as the rural and nomadic strata of Bay region, were completely excluded from the survey due to security reasons. A final total of 47 sampling strata formed the sampling frame. Through the use of up-to-date, high-resolution satellite imagery, as well as on-the-ground knowledge of staff from the respective ministries of planning, all dwelling structures were digitized in urban and rural areas. Enumeration Areas (EAs) were formed onscreen through a spatial count of dwelling structures in a Geographic Information System (GIS) software. Thereafter, a sample ground verification of the digitized structures was carried out for large urban and rural areas and necessary adjustments made to the frame.
Each EA created had a minimum of 50 and a maximum of 149 dwelling structures. A total of 10,525 EAs were digitized: 7,488 in urban areas and 3,037 in rural areas. However, because of security and accessibility constraints, not all digitized areas were included in the final sampling frame: 9,136 EAs (7,308 in urban and 1,828 in rural) formed the final frame. The nomadic frame comprised an updated list of temporary nomadic settlements (TNS) obtained from the nomadic link workers who are tied to these settlements. A total of 2,521 TNS formed the SHDS nomadic sampling frame. The SHDS followed a three-stage stratified cluster sample design in urban and rural strata, with probability proportional to size for the sampling of Primary Sampling Units (PSUs) and Secondary Sampling Units (SSUs) at the first and second stages respectively, and systematic sampling of households at the third stage. For the nomadic stratum, a two-stage stratified cluster sample design was applied, with probability proportional to size for sampling of PSUs at the first stage and systematic sampling of households at the second stage. To ensure that the survey precision is comparable across regions, PSUs were allocated equally to all regions, with slight adjustments in two regions. Within each stratum, a sample of 35 EAs was selected independently, with probability proportional to the number of digitized dwelling structures. In this first stage, a total of 1,433 EAs were allocated (urban - 770 EAs, rural - 488 EAs, and nomadic - 175 EAs), representing about 16 percent of the total frame of EAs. In the urban and rural selected EAs, all households were listed and information on births and deaths was recorded through the maternal mortality questionnaire. The data collected in this first phase was cleaned, and a summary of households listed per EA formed the sampling frames for the second phase.
In the second stage, 10 EAs were sampled out of the possible 35 that were listed, using probability proportional to the number of households. All households in each of these 10 EAs were serialized based on their location in the EA and 30 of these households sampled for the survey. The serialization was done to ensure distribution of the households interviewed for the survey in the EA sampled. A total of 220 EAs and 150 EAs were allocated to urban and rural strata respectively, while in the third stage, an average of 30 households were selected from the listed households in every EA to yield a total of 16,360 households from 538 EAs covered (220 EAs in urban, 147 EAs in rural and 171 EAs in nomadic) out of the sampled 545 EAs. In nomadic areas, a sample of 10 EAs (in this case TNS) were selected from each nomadic stratum, with probability proportional to the number of estimated households. A complete listing of households was carried out in the selected TNS followed by the selection of 30 households for the main survey interview. In those TNS with less than 30 households, all households were interviewed for the main survey. All eligible ever-married women aged 12 to 49 and never-married women aged 15 to 49 were interviewed in the selected households, while the household questionnaire was administered to all households selected. The maternal mortality questionnaire was administered to all households in each sampled TNS.
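The household stage — serialize all listed households in an EA, then take an equal-probability systematic sample of 30 — can be sketched as follows. This is illustrative only; the SHDS selection was carried out with its own tools, and the household count below is made up:

```python
import random

def systematic_sample(n_listed, n_select, seed=1):
    """Equal-probability systematic selection: choose a random start within
    the first sampling interval, then take every interval-th serialized
    household. The spread over the whole list is what the serialization of
    households by location is meant to exploit."""
    interval = n_listed / n_select
    start = random.Random(seed).uniform(0, interval)
    return [int(start + i * interval) for i in range(n_select)]

# e.g. 30 households out of 240 listed and serialized in one EA
picked = systematic_sample(240, 30)
print(picked[:5])
```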
Face-to-face [f2f]
A total of 16,360 households were selected for the sample, of which 15,870 were occupied. Of the occupied households, 15,826 were successfully interviewed, yielding a response rate of 99.7 percent. The SHDS 2020 interviewed 16,486 women: 11,876 ever-married women and 4,610 never-married women.
Sampling errors are important data quality parameters which give a measure of the precision of the survey estimates. They aid in determining the statistical reliability of survey estimates. The estimates from a sample survey are affected by two types of errors: non-sampling errors and sampling errors. Non-sampling errors are the results of mistakes made in implementing data collection and data processing, such as failure to locate and interview the correct household, misunderstanding of the questions on the part of either the interviewer or the respondent, and data entry errors. Although numerous efforts were made during the implementation of the Somaliland Health and Demographic Survey (SHDS 2020) to minimise this type of error, non-sampling errors are impossible to avoid and difficult to evaluate statistically.

Sampling errors, on the other hand, can be evaluated statistically. The sample of respondents selected in the SHDS 2020 is only one of many samples that could have been selected from the same population, using the same design and sample size. Each of these samples would yield results that differ somewhat from the results of the actual sample selected. Sampling errors are a measure of the variability among all possible samples. Although the degree of variability is not known exactly, it can be estimated from the survey results. Sampling error is usually measured in terms of the standard error for a particular statistic (mean, percentage, etc.), which is the square root of the variance. The standard error can be used to calculate confidence intervals within which the true value for the population can reasonably be assumed to fall. For example, for any given statistic calculated from a sample survey, the value of that statistic will fall within a range of plus or minus two times the standard error of that statistic in 95% of all possible samples of identical size and design.
If the sample of respondents had been selected by simple random sampling, it would have been possible to use straightforward formulas for calculating sampling errors. However, the SHDS 2020 sample was the result of a multi-stage stratified design, and, consequently, it was necessary to use more complex formulas. Sampling errors in the SHDS were estimated in R using a variance approximation procedure that accounts for the complex sample design: Taylor series linearisation. Non-linear estimates are approximated by linear ones for the purpose of estimating variance; the linear approximation is derived by taking the first-order Taylor series expansion. Standard variance estimation methods for linear statistics are then used to estimate the variance of the linearised estimator. The Taylor linearisation method treats any linear statistic such as a percentage or mean as a ratio estimate, r = y/x, where y represents the total sample value for variable y and x represents the total number of cases in the group or subgroup under consideration.
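A minimal numeric sketch of Taylor linearisation for the ratio r = y/x, using hypothetical cluster totals and, for simplicity, ignoring stratification and weights (which the actual SHDS estimation in R accounts for):

```python
import math

def ratio_se_linearized(y_totals, x_totals):
    """Taylor-linearized standard error of r = Y/X from PSU (cluster)
    totals, treating the clusters as a simple random sample and ignoring
    stratification and weights for simplicity."""
    n = len(y_totals)
    Y, X = sum(y_totals), sum(x_totals)
    r = Y / X
    # Linearized values z_i = y_i - r * x_i; by construction they sum to zero
    z = [yi - r * xi for yi, xi in zip(y_totals, x_totals)]
    var_r = (n / (n - 1)) * sum(zi ** 2 for zi in z) / X ** 2
    return r, math.sqrt(var_r)

# Hypothetical cluster totals: y = events of interest, x = cases per cluster
y = [12, 9, 15, 7, 11]
x = [40, 35, 50, 30, 45]
r, se = ratio_se_linearized(y, x)
print(round(r, 3), round(se, 4))
```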
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Context
The dataset tabulates the Clear Lake population distribution across 18 age groups. It lists the population in each age group along with each group's percentage of the total population of Clear Lake. The dataset can be utilized to understand the population distribution of Clear Lake by age. For example, using this dataset, we can identify the largest age group in Clear Lake.
Key observations
According to the 2021 American Community Survey, the largest age group in Clear Lake, IA was 65-69 years, with a population of 899 (11.72%). At the same time, the smallest age group in Clear Lake, IA was 30-34 years, with a population of 216 (2.82%). Source: U.S. Census Bureau American Community Survey (ACS) 2017-2021 5-Year Estimates.
When available, the data consists of estimates from the U.S. Census Bureau American Community Survey (ACS) 2017-2021 5-Year Estimates.
Age groups:
Variables / Data Columns
Good to know
Margin of Error
Data in the dataset are based on estimates and are subject to sampling variability and thus a margin of error. Neilsberg Research recommends using caution when presenting these estimates in your research.
Custom data
If you need custom data for your research project, report, or presentation, you can contact our research staff at research@neilsberg.com to discuss the feasibility of a custom tabulation on a fee-for-service basis.
The Neilsberg Research Team curates, analyzes, and publishes demographic and economic data from a variety of public and proprietary sources, each of which often includes multiple surveys and programs. The large majority of Neilsberg Research aggregated datasets and insights are made available for free download at https://www.neilsberg.com/research/.
This dataset is part of the main dataset for Clear Lake Population by Age. You can refer to it here.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
For years, we have relied on population surveys to keep track of regional public health statistics, including the prevalence of non-communicable diseases. Because of the cost and limitations of such surveys, we often do not have up-to-date data on health outcomes of a region. In this paper, we examined the feasibility of inferring regional health outcomes from socio-demographic data that are widely available and timely updated through national censuses and community surveys. Using data for 50 American states (excluding Washington DC) from 2007 to 2012, we constructed a machine-learning model to predict the prevalence of six non-communicable disease (NCD) outcomes (four NCDs and two major clinical risk factors), based on population socio-demographic characteristics from the American Community Survey. We found that regional prevalence estimates for non-communicable diseases can be reasonably predicted. The predictions were highly correlated with the observed data, both in the states included in the derivation model (median correlation 0.88) and in those excluded from the development for use as a completely separate validation sample (median correlation 0.85), demonstrating that the model had sufficient external validity to make good predictions, based on demographics alone, for areas not included in the model development. This highlights both the utility of this sophisticated approach to model development, and the vital importance of simple socio-demographic characteristics as both indicators and determinants of chronic disease.
For further detailed information about methodology, users should consult the Labour Force Survey User Guide, included with the APS documentation. For variable and value labelling and coding frames that are not included either in the data or in the current APS documentation, users are advised to consult the latest versions of the LFS User Guides, which are available from the ONS Labour Force Survey - User Guidance webpages.
Occupation data for 2021 and 2022
The ONS has identified an issue with the collection of some occupational data in 2021 and 2022 data files in a number of their surveys. While they estimate any impacts will be small overall, this will affect the accuracy of the breakdowns of some detailed (four-digit Standard Occupational Classification (SOC)) occupations, and data derived from them. None of ONS' headline statistics, other than those directly sourced from occupational data, are affected and you can continue to rely on their accuracy. The affected datasets have now been updated. Further information can be found in the ONS article published on 11 July 2023: Revision of miscoded occupational data in the ONS Labour Force Survey, UK: January 2021 to September 2022
APS Well-Being Datasets
From 2012-2015, the ONS published separate APS datasets aimed at providing initial estimates of subjective well-being, based on the Integrated Household Survey. In 2015 these were discontinued. A separate set of well-being variables and a corresponding weighting variable have been added to the April-March APS person datasets from A11M12 onwards. Further information on the transition can be found in the Personal well-being in the UK: 2015 to 2016 article on the ONS website.
APS disability variables
Over time, there have been some updates to disability variables in the APS. An article explaining the quality assurance investigations on these variables that have been conducted so far is available on the ONS Methodology webpage.
The Secure Access data have more restrictive access conditions than those made available under the standard EUL. Prospective users will need to gain ONS Accredited Researcher status, complete an extra application form and demonstrate to the data owners exactly why they need access to the additional variables. Users are strongly advised to first obtain the standard EUL version of the data to see if they are sufficient for their research requirements.
CC0 1.0 Universal Public Domain Dedicationhttps://creativecommons.org/publicdomain/zero/1.0/
License information was derived automatically
analyze the national health and nutrition examination survey (nhanes) with r nhanes is this fascinating survey where doctors and dentists accompany survey interviewers in a little mobile medical center that drives around the country. while the survey folks are interviewing people, the medical professionals administer laboratory tests and conduct a real doctor's examination. the blood work and medical exam allow researchers like you and me to answer tough questions like, "how many people have diabetes but don't know they have diabetes?" conducting the lab tests and the physical isn't cheap, so a new nhanes data set becomes available once every two years and only includes about twelve thousand respondents. since the number of respondents is so small, analysts often pool multiple years of data together. the replication scripts below give a few different examples of how multiple years of data can be pooled with r. the survey gets conducted by the centers for disease control and prevention (cdc), and generalizes to the united states non-institutional, non-active duty military population. most of the data tables produced by the cdc include only a small number of variables, so importation with the foreign package's read.xport function is pretty straightforward. but that makes merging the appropriate data sets trickier, since it might not be clear what to pull for which variables. for every analysis, start with the table with 'demo' in the name -- this file includes basic demographics, weighting, and complex sample survey design variables. since it's quick to download the files directly from the cdc's ftp site, there's no massive ftp download automation script.
this new github repository contains five scripts:

2009-2010 interview only - download and analyze.R
- download, import, save the demographics and health insurance files onto your local computer
- load both files, limit them to the variables needed for the analysis, merge them together
- perform a few example variable recodes
- create the complex sample survey object, using the interview weights
- run a series of pretty generic analyses on the health insurance questions

2009-2010 interview plus laboratory - download and analyze.R
- download, import, save the demographics and cholesterol files onto your local computer
- load both files, limit them to the variables needed for the analysis, merge them together
- perform a few example variable recodes
- create the complex sample survey object, using the mobile examination component (mec) weights
- perform a direct-method age-adjustment and match figure 1 of this cdc cholesterol brief

replicate 2005-2008 pooled cdc oral examination figure.R
- download, import, save, pool, recode, create a survey object, run some basic analyses
- replicate figure 3 from this cdc oral health databrief - the whole barplot

replicate cdc publications.R
- download, import, save, pool, merge, and recode the demographics file plus cholesterol laboratory, blood pressure questionnaire, and blood pressure laboratory files
- match the cdc's example sas and sudaan syntax file's output for descriptive means
- match the cdc's example sas and sudaan syntax file's output for descriptive proportions
- match the cdc's example sas and sudaan syntax file's output for descriptive percentiles

replicate human exposure to chemicals report.R (user-contributed)
- download, import, save, pool, merge, and recode the demographics file plus urinary bisphenol a (bpa) laboratory files
- log-transform some of the columns to calculate the geometric means and quantiles
- match the 2007-2008 statistics shown on pdf page 21 of the cdc's fourth edition of the report

click here to view these five scripts for
more detail about the national health and nutrition examination survey (nhanes), visit: the cdc's nhanes homepage the national cancer institute's page of nhanes web tutorials notes: nhanes includes interview-only weights and interview + mobile examination component (mec) weights. if you only use questions from the basic interview in your analysis, use the interview-only weights (the sample size is a bit larger). i haven't really figured out a use for the interview-only weights -- nhanes draws most of its power from the combination of the interview and the mobile examination component variables. if you're only using variables from the interview, see if you can use a data set with a larger sample size like the current population survey (cps), national health interview survey (nhis), or medical expenditure panel survey (meps) instead. confidential to sas, spss, stata, sudaan users: why are you still riding around on a donkey after we've invented the internal combustion engine? time to transition to r. :D
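the pooling-and-weighting idea above can be sketched outside of r too. here's a hedged python toy (all values invented; SEQN and the WTMEC2YR/WTMEC4YR names follow nhanes convention, but the real workflow is the r scripts above) showing why pooled cycles divide their two-year weights by the number of cycles, and why everything merges back to the demo file on SEQN:

```python
# Toy records standing in for two NHANES 2-year cycles. SEQN (respondent id)
# and WTMEC2YR (MEC exam weight) are real NHANES variable names; all the
# values here are invented.
demo_0910 = [{"SEQN": 1, "WTMEC2YR": 12000.0}, {"SEQN": 2, "WTMEC2YR": 8000.0}]
demo_1112 = [{"SEQN": 3, "WTMEC2YR": 10000.0}, {"SEQN": 4, "WTMEC2YR": 10000.0}]
lab = {1: 1, 2: 0, 3: 1, 4: 0}  # e.g. a 0/1 indicator from a laboratory file

# Pool two 2-year cycles into one 4-year file: divide each 2-year weight by
# the number of cycles so the pooled weights still sum to the population size.
n_cycles = 2
pooled = []
for cycle in (demo_0910, demo_1112):
    for rec in cycle:
        merged = dict(rec)
        merged["WTMEC4YR"] = rec["WTMEC2YR"] / n_cycles
        merged["flag"] = lab[rec["SEQN"]]  # merge on SEQN, as with the demo file
        pooled.append(merged)

# Weighted prevalence of the flag in the pooled sample.
num = sum(r["WTMEC4YR"] * r["flag"] for r in pooled)
den = sum(r["WTMEC4YR"] for r in pooled)
print(round(num / den, 3))
```

this toy skips the complex survey design (strata and psus); in practice you'd hand the pooled file and weights to a survey package rather than computing a plain weighted mean.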
https://www.iza.org/wc/dataverse/IIL-1.0.pdfhttps://www.iza.org/wc/dataverse/IIL-1.0.pdf
The IZA Evaluation Dataset Survey (IZA ED) was developed in order to obtain reliable longitudinal estimates for the impact of Active Labor Market Policies (ALMP). Moreover, it is suitable for studying the processes of job search and labor market reintegration. The data allow analysis of dynamics with respect to a rich set of individual and labor market characteristics. It covers the initial period of unemployment as well as long-term outcomes, for a total period of up to 3 years after unemployment entry. A longitudinal questionnaire records monthly labor market activities and their duration in detail for the mentioned period. These activities include, for example, employment, unemployment, ALMP participation, and other training. Available information covers employment status, occupation, sector, and related earnings, hours, unemployment benefits or other transfer payments. A cross-sectional questionnaire contains all basic information, including the process of entering into unemployment, and demographics. The entry into unemployment describes detailed job search behavior such as search intensity, search channels and the role of the Employment Agency. Moreover, reservation wages and individual expectations about leaving unemployment or participating in ALMP programs are recorded. The available demographic information covers employment status, occupation and sector, as well as specifics about citizenship and ethnic background, educational levels, number and age of children, household structure and income, family background, health status, and workplace as well as place of residence regions. The survey also provides detailed information about treatment by the unemployment insurance authorities, imposed labor market policies, benefit receipt, and sanctions. Additionally, the survey focuses on individual characteristics and behavior.
Such co-variates of individuals comprise social networks, ethnic and migration background, relations and identity, personality traits, cognitive and non-cognitive skills, life and job satisfaction, risky behavior, attitudes and preferences. The main advantages of the IZA ED are the large sample size of unemployed individuals, the accuracy of employment histories, the innovative and rich set of individual co-variates and the fact that the survey measures important characteristics shortly after entry into unemployment.
Description and Purpose

These data include the individual responses for the City of Tempe Annual Community Survey conducted by ETC Institute. These data help determine priorities for the community as part of the City's on-going strategic planning process. Averaged Community Survey results are used as indicators for several city performance measures. The summary data for each performance measure is provided as an open dataset for that measure (separate from this dataset). The performance measures with indicators from the survey include the following (as of 2022):

1. Safe and Secure Communities
- 1.04 Fire Services Satisfaction
- 1.06 Crime Reporting
- 1.07 Police Services Satisfaction
- 1.09 Victim of Crime
- 1.10 Worry About Being a Victim
- 1.11 Feeling Safe in City Facilities
- 1.23 Feeling of Safety in Parks

2. Strong Community Connections
- 2.02 Customer Service Satisfaction
- 2.04 City Website Satisfaction
- 2.05 Online Services Satisfaction Rate
- 2.15 Feeling Invited to Participate in City Decisions
- 2.21 Satisfaction with Availability of City Information

3. Quality of Life
- 3.16 City Recreation, Arts, and Cultural Centers
- 3.17 Community Services Programs
- 3.19 Value of Special Events
- 3.23 Right of Way Landscape Maintenance
- 3.36 Quality of City Services

4. Sustainable Growth & Development
- No performance measures in this category presently relate directly to the Community Survey

5. Financial Stability & Vitality
- No performance measures in this category presently relate directly to the Community Survey

Methods

The survey is mailed to a random sample of households in the City of Tempe. Follow-up emails and texts are also sent to encourage participation. A link to the survey is provided with each communication. To prevent people who do not live in Tempe or who were not selected as part of the random sample from completing the survey, everyone who completed the survey was required to provide their address. These addresses were then matched to those used for the random representative sample.
If the respondent’s address did not match, the response was not used. To better understand how services are being delivered across the city, individual results were mapped to determine overall distribution across the city. Additionally, demographic data were used to monitor the distribution of responses to ensure the responding population of each survey is representative of the city population.

Processing and Limitations

The location data in this dataset is generalized to the block level to protect privacy. This means that only the first two digits of an address are used to map the location. When the data are shared with the city, only the latitude/longitude of the block-level address points are provided. This results in points that overlap. In order to better visualize the data, overlapping points were randomly dispersed to remove overlap. The result of these two adjustments ensures that points are not related to a specific address, but are still close enough to allow insights about service delivery in different areas of the city. This data is the weighted data provided by the ETC Institute, which is used in the final published PDF report. The 2022 Annual Community Survey report is available on data.tempe.gov. The individual survey questions as well as the definition of the response scale (for example, 1 means “very dissatisfied” and 5 means “very satisfied”) are provided in the data dictionary.

Additional Information
Source: Community Attitude Survey
Contact (author): Wydale Holmes
Contact E-Mail (author): wydale_holmes@tempe.gov
Contact (maintainer): Wydale Holmes
Contact E-Mail (maintainer): wydale_holmes@tempe.gov
Data Source Type: Excel table
Preparation Method: Data received from vendor after report is completed
Publish Frequency: Annual
Publish Method: Manual
Data Dictionary
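The two privacy adjustments described above, block-level generalization and random dispersal of overlapping points, might look roughly like the following hypothetical sketch. The address format, jitter size, and function names are all invented; the dataset's actual processing is done by the vendor and the city.

```python
import random

def block_level(address: str) -> str:
    """Keep only the first two digits of the house number (e.g. 1234 -> 12xx)."""
    number, street = address.split(" ", 1)
    return number[:2] + "x" * (len(number) - 2) + " " + street

def disperse(points, jitter=0.0005, seed=42):
    """Randomly offset points that share identical coordinates so they
    no longer overlap when mapped (jitter magnitude is invented)."""
    rng = random.Random(seed)
    seen, out = set(), []
    for lat, lon in points:
        if (lat, lon) in seen:  # overlapping block-level point: nudge it
            lat += rng.uniform(-jitter, jitter)
            lon += rng.uniform(-jitter, jitter)
        seen.add((lat, lon))
        out.append((lat, lon))
    return out

print(block_level("1234 Mill Ave"))
pts = disperse([(33.4255, -111.94), (33.4255, -111.94)])
print(len(set(pts)))
```

The net effect matches the description: no point maps back to a specific address, but each stays close enough to its block to support neighborhood-level analysis.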
Attribution 4.0 (CC BY 4.0)https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
The Trinidad and Tobago DHS survey, a national-level self-weighting random sample survey, was funded by the United States Agency for International Development (US/AID) and executed by the Family Planning Association of Trinidad and Tobago (FPATT). Technical assistance was provided by the Demographic and Health Surveys Program at the Institute for Resource Development (IRD), a subsidiary of Westinghouse located in Columbia, Maryland. The sampling frame for the TTDHS was the Continuous Sample Survey of Population (CSSP), an ongoing survey conducted by the Central Statistical Office based on the 1980 Population and Housing Census. The TTDHS used a household schedule to collect information on residents of selected households, and to identify women eligible for the individual questionnaire. The individual questionnaire was based on DHS's Model "A" Questionnaire for High Contraceptive Prevalence countries, which was modified for use in Trinidad and Tobago. It covered four main areas: (1) background information on the respondent, her partner and marital status, (2) fertility and fertility preferences, (3) contraception, and (4) the health of children. The short-term objective of the Trinidad and Tobago Demographic and Health Survey (TTDHS) is to collect and analyse data on the demographic characteristics of women in the reproductive years, and the health status of their young children. Policymakers and programme managers in public and private agencies will be able to utilize the data in designing and administering programmes. The long-term objective of the project is to enhance the ability of organisations involved in the TTDHS to undertake surveys of excellent technical quality.
The Annual Population Survey (APS) is a major survey series, which aims to provide data that can produce reliable estimates at local authority level. Key topics covered in the survey include education, employment, health and ethnicity. The APS comprises key variables from the Labour Force Survey (LFS) (held at the UK Data Archive under GN 33246), all of its associated LFS boosts and the APS boost. Thus, the APS combines results from five different sources: the LFS (waves 1 and 5); the English Local Labour Force Survey (LLFS), the Welsh Labour Force Survey (WLFS), the Scottish Labour Force Survey (SLFS) and the Annual Population Survey Boost Sample (APS(B) - however, this ceased to exist at the end of December 2005, so APS data from January 2006 onwards will contain all the above data apart from APS(B)). Users should note that the LLFS, WLFS, SLFS and APS(B) are not held separately at the UK Data Archive. For further detailed information about methodology, users should consult the Labour Force Survey User Guide, selected volumes of which have been included with the APS documentation for reference purposes (see 'Documentation' table below).
The APS aims to provide enhanced annual data for England, covering a target sample of at least 510 economically active persons for each Unitary Authority (UA)/Local Authority District (LAD) and at least 450 in each Greater London Borough. In combination with local LFS boost samples such as the WLFS and SLFS, the survey provides estimates for a range of indicators down to Local Education Authority (LEA) level across the United Kingdom.
APS Well-Being data
Since April 2011, the APS has included questions about personal and subjective well-being. The responses to these questions have been made available as annual sub-sets to the APS Person level files. It is important to note that the size of the achieved sample of the well-being questions within the dataset is approximately 165,000 people. This reduction is due to the well-being questions being asked only of persons aged 16 and above who gave a personal interview; proxy answers are not accepted. As a result, some caution should be used when analysing responses to well-being questions for detailed geographic areas, and also in relation to any other variables where respondent numbers are relatively small. It is recommended that the variable UACNTY09 is used for lower-level geography analysis.
As well as annual datasets, three-year pooled datasets are available. When combining multiple APS datasets together, it is important to account for the rotational design of the APS and ensure that no person appears more than once in the multiple-year dataset. This is because the well-being datasets are not designed to be longitudinal, i.e. they are not designed to track individuals over time or to be used for longitudinal analysis. They are instead cross-sectional, and are designed to use a cross-section of the population to make inferences about the whole population. For this reason, the three-year dataset has been designed to include only a selection of the cases from the individual year APS datasets, chosen in such a way that no individuals are included more than once, and the cases included are approximately equally spread across the three years. Further information is available in the 'Documentation' section below.
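The de-duplication constraint described above can be sketched in a few lines of Python. The person identifiers are invented, and the additional step of spreading the retained cases evenly across the three years is not shown:

```python
# Rough sketch of pooling three survey years so that no person appears more
# than once, even though the rotational design re-samples some people in
# adjacent years. Ids are invented for illustration.
years = {
    2019: ["a", "b", "c", "d"],
    2020: ["c", "d", "e", "f"],  # "c" and "d" would otherwise appear twice
    2021: ["e", "f", "g", "h"],
}

seen, pooled = set(), {}
for year, persons in sorted(years.items()):
    keep = [p for p in persons if p not in seen]  # drop repeat appearances
    seen.update(keep)
    pooled[year] = keep

print(pooled)
```

A real three-year build would also choose which appearance of each person to keep so that the three years contribute roughly equal numbers of cases, rather than simply keeping the earliest as here.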
Secure Access APS Well-Being data
Secure Access datasets for the APS Well-Being include additional variables not included in either the standard End User Licence (EUL) versions (see under GN 33357) or the Special Licence (SL) access versions (see under GN 33376). Extra variables that typically can be found in the Secure Access version but not in the EUL or SL versions relate to:
The dataset is a relational dataset of 8,000 households, representing a sample of the population of an imaginary middle-income country. The dataset contains two data files: one with variables at the household level, the other with variables at the individual level. It includes variables that are typically collected in population censuses (demography, education, occupation, dwelling characteristics, fertility, mortality, and migration) and in household surveys (household expenditure, anthropometric data for children, assets ownership). The data only includes ordinary households (no community households). The dataset was created using REaLTabFormer, a model that leverages deep learning methods. The dataset was created for the purpose of training and simulation and is not intended to be representative of any specific country.
The full-population dataset (with about 10 million individuals) is also distributed as open data.
The dataset is a synthetic dataset for an imaginary country. It was created to represent the population of this country by province (equivalent to admin1) and by urban/rural areas of residence.
Household, Individual
The dataset is a fully-synthetic dataset representative of the resident population of ordinary households for an imaginary middle-income country.
The sample size was set to 8,000 households. The fixed number of households to be selected from each enumeration area was set to 25. In a first stage, the number of enumeration areas to be selected in each stratum was calculated, proportional to the size of each stratum (stratification by geo_1 and urban/rural). Then 25 households were randomly selected within each enumeration area. The R script used to draw the sample is provided as an external resource.
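The provided R script is the authoritative implementation; as a rough illustration, the two-stage selection described above might be sketched in Python like this (stratum names and sizes are invented, apart from the 25 households per enumeration area and the 8,000-household total):

```python
import random

def allocate_eas(strata_sizes, total_eas):
    """First stage: allocate enumeration areas (EAs) to strata proportionally
    to stratum size, using a largest-remainder rounding rule (an assumption;
    the actual R script may round differently)."""
    total = sum(strata_sizes.values())
    raw = {s: total_eas * n / total for s, n in strata_sizes.items()}
    alloc = {s: int(v) for s, v in raw.items()}
    leftovers = sorted(raw, key=lambda s: raw[s] - alloc[s], reverse=True)
    for s in leftovers[: total_eas - sum(alloc.values())]:
        alloc[s] += 1
    return alloc

# Invented strata (geo_1 x urban/rural) with invented household counts.
strata = {"prov1_urban": 50_000, "prov1_rural": 30_000, "prov2_urban": 20_000}
alloc = allocate_eas(strata, total_eas=320)  # 320 EAs x 25 households = 8,000
print(alloc)

# Second stage: randomly select the fixed 25 households within one EA.
rng = random.Random(1)
households_in_ea = list(range(1, 201))  # a fictitious EA with 200 households
sample = rng.sample(households_in_ea, 25)
print(len(sample))
```

The fixed take of 25 households per EA is what makes the overall target of 8,000 households follow directly from selecting 320 EAs.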
other
The dataset is a synthetic dataset. Although the variables it contains are variables typically collected from sample surveys or population censuses, no questionnaire is available for this dataset. A "fake" questionnaire was however created for the sample dataset extracted from this dataset, to be used as training material.
The synthetic data generation process included a set of "validators" (consistency checks, based on which synthetic observations were assessed and rejected/replaced when needed). Also, some post-processing was applied to the data to produce the distributed data files.
This is a synthetic dataset; the "response rate" is 100%.
Attribution-NonCommercial-ShareAlike 4.0 (CC BY-NC-SA 4.0)https://creativecommons.org/licenses/by-nc-sa/4.0/
License information was derived automatically
Abstract: The aim of this study is to gain insights into the attitudes of the population towards big data practices and the factors influencing them. To this end, a nationwide survey (N = 1,331), representative of the population of Germany, addressed attitudes about selected big data practices exemplified by four scenarios which may have a direct impact on personal lifestyle. The scenarios contained price discrimination in retail, credit scoring, differentiations in health insurance, and differentiations in employment. The attitudes about the scenarios were set into relation to demographic characteristics, personal value orientations, knowledge about computers and the internet, and general attitudes about privacy and data protection. Another focus of the study is the institutional framework of privacy and data protection, because the realization of benefits or risks of big data practices for the population also depends on knowledge of the rights the institutional framework provides to the population and the actual use of those rights. As results, several challenges for the framework by big data practices were confirmed, in particular for the elements of informed consent with privacy policies, purpose limitation, and the individuals’ rights to request information about the processing of personal data and to have these data corrected or erased. Technical Remarks: TYPE OF SURVEY AND METHODS The data set includes responses to a survey conducted by professionally trained interviewers of a social and market research company in the form of computer-aided telephone interviews (CATI) from 2017-02 to 2017-04. The target population was inhabitants of Germany aged 18 years or older, who were randomly selected using the sampling approaches ADM eASYSAMPLe (based on the Gabler-Häder method) for landline connections and eASYMOBILe for mobile connections.
The 1,331 completed questionnaires comprise 44.2 percent mobile and 55.8 percent landline phone respondents. Most questions had options to answer with a 5-point rating scale (Likert-like) anchored with ‘Fully agree’ to ‘Do not agree at all’, or ‘Very uncomfortable’ to ‘Very comfortable’, for instance. Responses by the interviewees were weighted to obtain a representation of the entire German population (variable ‘gewicht’ in the data sets). To this end, standard weighting procedures were applied to reduce differences between the sample and the entire population with regard to known rates of response and non-response depending on household size, age, gender, educational level, and place of residence. RELATED PUBLICATION AND FURTHER DETAILS The questionnaire, analysis and results will be published in the corresponding report (main text in English; Appendix B contains the questionnaire in the German used for the interviews and an English translation). The report will be available as an open access publication at KIT Scientific Publishing (https://www.ksp.kit.edu/). Reference: Orwat, Carsten; Schankin, Andrea (2018): Attitudes towards big data practices and the institutional framework of privacy and data protection - A population survey, KIT Scientific Report 7753, Karlsruhe: KIT Scientific Publishing. FILE FORMATS The data set of responses was saved for the repository KITopen in 2018-11 in the following file formats: comma-separated values (.csv), tab-separated values (.dat), Excel (.xls), Excel 2007 or newer (.xlsx), and SPSS Statistics (.sav). The questionnaire is saved in the following file formats: comma-separated values (.csv), Excel (.xls), Excel 2007 or newer (.xlsx), and Portable Document Format (.pdf). PROJECT AND FUNDING The survey is part of the project Assessing Big Data (ABIDA) (from 2015-03 to 2019-02), which receives funding from the Federal Ministry of Education and Research (BMBF), Germany (grant no. 01IS15016A-F). http://www.abida.de
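The post-stratification idea behind such standard weighting procedures can be sketched in a few lines of Python. The group names and shares below are invented for illustration and are not the study's actual weighting cells:

```python
# Weight each respondent group by population share / sample share, so that
# over-represented groups count less and under-represented groups count more.
# All shares here are invented.
population_share = {"18-39": 0.33, "40-64": 0.42, "65+": 0.25}
sample_share = {"18-39": 0.25, "40-64": 0.45, "65+": 0.30}

weights = {g: population_share[g] / sample_share[g] for g in population_share}
print({g: round(w, 2) for g, w in weights.items()})

# A weighted mean of responses then estimates the population-level answer.
answers = [("18-39", 4), ("40-64", 3), ("65+", 5)]  # invented 5-point ratings
wmean = sum(weights[g] * v for g, v in answers) / sum(weights[g] for g, _ in answers)
print(round(wmean, 2))
```

Real procedures iterate this over several dimensions at once (household size, age, gender, education, region), but the ratio of known population share to observed sample share is the core of the correction.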
By Health [source]
The Behavioral Risk Factor Surveillance System (BRFSS) offers an expansive collection of data on health-related quality of life (HRQOL) from 1993 to 2010. Over this time period, the Health-Related Quality of Life dataset comprises a comprehensive survey reflecting the health and well-being of non-institutionalized US adults aged 18 years or older. The data collected can help track and identify unmet population health needs, recognize trends, identify disparities in healthcare, identify determinants of public health, inform decision making and policy development, and evaluate programs within public healthcare services.
The HRQOL surveillance system has developed a compact set of HRQOL measures, such as a summary measure of unhealthy days, which have been validated for population health surveillance purposes and have been widely implemented in practice since 1993. Within this dataset you will be able to access information such as the year recorded, location abbreviations and descriptions, category and topic overviews, and the questions asked in surveys, along with further details including the types and units of the data values retrieved from respondents, their sample sizes, and the geographical locations involved.
This dataset tracks the Health-Related Quality of Life (HRQOL) from 1993 to 2010 using data from the Behavioral Risk Factor Surveillance System (BRFSS). This dataset includes information on the year, location abbreviation, location description, type and unit of data value, sample size, category and topic of survey questions.
Using this dataset on BRFSS: HRQOL data between 1993-2010 will allow for a variety of analyses related to population health needs. The compact set of HRQOL measures can be used to identify trends in population health needs as well as determine disparities among various locations. Additionally, responses to survey questions can be used to inform decision making and program and policy development in public health initiatives.
- Analyzing trends in HRQOL over the years by location to identify disparities in health outcomes between different populations and develop targeted policy interventions.
- Developing new models for predicting HRQOL indicators at a regional level, and using this information to inform medical practice and public health implementation efforts.
- Using the data to understand differences between states in terms of their HRQOL scores and establish best practices for healthcare provision based on that understanding, including areas such as access to care, preventative care services availability, etc
If you use this dataset in your research, please credit the original authors.

Data Source
See the dataset description for more information.
File: rows.csv

| Column name | Description |
|:-------------------------------|:----------------------------------------------------------|
| Year | Year of survey. (Integer) |
| LocationAbbr | Abbreviation of location. (String) |
| LocationDesc | Description of location. (String) |
| Category | Category of survey. (String) |
| Topic | Topic of survey. (String) |
| Question | Question asked in survey. (String) |
| DataSource | Source of data. (String) |
| Data_Value_Unit | Unit of data value. (String) |
| Data_Value_Type | Type of data value. (String) |
| Data_Value_Footnote_Symbol | Footnote symbol for data value. (String) |
| Data_Value_Std_Err | Standard error of the data value. (Float) |
| Sample_Size | Sample size used in sample. (Integer) |
| Break_Out | Break out categories used. (String) |
| Break_Out_Category | Type of break out assessed. (String) |
| **GeoLocation*...
Attribution 4.0 (CC BY 4.0)https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Open Science in (Higher) Education – data of the February 2017 survey
This data set contains:
Survey structure
The survey includes 24 questions and its structure can be separated into five major themes: material used in courses (5), OER awareness, usage and development (6), collaborative tools used in courses (2), assessment and participation options (5), demographics (4). The last two questions comprise an open text question about general issues on the topics and singular open education experiences, and a request to forward the respondent’s e-mail address for further questions. The online survey was created with Limesurvey[1]. Several questions include filters, i.e. these questions were only shown if a participant chose a specific answer beforehand ([n/a] in Excel file, [.] in SPSS).
Demographic questions
Demographic questions asked about the current position, the discipline, birth year and gender. The classification of research disciplines was adapted to general disciplines at German higher education institutions. As we wanted to have a broad classification, we summarised several disciplines and came up with the following list, including the option “other” for respondents who do not feel confident with the proposed classification:
The current job position classification was also chosen according to common positions in Germany, including positions with a teaching responsibility at higher education institutions. Here, we also included the option “other” for respondents who do not feel confident with the proposed classification:
We chose a free text (numerical) field for asking about a respondent’s year of birth because we did not want to pre-classify respondents’ age intervals. It leaves us options for different analyses of answers and possible correlations with the respondents’ age. Asking about the country was left out as the survey was designed for academics in Germany.
Remark on OER question
Data from earlier surveys revealed that academics are often confused about the proper definition of OER[2]. Some seem to understand OER as free resources, or only refer to open source software (Allen & Seaman, 2016, p. 11). Allen and Seaman (2016) decided to give a broad explanation of OER, avoiding details so as not to tempt the participant to claim awareness. Thus, there is a danger of introducing a bias when giving an explanation. We decided not to give an explanation, but to keep this question simple. We assume that either someone knows about OER or not. If they had not heard of the term before, they probably do not use OER (at least not consciously) or create them.
Data collection
The target group of the survey was academics at German institutions of higher education, mainly universities and universities of applied sciences. To reach them, we sent the survey to diverse internal and external institutional mailing lists and via personal contacts. Included lists were discipline-based lists, lists deriving from higher education and higher education didactic communities, as well as lists from open science and OER communities. Additionally, personal e-mails were sent to presidents and contact persons from those communities, and Twitter was used to spread the survey.
The survey was online from February 6th to March 3rd, 2017; e-mails were mainly sent at the beginning and around the midpoint of that period.
Data cleaning
We received 360 responses, of which LimeSurvey counted 208 as complete and 152 as incomplete. Two responses were marked as incomplete but, after checking, turned out to be complete, and we added them to the complete responses. Thus, this data set includes 210 complete responses. Of the remaining 150 incomplete responses, 58 respondents did not answer the first question and 40 discontinued after the first question. The data show a steady decline in answers over the course of the survey; we did not detect any particular question with a strikingly high dropout rate. We deleted the incomplete responses, so they are not included in this data set.
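The response-count bookkeeping above can be sketched as follows; the variable names are illustrative, since the actual LimeSurvey export is not reproduced here.

```python
# Hypothetical sketch of the counts described above.
total_responses = 360
counted_complete = 208      # responses LimeSurvey counted as complete
counted_incomplete = 152    # responses LimeSurvey counted as incomplete
reclassified = 2            # marked incomplete but found to be complete

complete = counted_complete + reclassified      # 210 responses kept
incomplete = counted_incomplete - reclassified  # 150 responses deleted

# The two groups together account for every response received.
assert complete + incomplete == total_responses
print(complete, incomplete)  # → 210 150
```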
For data privacy reasons, we deleted seven variables automatically assigned by LimeSurvey: submitdate, lastpage, startlanguage, startdate, datestamp, ipaddr, refurl. We also deleted the answers to question No. 24 (e-mail address).
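A minimal sketch of this anonymisation step, assuming the responses were exported as CSV and that the column holding the question 24 answers is named q24_email (the real column names in the export may differ):

```python
import csv
import io

# Illustrative stand-in for the exported responses; real column names may differ.
raw = ("id,submitdate,lastpage,startlanguage,startdate,datestamp,"
       "ipaddr,refurl,q24_email,q1\n"
       "1,2017-02-06,25,en,2017-02-06,2017-02-06,1.2.3.4,"
       "http://example.org,a@b.de,yes\n")

# The seven automatically assigned LimeSurvey variables plus the e-mail answer.
privacy_cols = {"submitdate", "lastpage", "startlanguage", "startdate",
                "datestamp", "ipaddr", "refurl", "q24_email"}

# Drop the privacy-sensitive columns from every row.
cleaned = [{k: v for k, v in row.items() if k not in privacy_cols}
           for row in csv.DictReader(io.StringIO(raw))]
print(sorted(cleaned[0]))  # → ['id', 'q1']
```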
References
Allen, E., & Seaman, J. (2016). Opening the Textbook: Educational Resources in U.S. Higher Education, 2015-16.
First results of the survey are presented in the poster:
Heck, Tamara, Blümel, Ina, Heller, Lambert, Mazarakis, Athanasios, Peters, Isabella, Scherp, Ansgar, & Weisel, Luzian. (2017). Survey: Open Science in Higher Education. Zenodo. http://doi.org/10.5281/zenodo.400561
Contact:
Open Science in (Higher) Education working group, see http://www.leibniz-science20.de/forschung/projekte/laufende-projekte/open-science-in-higher-education/.
[1] https://www.limesurvey.org
[2] The survey question about the awareness of OER gave a broad explanation, avoiding details to not tempt the participant to claim “aware”.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
Context
The dataset tabulates the Ocean City population distribution across 18 age groups. It lists the population in each age group along with that group’s share of the total population of Ocean City. The dataset can be used to understand the population distribution of Ocean City by age; for example, it can identify the largest age group in Ocean City.
Key observations
The largest age group in Ocean City, MD was 65 to 69 years, with a population of 808 (11.76%), according to the 2021 American Community Survey. The smallest age group in Ocean City, MD was 10 to 14 years, with a population of 104 (1.51%). Source: U.S. Census Bureau American Community Survey (ACS) 2017-2021 5-Year Estimates.
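The lookup described in the key observations can be sketched as follows. The two labels and counts are taken from the observations above; the dictionary layout is an assumption for illustration, and the remaining 16 age groups are omitted.

```python
# Population by age group for Ocean City, MD (partial sketch; values from the
# key observations above, other age groups omitted).
population_by_age = {
    "10-14 years": 104,
    "65-69 years": 808,
    # ... the other 16 age groups would appear here ...
}

# Identify the largest and smallest age groups by population count.
largest = max(population_by_age, key=population_by_age.get)
smallest = min(population_by_age, key=population_by_age.get)
print(largest, smallest)  # → 65-69 years 10-14 years
```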
When available, the data consists of estimates from the U.S. Census Bureau American Community Survey (ACS) 2017-2021 5-Year Estimates.
Age groups:
Variables / Data Columns
Good to know
Margin of Error
Data in the dataset are based on estimates and are subject to sampling variability, and thus to a margin of error. Neilsberg Research recommends using caution when presenting these estimates in your research.
Custom data
If you need custom data for a research project, report, or presentation, you can contact our research staff at research@neilsberg.com to assess the feasibility of a custom tabulation on a fee-for-service basis.
The Neilsberg Research Team curates, analyzes, and publishes demographic and economic data from a variety of public and proprietary sources, each of which often includes multiple surveys and programs. The large majority of Neilsberg Research aggregated datasets and insights are made available for free download at https://www.neilsberg.com/research/.
This dataset is a part of the main dataset for Ocean City Population by Age. You can refer to the same here.