Attribution 4.0 (CC BY 4.0) https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Background: The division of the Korean peninsula drew many neighbouring countries into the Korean War. Relations with those countries have since been reorganised through active exchange. This study examined how the quantity and quality of contact with traditional alliance countries (the US and Japan) and strategic partner countries (China and Russia) affected their national images.

Methods: The study analysed the relation between national image and the quantity and quality of an individual's contact with each country. Contact quantity comprised an evaluation of the individual's subjective amount of contact, contact path, and contact status; contact quality was measured as an evaluation of the pleasure, competitiveness, intimacy, spontaneity, and necessity experienced when contacting each country's culture. A total of 387 participants were divided into two groups based on the presence or absence of direct contact, and the quantity and quality of their contact and their national images were examined. Participants completed self-report questionnaires, including the Culture Experience Questionnaire, the National Image Questionnaire, and a demographic information questionnaire.

Results: First, regardless of country type, national image was highly correlated with the degree of subjective contact reported by individuals, but only weakly related to contact quality. Second, there was no significant interaction between country type and contact status for national image; however, different national images were detected for each country. In other words, for contact quantity, the contact group showed a more positive national image than the non-contact group for Russia, but not for Japan, China, or the US. For contact quality, the positive contact experience group showed more positive national images than the negative contact experience group, but only for the traditional alliance countries.

Conclusion: This study highlights the importance of implementing different strategies for different countries in order to maintain peaceful international relations.
Nonverbal Immediacy is a term used to describe the behaviours used to signal positive feelings towards another person. The idea is based on the work of Albert Mehrabian in the 1970s; he is the source of the famous statistic that 93% of communication is nonverbal (Yaffe, 2011). The Nonverbal Immediacy Scale, which measures individual differences in the expression of nonverbal immediacy, was developed by Virginia Richmond, James McCroskey and Aaron Johnson in 2003. This interactive test is the self-report version. For statistical information about the test, see here.
The test consists of twenty-six behaviours that you rate according to how often you exhibit them. It should take 4-8 minutes to complete.
This data came from an online version of the Nonverbal Immediacy Scale.
See: Richmond, V. P., McCroskey, J. C., & Johnson, A. D. (2003). Development of the Nonverbal Immediacy Scale (NIS): Measures of self- and other-perceived nonverbal immediacy. Communication Quarterly, 51, 502-515.
Data collection took place 2016-2019.
The following items were rated on a 5-point scale (1=Never, 2=Rarely, 3=Occasionally, 4=Often, 5=Very often):
Q1 I use my hands and arms to gesture while talking to people.
Q2 I touch others on the shoulder or arm while talking to them.
Q3 I use a monotone or dull voice while talking to people.
Q4 I look over or away from others while talking to them.
Q5 I move away from others when they touch me while we are talking.
Q6 I have a relaxed body position when I talk to people.
Q7 I frown while talking to people.
Q8 I avoid eye contact while talking to people.
Q9 I have a tense body position while talking to people.
Q10 I sit close or stand close to people while talking with them.
Q11 My voice is monotonous or dull when I talk to people.
Q12 I use a variety of vocal expressions when I talk to people.
Q13 I gesture when I talk to people.
Q14 I am animated when I talk to people.
Q15 I have a bland facial expression when I talk to people.
Q16 I move closer to people when I talk to them.
Q17 I look directly at people while talking to them.
Q18 I am stiff when I talk to people.
Q19 I have a lot of vocal variety when I talk to people.
Q20 I avoid gesturing while I am talking to people.
Q21 I lean toward people when I talk to them.
Q22 I maintain eye contact with people when I talk to them.
Q23 I try not to sit or stand close to people when I talk with them.
Q24 I lean away from people when I talk to them.
Q25 I smile when I talk to people.
Q26 I avoid touching people when I talk to them.
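Scores on the scale are typically formed by summing the positively worded items and reverse-scoring the negatively worded ones. The sketch below is a minimal, illustrative Python implementation; the set of reverse-keyed items is inferred from the item wording above and should be checked against the scoring key in Richmond, McCroskey, & Johnson (2003) before any real use.

```python
# Illustrative scoring sketch for the 26-item self-report NIS.
# Assumption: the negatively worded items (e.g. "I frown while talking
# to people") are reverse-keyed; verify against Richmond et al. (2003).

NEGATIVE_ITEMS = {3, 4, 5, 7, 8, 9, 11, 15, 18, 20, 23, 24, 26}  # assumed from item wording

def nis_total(responses: dict[int, int]) -> int:
    """responses maps item number (1-26) to a rating of 1-5."""
    positive = sum(v for k, v in responses.items() if k not in NEGATIVE_ITEMS)
    negative = sum(v for k, v in responses.items() if k in NEGATIVE_ITEMS)
    # One published scoring convention: constant + positives - negatives,
    # equivalent to reverse-scoring (6 - x) each negative item and summing.
    return 78 + positive - negative

# Example: a respondent who answers 3 ("Occasionally") to every item
example = {i: 3 for i in range(1, 27)}
print(nis_total(example))  # 78 + 39 - 39 = 78
```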
The time elapsed on each question in milliseconds was also recorded.
These other durations were also recorded (measured on the server side):
introelapse The time spent on the introduction/landing page (in seconds)
testelapse The time spent on all the NIS questions (should be equivalent to the time elapsed on all the individual questions combined)
surveyelapse The time spent answering the rest of the demographic and survey questions
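As a quick sanity check on the relationship noted for testelapse, one could compare it against the summed per-item times. The per-item timing column names (Q1E ... Q26E), the file name, and the assumption that testelapse is in seconds like introelapse are all illustrative, not confirmed by the codebook excerpt above.

```python
import pandas as pd

# Sanity check: testelapse should roughly equal the sum of the
# per-question times (recorded in milliseconds). Column names and
# the file name are assumed for illustration only.
df = pd.read_csv("data.csv")
item_cols = [f"Q{i}E" for i in range(1, 27)]
per_item_seconds = df[item_cols].sum(axis=1) / 1000.0  # milliseconds -> seconds
discrepancy = (df["testelapse"] - per_item_seconds).abs()
print(discrepancy.describe())  # large discrepancies may indicate pauses or recording issues
```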
On the next page was a generic demographics survey with many different questions.
The Ten Item Personality Inventory was administered (see Gosling, S. D., Rentfrow, P. J., & Swann, W. B., Jr. (2003). A Very Brief Measure of the Big Five Personality Domains. Journal of Research in Personality, 37, 504-528.):
TIPI1 Extraverted, enthusiastic.
TIPI2 Critical, quarrelsome.
TIPI3 Dependable, self-disciplined.
TIPI4 Anxious, easily upset.
TIPI5 Open to new experiences, complex.
TIPI6 Reserved, quiet.
TIPI7 Sympathetic, warm.
TIPI8 Disorganized, careless.
TIPI9 Calm, emotionally stable.
TIPI10 Conventional, uncreative.
The TIPI items were rated "I see myself as: _" such that
1 = Disagree strongly 2 = Disagree moderately 3 = Disagree a little 4 = Neither agree nor disagree 5 = Agree a little 6 = Agree moderately 7 = Agree strongly
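The TIPI is conventionally scored by reverse-coding half of the items and averaging each item pair into one of the Big Five domains (Gosling et al., 2003). A minimal sketch, assuming the responses are stored in columns TIPI1–TIPI10 on the 1–7 scale above:

```python
import pandas as pd

# Standard TIPI scoring (Gosling et al., 2003): reverse-code items
# 2, 4, 6, 8, 10 on the 1-7 scale (8 - x), then average each pair
# into its Big Five domain.
PAIRS = {
    "Extraversion":        ("TIPI1", "TIPI6"),
    "Agreeableness":       ("TIPI7", "TIPI2"),
    "Conscientiousness":   ("TIPI3", "TIPI8"),
    "Emotional Stability": ("TIPI9", "TIPI4"),
    "Openness":            ("TIPI5", "TIPI10"),
}
REVERSED = {"TIPI2", "TIPI4", "TIPI6", "TIPI8", "TIPI10"}

def score_tipi(df: pd.DataFrame) -> pd.DataFrame:
    recoded = df.copy()
    for col in REVERSED:
        recoded[col] = 8 - recoded[col]  # reverse on the 1-7 scale
    return pd.DataFrame(
        {trait: recoded[list(cols)].mean(axis=1) for trait, cols in PAIRS.items()}
    )
```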
The following items were presented as a check-list and subjects were instructed "In the grid below, check all the words whose definitions you are sure you know":
VCL1 boat
VCL2 incoherent
VCL3 pallid
VCL4 robot
VCL5 audible
VCL6 cuivocal
VCL7 paucity
VCL8 epistemology
VCL9 florted
VCL10 decide
VCL11 pastiche
VCL12 verdid
VCL13 abysmal
VCL14 lucid
VCL15 betray
VCL16 funny
A value of 1 means the word was checked; 0 means it was left unchecked. The words at VCL6, VCL9, and VCL12 are not real words and can be used as a validity check.
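A simple way to apply this validity check is to flag any respondent who claims to know one of the three non-words. The sketch below assumes the checklist is stored as 0/1 columns named VCL1–VCL16; the file name is illustrative.

```python
import pandas as pd

# Flag respondents who claim to know any of the three non-words
# (VCL6, VCL9, VCL12). The file name is illustrative; the checklist
# columns are 0/1 as described above.
df = pd.read_csv("data.csv")
fake_words = ["VCL6", "VCL9", "VCL12"]
df["failed_validity"] = df[fake_words].sum(axis=1) > 0
clean = df[~df["failed_validity"]]
print(f"Dropped {int(df['failed_validity'].sum())} of {len(df)} respondents")
```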
A bunch more questions were then asked:
education "How much education have you completed?", 1=Less than high school, 2=High school, 3=University degree, 4=Graduate degree urban "What type of area did you live when you were a child?", 1=Rural (country side), 2=Suburban, 3=Urban (town, city) gender "What is your gender?", 1=Male, 2=Female, 3=Other engnat "Is English your native language?", 1=Yes, 2=No age "How ...
Attribution-NonCommercial-ShareAlike 4.0 (CC BY-NC-SA 4.0) https://creativecommons.org/licenses/by-nc-sa/4.0/
License information was derived automatically
This dataset contains three sections of data. All data files have been anonymised.
The first section contains quantitative and qualitative online survey results from 1,485 participants across Australia. The survey recruited people aged 18 and over who had previously used, or currently used, hormonal and/or non-hormonal contraception (including withdrawal and fertility awareness-based methods). A conversational level of English was required, and participants had, currently or in the past, a cervix. This criterion allowed gender-diverse people to participate, as well as those who may have had a hysterectomy, if they wished to reflect on past experiences. Only 16.7% of survey participants were over 45 years; most data came from participants aged 18-44 years. Survey participants reported a broad range of gender identities, sexual preferences, cultural backgrounds, child-bearing desires, and other demographics. For example, most survey participants identified as cis women, with 15% identifying as a gender other than cis woman. Survey data are stored as a single Excel file (.xlsx) and as a CSV file (.csv).
The survey was titled “Voice Your Contraception Experiences” and contained five sections: demographics and contraception use; satisfaction with the current or most recent contraception method (including use of an adapted quantitative survey instrument); contraception healthcare experiences (including use of a quantitative survey instrument); reproductive autonomy (including use of an adapted quantitative survey instrument); and free-text open-ended questions about the three preceding instruments and about contraception influences and side effects. Demographic data collected included age, gender, sexual preferences, cultural background, education level, childbearing desires, existing chronic health conditions, and whether these influenced contraception use. Open-ended questions were used to explore in greater depth satisfaction, healthcare, autonomy, and experiences of contraception methods, including side effects experienced and any consequences of these experiences. Aspects of a trans survey developed by Moseson et al. (2020), such as more gender-inclusive questions and overall language, as well as participant suggestions from trans communities in Australian Facebook groups, were included in a separately distributed trans version of the survey.
The second and third sections of data come from 20 participants who elected to complete a body mapping session and an in-depth interview. The body mapping comprised a participant-written timeline of contraception use so far, covering first use, switching and discontinuations, and significant events of physical/emotional/psychological importance connected to contraception use (saved as a text file, .txt). The body mapping session also included a verbal description and recap of this by the participant (transcribed and saved as a Word file, .docx), a body map (digital image, .tiff), and a body map summary by the participant (transcribed and saved as a Word file, .docx). The in-depth interviews are transcribed and stored as Word files (.docx). The second section also contains some comments made by participants during the body mapping sessions (transcribed and saved as Word files, .docx). Twenty participants completed the timeline of contraception use, 18 completed the body mapping session, and 17 completed the in-depth interview. Data from partial completion of stage two were included in the analysis. Stage two participants were aged 18-39, with a median age of 28, corresponding with the age range of the majority of survey participants. Of the stage two participants, 20% had a gender identity other than woman, and 60% reported a non-heterosexual sexual preference. Regarding cultural diversity and childbearing desires respectively, 25% of stage two participants were of a cultural background not solely White, and 45% did not want any, or any more, children.
This dataset cannot be published openly due to ethics conditions. To discuss the research, please contact Susan Manners S.Manners@westernsydney.edu.au ORCID 0000-0002-9281-257X
The Active Lives Children and Young People Survey, which was established in September 2017, provides a world-leading approach to gathering data on how children engage with sport and physical activity. This school-based survey is the first and largest established physical activity survey with children and young people in England. It gives anyone working with children aged 5-16 key insight to help understand children's attitudes and behaviours around sport and physical activity. The results will shape and influence local decision-making as well as inform government policy on the PE and Sport Premium, Childhood Obesity Plan and other cross-departmental programmes. More general information about the study can be found on the Sport England Active Lives Survey webpage and the Active Lives Online website, including reports and data tables.
The Active Lives Children and Young People survey is a school-based survey (i.e., historically always completed at school as part of lessons). Academic years 2020-2021 and 2019-20 have both been disrupted by the coronavirus pandemic, resulting in school sites being closed to many pupils for some of the year (e.g., during national lockdown periods, and during summer term for 2019-20). Due to the closure of school sites, the Active Lives Children and Young People Survey, 2020-2021 was adapted to allow at-home completion. Despite the disruption, the survey has still received a sufficient volume of responses for analysis.
The adaptations involved minor questionnaire changes (e.g., to ensure the wording was appropriate for those not attending school and to enable completion at home), and communication changes. For further details on the survey changes, please see the accompanying User Guide document. Academic year 2020-21 saw a more even split of responses by term across the year, compared to 2019-20, which had a reduced proportion of summer term responses due to the disruption caused by Covid-19. It is recommended to analyse the data within term, as well as at an overall level, because of the changes in termly distribution.
The survey identifies how participation varies across different activities and sports, by regions of England, between school types and terms, and between different demographic groups in the population. The survey measures levels of activity (active, fairly active and less active), attitudes towards sport and physical activity, swimming capability, the proportion of children and young people that volunteer in sport, sports spectating, and wellbeing measures such as happiness and life satisfaction. The questionnaire was designed to enable analysis of the findings by a broad range of variables, such as gender, family affluence and school year.
The following datasets have been provided:
1) Main dataset – this file includes responses from children and young people from school years 3 to 11, as well as responses from parents of children in years 1-2. The parents of children in years 1-2 provide behavioural answers about their child’s activity levels, they do not provide attitudinal information. Using this main dataset, full analyses can be carried out into sports and physical activity participation, levels of activity, volunteering (years 5 to 11), etc. Weighting is required when using this dataset (wt_gross / wt_gross.csplan files are available for SPSS users who can utilise them).
2) Year 1-2 dataset – this file includes responses from children in school years 1-2 directly, providing their attitudinal responses (e.g. whether they like playing sport and find it easy). Analysis can be carried out into feelings towards swimming, enjoyment of being active, happiness, etc. Weighting is required when using this dataset (wt_gross / wt_gross.csplan files are available for SPSS users who can utilise them).
3) Teacher dataset – this file includes responses from the teachers at schools selected for the survey. Analysis can be carried out into school facilities available, length of PE lessons, whether swimming lessons are offered, etc. Weighting was formerly not available; however, as Sport England has started to publish the Teacher data, from December 2023 we decided to apply weighting to these data. The Teacher dataset now includes weighting via the ‘wt_teacher’ weighting variable.
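Since all three files are supplied with grossing weights, any estimate should be computed with the relevant weight applied. The snippet below is a minimal sketch of a weighted proportion using the wt_gross variable from the main dataset; the outcome column name and file name are placeholders, not actual variable names from the supporting documentation.

```python
import pandas as pd

# Weighted proportion using the grossing weight from the main dataset.
# 'activity_level' and the file name are hypothetical placeholders;
# wt_gross is the weight variable named above.
df = pd.read_csv("main_dataset.csv")
weighted_share = df.groupby("activity_level")["wt_gross"].sum() / df["wt_gross"].sum()
print(weighted_share)  # share of children in each activity band, weighted
```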
For further information about the variables available for analysis, and the relevant school years asked survey questions, please see the supporting documentation. Please read the documentation before using the datasets. More general information about the study can be found on the Sport England Active Lives Survey webpages.
Latest edition information
For the second edition (January 2024), the Teacher dataset now includes a weighting variable (‘wt_teacher’). Previously, weighting was not available for these data.
The key objective of every census is to count every person (man, woman, child) resident in the country on census night, and to collect assorted demographic (sex, age, marital status, citizenship) and socio-economic (education/qualifications; labour force and economic activity) information, as well as data pertinent to household and housing characteristics. This count provides a complete picture of the population make-up in each village and town, of each island and region, thus allowing for an assessment of demographic change over time.
With Vanuatu, like many of her Pacific island neighbours, increasingly embracing a culture of informed, or evidence-based, policy development and decision-making, national census databases, and the possibility to extract complex cross-tabulations as well as a host of important sub-regional and small-area information, are essential to feed a growing demand for data and information in both the public and private sectors.
Educational, health and manpower planning, for example, including assessments of future demands for staffing, facilities, and programmed budgets, would not be possible without periodic censuses, and Government efforts to monitor development progress, such as in the context of its Millennium Development Goal (MDG) commitments, would also suffer greatly, if not be outright impossible, without reliable data provided by regular national population counts and updates.
While regular national-level surveys, such as household income and expenditure surveys, labour force surveys, agriculture surveys, and demographic and health surveys - to name but a few - provide important data and information across specific sectors, these surveys could not be sustained or managed without a national sampling frame (which census data provide). And the calculation and measurement of all population-based development indicators, such as most MDG indicators, would not be possible without up-to-date population statistics, which usually come from a census or from projections and estimates that are based on census data.
With most of this information now already 9 years old (and thus quite outdated), and in the absence of reliable population-register-type databases, such as those provided by well-functioning civil registration (births and deaths) and migration-recording systems, the 2009 Vanuatu census of population and housing will provide much-needed demographic, social and economic statistics that are essential for policy development, national development planning, and the regular monitoring of development progress.
Apart from achieving its general aims and objectives in delivering updated population, social and economic statistics, the 2009 census also represented a major national capacity-building exercise, since most Vanuatu National Statistics Office (VNSO) staff involved with the census had no prior census experience. Carefully planned and resourced, the 2009 census activities provided very useful (and much-desired) on-the-job training for VNSO staff right across the spectrum of professional rank and responsibilities. The census also provided short-term overseas training and professional attachments (at SPC or ABS, or elsewhere) for a limited number of professional staff, who subsequently mentored other staff in the VNSO.
Some key senior VNSO members had been involved with the 1999 census and provided a wealth of in-house experience, complemented by the ongoing surveys, such as the HIES and the Agriculture Census, that the office had conducted before the census proper. The VNSO also has professional officers qualified in the fields of population and demography who staffed the project, and with these resources the office managed to conduct yet another successful project in the 2009 census.
While some short-term census advisory missions were fielded by SPC Demography/Population Programme staff, standard SPC technical assistance policy arrangements could not cater for long-term or repeated in-country assignments. However, other relevant donors were invited to support longer-term attachments of technical assistance (TA) expertise to the VNSO.
The 2009 Population and Housing Census Geographical Coverage included:
The units of analysis for the 2009 Population and Housing Census were:
- Household
- Person (Population)
The census covered all households and individuals throughout Vanuatu.
Census/enumeration data [cen]
Face-to-face [f2f]
The questionnaire has five sections: geographical identifiers; general population and education questions; labour force questions; women and fertility questions; and housing questions. The geographical identifiers include the village name, GPS code, EA number, household number, and enumerator ID. The person questions cover individual demographics, including education level and labour force status, and a section on fertility is included for women of reproductive age. All sections use skip patterns to guide the flow of questions asked. The household questions cover the basic description of the house materials, tenure, access to water and sanitation, energy, durables, use of treated mosquito nets, and internet access.
In the census proper, an Optical Character Recognition (OCR) system (the ReadSoft application) was used to capture information from the completed forms. The captured data were exported to an MS Access database for further editing and cleaning, and the final data were then transferred to CSPro for additional editing and quality checks before being finalised. All system files and data files were stored on the server under the 2009PopCensus folder. Three temporary data operators were hired for the job, under the supervision of Rara Soro, the systems analyst for the VNSO. No data were stored on workstations, because all data were written directly to the DATA folder on the server.
Range checks and basic checks (online edits) were built into the manual data entry system, while the complex edits were written in a separate batch edit program. If the system encountered an error during data entry, an error message was displayed and the data operator could not proceed until the error was fixed (e.g. "Males + Females = Total Persons. Please re-enter."). Data operators were strongly advised not to make up answers but to consult their supervisor if they could not fix an error. Listed below are the checks that were built into the data entry system.
01 Person 1 must be the head of household
02 Sex against relationship
03 Age against date of birth
04 Marital status - married people should be aged 15+
05 Spouse should be married
06 P9, P10, P11 against village enumerated
07 Never been to school but can use the internet - is this possible?
08 Check for multiple heads or spouses in the household
09 Husband and wife of the same sex
10 Total persons match total people in the personal form
11 Total children born and living in household (F2a) against total persons
12 Age difference between head and child is less than 13
13 Total children born (F4) against total alive (F2) + total died (F3)
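To illustrate how such online edits work, the sketch below re-implements a few of the checks listed above as a simple Python validation function. The field names are assumptions for illustration; the production system implemented these rules inside the CSPro data entry application.

```python
# Re-implementation of a few of the online edits above, for illustration.
# Field names (relationship, age, dob_year) are assumed; the real checks
# were built into the CSPro data entry application.

def check_household(persons: list[dict], census_year: int = 2009) -> list[str]:
    errors = []
    # 01: Person 1 must be the head of household
    if persons and persons[0]["relationship"] != "head":
        errors.append("Person 1 is not the head of household")
    # 08: check for multiple heads in the household
    heads = sum(p["relationship"] == "head" for p in persons)
    if heads != 1:
        errors.append(f"Household has {heads} heads")
    # 03: age against date of birth (allow one year for birthday timing)
    for p in persons:
        implied_age = census_year - p["dob_year"]
        if abs(implied_age - p["age"]) > 1:
            errors.append(f"Age {p['age']} does not match year of birth {p['dob_year']}")
    return errors

# Example: flags the mismatched age on the second person
print(check_household([
    {"relationship": "head", "age": 40, "dob_year": 1969},
    {"relationship": "spouse", "age": 25, "dob_year": 1975},
]))
```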
A separate batch edit program was developed for further data cleaning. All online edits were also re-written in this program to make sure that all errors flagged during data entry had been fixed. Some of the errors detected are not true errors but still require double-checking; if the answer recorded is the correct answer, it is not changed. The batch edit was performed on each batch, and also on the concatenated batches. Below is a summary list of errors generated from the manual data entry data before batch editing.
MDE error message summary (count of flagged records):
- Age does not match date of birth: 272
- Total children born and living in household (F2a) > total in: 1
- Attend school full-time in P12 but also working: 16
- Too young for highest education recorded: 14
- Highest education completed does not match with grade currently attending: 80
Age had the highest error rate, and this was due to an error in the logic statement; otherwise, all ages that did not match the date of birth were corrected during data entry.
The data capturing (scanning) and editing process took about six months to complete, after which further checks were made to finalise the dataset before publishing the results.
During the re-coding of zeros and blanks, a couple of batch edit statements written in the batch edit program were wrong, and they created errors in the scanned data. The batch edit was supposed to recode only those people who did not answer questions P19 and P23-P25, but instead it also recoded valid codes to blanks. This was only picked up when tables were generated and the numbers were found to differ substantially between the manual data entry and the scanned data. Another batch edit program was developed to recode and fix this problem.
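The corrected logic amounts to restricting the recode to records that genuinely have no answer, rather than overwriting valid codes. A minimal sketch of that conditional recode is shown below; the variable names, sentinel codes, and file name are assumptions, since the actual edit was written as a CSPro batch program.

```python
import pandas as pd

# Corrected conditional recode: only recode records with no answer to
# P19 and P23-P25; leave valid codes untouched. The sentinel codes and
# file name are assumptions (the real edit was a CSPro batch program).
df = pd.read_csv("scanned_data.csv")
NOT_ANSWERED = 0     # assumed code for "no answer"
NOT_STATED = 99      # assumed target code for nonresponse

for col in ["P19", "P23", "P24", "P25"]:
    mask = df[col].isna() | (df[col] == NOT_ANSWERED)  # restrict to blanks/zeros
    df.loc[mask, col] = NOT_STATED
```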
Household characteristics and basic demographic variables from the census data were compared with the 1999 census data to determine the accuracy of the pilot data. Some of the key indicators used for comparison were household size, sex ratio, educational attainment, and employment status. A population pyramid was also used.
The National Labor Force Survey aims to obtain characteristics of employment, unemployment, underemployment, and of the working-age population not in the labor force who are in school, doing housekeeping, or engaged in other activities (excluding personal activities). The survey covered all provinces in Indonesia (33 provinces). The total sample was around 200,000 households, consisting of 50,000 quarterly sample households and 150,000 additional sample households, with a response rate of 96.30 percent.
It is important to note that the industrial classification applied in the survey is the Indonesian Standard Industrial Classification (KBLI) 2009 (aligned with ISIC Rev. 4). The occupation classification is based on the Indonesian Classification of Occupations (KBJI) 2002, which refers to ISCO 88 and presents occupation classifications in much more detail.
Sakernas is a survey specifically designed for labor data collection. It differs from the SP (Population Census) and Supas (Intercensal Population Census), which focus more on the demographic characteristics of the population. Another source of labor force data is Susenas (National Social and Economic Survey), which collects data on many aspects of social and economic characteristics, such as consumption, labor, health, and household variables. These differences in the coverage of characteristics across the surveys/censuses affect labor data quality, and the Sakernas results are regarded as better in this respect.
National coverage
The August 2014 Sakernas covered all provinces in Indonesia (33 provinces). The total sample was around 200,000 households, consisting of 50,000 quarterly sample households and 150,000 additional sample households, with a response rate of 96.30 percent.
The main information collected in the National Labor Force Survey is data on individual household members covering persons aged 10 years and older. However, tabulated data cover household members aged 15 years and older.
Sample survey data [ssd]
The total number of sampled households was 200,000, with a response rate of 96.30 percent.
The difference in sample size between Sakernas and the SP, Supas, and Susenas leads to different levels of sampling error: the smaller the sample size, the greater the sampling error of a survey.
Face-to-face [f2f]
Questionnaires were published in Bahasa Indonesia only.
The structure and wording of a questionnaire on labor characteristics strongly affect the quality of the census and survey data obtained. Questionnaire design covers producing correct sentences without ambiguous meaning, choosing appropriate words and question order, and deciding the number of variables and questions to be asked of respondents. The Sakernas questionnaire has been designed in a simple and concise way, so that respondents can understand and easily grasp the aim of the questions, and to avoid memory lapses and disengaged responses during interview data collection. Furthermore, the design of the Sakernas questionnaire remains stable over time in order to preserve data comparability.
Response rate: 96.30 percent.
- Population 15 Years of Age and Over
- Economically Active
- Not Economically Active
- Labor Force Participation Rate (%)
- Unemployment Rate (%)
- Educational Attainment
- Main Industry
- Main Employment Status
The 2022 Kenya Demographic and Health Survey (2022 KDHS) was implemented by the Kenya National Bureau of Statistics (KNBS) in collaboration with the Ministry of Health (MoH) and other stakeholders. The survey is the 7th KDHS implemented in the country.
The primary objective of the 2022 KDHS is to provide up-to-date estimates of basic sociodemographic, nutrition and health indicators. Specifically, the 2022 KDHS collected information on: • Fertility levels and contraceptive prevalence • Childhood mortality • Maternal and child health • Early Childhood Development Index (ECDI) • Anthropometric measures for children, women, and men • Children’s nutrition • Woman’s dietary diversity • Knowledge and behaviour related to the transmission of HIV and other sexually transmitted diseases • Noncommunicable diseases and other health issues • Extent and pattern of gender-based violence • Female genital mutilation.
The information collected in the 2022 KDHS will assist policymakers and programme managers in monitoring, evaluating, and designing programmes and strategies for improving the health of Kenya’s population. The 2022 KDHS also provides indicators relevant to monitoring the Sustainable Development Goals (SDGs) for Kenya, as well as indicators relevant for monitoring national and subnational development agendas such as the Kenya Vision 2030, Medium Term Plans (MTPs), and County Integrated Development Plans (CIDPs).
National coverage
The survey covered all de jure household members (usual residents), all women aged 15-49, men aged 15-54, and all children aged 0-4 resident in the household.
Sample survey data [ssd]
The sample for the 2022 KDHS was drawn from the Kenya Household Master Sample Frame (K-HMSF). This is the frame that KNBS currently uses to conduct household-based sample surveys in Kenya. The frame is based on the 2019 Kenya Population and Housing Census (KPHC) data, in which a total of 129,067 enumeration areas (EAs) were developed. Of these EAs, 10,000 were selected with probability proportional to size to create the K-HMSF. The 10,000 EAs were randomised into four equal subsamples. A survey can utilise a subsample or a combination of subsamples based on the sample size requirements. The 2022 KDHS sample was drawn from subsample one of the K-HMSF. The EAs were developed into clusters through a process of household listing and geo-referencing. The Constitution of Kenya 2010 established a devolved system of government in which Kenya is divided into 47 counties. To design the frame, each of the 47 counties in Kenya was stratified into rural and urban strata, which resulted in 92 strata since Nairobi City and Mombasa counties are purely urban.
The 2022 KDHS was designed to provide estimates at the national level, for rural and urban areas separately, and, for some indicators, at the county level. The sample size was computed at 42,300 households, with 25 households selected per cluster, which resulted in 1,692 clusters spread across the country, 1,026 clusters in rural areas, and 666 in urban areas. The sample was allocated to the different sampling strata using power allocation to enable comparability of county estimates.
The 2022 KDHS employed a two-stage stratified sample design where in the first stage, 1,692 clusters were selected from the K-HMSF using the Equal Probability Selection Method (EPSEM). The clusters were selected independently in each sampling stratum. Household listing was carried out in all the selected clusters, and the resulting list of households served as a sampling frame for the second stage of selection, where 25 households were selected from each cluster. However, after the household listing procedure, it was found that some clusters had fewer than 25 households; therefore, all households from these clusters were selected into the sample. This resulted in 42,022 households being sampled for the 2022 KDHS. Interviews were conducted only in the pre-selected households and clusters; no replacement of the preselected units was allowed during the survey data collection stages.
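The sketch below illustrates the two-stage logic described above: clusters are drawn with equal probability within a stratum, and then up to 25 households are drawn from each selected cluster's listing (taking all households when a cluster lists fewer than 25). The data structures are illustrative, not KNBS's actual frame files.

```python
import numpy as np

# Two-stage selection in the spirit described above: clusters drawn with
# equal probability within a stratum, then 25 households per selected
# cluster (or all households when the listing has fewer than 25).
rng = np.random.default_rng(2022)

def select_stratum(cluster_ids, n_clusters, listings, hh_per_cluster=25):
    chosen = rng.choice(cluster_ids, size=n_clusters, replace=False)  # EPSEM stage 1
    sample = {}
    for c in chosen:
        households = listings[c]                  # from the field listing exercise
        k = min(hh_per_cluster, len(households))  # take all if fewer than 25 listed
        sample[c] = list(rng.choice(households, size=k, replace=False))
    return sample
```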
For further details on sample design, see APPENDIX A of the survey report.
Computer Assisted Personal Interview [capi]
Four questionnaires were used in the 2022 KDHS: Household Questionnaire, Woman’s Questionnaire, Man’s Questionnaire, and the Biomarker Questionnaire. The questionnaires, based on The DHS Program’s model questionnaires, were adapted to reflect the population and health issues relevant to Kenya. In addition, a self-administered Fieldworker Questionnaire was used to collect information about the survey’s fieldworkers.
CAPI was used during data collection. The devices used for CAPI were Android-based computer tablets programmed with a mobile version of CSPro. The CSPro software was developed jointly by the U.S. Census Bureau, Serpro S.A., and The DHS Program. Programming of questionnaires into the Android application was done by ICF, while configuration of tablets was completed by KNBS in collaboration with ICF. All fieldwork personnel were assigned usernames, and devices were password protected to ensure the integrity of the data.
Work was assigned by supervisors and shared via Bluetooth® to interviewers’ tablets. After completion, assigned work was shared with supervisors, who conducted initial data consistency checks and edits and then submitted data to the central servers hosted at KNBS via SyncCloud. Data were downloaded from the central servers and checked against the inventory of expected returns to account for all data collected in the field. SyncCloud was also used to generate field check tables to monitor progress and identify any errors, which were communicated back to the field teams for correction.
Secondary editing was done by members of the KNBS and ICF central office team, who resolved any errors that were not corrected by field teams during data collection. A CSPro batch editing tool was used for cleaning and tabulation during data analysis.
A total of 42,022 households were selected for the survey, of which 38,731 (92%) were found to be occupied. Among the occupied households, 37,911 were successfully interviewed, yielding a response rate of 98%. The response rates for urban and rural households were 96% and 99%, respectively. In the interviewed households, 33,879 women age 15-49 were identified as eligible for individual interviews. Of these, 32,156 women were interviewed, yielding a response rate of 95%. The response rates among women selected for the full and short questionnaires were similar (95%). In the households selected for the men’s survey, 16,552 men age 15-54 were identified as eligible for individual interviews and 14,453 were successfully interviewed, yielding a response rate of 87%.
The estimates from a sample survey are affected by two types of errors: (1) non-sampling errors, and (2) sampling errors. Non-sampling errors are the results of mistakes made in implementing data collection and data processing, such as failure to locate and interview the correct household, misunderstanding of the questions on the part of either the interviewer or the respondent, and data entry errors. Although numerous efforts were made during the implementation of the 2022 Kenya Demographic and Health Survey (2022 KDHS) to minimise this type of error, non-sampling errors are impossible to avoid and difficult to evaluate statistically.
Sampling errors, on the other hand, can be evaluated statistically. The sample of respondents selected in the 2022 KDHS is only one of many samples that could have been selected from the same population, using the same design and identical size. Each of these samples would yield results that differ somewhat from the results of the actual sample selected. Sampling errors are a measure of the variability between all possible samples. Although the degree of variability is not known exactly, it can be estimated from the survey results.
A sampling error is usually measured in terms of the standard error for a particular statistic (mean, percentage, etc.), which is the square root of the variance. The standard error can be used to calculate confidence intervals within which the true value for the population can reasonably be assumed to fall. For example, for any given statistic calculated from a sample survey, the value of that statistic will fall within a range of plus or minus two times the standard error of that statistic in 95 percent of all possible samples of identical size and design.
If the sample of respondents had been selected as a simple random sample, it would have been possible to use straightforward formulas for calculating sampling errors. However, the 2022 KDHS sample is the result of a multi-stage stratified design, and, consequently, it was necessary to use more complex formulae. The computer software used to calculate sampling errors for the 2022 KDHS is a SAS program. This program used the Taylor linearisation method for variance estimation for survey estimates that are means, proportions or ratios. The Jackknife repeated replication method is used for variance estimation of more complex statistics such as fertility and mortality rates.
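As an illustration of the jackknife approach mentioned above, the following sketch estimates the standard error of a ratio (such as a rate) by re-computing it with one cluster deleted at a time. This is a simplified, generic implementation, not the DHS Program's production SAS code.

```python
import numpy as np

# Delete-one-cluster jackknife for a ratio estimate (e.g. a rate).
def jackknife_se(cluster_num, cluster_den):
    """cluster_num / cluster_den: per-cluster weighted numerators and denominators."""
    num = np.asarray(cluster_num, dtype=float)
    den = np.asarray(cluster_den, dtype=float)
    k = len(num)
    full = num.sum() / den.sum()  # estimate from all clusters
    reps = np.array([(num.sum() - num[i]) / (den.sum() - den[i]) for i in range(k)])
    variance = (k - 1) / k * np.sum((reps - full) ** 2)
    return full, np.sqrt(variance)
```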
A more detailed description of estimates of sampling errors is presented in APPENDIX B of the survey report.
The 2023 Jordan Population and Family Health Survey (JPFHS) is the eighth Population and Family Health Survey conducted in Jordan, following those conducted in 1990, 1997, 2002, 2007, 2009, 2012, and 2017–18. It was implemented by the Department of Statistics (DoS) at the request of the Ministry of Health (MoH).
The primary objective of the 2023 JPFHS is to provide up-to-date estimates of key demographic and health indicators. Specifically, the 2023 JPFHS: • Collected data at the national level that allowed calculation of key demographic indicators • Explored the direct and indirect factors that determine levels of and trends in fertility and childhood mortality • Measured contraceptive knowledge and practice • Collected data on key aspects of family health, including immunisation coverage among children, prevalence and treatment of diarrhoea and other diseases among children under age 5, and maternity care indicators such as antenatal visits and assistance at delivery • Obtained data on child feeding practices, including breastfeeding, and conducted anthropometric measurements to assess the nutritional status of children under age 5 and women age 15–49 • Conducted haemoglobin testing with eligible children age 6–59 months and women age 15–49 to gather information on the prevalence of anaemia • Collected data on women’s and men’s knowledge and attitudes regarding sexually transmitted infections and HIV/AIDS • Obtained data on women’s experience of emotional, physical, and sexual violence • Gathered data on disability among household members
The information collected through the 2023 JPFHS is intended to assist policymakers and programme managers in evaluating and designing programmes and strategies for improving the health of the country’s population. The survey also provides indicators relevant to the Sustainable Development Goals (SDGs) for Jordan.
National coverage
The survey covered all de jure household members (usual residents), all women aged 15-49, men aged 15-59, and all children aged 0-4 resident in the household.
Sample survey data [ssd]
The sampling frame used for the 2023 JPFHS was the 2015 Jordan Population and Housing Census (JPHC) frame. The survey was designed to produce representative results for the country as a whole, for urban and rural areas separately, for each of the country’s 12 governorates, and for four nationality domains: the Jordanian population, the Syrian population living in refugee camps, the Syrian population living outside of camps, and the population of other nationalities. Each of the 12 governorates is subdivided into districts, each district into subdistricts, each subdistrict into localities, and each locality into areas and subareas. In addition to these administrative units, during the 2015 JPHC each subarea was divided into convenient area units called census blocks. An electronic file of a complete list of all of the census blocks is available from DoS. The list contains census information on households, populations, geographical locations, and socioeconomic characteristics of each block. Based on this list, census blocks were regrouped to form a general statistical unit of moderate size, called a cluster, which is widely used in various surveys as the primary sampling unit (PSU). The sample clusters for the 2023 JPFHS were selected from the frame of cluster units provided by the DoS.
The sample for the 2023 JPFHS was a stratified sample selected in two stages from the 2015 census frame. Stratification was achieved by separating each governorate into urban and rural areas. In addition, the Syrian refugee camps in Zarqa and Mafraq each formed a special sampling stratum. In total, 26 sampling strata were constructed. Samples were selected independently in each sampling stratum, through a two-stage selection process, according to the sample allocation. Before the sample selection, the sampling frame was sorted by district and subdistrict within each sampling stratum. By using probability proportional to size selection at the first stage of sampling, an implicit stratification and proportional allocation were achieved at each of the lower administrative levels.
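The implicit stratification described above comes from sorting the frame by district and subdistrict and then selecting clusters systematically with probability proportional to size. A minimal sketch of such a first-stage selection, with an illustrative frame structure:

```python
import numpy as np

# Systematic PPS selection from a frame sorted by district and
# subdistrict, which is what produces the implicit stratification.
def systematic_pps(frame, n_select, seed=2023):
    """frame: list of (cluster_id, measure_of_size), already sorted."""
    rng = np.random.default_rng(seed)
    sizes = np.array([size for _, size in frame], dtype=float)
    cum = np.cumsum(sizes)
    interval = cum[-1] / n_select
    start = rng.uniform(0, interval)
    points = start + interval * np.arange(n_select)     # equally spaced selection points
    picks = np.searchsorted(cum, points, side="right")  # map each point to a cluster
    return [frame[i][0] for i in picks]
```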
For further details on sample design, see APPENDIX A of the final report.
Computer Assisted Personal Interview [capi]
Five questionnaires were used for the 2023 JPFHS: (1) the Household Questionnaire, (2) the Woman’s Questionnaire, (3) the Man’s Questionnaire, (4) the Biomarker Questionnaire, and (5) the Fieldworker Questionnaire. The questionnaires, based on The DHS Program’s model questionnaires, were adapted to reflect the population and health issues relevant to Jordan. Input was solicited from various stakeholders representing government ministries and agencies, nongovernmental organisations, and international donors. After all questionnaires were finalised in English, they were translated into Arabic.
All electronic data files for the 2023 JPFHS were transferred via SynCloud to the DoS central office in Amman, where they were stored on a password-protected computer. The data processing operation included secondary editing, which required resolution of computer-identified inconsistencies and coding of open-ended questions. Data editing was accomplished using CSPro software. During the duration of fieldwork, tables were generated to check various data quality parameters, and specific feedback was given to the teams to improve performance. Secondary editing and data processing were initiated in July and completed in September 2023.
A total of 20,054 households were selected for the sample, of which 19,809 were occupied. Of the occupied households, 19,475 were successfully interviewed, yielding a response rate of 98%.
In the interviewed households, 13,020 eligible women age 15–49 were identified for individual interviews; interviews were completed with 12,595 women, yielding a response rate of 97%. In the subsample of households selected for the male survey, 6,506 men age 15–59 were identified as eligible for individual interviews and 5,873 were successfully interviewed, yielding a response rate of 90%.
The estimates from a sample survey are affected by two types of errors: nonsampling errors and sampling errors. Nonsampling errors are the results of mistakes made in implementing data collection and in data processing, such as failure to locate and interview the correct household, misunderstanding of the questions on the part of either the interviewer or the respondent, and data entry errors. Although numerous efforts were made during the implementation of the 2023 Jordan Population and Family Health Survey (2023 JPFHS) to minimise this type of error, nonsampling errors are impossible to avoid and difficult to evaluate statistically.
Sampling errors, on the other hand, can be evaluated statistically. The sample of respondents selected in the 2023 JPFHS is only one of many samples that could have been selected from the same population, using the same design and sample size. Each of these samples would yield results that differ somewhat from the results of the actual sample selected. Sampling errors are a measure of the variability among all possible samples. Although the degree of variability is not known exactly, it can be estimated from the survey results.
Sampling error is usually measured in terms of the standard error for a particular statistic (mean, percentage, etc.), which is the square root of the variance. The standard error can be used to calculate confidence intervals within which the true value for the population can reasonably be assumed to fall. For example, for any given statistic calculated from a sample survey, the value of that statistic will fall within a range of plus or minus two times the standard error of that statistic in 95% of all possible samples of identical size and design.
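For example, the two-standard-error rule translates into code as follows; the numbers are made up purely to illustrate the calculation.

```python
# Worked example of the "plus or minus two standard errors" rule, with
# made-up numbers purely to illustrate the calculation.
estimate = 0.42         # e.g. a proportion estimated from the survey
standard_error = 0.013  # its estimated standard error
lower = estimate - 2 * standard_error
upper = estimate + 2 * standard_error
print(f"Approximate 95% confidence interval: ({lower:.3f}, {upper:.3f})")
# -> (0.394, 0.446)
```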
If the sample of respondents had been selected by simple random sampling, it would have been possible to use straightforward formulas for calculating sampling errors. However, the 2023 JPFHS sample was the result of a multistage stratified design, and, consequently, it was necessary to use more complex formulas. Sampling errors are computed using SAS programs developed by ICF. These programs use the Taylor linearisation method to estimate variances for survey estimates that are means, proportions, or ratios. The Jackknife repeated replication method is used for variance estimation of more complex statistics such as fertility and mortality rates.
A more detailed description of estimates of sampling errors is presented in APPENDIX B of the survey report.