The dashboard project collects new data in each country using three new instruments: a School Survey, a Policy Survey, and a Survey of Public Officials. Data collection involves school visits, classroom observations, legislative reviews, teacher and student assessments, and interviews with teachers, principals, and public officials. In addition, the project draws on some existing data sources to complement the new data it collects. A major objective of the GEPD project was to develop focused, cost-effective instruments and data-collection procedures, so that the dashboard can be inexpensive enough to be applied (and re-applied) in many countries. The team achieved this by streamlining and simplifying existing instruments, and thereby reducing the time required for data collection and training of enumerators.
National
Schools, teachers, students, public officials
Sample survey data [ssd]
The aim of the Global Education Policy Dashboard school survey is to produce nationally representative estimates, which will be able to detect changes in the indicators over time at a minimum power of 80% and with a 0.05 significance level. We also wish to detect differences by urban/rural location.
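As a concrete illustration of the power target above, the short sketch below computes the sample size needed to detect a given change in a binary indicator at 80% power and a 0.05 significance level. The indicator values and the design effect are made-up inputs for illustration, not figures from the project.

from math import ceil
from scipy.stats import norm

def n_per_round(p1, p2, alpha=0.05, power=0.80, deff=1.0):
    """Units needed in each survey round to detect a change from p1 to p2
    in a binary indicator with a two-sided test, inflated by a design effect."""
    z_alpha = norm.ppf(1 - alpha / 2)
    z_power = norm.ppf(power)
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    return ceil(deff * (z_alpha + z_power) ** 2 * variance / (p2 - p1) ** 2)

# Illustrative inputs: a 10-point change with a design effect of 1.5.
print(n_per_round(0.50, 0.60, deff=1.5))   # 577 units per round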
For our school survey, we will employ a two-stage random sample design. In the first stage, a sample of typically around 200 schools (the exact number depends on local conditions) is drawn in advance by Bank staff. In the second stage, a sample of teachers and students will be drawn in the field to answer questions from our survey modules. A total of 10 teachers will be sampled for absenteeism. Five teachers will be interviewed and given a content knowledge exam. Three 1st grade students will be assessed at random, and a classroom of 4th grade students will be assessed at random. Stratification will be based on the school’s urban/rural classification and on region. When stratifying by region, we will work with our partners within the country to make sure we include all relevant geographical divisions.
For our Survey of Public Officials, we will sample a total of 200 public officials. Roughly 60 officials are typically surveyed at the federal level, while 140 officials will be surveyed at the regional/district level. For selection of officials at the regional and district level, we will employ a cluster sampling strategy, where roughly 10 regional offices (or whatever the secondary administrative unit is called) are chosen at random from among the regions in which schools were sampled. Then, among these 10 regions, we also typically select around 10 districts (tertiary administrative level units) from among the districts in which schools were sampled. The result of this sampling approach is that for 10 clusters we will have links from the school to the district office to the regional office to the central office. Within the regions/districts, five or six officials will be sampled, including the head of the organization, the HR director, two division directors from finance and planning, and one or two professional employees selected at random from the finance, planning, and one other randomly chosen service-related department. At the federal level, we will interview the HR director, finance director, planning director, and the directors of three randomly selected service-focused departments. In addition to the directors of each of these departments, a sample of 9 professional employees will be chosen in each department at random on the day of the interview.
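The two preceding paragraphs describe the field sampling plan in prose. The sketch below shows one way the two school-survey stages and the officials' cluster draw could be implemented. The frame columns (region, urban), the helper names, and the toy counts are illustrative assumptions, not the project's actual code.

import numpy as np
import pandas as pd

rng = np.random.default_rng(seed=1)

# Hypothetical school frame: one row per school, with illustrative column names.
frame = pd.DataFrame({
    "school_id": np.arange(3000),
    "region": rng.choice(["North", "Centre", "South"], size=3000),
    "urban": rng.choice([0, 1], size=3000),
})

# Stage 1: allocate ~200 schools across region x urban/rural strata in proportion
# to stratum size, then draw a simple random sample within each stratum.
TARGET = 200
stratum_sizes = frame.groupby(["region", "urban"]).size()
alloc = (stratum_sizes / stratum_sizes.sum() * TARGET).round().astype(int)
school_sample = pd.concat(
    frame[(frame["region"] == r) & (frame["urban"] == u)].sample(n, random_state=1)
    for (r, u), n in alloc.items()
)

# Stage 2 (done in the field, per sampled school): 10 teachers checked for absenteeism,
# 5 of them interviewed and assessed, 3 grade-1 pupils, and one grade-4 classroom.
def within_school_draw(teachers, grade1_pupils, grade4_classrooms):
    absence_check = rng.choice(teachers, size=min(10, len(teachers)), replace=False)
    interviewed = rng.choice(absence_check, size=min(5, len(absence_check)), replace=False)
    pupils = rng.choice(grade1_pupils, size=min(3, len(grade1_pupils)), replace=False)
    classroom = rng.choice(grade4_classrooms)
    return absence_check, interviewed, pupils, classroom

# Public officials: 10 regions among those with sampled schools, then ~10 districts
# among those regions' sampled districts; fixed posts plus random professionals follow.
def official_clusters(districts_by_region, n_regions=10, n_districts=10):
    regions = rng.choice(list(districts_by_region), size=min(n_regions, len(districts_by_region)), replace=False)
    pool = [d for r in regions for d in districts_by_region[r]]
    return regions, rng.choice(pool, size=min(n_districts, len(pool)), replace=False)

Because the stage-1 allocation is rounded per stratum, the realized sample can differ slightly from the 200-school target; in practice the allocation would be adjusted to hit the target exactly.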
For our school survey, we select only schools that are supervised by the Ministry of Education or are private schools. No schools supervised by the Ministry of Defense, Ministry of Endowments, Ministry of Higher Education, or Ministry of Social Development are included. This left us with a sampling frame containing 3,330 schools, with 1,297 private schools and 2,003 schools managed by the Ministry of Education. The schools must also have at least 3 grade 1 students, 3 grade 4 students, and 3 teachers. We oversampled Southern schools to reach a total of 50 Southern schools for regional comparisons. Additionally, we oversampled evening schools, for a total of 40 evening schools.
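A minimal sketch of how the frame restrictions and the two oversamples described above could be applied is given below. The column names (supervising_authority, grade1_students, and the 0/1 flags for the oversampled strata) are hypothetical; the actual frame layout is not documented here.

import pandas as pd

def build_frame(raw: pd.DataFrame) -> pd.DataFrame:
    """Keep Ministry of Education and private schools that meet the minimum
    grade-1, grade-4, and teacher counts. Column names are illustrative."""
    return raw[
        raw["supervising_authority"].isin(["Ministry of Education", "Private"])
        & (raw["grade1_students"] >= 3)
        & (raw["grade4_students"] >= 3)
        & (raw["teachers"] >= 3)
    ]

def top_up(sample: pd.DataFrame, frame: pd.DataFrame, flag: str, target: int, seed=3) -> pd.DataFrame:
    """Top up an oversampled stratum (e.g. flag='southern' to 50, flag='evening' to 40)
    by drawing extra eligible schools at random until the target count is reached."""
    have = int(sample[flag].sum())
    pool = frame[(frame[flag] == 1) & ~frame.index.isin(sample.index)]
    extra = pool.sample(min(max(target - have, 0), len(pool)), random_state=seed)
    return pd.concat([sample, extra])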
A total of 250 schools were surveyed.
Computer Assisted Personal Interview [capi]
The dashboard project collects new data in each country using three new instruments: a School Survey, a Policy Survey, and a Survey of Public Officials. Data collection involves school visits, classroom observations, legislative reviews, teacher and student assessments, and interviews with teachers, principals, and public officials. In addition, the project draws on some existing data sources to complement the new data it collects. A major objective of the GEPD project was to develop focused, cost-effective instruments and data-collection procedures, so that the dashboard can be inexpensive enough to be applied (and re-applied) in many countries. The team achieved this by streamlining and simplifying existing instruments, and thereby reducing the time required for data collection and training of enumerators.
More information pertaining to each of the three instruments can be found below:
School Survey: The School Survey collects data primarily on practices (the quality of service delivery in schools), but also on some de facto policy indicators. It consists of streamlined versions of existing instruments—including Service Delivery Surveys on teachers and inputs/infrastructure, Teach on pedagogical practice, Global Early Child Development Database (GECDD) on school readiness of young children, and the Development World Management Survey (DWMS) on management quality—together with new questions to fill gaps in those instruments. Though the number of modules is similar to the full version of the Service Delivery Indicators (SDI) Survey, the number of items and the complexity of the questions within each module are significantly lower. The School Survey includes 8 short modules: School Information, Teacher Presence, Teacher Survey, Classroom Observation, Teacher Assessment, Early Learner Direct Assessment, School Management Survey, and 4th-grade Student Assessment. For a team of two enumerators, it takes on average about 4 hours to collect all information in a given school. For more information, refer to the Frequently Asked Questions.
Policy Survey: The Policy Survey collects information to feed into the policy de jure indicators. This survey is filled out by key informants in each country, drawing on their knowledge to identify key elements of the policy framework (as in the SABER approach to policy-data collection that the Bank has used over the past 7 years). The survey includes questions on policies related to teachers, school management, inputs and infrastructure, and learners. In total, there are 52 questions in the survey as of June 2020. The key informant is expected to spend 2-3 days gathering and analyzing the relevant information to answer the survey questions.
Survey of Public Officials: The Survey of Public Officials collects information about the capacity and orientation of the bureaucracy, as well as political factors affecting education outcomes. This survey is a streamlined and education-focused version of the civil-servant surveys that the Bureaucracy Lab (a joint initiative of the Governance Global Practice and the Development Impact Evaluation unit of the World Bank) has implemented in several countries. The survey includes questions about technical and leadership skills, work environment, stakeholder engagement, impartial decision-making, and attitudes and behaviors. The survey takes 30-45 minutes per public official and is used to interview Ministry of Education officials working at the central, regional, and district levels in each country.
The aim of the Global Education Policy Dashboard school survey is to produce nationally representative estimates, which will be able to detect changes in the indicators over time at a minimum power of 80% and with a 0.05 significance level.
Interest group scholars have struggled to document whether and how interest groups impact policy outcomes. At the same time, large, powerful vested interests like teachers’ unions have been accused of getting in the way of policy change, despite a lack of consistent evidence. This dissertation uses the case of education reform to disentangle the role of different types of interest groups in U.S. state policymaking. Through four essays, this dissertation shows that interest group power comes in multiple forms, that interest groups benefit where they have legislative allies, and that interest competition impacts policy. Bucking the conventional wisdom that, as the strongest interest group in education, teachers’ unions’ preferences dictate education policy outcomes, I show that teachers’ unions most strongly impact those policies that affect them organizationally. For other policies, however, other groups matter more. I show that education reform groups use information and assistance, while philanthropic foundations use funding to state bureaucracies to further policies that teachers’ unions oppose.
The dashboard project collects new data in each country using three new instruments: a School Survey, a Policy Survey, and a Survey of Public Officials. Data collection involves school visits, classroom observations, legislative reviews, teacher and student assessments, and interviews with teachers, principals, and public officials. In addition, the project draws on some existing data sources to complement the new data it collects. A major objective of the GEPD project was to develop focused, cost-effective instruments and data-collection procedures, so that the dashboard can be inexpensive enough to be applied (and re-applied) in many countries. The team achieved this by streamlining and simplifying existing instruments, and thereby reducing the time required for data collection and training of enumerators.
National
Schools, teachers, students, public officials
Sample survey data [ssd]
The aim of the Global Education Policy Dashboard school survey is to produce nationally representative estimates, which will be able to detect changes in the indicators over time at a minimum power of 80% and with a 0.05 significance level. We also wish to detect differences by urban/rural location.
For our school survey, we will employ a two-stage random sample design. In the first stage, a sample of typically around 200 schools (the exact number depends on local conditions) is drawn in advance by Bank staff. In the second stage, a sample of teachers and students will be drawn in the field to answer questions from our survey modules. A total of 10 teachers will be sampled for absenteeism. Five teachers will be interviewed and given a content knowledge exam. Three 1st grade students will be assessed at random, and a classroom of 4th grade students will be assessed at random. Stratification will be based on the school’s urban/rural classification and on region. When stratifying by region, we will work with our partners within the country to make sure we include all relevant geographical divisions.
For our Survey of Public Officials, we will sample a total of 200 public officials. Roughly 60 officials are typically surveyed at the federal level, while 140 officials will be surveyed at the regional/district level. For selection of officials at the regional and district level, we will employ a cluster sampling strategy, where roughly 10 regional offices (or whatever the secondary administrative unit is called) are chosen at random from among the regions in which schools were sampled. Then, among these 10 regions, we also typically select around 10 districts (tertiary administrative level units) from among the districts in which schools were sampled. The result of this sampling approach is that for 10 clusters we will have links from the school to the district office to the regional office to the central office. Within the regions/districts, five or six officials will be sampled, including the head of the organization, the HR director, two division directors from finance and planning, and one or two professional employees selected at random from the finance, planning, and one other randomly chosen service-related department. At the federal level, we will interview the HR director, finance director, planning director, and the directors of three randomly selected service-focused departments. In addition to the directors of each of these departments, a sample of 9 professional employees will be chosen in each department at random on the day of the interview.
The sample for the Global Education Policy Dashboard in SLE was based in part on a previous sample of 260 schools which were part of an earlier EGRA study. Details from the sampling for that study are quoted below. An additional booster sample of 40 schools was chosen to be representative of smaller schools with fewer than 30 learners.
EGRA Details:
"The sampling frame began with the 2019 Annual School Census (ASC) list of primary schools as provided by UNICEF/MBSSE where the sample of 260 schools for this study were obtained from an initial list of 7,154 primary schools. Only schools that meet a pre-defined selection criteria were eligible for sampling.
To achieve the recommended sample size of 10 learners per grade, schools that had an enrolment of at least 30 learners in Grade 2 in 2019 were considered. To achieve a high level of confidence in the findings and generate enough data for analysis, the selection criteria only considered schools that: • had an enrolment of at least 30 learners in grade 1; and • had an active grade 4 in 2019 (enrolment not zero)
The sample was taken from a population of 4,597 primary schools that met the eligibility criteria above, representing 64.3% of all the 7,154 primary schools in Sierra Leone (as per the 2019 school census). Schools with higher numbers of learners were purposefully selected to ensure the sample size could be met in each site.
As a result, a sample of 260 schools were drawn using proportional to size allocation with simple random sampling without replacement in each stratum. In the population, there were 16 districts and five school ownership categories (community, government, mission/religious, private and others). A total of 63 strata were made by forming combinations of the 16 districts and school ownership categories. In each stratum, a sample size was computed proportional to the total population and samples were drawn randomly without replacement. Drawing from other EGRA/EGMA studies conducted by Montrose in the past, a backup sample of up to 78 schools (30% of the sample population) with which enumerator teams can replace sample schools was also be drawn.
In the distribution of sampled schools by ownership, majority of the sampled schools are owned by mission/religious group (62.7%, n=163) followed by the government owned schools at 18.5% (n=48). Additionally, in school distribution by district, majority of the sampled schools (54%) were found in Bo, Kambia, Kenema, Kono, Port Loko and Kailahun districts. Refer to annex 9. for details on the population and sample distribution by district."
Because of the restriction that at least 30 learners were enrolled in Grade 2, we added a further 40 schools to the sample from among smaller schools with between 3 and 30 grade 2 students. The objective of this supplement was to make the sample more nationally representative, as the restriction reduced the sampling frame for the EGRA/EGMA sample by over 1,500 schools, from 7,154 to 4,597.
The 40 schools were chosen in a manner consistent with the original set of EGRA/EGMA schools. The 16 districts formed the strata. In each stratum, the number of schools selected was proportional to the total population of the stratum, and within each stratum schools were chosen with probability proportional to size.
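The booster draw described in the last two paragraphs (allocation across the 16 district strata in proportion to stratum size, then probability-proportional-to-size selection within each stratum) can be sketched as follows. The column names, the systematic-PPS helper, and the handling of rounding are illustrative assumptions rather than the study's actual procedure.

import numpy as np
import pandas as pd

def pps_systematic(sizes, n, rng):
    """Systematic PPS draw of n indices (one common PPS scheme). Very large units
    can be hit more than once; certainties would be handled separately in practice."""
    sizes = np.asarray(sizes, dtype=float)
    cum = np.cumsum(sizes)
    step = cum[-1] / n
    start = rng.uniform(0, step)
    return np.searchsorted(cum, start + step * np.arange(n))

def booster_sample(small_schools: pd.DataFrame, total=40, seed=4) -> pd.DataFrame:
    """Allocate the 40 booster schools across district strata in proportion to the
    number of eligible small schools, then draw PPS by enrolment within each stratum."""
    rng = np.random.default_rng(seed)
    counts = small_schools.groupby("district").size()
    alloc = (counts / counts.sum() * total).round().astype(int)   # rounding may shift the total slightly
    picks = []
    for district, n in alloc.items():
        if n == 0:
            continue
        stratum = small_schools[small_schools["district"] == district]
        idx = pps_systematic(stratum["enrolment"], min(n, len(stratum)), rng)
        picks.append(stratum.iloc[idx])
    return pd.concat(picks)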
Computer Assisted Personal Interview [capi]
The dashboard project collects new data in each country using three new instruments: a School Survey, a Policy Survey, and a Survey of Public Officials. Data collection involves school visits, classroom observations, legislative reviews, teacher and student assessments, and interviews with teachers, principals, and public officials. In addition, the project draws on some existing data sources to complement the new data it collects. A major objective of the GEPD project was to develop focused, cost-effective instruments and data-collection procedures, so that the dashboard can be inexpensive enough to be applied (and re-applied) in many countries. The team achieved this by streamlining and simplifying existing instruments, and thereby reducing the time required for data collection and training of enumerators.
More information pertaining to each of the three instruments can be found below: - School Survey: The School Survey collects data primarily on practices (the quality of service delivery in schools), but also on some de facto policy indicators. It consists of streamlined versions of existing instruments—including Service Delivery Surveys on teachers and inputs/infrastructure, Teach on pedagogical practice, Global Early Child Development Database (GECDD) on school readiness of young children, and the Development World Management Survey (DWMS) on management quality—together with new questions to fill gaps in those instruments. Though the number of modules is similar to the full version of the Service Delivery Indicators (SDI) Survey, the number of items and the complexity of the questions within each module are significantly lower. The School Survey includes 8 short modules: School Information, Teacher Presence, Teacher Survey, Classroom Observation, Teacher Assessment, Early Learner Direct Assessment, School Management Survey, and 4th-grade Student Assessment. For a team of two enumerators, it takes on average about 4 hours to collect all information in a given school. For more information, refer to the Frequently Asked Questions.
CC0 1.0 Universal Public Domain Dedication (https://creativecommons.org/publicdomain/zero/1.0/)
License information was derived automatically
Users can view brief descriptions of laws and policies pertaining to the health of students. Topics include: wellness policy, health education curriculum, school meal programs, physical activity, emergency response, bullying, and facility safety, among others. Background The State School Health Policy Database was developed by the National Association of State Boards of Education and is supported by the Division of Adolescent and School Health (DASH) of the Centers for Disease Control and Prevention (CDC) and the Robert Wood Johnson Foundation. This database is useful for school policymakers interested in viewing strategies and policies across states, and for researchers and policy evaluators seeking to track changes in policies across the United States. Topics include: wellness policies, health education curriculum, school meal programs, school food environment, physical activity, drug-free schools, bullying, emergency response, tobacco use, air quality, pesticide use, and facility safety. User Functionality Users can view brief descriptions of laws and policies pertaining to the health of students. When possible, hyperlinks to full written policies are included. Data Notes The database is updated regularly with new and revised laws and policies from across the United States.
The dashboard project collects new data in each country using three new instruments: a School Survey, a Policy Survey, and a Survey of Public Officials. Data collection involves school visits, classroom observations, legislative reviews, teacher and student assessments, and interviews with teachers, principals, and public officials. In addition, the project draws on some existing data sources to complement the new data it collects. A major objective of the GEPD project was to develop focused, cost-effective instruments and data-collection procedures, so that the dashboard can be inexpensive enough to be applied (and re-applied) in many countries. The team achieved this by streamlining and simplifying existing instruments, and thereby reducing the time required for data collection and training of enumerators.
National
Schools, teachers, students, public officials
Sample survey data [ssd]
The aim of the Global Education Policy Dashboard school survey is to produce nationally representative estimates, which will be able to detect changes in the indicators over time at a minimum power of 80% and with a 0.05 significance level. We also wish to detect differences by urban/rural location.
For our school survey, we will employ a two-stage random sample design. In the first stage, a sample of typically around 200 schools (the exact number depends on local conditions) is drawn in advance by Bank staff. In the second stage, a sample of teachers and students will be drawn in the field to answer questions from our survey modules. A total of 10 teachers will be sampled for absenteeism. Five teachers will be interviewed and given a content knowledge exam. Three 1st grade students will be assessed at random, and a classroom of 4th grade students will be assessed at random. Stratification will be based on the school’s urban/rural classification and on region. When stratifying by region, we will work with our partners within the country to make sure we include all relevant geographical divisions.
For our Survey of Public Officials, we will sample a total of 200 public officials. Roughly 60 officials are typically surveyed at the federal level, while 140 officials will be surveyed at the regional/district level. For selection of officials at the regional and district level, we will employ a cluster sampling strategy, where roughly 10 regional offices (or whatever the secondary administrative unit is called) are chosen at random from among the regions in which schools were sampled. Then, among these 10 regions, we also typically select around 10 districts (tertiary administrative level units) from among the districts in which schools were sampled. The result of this sampling approach is that for 10 clusters we will have links from the school to the district office to the regional office to the central office. Within the regions/districts, five or six officials will be sampled, including the head of the organization, the HR director, two division directors from finance and planning, and one or two professional employees selected at random from the finance, planning, and one other randomly chosen service-related department. At the federal level, we will interview the HR director, finance director, planning director, and the directors of three randomly selected service-focused departments. In addition to the directors of each of these departments, a sample of 9 professional employees will be chosen in each department at random on the day of the interview.
Overall, we draw a sample of 300 public schools spanning each of the regions of Ethiopia. Relative to the total number of schools in Ethiopia, this constitutes an approximately 1% sample. Because of the large size of the country, and because there can be very large distances between Woredas within the same region, we chose a cluster sampling approach. In this approach, 100 Woredas were chosen with probability proportional to size (grade 4 enrollment). Then, within each Woreda, two rural schools and one urban school were chosen with probability proportional to size (grade 4 enrollment).
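A sketch of the two-level cluster draw described above is given below, assuming a school-level frame with illustrative woreda, urban, and grade-4 enrolment columns. The sequential weighted draw used here is a simple stand-in for a formal PPS-without-replacement scheme, not the project's actual selection code.

import numpy as np
import pandas as pd

def weighted_draw(df: pd.DataFrame, size_col: str, n: int, rng) -> pd.DataFrame:
    """Draw n rows without replacement with probability proportional to size_col
    (sequential weighted sampling, used here as an approximation to PPS)."""
    n = min(n, len(df))
    if n == 0:
        return df.iloc[:0]
    p = (df[size_col] / df[size_col].sum()).to_numpy()
    return df.loc[rng.choice(df.index.to_numpy(), size=n, replace=False, p=p)]

def woreda_cluster_sample(schools: pd.DataFrame, n_woredas=100, seed=5) -> pd.DataFrame:
    """Stage 1: woredas with probability proportional to grade-4 enrolment.
    Stage 2: two rural and one urban school per woreda, again proportional to grade-4 enrolment."""
    rng = np.random.default_rng(seed)
    woreda_sizes = schools.groupby("woreda", as_index=False)["grade4_enrolment"].sum()
    chosen = weighted_draw(woreda_sizes, "grade4_enrolment", n_woredas, rng)
    picks = []
    for w in chosen["woreda"]:
        in_w = schools[schools["woreda"] == w]
        picks.append(weighted_draw(in_w[in_w["urban"] == 0], "grade4_enrolment", 2, rng))
        picks.append(weighted_draw(in_w[in_w["urban"] == 1], "grade4_enrolment", 1, rng))
    return pd.concat(picks)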
Because of conflict in the Tigray region, the initial selection of 12 schools there had to be trimmed to 6. The 6 slots that were freed up were reallocated to schools in other regions of Ethiopia.
Computer Assisted Personal Interview [capi]
The dashboard project collects new data in each country using three new instruments: a School Survey, a Policy Survey, and a Survey of Public Officials. Data collection involves school visits, classroom observations, legislative reviews, teacher and student assessments, and interviews with teachers, principals, and public officials. In addition, the project draws on some existing data sources to complement the new data it collects. A major objective of the GEPD project was to develop focused, cost-effective instruments and data-collection procedures, so that the dashboard can be inexpensive enough to be applied (and re-applied) in many countries. The team achieved this by streamlining and simplifying existing instruments, and thereby reducing the time required for data collection and training of enumerators.
More information pertaining to each of the three instruments can be found below:
School Survey: The School Survey collects data primarily on practices (the quality of service delivery in schools), but also on some de facto policy indicators. It consists of streamlined versions of existing instruments—including Service Delivery Surveys on teachers and inputs/infrastructure, Teach on pedagogical practice, Global Early Child Development Database (GECDD) on school readiness of young children, and the Development World Management Survey (DWMS) on management quality—together with new questions to fill gaps in those instruments. Though the number of modules is similar to the full version of the Service Delivery Indicators (SDI) Survey, the number of items and the complexity of the questions within each module are significantly lower. The School Survey includes 8 short modules: School Information, Teacher Presence, Teacher Survey, Classroom Observation, Teacher Assessment, Early Learner Direct Assessment, School Management Survey, and 4th-grade Student Assessment. For a team of two enumerators, it takes on average about 4 hours to collect all information in a given school. For more information, refer to the Frequently Asked Questions.
Policy Survey: The Policy Survey collects information to feed into the policy de jure indicators. This survey is filled out by key informants in each country, drawing on their knowledge to identify key elements of the policy framework (as in the SABER approach to policy-data collection that the Bank has used over the past 7 years). The survey includes questions on policies related to teachers, school management, inputs and infrastructure, and learners. In total, there are 52 questions in the survey as of June 2020. The key informant is expected to spend 2-3 days gathering and analyzing the relevant information to answer the survey questions.
Survey of Public Officials: The Survey of Public Officials collects information about the capacity and orientation of the bureaucracy, as well as political factors affecting education outcomes. This survey is a streamlined and education-focused version of the civil-servant surveys that the Bureaucracy Lab (a joint initiative of the Governance Global Practice and the Development Impact Evaluation unit of the World Bank) has implemented in several countries. The survey includes questions about technical and leadership skills, work environment, stakeholder engagement, impartial decision-making, and attitudes and behaviors. The survey takes 30-45 minutes per public official and is used to interview Ministry of Education officials working at the central, regional, and district levels in each country.
The aim of the Global Education Policy Dashboard school survey is to produce nationally representative estimates, which will be able to detect changes in the indicators over time at a minimum power of 80% and with a 0.05 significance level.
Attribution 4.0 (CC BY 4.0) (https://creativecommons.org/licenses/by/4.0/)
License information was derived automatically
In this study, I use longitudinal data on inter-district open enrollment patterns and policies to estimate the effect of local decisions over open enrollment policy on non-resident enrollment in Wayne County, Michigan—which includes Detroit and parts of the metropolitan Detroit area. I find that when districts set more restrictive open enrollment policies, they enroll fewer new non-resident students, driven by a decrease in new Black, Hispanic, and low-income non-resident students specifically. When districts allow inter-district open enrollment, racial and socioeconomic segregation within those districts remain unchanged, and racial isolation slightly increases. My findings suggest that regulating enrollment policies to reduce discretionary exclusion can increase access to inter-district choice for some low-income and racially minoritized students, but that these kinds of policy changes are unlikely to reduce racial segregation and socioeconomic stratification more broadly.This research result used data structured and maintained by the MERI-Michigan Education Data Center (MEDC). MEDC data are modified for analysis purposes using rules governed by MEDC and are not identical to those data collected and maintained by the Michigan Department of Education (MDE) and/or Michigan’s Center for Educational Performance and Information (CEPI). Results, information, and opinions solely represent the analysis, information, and opinions of the author and are not endorsed by, or reflect the views or positions of, grantors, MDE, and CEPI or any employee thereof. All errors are my own.
This table contains data on the percent of population age 25 and up with a four-year college degree or higher for California, its regions, counties, county subdivisions, cities, towns, and census tracts. Greater educational attainment has been associated with health-promoting behaviors including consumption of fruits and vegetables and other aspects of healthy eating, engaging in regular physical activity, and refraining from excessive consumption of alcohol and from smoking. Completion of formal education (e.g., high school) is a key pathway to employment and access to healthier and higher paying jobs that can provide food, housing, transportation, health insurance, and other basic necessities for a healthy life. Education is linked with social and psychological factors, including sense of control, social standing and social support. These factors can improve health through reducing stress, influencing health-related behaviors and providing practical and emotional support. More information on the data table and a data dictionary can be found in the Data and Resources section. The educational attainment table is part of a series of indicators in the Healthy Communities Data and Indicators Project (HCI) of the Office of Health Equity. The goal of HCI is to enhance public health by providing data, a standardized set of statistical measures, and tools that a broad array of sectors can use for planning healthy communities and evaluating the impact of plans, projects, policy, and environmental changes on community health. The creation of healthy social, economic, and physical environments that promote healthy behaviors and healthy outcomes requires coordination and collaboration across multiple sectors, including transportation, housing, education, agriculture and others. Statistical metrics, or indicators, are needed to help local, regional, and state public health and partner agencies assess community environments and plan for healthy communities that optimize public health. More information on HCI can be found here: https://www.cdph.ca.gov/Programs/OHE/CDPH%20Document%20Library/Accessible%202%20CDPH_Healthy_Community_Indicators1pager5-16-12.pdf The format of the educational attainment table is based on the standardized data format for all HCI indicators. As a result, this data table contains certain variables used in the HCI project (e.g., indicator ID, and indicator definition). Some of these variables may contain the same value for all observations.
https://dataverse.ada.edu.au/api/datasets/:persistentId/versions/1.4/customlicense?persistentId=doi:10.26193/6JG0DM
The Curriculum Policies Project (http://scpp.esrc.unimelb.edu.au/) dataset contains a series of 17 transcripts of interviews with 19 state curriculum experts and education policymakers, as part of the ARC Discovery project 'School Knowledge, Working Knowledge and the Knowing Subject: A Review of State Curriculum Policies 1975–2005,' based at the University of Melbourne. Responding to a noted dearth of systematic scholarship about the development of state curriculum policies, the Curriculum Policies project aimed to produce a foundation picture of developments in curriculum policies across the nation over a thirty-year period. The project provided a wide overview of the last generation of state curricula, moving past previous projects that were limited in scope to individual government reports, Commonwealth developments, subject areas or political contexts. The overarching focus of the project was on charting continuities and changes in state curriculum policies, especially regarding changing approaches to knowledge, to students, and to the marking out of academic and vocational agendas. The focus was broadly on secondary schooling, and aimed at building up snapshots of curriculum changes at ten-year intervals. As part of this research project, 34 public servants and education department officials, curriculum academics and scholars were interviewed by Lyn Yates and Cherry Collins over 2007 and 2008. 19 interviewees gave consent for the transcripts of their interviews to appear in this archive. Interviewees were asked to give their personal reflections on the broad changes in curriculum policy over the thirty years from 1975 to 2005, and were invited to shed light on the reasoning and institutional factors that lay behind various policy decisions. The interviews were broad-ranging, informal and largely open-ended; research participants were asked to give a general assessment of their own involvement in curriculum over the thirty years in question, and to highlight any landmarks that were significant to them. They were also invited to address the broader themes of the research study, namely changing attitudes to knowledge, to students and to academic/vocational agendas, and to similarities and differences between the approaches taken in different states.
The dashboard project collects new data in each country using three new instruments: a School Survey, a Policy Survey, and a Survey of Public Officials. Data collection involves school visits, classroom observations, legislative reviews, teacher and student assessments, and interviews with teachers, principals, and public officials. In addition, the project draws on some existing data sources to complement the new data it collects. A major objective of the GEPD project was to develop focused, cost-effective instruments and data-collection procedures, so that the dashboard can be inexpensive enough to be applied (and re-applied) in many countries. The team achieved this by streamlining and simplifying existing instruments, and thereby reducing the time required for data collection and training of enumerators.
National
Schools, teachers, students, public officials
Sample survey data [ssd]
The aim of the Global Education Policy Dashboard school survey is to produce nationally representative estimates, which will be able to detect changes in the indicators over time at a minimum power of 80% and with a 0.05 significance level. We also wish to detect differences by urban/rural location.
For our school survey, we will employ a two-stage random sample design. In the first stage, a sample of typically around 200 schools (the exact number depends on local conditions) is drawn in advance by Bank staff. In the second stage, a sample of teachers and students will be drawn in the field to answer questions from our survey modules. A total of 10 teachers will be sampled for absenteeism. Five teachers will be interviewed and given a content knowledge exam. Three 1st grade students will be assessed at random, and a classroom of 4th grade students will be assessed at random. Stratification will be based on the school’s urban/rural classification and on region. When stratifying by region, we will work with our partners within the country to make sure we include all relevant geographical divisions.
For our Survey of Public Officials, we will sample a total of 200 public officials. Roughly 60 officials are typically surveyed at the federal level, while 140 officials will be surveyed at the regional/district level. For selection of officials at the regional and district level, we will employ a cluster sampling strategy, where roughly 10 regional offices (or whatever the secondary administrative unit is called) are chosen at random from among the regions in which schools were sampled. Then, among these 10 regions, we also typically select around 10 districts (tertiary administrative level units) from among the districts in which schools were sampled. The result of this sampling approach is that for 10 clusters we will have links from the school to the district office to the regional office to the central office. Within the regions/districts, five or six officials will be sampled, including the head of the organization, the HR director, two division directors from finance and planning, and one or two professional employees selected at random from the finance, planning, and one other randomly chosen service-related department. At the federal level, we will interview the HR director, finance director, planning director, and the directors of three randomly selected service-focused departments. In addition to the directors of each of these departments, a sample of 9 professional employees will be chosen in each department at random on the day of the interview.
In order to visit two schools per day, we clustered at the sector level, choosing two schools per cluster. With a sample of 200 schools, this means that we had to allocate 100 PSUs. We combined this clustering with stratification by district and by the urban/rural status of the schools. The number of PSUs allocated to each stratum is proportional to the number of schools in each stratum (i.e., each district × urban/rural combination).
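The PSU allocation described above (100 sector clusters spread across district × urban/rural strata in proportion to each stratum's school count) could look roughly like the following. The largest-remainder rounding and the column names are illustrative choices, not taken from the project code.

import numpy as np
import pandas as pd

def allocate_psus(schools: pd.DataFrame, total_psus: int = 100) -> pd.Series:
    """Allocate PSUs (sectors) to district x urban/rural strata proportionally to
    school counts, using largest-remainder rounding so the totals sum to total_psus.
    Two sample schools are then drawn within each selected sector."""
    counts = schools.groupby(["district", "urban"]).size()
    exact = counts / counts.sum() * total_psus
    alloc = np.floor(exact).astype(int)
    shortfall = total_psus - int(alloc.sum())
    leftovers = (exact - alloc).sort_values(ascending=False)
    alloc.loc[leftovers.index[:shortfall]] += 1
    return alloc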
Computer Assisted Personal Interview [capi]
The dashboard project collects new data in each country using three new instruments: a School Survey, a Policy Survey, and a Survey of Public Officials. Data collection involves school visits, classroom observations, legislative reviews, teacher and student assessments, and interviews with teachers, principals, and public officials. In addition, the project draws on some existing data sources to complement the new data it collects. A major objective of the GEPD project was to develop focused, cost-effective instruments and data-collection procedures, so that the dashboard can be inexpensive enough to be applied (and re-applied) in many countries. The team achieved this by streamlining and simplifying existing instruments, and thereby reducing the time required for data collection and training of enumerators.
More information pertaining to each of the three instruments can be found below: - School Survey: The School Survey collects data primarily on practices (the quality of service delivery in schools), but also on some de facto policy indicators. It consists of streamlined versions of existing instruments—including Service Delivery Surveys on teachers and inputs/infrastructure, Teach on pedagogical practice, Global Early Child Development Database (GECDD) on school readiness of young children, and the Development World Management Survey (DWMS) on management quality—together with new questions to fill gaps in those instruments. Though the number of modules is similar to the full version of the Service Delivery Indicators (SDI) Survey, the number of items and the complexity of the questions within each module are significantly lower. The School Survey includes 8 short modules: School Information, Teacher Presence, Teacher Survey, Classroom Observation, Teacher Assessment, Early Learner Direct Assessment, School Management Survey, and 4th-grade Student Assessment. For a team of two enumerators, it takes on average about 4 hours to collect all information in a given school. For more information, refer to the Frequently Asked Questions.
Policy Survey: The Policy Survey collects information to feed into the policy de jure indicators. This survey is filled out by key informants in each country, drawing on their knowledge to identify key elements of the policy framework (as in the SABER approach to policy-data collection that the Bank has used over the past 7 years). The survey includes questions on policies related to teachers, school management, inputs and infrastructure, and learners. In total, there are 52 questions in the survey as of June 2020. The key informant is expected to spend 2-3 days gathering and analyzing the relevant information to answer the survey questions.
Survey of Public Officials: The Survey of Public Officials collects information about the capacity and orientation of the bureaucracy, as well as political factors affecting education outcomes. This survey is a streamlined and education-focused version of the civil-servant surveys that the Bureaucracy Lab (a joint initiative of the Governance Global Practice and the Development Impact Evaluation unit of the World Bank) has implemented in several countries. The survey includes questions about technical and leadership skills, work environment, stakeholder engagement, impartial decision-making, and attitudes and behaviors. The survey takes 30-45 minutes per public official and is used to interview Ministry of Education officials working at the central, regional, and district levels in each country.
Data quality control was performed in R and Stata. Code to calculate all indicators can be found on GitHub here: https://github.com/worldbank/GEPD/blob/master/Countries/Rwanda/2019/School/01_data/03_school_data_cleaner.R
The aim of the Global Education Policy Dashboard school survey is to produce nationally representative estimates, which will be able to detect changes in the indicators over time at a minimum power of 80% and with a 0.05 significance level.
Attribution 4.0 (CC BY 4.0) (https://creativecommons.org/licenses/by/4.0/)
License information was derived automatically
ABSTRACT: The objective of the article is to analyze the impacts of the current Brazilian educational counter-reform process on Professional and Technological Education (EPT). In this context, we start from the hypothesis that the counter-reform of High School has produced transformations in other fields of educational policy, including EPT, not restricted to the general education curriculum. The concept of counter-reform as elaborated in the political theory of Poulantzas is taken as a theoretical presupposition. Methodologically, we first delimited a group of legal-political mechanisms that relate simultaneously to High School and EPT. Then, through the Content Analysis technique, the official documents that support such mechanisms were investigated, with emphasis on their political-pedagogical and curricular principles. The mechanisms analyzed are: a) the Common National Curriculum Base (BNCC) for High School, expressed in Resolution CNE/CEB No. 17/2018; b) the New Paths Program (PNC), announced by the federal government in November 2019; c) the National Curriculum Guidelines for EPT (DCNEPT), approved by Resolution CNE/CP No. 01/2021. In conclusion, we defend the thesis that a reform of the Brazilian EPT has been under development since 2016. This reform has been constructed in three stages, chronologically comprised between the years 2016-2018, 2018-2021, and 2021 onwards. The emphasis given to the reform of the EPT is intended to characterize it as one of the sectors that make up the current context of neoliberal counter-reforms in Brazil; professional education is therefore not exempt from deeper changes in the content of its policies.
https://www.ibisworld.com/about/termsofuse/
The Educational Services sector comprises 13 subsectors of the US economy, ranging from public schools to testing and educational support services. Primary, secondary and postsecondary schools alone generate 92.0% of the sector's revenue. Most of these institutions rely entirely on government funding, and nearly three-quarters of the educational services revenue comes from public schools and public universities. Accordingly, strong federal, state and local support for all levels of education has driven revenue upward over the past five years. Expanding discretionary budgets made private schools and higher education more affordable for students and parents, but the Trump administration's changing policies have brought new complications. Still, substantial funding and skyrocketing investment returns for private nonprofit universities have elevated revenue. Revenue has climbed at a CAGR of 4.6% to an estimated $2.7 trillion through the end of 2025, when revenue will rise by 1.1%. Solid state and local government funding for education has helped support the sector's success despite fluctuating enrollment. Faltering birth rates are leading to lower headcounts in K-12 schools, and ballooning student debt has made many would-be college students skeptical of the return on investment of an expensive degree. While student loan forgiveness efforts slowed a decline in the number of college students, the new presidential administration's end to these efforts has begun to exacerbate price-based and quality-based competition among higher education institutions. President Trump's scrutiny of course curricula has made public funds harder to acquire for schools, and the administration's efforts to close the Department of Education have begun to deter would-be students from attending college. Trends in the domestic economy are set to move in the Educational Services sector's favor over the next five years as prospective students become better able to pay for rising tuition rates and premium education options. Government funding for primary, secondary and postsecondary institutions will continue to escalate through the next period, though lackluster enrollment will temper revenue growth. Public schools, which account for over half the sector's revenue, will continue to post losses and drag down the average profit for educational services. New school choice initiatives, including Texas's new, largest-ever voucher program, will make private schools more affordable for parents. However, heightened oversight and continued efforts to close the Department of Education will remain a significant pain point for many educational services. Overall, revenue is set to climb at a CAGR of 0.8% to $2.8 trillion through the end of 2030.
CC0 1.0 Universal Public Domain Dedication (https://creativecommons.org/publicdomain/zero/1.0/)
License information was derived automatically
Suspended Education in California. Authors: Daniel J. Losen, Tia Martinez and Jon Gillespie. Date Published: April 10, 2012. This report and companion spreadsheet covering nearly 500 districts reveal to the public the unusually high levels of risk for suspension as well as the stark differences in discipline when these risks are presented by race, gender and disability status. Related documents: the report, a companion spreadsheet of suspension risk in California districts, and spreadsheet instructions. The Civil Rights Project has been examining out-of-school suspensions since 1999 due to concerns about the frequency of suspensions, observed racial disparities in their systemic use and the possible negative impact, especially for children of color. Most important, a robust study of school discipline by the Council of State Governments tracked every middle school student in Texas over 6 years and has helped educators crystallize what the evidence has always suggested: that the frequent use of out-of-school suspensions has no academic benefits, is strongly associated with low achievement, a heightened risk for dropping out and a greater likelihood of juvenile justice involvement. If suspending a student out-of-school for minor infractions is a counterproductive educational response, logic dictates that it should be reserved as a measure of last resort. Unfortunately, education policy makers and parents are not fully aware of just how many students are at risk for being suspended. For the first time, this report and companion spreadsheet covering nearly 500 districts reveal to the public the unusually high levels of risk for suspension as well as the stark differences in discipline when these risks are presented by race, gender and disability status. The alarming findings suggest not only a hidden crisis for many historically disadvantaged subgroups in too many districts but also a widespread need to reform discipline policy for California’s public schools. Data released from the Office for Civil Rights (OCR) at the US Department of Education revealed that more than 400,000 students were suspended out-of-school at least one time during the 2009-10 school year in California. That’s enough students suspended out-of-school to fill every seat in all the professional baseball and football stadiums in the state, with no guarantee of any adult supervision. OCR collected data from districts on the number of students who were suspended just once during the year and the number suspended more than once. The analysis in this report combined these two mutually exclusive categories in order to report the number of students suspended one or more times as a percentage of their total enrollment. We describe this percentage throughout this report as the “risk” for suspension...
CC0 1.0 Universal Public Domain Dedication (https://creativecommons.org/publicdomain/zero/1.0/)
License information was derived automatically
Chapter 4 is a case study of how changes in institutional factors may be used to offset the effect of parents’ education on children’s education by improving the educational attainment of children. We use the 2004 extension of compulsory education in Senegal from primary to lower secondary. This involves observing the marginal impact of the increase in the number of years of compulsory education on compulsory school completion (grade 10 completion) and on changes in post-compulsory grade completion (grades 11 to 13 completion). The data used are from the 2017 survey on Senegal. The analysis comprises a treatment group (individuals aged 13 to 15) and a control group (individuals aged 16 to 18). This is because the new school-leaving age is 16: individuals aged 16 and above are not affected by the policy, while those aged 15 and below are affected by it. A logistic regression discontinuity and chi-square tests are applied. The policy substantially increased grade 10 completion for children aged 13 to 15 as compared to children aged 16 to 18. This shows that the effect of the change in the compulsory education law on compulsory school completion is highly significant and positive for these marginal learners. Treatment group individuals are 7% more likely to complete lower secondary education than control group individuals. In terms of gender, no statistically significant differential effect of the increase in compulsory education is found. On the completion of post-compulsory school grades or high school grades (grades 11, 12, and 13), the chi-square tests of association show that the completion of grade 11 and the completion of grade 12 are significantly associated with the education policy for these marginal learners. Therefore, more individuals completed grades 11 and 12 in the treatment group (those aged 13 to 15) as compared to the control group (those aged 16 to 18). However, completion of grade 13 shows no statistically significant association with the education policy; that is, there is no change in obtaining a high school certificate. Nevertheless, the positive impact of the change in compulsory education years on grades 10 to 12 provides support for the hypothesis that favourable institutional characteristics are among the channels through which the intergenerational correlation of education can be reduced, by means of improving the educational attainment of children. This confirms the correlation between compulsory years of education and the intergenerational correlation of education found in Chapter 3: children in countries with a higher number of years of compulsory education experience higher intergenerational mobility in education. That is, the higher the number of compulsory education years, the lower the intergenerational correlation of education.
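To make the estimation strategy concrete, here is a rough sketch of the kind of logistic "regression discontinuity" and chi-square test described above. The data frame, variable names (age, grade10_completed, grade11_completed), and specification are illustrative assumptions, not the author's actual code.

import pandas as pd
import statsmodels.formula.api as smf
from scipy.stats import chi2_contingency

def policy_effect_estimates(df: pd.DataFrame):
    """df: one row per individual aged 13-18 in the 2017 Senegal survey (illustrative layout)."""
    df = df.assign(
        treated=(df["age"] <= 15).astype(int),   # exposed to the 2004 compulsory-schooling extension
        dist=df["age"] - 16,                     # running variable centred at the age cutoff
    )

    # Logistic regression around the cutoff: treatment dummy plus the running variable.
    logit = smf.logit("grade10_completed ~ treated + dist", data=df).fit(disp=0)

    # Chi-square test of association between treatment status and grade 11 completion.
    chi2, p, _, _ = chi2_contingency(pd.crosstab(df["treated"], df["grade11_completed"]))
    return logit.params, chi2, p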
Attribution 4.0 (CC BY 4.0) (https://creativecommons.org/licenses/by/4.0/)
License information was derived automatically
ABSTRACT: A historical overview is outlined of the individualized educational plan (IEP) concept and the consequent changes in its practice due to school segregation and inclusion. The article then analyzes the legislation on IEPs and the models for executing this type of planning in other countries, namely France, Italy, the United States and Brazil. The model of the IEP in France is broader, encompassing the whole life cycle. In the United States and Italy, the model focuses exclusively on planning school life, although all emphasize the importance of measures to manage the transition process from school to work and/or community. Brazil does not have provisions in legislation to ensure that such students have an IEP based on their peculiarities, resulting in planning more strongly focused on existing services rather than on student needs. Thus, although the announced era is one of school inclusion, planning practices have not changed.
https://www.icpsr.umich.edu/web/ICPSR/studies/26282/terms
To meet the growing need for high-quality research on whole-school approaches to instructional improvement, researchers at the University of Michigan School of Education, in cooperation with the Consortium for Policy Research in Education (CPRE), conducted a large-scale, mixed-method, longitudinal Study of Instructional Improvement to investigate the design, implementation, and effects on student achievement of three of the most widely-adopted whole-school school reform programs in the United States: the Accelerated Schools (ASP), America's Choice (AC), and Success for All (SFA). Each of these school reform programs sought to make "comprehensive" changes in the instructional capacity of schools, and each was being implemented in schools in diverse social environments. Each program, however, also pursued a different design for instructional improvement, and each developed particular strategies for assisting schools in the change process. In order to better understand the process of whole-school reform, Study of Instructional Improvement (SII) developed a program of research to examine how these interventions operated and to investigate their impact on schools' instructional practice and student achievement in reading and mathematics. The research program had 3 components: a longitudinal survey of 115 schools (roughly 30 schools in each of the 3 interventions under study, plus 26 matched control schools), case studies of the 3 interventions under study, and detailed case studies of 9 schools implementing the interventions under study (plus 3 matched control schools). Across all components of the SII study, the research examined alternative designs for instructional improvement, alternative strategies for putting these designs into practice in local schools, and the extent to which alternative designs and support strategies promote substantial changes in instructional capacity and student achievement in reading and mathematics. The most comprehensive component of SII was a large-scale, longitudinal, multisurvey study of schools. The use of survey research methods was intended to track the course of schools' engagement in comprehensive approaches to instructional improvement and to investigate the conditions under which this led to substantive changes in instructional practices and student achievement in reading and mathematics. The study design called for each school to participate in the study for a period of three years, although some schools voluntarily provided a fourth year of teacher, leader, and school-level information (no additional student-level data). In addition, survey researchers conducted interviews, primarily a telephone protocol with a parent or guardian of each cohort student in order to gather information on students' family background and on students' home and community environments. Researchers also gathered data from school leaders and others about the policy environments in which the schools are located. Another component of the research program involved the development of detailed case studies of a small number of schools participating in the study. The case studies gathered observational, interview, and documentary evidence to better understand how instructional change processes unfolded in different school settings. Case studies were conducted in 12 schools operating in differently configured state and district policy environments.
In each environment, researchers selected schools participating in one of the interventions under study as well as a "matched" control school. Finally, case study data were used to chart key similarities and differences in the design and operations of the interventions under study, to analyze how different design features affect operating strategies, and to better understand the general problem of how intervention programs can work to devise and "bring to scale" a feasible scheme for improving instruction in local schools.
https://creativecommons.org/share-your-work/public-domain/pdm
The Public-Use Data File User’s Manual for the 2013–14 Civil Rights Data Collection (CRDC) provides documentation and guidance for users of the 2013–14 data. The manual provides information about the purpose of the study, the target population and respondents, data anomalies and considerations, differences in the restricted and public-use data, data collection procedures, the data file structure, and data processing.
Since 1968, the CRDC, formerly the Elementary and Secondary School Survey, has collected data on key education and civil rights issues in our nation's public schools for use by the U.S. Department of Education’s Office for Civil Rights (OCR) in its enforcement and monitoring efforts, by other Department of Education offices and federal agencies, and by policymakers and researchers outside the Department of Education. The CRDC collects information about school characteristics and about programs, services, and outcomes for students. Most student data are disaggregated by race/ethnicity, sex, limited English proficiency (LEP), and disability.
The CRDC is a biennial survey (i.e., it is conducted every other school year), and response to the survey is required by law. Data from the 2011–12 collection and prior collections back to 2000 are also available. The 2013–14 CRDC collected data from the universe of all public school districts, also referred to as local education agencies (LEAs), and schools, including long-term secure juvenile justice facilities, charter schools, alternative schools, and schools serving students with disabilities. Data were collected for the 2013–14 school year. Data collection began in April 2015 and ended on January 8, 2016.
The CRDC data are collected pursuant to the 1980 Department of Education Organization Act and 34 CFR Section 100.6(b) of the Department of Education regulation implementing Title VI of the Civil Rights Act of 1964. The requirements are also incorporated by reference in Department regulations implementing Title IX of the Education Amendments of 1972, Section 504 of the Rehabilitation Act of 1973, and the Age Discrimination Act of 1975.
The CRDC is a longstanding and critical aspect of the overall enforcement and monitoring strategy used by OCR to ensure that recipients of the Department of Education’s federal financial assistance do not discriminate on the basis of race, color, national origin, sex, or disability. OCR relies on the CRDC data it receives from public school districts as it investigates complaints alleging discrimination, determines whether the federal civil rights laws it enforces have been violated, initiates proactive compliance reviews to focus on particularly acute or nationwide civil rights compliance problems, and provides policy guidance and technical assistance to educational institutions, parents, students, and others. Additionally, the data are used to report state and national estimates and trends about school characteristics, programs, services, and outcomes covered by the CRDC.
The NEP 2015–19 is designed to give everyone in Papua New Guinea, regardless of their ability, gender or socio-economic background, an opportunity to be educated and to transform their lives, using a holistic, inclusive and integrated approach.
There have been many changes in the country’s education system and policies since independence 40 years ago. This plan is designed to build on past successes and experience and to begin building a system that provides 13 years of Universal Education: high-quality, relevant education and training for all.
This plan is different from previous plans in a number of ways. Firstly, it outlines all the interventions that together will achieve Universal Education in PNG. This National Education Plan intends to have PNG lead the way by taking the bold step towards 13 years of Universal Education for every student in the country. Universal Education can be achieved in our country with a coordinated approach supported by strong political drive, quality leadership and a relentless commitment to achieving its intent.
This plan outlines the radical and comprehensive overhaul of many aspects of the education system that is proposed to give all children the opportunity to enter school at the age of six and receive a relevant education for 13 years, until they reach Grade 12 or the equivalent (18 years of age).
The NEP 2015-19 is based on a logic framework, so that the plan itself can be monitored and progress measured. Every province will play a critical role in implementing the plan and, using the same framework, can compare its progress at the provincial and national levels. This approach makes the DoE transparent and accountable for achieving its targets.
This series comprises school council records documenting the organisation and operation of Northern School for Autism (School No. 5219) (VA 5309).
Local school committees and councils have existed in some form since the beginning of State education in Victoria. Until 1974, each primary school had a school committee made up of parents, while high schools and technical schools had councils comprised of parents and other community members. In 1975, changes to the Education Act 1958 allowed the establishment of school councils with common powers across State schools.
In 1983, the Minister of Education released several new policies shifting the responsibility of decision-making from the State onto individual schools, with the principal and staff to report on and explain their education programs to their local school community. In late 1983, the Education Act was amended to give the school council responsibility for determining the general education policy of the school within the guidelines set by the Minister. The amendment became effective in 1984, by which time all school councils were re-constituted to fit government guidelines on membership.
School councils in Victoria operate under the Education and Training Reform Act 2006, the Education and Training Reform Regulations 2017, Ministerial Orders, and the school council’s own constituting Order of the Minister of Education. Under this Order, the council’s membership, size and configuration, objectives, powers, functions and accountabilities, and the role of its executive officer (the school principal) are laid out.
Members comprise the School Principal, representatives of the teaching staff, parents and the wider community. They share governance responsibilities with the Department of Education (VA 5283). Councils are responsible for a range of matters including the condition of the school grounds and buildings (including the organisation of maintenance, improvement and cleaning work), forming opinions about the school's conduct and management and other duties prescribed by regulations.
The school council has functions in setting and monitoring the school’s direction, with primary responsibilities around governance, strategic planning, finance, policy and review.
Critical functions of school councils also include: maintaining school grounds and facilities; entering into contracts, such as those regulating and facilitating after-hours use of school premises and grounds; reporting annually to the school community and the Department; and representing the community and taking its views into account.
This series may include the following records: minutes, agendas, correspondence, policies, financial statements and reports from principals and other staff members. Records created by the former school committee, prior to the formal establishment of school councils, and sub-committees of the school council may also be included.
Over the past three decades a reform movement bent on improving schools and educational outcomes through standards-based accountability systems and market-like competitive pressures has dominated policy debates. Many have examined reform policies’ effects on academic outcomes, but few have explored these policies’ influence on citizens' political orientations. In this study, using data from an original survey, I examine whether and how No Child Left Behind’s (NCLB) accountability-based architecture influences parents’ attitudes toward government and federal involvement in education. I find little evidence that diversity in parents’ lived policy experiences shapes their political orientations. However, the results of a survey experiment suggest that information linking school experience to policy and government action may increase parents’ confidence in their ability to contribute to the political process. Understanding whether and under what conditions parents use public school experiences to inform orientations toward government can inform the design of future reforms.
List of sources: Data on the age cohorts were found in the following official publications:
- Prussian Statistics. Official source work, issued by the Royal Statistical Office.
- Monthly issues of the Statistics of the German Reich, various years. Edited by the Royal Statistical Office (Statistics of the German Reich).
- Statistics of the German Reich. Edited by the Royal Statistical Office.
The dashboard project collects new data in each country using three new instruments: a School Survey, a Policy Survey, and a Survey of Public Officials. Data collection involves school visits, classroom observations, legislative reviews, teacher and student assessments, and interviews with teachers, principals, and public officials. In addition, the project draws on some existing data sources to complement the new data it collects. A major objective of the GEPD project was to develop focused, cost-effective instruments and data-collection procedures, so that the dashboard can be inexpensive enough to be applied (and re-applied) in many countries. The team achieved this by streamlining and simplifying existing instruments, and thereby reducing the time required for data collection and training of enumerators.
National
Schools, teachers, students, public officials
Sample survey data [ssd]
The aim of the Global Education Policy Dashboard school survey is to produce nationally representative estimates, which will be able to detect changes in the indicators over time at a minimum power of 80% and with a 0.05 significance level. We also wish to detect differences by urban/rural location.
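To make these precision targets concrete, the sketch below (illustrative only, not the GEPD team's actual power calculation) converts an 80% power requirement and a 0.05 significance level into an approximate minimum detectable change in a school-level proportion; the 200-school sample per round and the design effect of 1.5 are assumptions made for the example.

```python
# Illustrative sketch: minimum detectable change in a school-level proportion
# between two independent survey rounds, given 80% power and a 0.05 significance
# level. Sample size and design effect are assumed values, not GEPD parameters.
from math import sqrt
from statistics import NormalDist

def minimum_detectable_difference(n_schools: int,
                                  baseline: float = 0.5,
                                  alpha: float = 0.05,
                                  power: float = 0.80,
                                  design_effect: float = 1.5) -> float:
    """Two-sided test comparing the same indicator across two independent rounds."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)   # critical value, ~1.96
    z_power = NormalDist().inv_cdf(power)           # ~0.84
    variance = 2 * baseline * (1 - baseline) * design_effect / n_schools
    return (z_alpha + z_power) * sqrt(variance)

# With ~200 schools per round and an assumed design effect of 1.5, the survey
# could detect a change of roughly 17 percentage points in a 50% indicator.
print(f"MDE: {minimum_detectable_difference(200):.3f}")
```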
For our school survey, we will employ a two-stage random sample design. In the first stage, a sample of typically around 200 schools (depending on local conditions) is drawn in advance by Bank staff. In the second stage, a sample of teachers and students, chosen in the field, will be drawn to answer questions from our survey modules. A total of 10 teachers will be sampled for absenteeism. Five teachers will be interviewed and given a content knowledge exam. Three 1st grade students will be assessed at random, and a classroom of 4th grade students will be assessed at random. Stratification will be based on the school’s urban/rural classification and on region. When stratifying by region, we will work with our partners within the country to make sure we include all relevant geographical divisions.
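A minimal sketch of this two-stage design follows; the frame fields ("region", "urban") and the proportional allocation rule are illustrative assumptions, since the actual school sample is drawn by Bank staff from each country's official school census.

```python
# Sketch only: stage 1 stratifies a hypothetical school frame by region x
# urban/rural and allocates the sample roughly proportionally; stage 2 mirrors
# the in-school selections described above. Field names are assumptions.
import random
from collections import defaultdict

def draw_school_sample(frame, n_schools=200, seed=42):
    """Stage 1: stratified random selection of schools."""
    random.seed(seed)
    strata = defaultdict(list)
    for school in frame:
        strata[(school["region"], school["urban"])].append(school)
    sample = []
    for schools in strata.values():
        # proportional allocation; rounding means the total may drift slightly
        n_stratum = max(1, round(n_schools * len(schools) / len(frame)))
        sample.extend(random.sample(schools, min(n_stratum, len(schools))))
    return sample

def draw_within_school(teachers, grade1_students, seed=42):
    """Stage 2 (in the field): 10 teachers for absenteeism, 5 of them for the
    interview and content exam, and 3 grade-1 pupils for the direct assessment."""
    random.seed(seed)
    absenteeism = random.sample(teachers, min(10, len(teachers)))
    interviewed = random.sample(absenteeism, min(5, len(absenteeism)))
    early_learners = random.sample(grade1_students, min(3, len(grade1_students)))
    return absenteeism, interviewed, early_learners
```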
For our Survey of Public Officials, we will sample a total of 200 public officials. Roughly 60 officials are typically surveyed at the federal level, while 140 officials will be surveyed at the regional/district level. For selection of officials at the regional and district level, we will employ a cluster sampling strategy, where roughly 10 regional offices (or whatever the secondary administrative unit is called) are chosen at random from among the regions in which schools were sampled. Then among these 10 regions, we also typically select around 10 districts (tertiary administrative level units) from among the districts in which schools were sampled. The result of this sampling approach is that for 10 clusters we will have links from the school to the district office to the regional office to the central office. Within the regions/districts, five or six officials will be sampled, including the head of the organization, the HR director, two division directors from finance and planning, and one or two professional employees selected at random from the finance or planning department or one other service-related department chosen at random. At the federal level, we will interview the HR director, finance director, planning director, and the directors of three randomly selected service-focused departments. In addition to the directors, a sample of 9 professional employees will be chosen at random in each department on the day of the interview.
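The cluster selection for the Survey of Public Officials can be sketched in the same spirit: regions and districts are drawn only from among those containing sampled schools, preserving the school-to-district-to-region-to-central links described above. The field names below are illustrative, not the actual variables used.

```python
# Sketch only: choose ~10 regions from those containing sampled schools, then
# ~10 districts within the chosen regions. Keys "region"/"district" are assumed.
import random

def sample_office_clusters(sampled_schools, n_regions=10, n_districts=10, seed=7):
    random.seed(seed)
    regions = sorted({s["region"] for s in sampled_schools})
    chosen_regions = random.sample(regions, min(n_regions, len(regions)))
    districts = sorted({s["district"] for s in sampled_schools
                        if s["region"] in chosen_regions})
    chosen_districts = random.sample(districts, min(n_districts, len(districts)))
    return chosen_regions, chosen_districts
```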
For our school survey, we select only schools that are supervised by the Ministry of Education or are private schools. No schools supervised by the Ministry of Defense, Ministry of Endowments, Ministry of Higher Education, or Ministry of Social Development are included. This left us with a sampling frame containing 3,330 schools, with 1,297 private schools and 2,003 schools managed by the Ministry of Education. The schools must also have at least 3 grade 1 students, 3 grade 4 students, and 3 teachers. We oversampled Southern schools to reach a total of 50 Southern schools for regional comparisons. Additionally, we oversampled evening schools, for a total of 40 evening schools.
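The eligibility filter and the oversampling rule can be illustrated as follows; the field names and the top-up logic are assumptions made for the sketch, not the exact procedure used to build the frame.

```python
# Sketch only: filter a hypothetical census to the eligible frame, then top up
# a stratum (e.g. Southern or evening schools) to a target count.
import random

def build_frame(census):
    """Keep Ministry of Education and private schools meeting the size minimums."""
    return [s for s in census
            if s["authority"] in ("Ministry of Education", "Private")
            and s["grade1_students"] >= 3
            and s["grade4_students"] >= 3
            and s["teachers"] >= 3]

def top_up_stratum(sample, eligible, key, value, target, rng):
    """Add schools from the stratum until the sample contains `target` of them."""
    current = [s for s in sample if s[key] == value]
    pool = [s for s in eligible if s[key] == value and s not in sample]
    extra = rng.sample(pool, min(max(0, target - len(current)), len(pool)))
    return sample + extra

# e.g. top_up_stratum(sample, frame, "region", "South", 50, random.Random(1))
```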
A total of 250 schools were surveyed.
Computer Assisted Personal Interview [capi]
More information pertaining to each of the three instruments can be found below:
School Survey: The School Survey collects data primarily on practices (the quality of service delivery in schools), but also on some de facto policy indicators. It consists of streamlined versions of existing instruments—including Service Delivery surveys on teachers and inputs/infrastructure, Teach on pedagogical practice, the Global Early Child Development Database (GECDD) on school readiness of young children, and the Development World Management Survey (DWMS) on management quality—together with new questions to fill gaps in those instruments. Though the number of modules is similar to the full version of the Service Delivery Indicators (SDI) Survey, the number of items and the complexity of the questions within each module are significantly lower. The School Survey includes 8 short modules: School Information, Teacher Presence, Teacher Survey, Classroom Observation, Teacher Assessment, Early Learner Direct Assessment, School Management Survey, and 4th-grade Student Assessment. For a team of two enumerators, it takes on average about 4 hours to collect all information in a given school. For more information, refer to the Frequently Asked Questions.
Policy Survey: The Policy Survey collects information to feed into the policy de jure indicators. This survey is filled out by key informants in each country, drawing on their knowledge to identify key elements of the policy framework (as in the SABER approach to policy-data collection that the Bank has used over the past 7 years). The survey includes questions on policies related to teachers, school management, inputs and infrastructure, and learners. In total, there are 52 questions in the survey as of June 2020. The key informant is expected to spend 2-3 days gathering and analyzing the relevant information to answer the survey questions.
Survey of Public Officials: The Survey of Public Officials collects information about the capacity and orientation of the bureaucracy, as well as political factors affecting education outcomes. This survey is a streamlined and education-focused version of the civil-servant surveys that the Bureaucracy Lab (a joint initiative of the Governance Global Practice and the Development Impact Evaluation unit of the World Bank) has implemented in several countries. The survey includes questions about technical and leadership skills, work environment, stakeholder engagement, impartial decision-making, and attitudes and behaviors. The survey takes 30-45 minutes per public official and is used to interview Ministry of Education officials working at the central, regional, and district levels in each country.
The aim of the Global Education Policy Dashboard school survey is to produce nationally representative estimates, which will be able to detect changes in the indicators over time at a minimum power of 80% and with a 0.05 significance level.