100+ datasets found
  1. Data from: Vocational and Educational Programs: Impacts on Recidivism

    • search.dataone.org
    • dataverse.harvard.edu
    Updated Nov 21, 2023
    Cite
    Perry, Claire (2023). Vocational and Educational Programs: Impacts on Recidivism [Dataset]. http://doi.org/10.7910/DVN/28791
    Dataset updated
    Nov 21, 2023
    Dataset provided by
    Harvard Dataverse
    Authors
    Perry, Claire
    Description

    This paper examines the relationship between educational and vocational program participation and recidivism. Using a cohort of individuals released from state prison in 1994 across five states and tracked for three years, it considers both re-arrest and re-confinement. It finds that vocational programs in particular are associated with significant reductions in both re-arrest and re-confinement; results are driven primarily by programs in Illinois and New York. The paper builds on the extensive literature on the topic by accounting for varying levels of program completion, including different types of recidivism, controlling for state-to-state variation, and examining both educational and vocational programs. In addition to predicting recidivism from program participation, the study controls for the motivation of those who choose to participate in educational and vocational programs using instrumental-variable analysis, and models time until recidivism using proportional hazards models.
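The time-until-recidivism analysis mentioned above rests on standard survival-analysis machinery. As an illustrative sketch only (not the paper's actual code; the cohort data below are made up), a minimal Kaplan-Meier estimator over release-cohort follow-up data could look like this:

```python
def kaplan_meier(times, events):
    """Kaplan-Meier survival estimate.

    times  -- months from release until re-arrest or end of follow-up
    events -- 1 if re-arrested at that time, 0 if censored
    Returns {event_time: estimated probability of remaining arrest-free}.
    """
    at_risk = len(times)
    surv, s = {}, 1.0
    # Process events before censorings at tied times (standard convention).
    for t, e in sorted(zip(times, events), key=lambda te: (te[0], -te[1])):
        if e:
            s *= (at_risk - 1) / at_risk
            surv[t] = s
        at_risk -= 1
    return surv

# Hypothetical cohort: months to re-arrest (36 = censored at 3 years).
times  = [6, 12, 18, 36, 36, 24, 36, 9, 36, 36]
events = [1,  1,  1,  0,  0,  1,  0, 1,  0,  0]
curve = kaplan_meier(times, events)
```

A proportional hazards model, as used in the paper, would additionally regress the hazard on program-participation covariates; the estimator above only shows the unadjusted survival curve underlying that approach.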

  2. Elementary School Math Professional Development Impact Evaluation

    • catalog.data.gov
    • data.amerigeoss.org
    • +1more
    Updated Aug 12, 2023
    Cite
    National Center for Education Evaluation and Regional Assistance (NCEERA) (2023). Elementary School Math Professional Development Impact Evaluation [Dataset]. https://catalog.data.gov/dataset/elementary-school-math-professional-development-impact-evaluation-97abc
    Dataset updated
    Aug 12, 2023
    Dataset provided by
    National Center for Education Evaluation and Regional Assistance (NCEERA)
    Description

    The Elementary School Math Professional Development Impact Evaluation (Elementary School Math PD Evaluation) is a data collection that is part of the Professional Development Impact Program (PD Impact). The evaluation examines the implementation and impact of the Math Professional Development (PD) program by collecting fourth-grade teachers' content knowledge, classroom practices, and their students' achievement for one academic year. Two hundred volunteer teachers from six local education agencies (LEAs) were recruited and randomly assigned to either an experimental or a control group. Teachers in the experimental group received the PD intervention, whereas teachers in the control group engaged in their LEAs' and schools' standard PD regimen over the 2013-14 academic year. At the end of the experimental period, a survey was administered to obtain teacher background information for use as covariates in the impact analyses. Key statistics produced from the evaluation include measures of the implementation of the PD program and its impact on participating teachers' students' achievement.

  3. Education Quality Improvement Programme Impact Evaluation Baseline Survey...

    • microdata.worldbank.org
    • catalog.ihsn.org
    Updated Dec 2, 2021
    + more versions
    Cite
    Oxford Policy Management Ltd (2021). Education Quality Improvement Programme Impact Evaluation Baseline Survey 2014-2015 - Tanzania [Dataset]. https://microdata.worldbank.org/index.php/catalog/2290
    Dataset updated
    Dec 2, 2021
    Dataset authored and provided by
    Oxford Policy Management Ltd
    Time period covered
    2014 - 2015
    Area covered
    Tanzania
    Description

    Abstract

    The Education Quality Improvement Programme in Tanzania (EQUIP-T) is a large, four-year Department for International Development (DFID) funded programme. It targets some of the most educationally disadvantaged regions in Tanzania to increase the quality of primary education and improve pupil learning outcomes, in particular for girls. EQUIP-T covers seven regions in Tanzania and has five components: 1) enhanced professional capacity and performance of teachers; 2) enhanced school leadership and management skills; 3) strengthened systems that support the district and regional management of education; 4) strengthened community participation and demand for accountability; and 5) strengthened learning and dissemination of results. Together, changes in these five outputs are intended to reduce constraints on pupil learning and thereby contribute to better-quality education (outcome) and ultimately improved pupil learning (impact).

    The independent impact evaluation (IE) of EQUIP-T conducted by Oxford Policy Management Ltd (OPM) is a four-year study funded by DFID. It covers five of the seven programme regions (the two regions that will join EQUIP-T in a later phase are not included) and the first four EQUIP-T components (see above). The IE uses a mixed methods approach in which qualitative and quantitative methods are integrated. The baseline approach consists of three main parts to allow the IE to: 1) capture the situation prior to the start of EQUIP-T so that changes can be measured during the follow-up data collection rounds, impact attributable to the programme can be assessed, and mechanisms for programme impact can be explored; 2) develop an expanded programme theory of change to help inform possible programme adjustments; and 3) provide an assessment of the education situation in some of the most educationally disadvantaged regions in Tanzania to the Government and other education stakeholders.

    This approach includes:

    • Quantitative survey of 100 government primary schools in 17 programme treatment districts and 100 schools in eight control districts in 2014, 2016 and 2018 covering:
    • Standard three pupils
    • Teachers who teach standards 1-3 Kiswahili and/or mathematics;
    • Teachers who teach standards 4-7 mathematics;
    • Head teachers; and
    • Standard two lesson observations in Kiswahili and mathematics.

    • Qualitative fieldwork in nine research sites that overlap with a sub-set of the quantitative survey schools, in 2014, 2016 and 2018, consisting of key informant interviews (KIIs) and focus group discussions (FGDs) with head teachers, teachers, pupils, parents, school committee (SC) members, region, district and ward education officials and EQUIP-T programme staff; and

    • A mapping of causal mechanisms, and assessment of the strength of assumptions underpinning the programme theory of change using qualitative and quantitative IE baseline data as well as national and international evidence.

    The data and documentation contained in the World Bank Microdata Catalog are those from the EQUIP-T IE quantitative baseline survey conducted in 2014. For information on the qualitative research findings see OPM. 2015b. EQUIP-Tanzania Impact Evaluation. Final Baseline Technical Report, Volume II: Methods and Technical Annexes.

    Geographic coverage

    The survey is representative of the 17 EQUIP-T programme treatment districts. The survey is NOT representative of the eight control districts. For more details see the section on Representativeness and OPM. 2015. EQUIP-Tanzania Impact Evaluation: Final Baseline Technical Report, Volume I: Results and Discussion and OPM. 2015. EQUIP-Tanzania Impact Evaluation. Final Baseline Technical Report, Volume II: Methods and Technical Annexes.

    The 17 treatment districts are:

    - Dodoma Region: Bahi DC, Chamwino DC, Kongwa DC, Mpwapwa DC
    - Kigoma Region: Kakonko DC, Kibondo DC
    - Shinyanga Region: Kishapu DC, Shinyanga DC
    - Simiyu Region: Bariadi DC, Bariadi TC, Itilima DC, Maswa DC, Meatu DC
    - Tabora Region: Igunga DC, Nzega DC, Sikonge DC, Uyui DC

    The 8 control districts are:

    - Arusha Region: Ngorongoro DC
    - Mwanza Region: Misungwi DC
    - Pwani Region: Rufiji DC
    - Rukwa Region: Nkasi DC
    - Ruvuma Region: Tunduru DC
    - Singida Region: Ikungi DC, Singida DC
    - Tanga Region: Kilindi DC

    Analysis unit

    • School
    • Teacher
    • Pupil
    • Lesson (not sampled)

    Kind of data

    Sample survey data [ssd]

    Sampling procedure

    Because the EQUIP-T regions and districts were purposively selected (see OPM. 2015. EQUIP-Tanzania Impact Evaluation: Final Baseline Technical Report, Volume I: Results and Discussion.), the IE sampling strategy used propensity score matching (PSM) to: (i) match eligible control districts to the pre-selected and eligible EQUIP-T districts (see below), and (ii) match schools from the control districts to a sample of randomly sampled treatment schools in the treatment districts. The same schools will be surveyed for each round of the IE (panel of schools) and standard 3 pupils will be interviewed at each round of the survey (no pupil panel).

    Identifying districts eligible for matching

    Eligible control and treatment districts were those not participating in any other education programme or project that may confound the measurement of EQUIP-T impact. To generate the list of eligible control and treatment districts, all districts that are contaminated because of other education programmes or projects or may be affected by programme spill-over were excluded as follows:

    - All districts located in Lindi and Mara regions, as these are part of the EQUIP-T programme but the impact evaluation does not cover these two regions;
    - Districts that will receive partial EQUIP-T programme treatment or will be subject to potential EQUIP-T programme spill-overs;
    - Districts that are receiving other education programmes/projects that aim to influence the same outcomes as the EQUIP-T programme and would confound measurement of EQUIP-T impact;
    - Districts that were part of pre-test 1 (two districts); and
    - Districts that were part of pre-test 2 (one district).

    Sampling frame

    To be able to select an appropriate sample of pupils and teachers within schools and districts, the sampling frame consisted of information at three levels:

    - District level;
    - School level; and
    - Within school level.

    The sampling frame data at the district and school levels was compiled from the following sources: the 2002 and 2012 Tanzania Population Censuses, Education Management Information System (EMIS) data from the Ministry of Education and Vocational Training (MoEVT) and the Prime Minister's Office for Regional and Local Government (PMO-RALG), and the UWEZO 2011 student learning assessment survey. For within-school sampling, the frames were constructed upon arrival at the selected schools and were used to sample pupils and teachers on the day of the school visit.

    Sampling stages

    Stage 1: Selection of control districts

    Because the treatment districts were known, the first step was to find sufficiently similar control districts that could serve as the counterfactual. PSM was used to match eligible control districts to the pre-selected, eligible treatment districts using the following matching variables: Population density, proportion of male headed households, household size, number of children per household, proportion of households that speak an ethnic language at home, and district level averages for household assets, infrastructure, education spending, parental education, school remoteness, pupil learning levels and pupil drop out.
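As an illustrative sketch only (assuming propensity scores have already been estimated, e.g. by a logistic regression of treatment status on the matching variables above; the district scores below are hypothetical), the matching step can be written as greedy nearest-neighbour matching without replacement:

```python
def match_controls(treated, controls):
    """Greedy 1:1 nearest-neighbour matching on the propensity score,
    without replacement: each treated unit takes the closest unmatched
    control."""
    available = dict(controls)          # id -> propensity score
    pairs = {}
    for t_id, t_score in sorted(treated.items(), key=lambda kv: kv[1]):
        c_id = min(available, key=lambda c: abs(available[c] - t_score))
        pairs[t_id] = c_id
        del available[c_id]             # without replacement
    return pairs

# Hypothetical scores; real ones would come from a fitted propensity model.
treated  = {"Bahi DC": 0.62, "Kongwa DC": 0.48, "Nzega DC": 0.71}
controls = {"Ngorongoro DC": 0.60, "Misungwi DC": 0.50,
            "Rufiji DC": 0.70, "Nkasi DC": 0.30}
pairs = match_controls(treated, controls)
```

Production PSM implementations add caliper limits and balance diagnostics; this sketch shows only the core matching logic.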

    Stage 2: Selection of treatment schools

    In the second stage, schools in the treatment districts were selected using stratified systematic random sampling. The schools were selected using a probability proportional to size approach, where the measure of school size was the standard two enrolment of pupils. This means that schools with more pupils had a higher probability of being selected into the sample. To obtain a representative sample of programme treatment schools, the sample was implicitly stratified along four dimensions:

    - Districts;
    - PSLE scores for Kiswahili;
    - PSLE scores for mathematics; and
    - Total number of teachers per school.
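The probability-proportional-to-size systematic selection described in this stage can be sketched as follows. This is a simplified illustration with a made-up frame: the field name `std2_enrolment` and the frame contents are assumptions, and the real survey sorted the frame by all four stratification dimensions rather than district alone.

```python
import random

def pps_systematic_sample(frame, n, size_key="std2_enrolment"):
    """Systematic probability-proportional-to-size (PPS) sampling:
    walk a fixed step through the cumulative size totals from a random
    start, so schools with more pupils are more likely to be hit."""
    total = sum(s[size_key] for s in frame)
    step = total / n
    start = random.uniform(0, step)
    targets = iter(start + i * step for i in range(n))

    chosen, cum = [], 0.0
    target = next(targets)
    for school in frame:
        cum += school[size_key]
        while target is not None and target <= cum:
            chosen.append(school)
            target = next(targets, None)
    return chosen

random.seed(1)
# Made-up frame, sorted by district to mimic implicit stratification.
frame = sorted(
    ({"id": i, "district": i % 17, "std2_enrolment": 20 + 5 * (i % 9)}
     for i in range(500)),
    key=lambda s: (s["district"], s["id"]),
)
sample = pps_systematic_sample(frame, 100)
```

Sorting the frame by the stratification variables before the systematic walk is what makes the stratification "implicit": the fixed step then spreads the sample evenly across strata.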

    Stage 3: Selection of control schools

    As in stage one, a non-random PSM approach was used to match eligible control schools to the sample of treatment schools. The matching variables were similar to the ones used as stratification criteria: Standard two enrolment, PSLE scores for Kiswahili and mathematics, and the total number of teachers per school.

    The midline and endline surveys will be conducted for the same schools as the baseline survey (a panel of schools). However, the IE will not have a panel of pupils as a pupil only attends standard three once (unless repeating). Thus, the IE will have a repeated cross-section of pupils in a panel of schools.

    Stage 4: Selection of pupils and teachers within

  4. U.S. students' beliefs on the effects of online programs in higher education...

    • statista.com
    Updated Jun 23, 2025
    Cite
    Statista (2025). U.S. students' beliefs on the effects of online programs in higher education 2023 [Dataset]. https://www.statista.com/statistics/1445408/us-students-beliefs-on-the-effects-of-online-programs-in-higher-education/
    Dataset updated
    Jun 23, 2025
    Dataset authored and provided by
    Statista (http://statista.com/)
    Time period covered
    Mar 3, 2023 - Mar 27, 2023
    Area covered
    United States
    Description

    According to a survey conducted in 2023, ** percent of students believed that fully online programs in higher education have made access to education beyond high school better for all students in comparison to fully in-person programs in the United States. However, ** percent were found to believe that fully online programs worsened students' communication and collaboration skills.

  5. Large-Scale Financial Education Program Impact Evaluation 2011-2012 - Mexico...

    • catalog.ihsn.org
    • microdata.worldbank.org
    Updated Mar 29, 2019
    Cite
    David McKenzie (2019). Large-Scale Financial Education Program Impact Evaluation 2011-2012 - Mexico [Dataset]. https://catalog.ihsn.org/index.php/catalog/5135
    Dataset updated
    Mar 29, 2019
    Dataset provided by
    Gabriel Lara Ibarra
    David McKenzie
    Miriam Bruhn
    Time period covered
    2011 - 2012
    Area covered
    Mexico
    Description

    Abstract

    To educate consumers about responsible use of financial products, many governments, non-profit organizations and financial institutions have started to provide financial literacy courses. However, participation rates for non-compulsory financial education programs are typically extremely low.

    Researchers from the World Bank conducted randomized experiments around a large-scale financial literacy course in Mexico City to understand the reasons for low take-up among a general population, and to measure the impact of this financial education course. The free, 4-hour financial literacy course was offered by a major financial institution and covered savings, retirement, and credit use. Motivated by different theoretical and logistical reasons why individuals may not attend training, researchers randomized the treatment group into subgroups that received incentives designed to provide evidence on key barriers to take-up. These incentives included monetary payments for attendance equivalent to $36 or $72 USD, a one-month deferred payment of $36 USD, free transportation to the training location, and a video CD with positive testimonials about the training.

    A follow-up survey conducted on clients of financial institutions six months after the course was used to measure the impacts of the training on financial knowledge, behaviors and outcomes, all relating to topics covered in the course.

    The baseline dataset documented here is administrative data received from a screener that was used to get people to enroll in the financial course. The follow-up dataset contains data from the follow-up questionnaire.

    Geographic coverage

    Mexico City

    Analysis unit

    -Individuals

    Universe

    Participants in a financial education evaluation

    Kind of data

    Sample survey data [ssd]

    Sampling procedure

    Researchers used three different approaches to obtain a sample for the experiment.

    The first one was to send 40,000 invitation letters from a collaborating financial institution asking about interest in participating. However, only 42 clients (0.1 percent) expressed interest.

    The second approach was to advertise through Facebook, with an ad displayed 16 million times to individuals residing in Mexico City, receiving 119 responses.

    The third approach was to conduct screener surveys on streets in Mexico City and outside branches of the partner institution. Together this yielded a total sample of 3,503 people. Researchers divided this sample into a control group of 1,752 individuals, and a treatment group of 1,751 individuals, using stratified randomization. A key variable used in stratification was whether or not individuals were financial institution clients. The analysis of treatment impacts is based on the sample of 2,178 individuals who were financial institution clients.
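A minimal sketch of the stratified randomization described here, assuming (as an illustration) that client status is the only stratifier; the study may have stratified on additional variables:

```python
import random

def stratified_assign(ids, strata, seed=0):
    """Randomly split units into treatment and control within each
    stratum, so the split is balanced on the stratifying variable."""
    rng = random.Random(seed)
    by_stratum = {}
    for uid in ids:
        by_stratum.setdefault(strata[uid], []).append(uid)
    assignment = {}
    for members in by_stratum.values():
        rng.shuffle(members)
        half = len(members) // 2
        for uid in members[:half]:
            assignment[uid] = "treatment"
        for uid in members[half:]:
            assignment[uid] = "control"
    return assignment

# Hypothetical reconstruction: 3,503 people, of whom 2,178 are
# clients of the partner institution (the stratification variable).
ids = list(range(3503))
strata = {uid: ("client" if uid < 2178 else "non-client") for uid in ids}
assignment = stratified_assign(ids, strata)
# Within-stratum halving reproduces a 1,751 / 1,752 split,
# matching the group sizes reported above.
```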

    The treatment group received an invitation to participate in the financial education course and the control group did not receive this invitation. Those who were selected for treatment were given a reminder call the day before their training session, which was at a day and time of their choosing.

    Mode of data collection

    Face-to-face [f2f]

    Research instrument

    The follow-up survey was conducted between February and July 2012 to measure post-training financial knowledge, behavior and outcomes. The questionnaire was relatively short (about 15 minutes) to encourage participation.

    Interviewers first attempted to conduct the follow-up survey over the phone. If the person did not respond to the survey during the first attempt, researchers offered a 500-peso (US$36) Walmart gift card for completing the survey during the second attempt. If the person was still unavailable for a phone interview, a surveyor visited his/her house to conduct a face-to-face interview. If the participant was not at home, the surveyor delivered a letter with information about the study and instructions for how to participate in the survey and receive the Walmart gift card. Surveyors made two more attempts (three attempts in total) to conduct a face-to-face interview if a respondent was not at home.

    Response rate

    72.8 percent of the sample was interviewed in the follow-up survey. The attrition rate was slightly higher in the treatment group (29 percent) than in the control group (25.3 percent).
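The reported figures are mutually consistent; a quick arithmetic check, using the arm sizes from the sampling section and the attrition rates quoted above:

```python
# Arm sizes from the sampling section; attrition rates as reported.
n_treat, n_control = 1751, 1752
attr_treat, attr_control = 0.29, 0.253

completed = n_treat * (1 - attr_treat) + n_control * (1 - attr_control)
rate = completed / (n_treat + n_control)
# rate is about 0.729 -- consistent with the reported 72.8 percent,
# allowing for rounding in the published attrition figures.
```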

  6. Data from: Impact Evaluation of Youth Crime Watch Programs in Three Florida...

    • catalog.data.gov
    • icpsr.umich.edu
    • +1more
    Updated Mar 12, 2025
    + more versions
    Cite
    National Institute of Justice (2025). Impact Evaluation of Youth Crime Watch Programs in Three Florida School Districts, 1997-2007 [Dataset]. https://catalog.data.gov/dataset/impact-evaluation-of-youth-crime-watch-programs-in-three-florida-school-districts-1997-200-8fe65
    Dataset updated
    Mar 12, 2025
    Dataset provided by
    National Institute of Justice (http://nij.ojp.gov/)
    Description

    The purpose of this study was to assess both the school-level effects and the participant-level effects of Youth Crime Watch (YCW) programs. Abt Associates conducted a four-year impact evaluation of YCW programs in three Florida school districts (Broward, Hillsborough, and Pinellas Counties). School-based YCW programs implement one or more of a variety of crime prevention activities, including youth patrol, in which YCW participants patrol their school campus and report misconduct and crime. The evaluation collected both School-Level Data (Part 1) and Student-Level Data (Part 2). The School-Level Data (Part 1) contain 9 years of data on 172 schools in the Broward, Hillsborough, and Pinellas school districts, beginning in the 1997-1998 school year and continuing through the 2005-2006 school year. A total of 103 middle schools and 69 high schools were included, yielding a total of 1,548 observations. These data provide panel data on reported incidents of crime and violence, major disciplinary actions, and school climate across schools and over time. The Student-Level Data (Part 2) were collected between 2004 and 2007 and comprise two major components: (1) self-reported youth attitude and school activities survey data administered to a sample of students in middle schools in the Broward, Hillsborough, and Pinellas School Districts as part of a participant impact analysis, and (2) self-reported youth attitude and school activities survey data administered to a sample of YCW continuing middle school students and YCW high school students in the same three school districts as part of a process analysis. For Part 2, a total of 3,386 completed surveys were collected by the project staff, including 1,319 "new YCW" student surveys, 1,581 "non-YCW" student surveys, and 486 "Pro" or "Process" student surveys.
The 138 variables in the School-Level Data (Part 1) include Youth Crime Watch (YCW) program data, measures of crime and the level of school safety in a school, and other school characteristics. The 99 variables in the Student-Level Data (Part 2) include two groups of questions for assessing participant impact: (1) how the respondents felt about themselves, and (2) whether the respondent would report certain types of problems or crimes that they observed at the school. Part 2 also includes administrative variables and demographic/background information. Other variables in Part 2 pertain to the respondent's involvement in school-based extracurricular activities, involvement in community activities, attitudes toward school, attitudes about home environment, future education plans, attitudes toward the YCW advisor, attitudes about effects of YCW, participation in YCW, reasons for joining YCW, and reasons for remaining in YCW.

  7. National Center for Education Statistics (NCES) U.S. Department of Education...

    • data.pa.gov
    csv, xlsx, xml
    Updated Jul 9, 2025
    Cite
    U.S. Department of Education (2025). National Center for Education Statistics (NCES) U.S. Department of Education [Dataset]. https://data.pa.gov/Post-Secondary-Education/National-Center-for-Education-Statistics-NCES-U-S-/r34x-ewhx
    Available download formats: csv, xlsx, xml
    Dataset updated
    Jul 9, 2025
    Dataset provided by
    United States Department of Education (https://ed.gov/)
    Authors
    U.S. Department of Education
    License

    https://www.usa.gov/government-works

    Description

    The Institute of Education Sciences (IES) is the statistics, research, and evaluation arm of the U.S. Department of Education. We are independent and non-partisan. Our mission is to provide scientific evidence on which to ground education practice and policy and to share this information in formats that are useful and accessible to educators, parents, policymakers, researchers, and the public.

    IES conducts six broad types of work that address school readiness and education from infancy through adulthood and include special populations such as English Learners and students with disabilities.

    • We provide data that describes how well the United States is educating its students. We collect and analyze official statistics on the condition of education, including adult education and literacy; support international assessments; and carry out the National Assessment of Educational Progress (NAEP).

     • We conduct surveys and sponsor research projects to understand where education needs improvement and how these improvements might be made.  Our longitudinal surveys provide nationally representative data on how students are progressing through school and entering the workforce. Our cross-sectional surveys provide a snapshot of how students and the education system are doing at specific points in time. We fund research that uses these and other data to gain a deeper understanding of the nature and context of needed education improvements.

     • We fund development and rigorous testing of new approaches for improving education outcomes for all students.  We support development of practical solutions for education from the earliest design stages through pilot studies and rigorous testing at scale. With IES support, researchers are learning what works for improving instruction, student behavior, teacher learning, and school and system organization.

     • We conduct large-scale evaluations of federal education programs and policies.  Our evaluations address complex issues of national importance, such as the impact of alternative pathways to teacher preparation, teacher and leader evaluation systems, school improvement initiatives, and school choice programs.

     • We provide resources to increase use of data and research in education decision making.  Through the What Works Clearinghouse, we conduct independent reviews of research on what works in education. The Regional Educational Laboratories offer opportunities to learn what works as well as coaching, training, and other support for research use. Our Statewide Longitudinal Data System grants enable states to more efficiently track education outcomes and provide useful, timely information to decision makers.

     • We support advancement of statistics and research through specialized training and development of methods and measures.  We fund pre-doctoral and post-doctoral training programs, as well as database training and short courses on cutting-edge topics for working statisticians and researchers. Our empirical work on new methods and measures ensures continued advances in the accuracy, usefulness, and cost-effectiveness of education data collections and research.

  8. Impact Evaluation of Race to the Top and School Improvement Grants

    • catalog.data.gov
    • data.amerigeoss.org
    Updated Aug 12, 2023
    Cite
    National Center for Education Evaluation and Regional Assistance (NCEERA) (2023). Impact Evaluation of Race to the Top and School Improvement Grants [Dataset]. https://catalog.data.gov/dataset/impact-evaluation-of-race-to-the-top-and-school-improvement-grants-d8f63
    Dataset updated
    Aug 12, 2023
    Dataset provided by
    National Center for Education Evaluation and Regional Assistance (NCEERA)
    Description

    The Impact Evaluation of Race to the Top and School Improvement Grants (RTT-SIG Impact Evaluation; https://ies.ed.gov/ncee/projects/evaluation/other_racetotop.asp) is a cross-sectional survey that assesses the implementation of the Race to the Top (RTT) and School Improvement Grant (SIG) programs at the State, local education agency (LEA), and school levels, as well as whether the receipt of RTT and/or SIG funding to implement a school turnaround model has had an impact on outcomes for the lowest-achieving schools. Additionally, the study investigates whether RTT reforms were related to improvements in student outcomes and whether implementation of the four school turnaround models, and the strategies within those models, was related to improvement in outcomes for the lowest-achieving schools. The study was conducted using a combination of telephone interviews and web-based surveys targeted to school administrators at the state, LEA, and school levels. Key statistics produced from the RTT-SIG Impact Evaluation include State, LEA, and school adoption levels of policies and practices promoted by RTT and SIG, as well as impacts of RTT and SIG funding on student outcomes.

  9. Data and Code for: The Long-Run Impacts of Same-Race Teachers

    • openicpsr.org
    Updated Jul 24, 2021
    Cite
    Seth Gershenson; Cassandra Hart; Joshua Hyman; Constance Lindsay; Nicholas Papageorge (2021). Data and Code for: The Long-Run Impacts of Same-Race Teachers [Dataset]. http://doi.org/10.3886/E145941V1
    Dataset updated
    Jul 24, 2021
    Dataset provided by
    American Economic Association
    Authors
    Seth Gershenson; Cassandra Hart; Joshua Hyman; Constance Lindsay; Nicholas Papageorge
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Area covered
    North Carolina, Tennessee
    Description

    This is code for replicating results in the paper "The Long-Run Impacts of Same-Race Teachers." The abstract for the paper is below.

    We examine the long-run impacts of exposure to a Black elementary school teacher for both Black and white students. Data from the Tennessee STAR class-size experiment show that Black students randomly assigned to at least one Black teacher in grades K-3 are 9 percentage points (13%) more likely to graduate from high school and 6 percentage points (19%) more likely to enroll in college than their Black schoolmates who are not. However, we find no statistically significant long-run effects on white students' long-run outcomes. Enrollment results are driven by enrollments in two-year colleges and concentrated among disadvantaged males. Neither pattern is evident in short-run analyses of test scores, underscoring the importance of examining long-run effects. Quasi-experimental methods applied to rich North Carolina administrative data produce generally similar findings. These effects do not appear to be driven by within-school racial differences in teacher effectiveness. While we cannot definitively identify the mechanisms at work, heterogeneity analyses provide suggestive evidence of larger effects in counties with higher unemployment rates and when Black teachers are the same sex as their students, both of which are consistent with role model effects being one of the multiple channels through which these effects likely operate.

  10. U.S. students' beliefs on taking out loans for online higher education...

    • statista.com
    Updated Apr 23, 2025
    Cite
    Veera Korhonen (2025). U.S. students' beliefs on taking out loans for online higher education 2021-23 [Dataset]. https://www.statista.com/topics/3115/e-learning-and-digital-education/
    Dataset updated
    Apr 23, 2025
    Dataset provided by
    Statista (http://statista.com/)
    Authors
    Veera Korhonen
    Description

    In 2023, seven percent of students in the United States strongly agreed that it was worthwhile for borrowers to take out loans for predominantly online education programs after high school. In comparison, 12 percent strongly disagreed with this belief.

  11. Africa Program for Education Impact Evaluation (Round 2) 2009 - Gambia

    • microdata.worldbank.org
    • catalog.ihsn.org
    • +1more
    Updated Nov 10, 2015
    + more versions
    Cite
    David K. Evans, The World Bank and Moussa P. Blimpo, Stanford University (2015). Africa Program for Education Impact Evaluation (Round 2) 2009 - Gambia [Dataset]. https://microdata.worldbank.org/index.php/catalog/292
    Explore at:
    Dataset updated
    Nov 10, 2015
    Dataset authored and provided by
    David K. Evans, The World Bank and Moussa P. Blimpo, Stanford University
    Time period covered
    2009
    Area covered
    The Gambia
    Description

    Analysis unit

    School, Classroom, Person

    Universe

    The survey covered all the public schools and government aided/supported schools.

    Kind of data

    Sample survey data [ssd]

    Sampling procedure

    The initial sample consisted of all 276 public schools and government aided/supported schools in regions 2, 3, 4, and 6.

    The schools were clustered in groups of 2 or 3 on the basis of proximity for the randomization. This was done mainly to limit contamination while allowing useful exchange and cooperation among nearby schools. The randomization was further stratified by the size of the schools and their hardship status. The following procedures were observed at the school level:

    • Head teacher questionnaire: answered by the head teacher of the school. The deputy head teacher responds only if the head teacher is not present, and a senior teacher responds if neither the head teacher nor the deputy is present.

    • Selection of classes for the classroom visit
    • The enumerator gets the list of all the classes and selects two classrooms other than the ones participating in the written test.
    • 528 classes were visited: 175 in WSD schools, 180 in grant-only schools, and 173 in control schools.

    Selection of students for the written test

    One grade 3 class and one grade 5 class were selected randomly in each school. In each class, 20 students were selected randomly, with gender parity observed throughout. In total, 8,959 students were tested, and about a third were selected in each treatment group.

    Selection of students for the pupils' questionnaire

    • 10 students (5 from grade 3 and 5 from grade 5) were randomly selected, from among the 40 who took the written test, to respond to the questionnaire.
    • In total, 2,696 students were interviewed: 879 from WSD schools, 920 from grant-only schools, and 897 from control schools.
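The clustered assignment described above can be sketched in a few lines. This is a minimal illustration with made-up school IDs and fixed-size clusters of 2, assuming a proximity-ordered list; the study's actual randomization also stratified by school size and hardship status:

```python
import random

def cluster_randomize(schools, n_arms=3, cluster_size=2, seed=0):
    """Group neighboring schools into clusters, then assign whole clusters
    to study arms so that close-by schools share a treatment status."""
    rng = random.Random(seed)
    clusters = [schools[i:i + cluster_size]
                for i in range(0, len(schools), cluster_size)]
    rng.shuffle(clusters)
    arms = {arm: [] for arm in range(n_arms)}
    for i, cluster in enumerate(clusters):
        arms[i % n_arms].extend(cluster)  # round-robin keeps arm sizes balanced
    return arms

# 276 schools into three arms (WSD, grant-only, control), as in the study design.
arms = cluster_randomize(list(range(276)))
```

Assigning whole clusters, rather than individual schools, limits contamination between nearby treated and control schools at the cost of some statistical efficiency.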

    Sampling deviation

    Two regions were excluded:

    • Region 1 was excluded on the basis that it was too urban compared to the others.
    • Region 5 was excluded because of its prior exposure to a variant of the WSD.

    Of the 276 schools, 3 were excluded from the sample because they were new schools that had only grades 1 and 2, or because they were closed during the time of the survey.

    Mode of data collection

    Face-to-face [f2f]

    Research instrument

    i) Head Teacher Questionnaire

    The head teacher questionnaire is designed to collect broad characteristics of the school as a whole. The main sections of this questionnaire cover the school facilities (main buildings, sanitation, water provision, etc.), enrollment and staffing, and school management (leadership, involvement of the local community, record keeping, etc.). The main respondent to this questionnaire is the head teacher. However, in the event of his or her absence, the deputy head teacher or a senior teacher answers the questions.

    ii) Classroom Visits

    The classroom observation is intended to collect valuable information about classroom activities and teaching practices. In each of the two classrooms randomly selected per school, the enumerator sits at the back of the class for 15 to 20 minutes and takes notes on teaching activities, such as student participation and the teacher's control over the class. At the end of the observation, the teacher is asked a few questions about the school and his or her teaching, such as lesson plans and lesson notes.

    iii) Written Numeracy and Literacy Test

    The written numeracy and literacy test was made by experts in the field of testing to assess the overall performance of the students in grades 3 and 5. The test has 4 sections:

    • A math section with 32 basic arithmetic questions (addition, subtraction, multiplication, division).
    • A word-match section (13 questions) where students are given a word and are to identify that word among a list of 4 words.
    • A vocabulary section where students are given a sentence with an underlined word and are to identify the synonym of the underlined word among a list of 4 words.
    • A missing-word section (11 questions) where a word is removed from a sentence and students are to find the correct word that fits the blank among a list of 4 words.

    iv) Pupils' Questionnaire & Oral Literacy Test

    The pupils' questionnaire is designed to collect background information about the students and to give them an oral literacy test. The questionnaire covers the students' socio-demographic information, performance and progress, and welfare. In addition, the students are given an oral literacy test with the following components:

    • Letter name knowledge: students are given a panel of 100 letters and asked to read as many as they can in 60 seconds.
    • Reading: students read a short passage of 60 words and are then asked a few questions about its content.
    • Listening and comprehension: the enumerator reads a short passage aloud and then asks the students a few questions about it.

    v ) Teacher Questionnaire and Test

    The teacher questionnaire is designed to collect background information about the teacher and to give them a numeracy and literacy test. The questionnaire covers the teachers' socio-demographic information and teaching experience. In addition, the teacher is given a test with the following components:

    • A math section with 33 arithmetic questions (addition, subtraction, multiplication, division).
    • A vocabulary section (10 questions) where the teacher is given a sentence with an underlined word and is to identify the synonym of the underlined word among a list of 4 words.
    • A missing-word section (10 questions) where a word is removed from a sentence and the teacher is to find the correct word that fits the blank among a list of 4 words.
    • A reading section where the teacher reads a short story and is asked 5 questions about it.

    All questionnaires are provided as external resources.

  12. Online Data Science Training Programs Market Analysis, Size, and Forecast...

    • technavio.com
    pdf
    Updated Feb 12, 2025
    Cite
    Technavio (2025). Online Data Science Training Programs Market Analysis, Size, and Forecast 2025-2029: North America (Mexico), Europe (France, Germany, Italy, and UK), Middle East and Africa (UAE), APAC (Australia, China, India, Japan, and South Korea), South America (Brazil), and Rest of World (ROW) [Dataset]. https://www.technavio.com/report/online-data-science-training-programs-market-industry-analysis
    Explore at:
    Available download formats: pdf
    Dataset updated
    Feb 12, 2025
    Dataset provided by
    TechNavio
    Authors
    Technavio
    Time period covered
    2025 - 2029
    Area covered
    United Kingdom, Mexico, Germany
    Description

    Snapshot img

    Online Data Science Training Programs Market Size 2025-2029

    The online data science training programs market size is forecast to increase by USD 8.67 billion, at a CAGR of 35.8% between 2024 and 2029.

    The market is experiencing significant growth due to the increasing demand for data science professionals in various industries. The job market offers lucrative opportunities for individuals with data science skills, making online training programs an attractive option for those seeking to upskill or reskill. Another key driver is the adoption of microlearning and gamification techniques in data science training. These approaches make learning more engaging and accessible, allowing individuals to acquire new skills at their own pace. Furthermore, the availability of open-source learning materials has democratized access to data science education, enabling a larger pool of learners to enter the field.

    However, the market also faces challenges, including the need for continuous updates to keep up with the rapidly evolving data science landscape and the lack of standardization in online training programs, which can make it difficult for employers to assess the quality of graduates. Companies seeking to capitalize on market opportunities should focus on offering up-to-date, high-quality training programs that incorporate microlearning and gamification techniques, while also addressing the challenges of continuous updates and standardization. By doing so, they can differentiate themselves in a competitive market and meet the evolving needs of learners and employers alike.

    What will be the Size of the Online Data Science Training Programs Market during the forecast period?

    The online data science training market continues to evolve, driven by the increasing demand for data-driven insights and innovations across various sectors. Data science applications, from computer vision and deep learning to natural language processing and predictive analytics, are revolutionizing industries and transforming business operations. Industry case studies showcase the impact of data science in action, with big data and machine learning driving advancements in healthcare, finance, and retail. Virtual labs enable learners to gain hands-on experience, while data scientist salaries remain competitive and attractive.

    Cloud computing and data science platforms facilitate interactive learning and collaborative research, fostering a vibrant data science community. Data privacy and security concerns are addressed through advanced data governance and ethical frameworks. Data science libraries, such as TensorFlow and Scikit-Learn, streamline the development process, while data storytelling tools help communicate complex insights effectively. Data mining and predictive analytics enable organizations to uncover hidden trends and patterns, driving innovation and growth.

    The future of data science is bright, with ongoing research and development in areas like data ethics, data governance, and artificial intelligence. Data science conferences and education programs provide opportunities for professionals to expand their knowledge and expertise, ensuring they remain at the forefront of this dynamic field.

    How is this Online Data Science Training Programs Industry segmented?

    The online data science training programs industry research report provides comprehensive data (region-wise segment analysis), with forecasts and estimates in 'USD million' for the period 2025-2029, as well as historical data from 2019-2023 for the following segments.

    • Type: Professional degree courses, Certification courses
    • Application: Students, Working professionals
    • Language: R programming, Python, Big ML, SAS, Others
    • Method: Live streaming, Recorded
    • Program Type: Bootcamps, Certificates, Degree Programs
    • Geography: North America (US, Mexico), Europe (France, Germany, Italy, UK), Middle East and Africa (UAE), APAC (Australia, China, India, Japan, South Korea), South America (Brazil), Rest of World (ROW)

    By Type Insights

    The professional degree courses segment is estimated to witness significant growth during the forecast period. The market encompasses various segments catering to diverse learning needs. The professional degree course segment holds a significant position, offering comprehensive and in-depth training in data science. This segment's curriculum covers essential aspects such as statistical analysis, machine learning, data visualization, and data engineering. Delivered by industry professionals and academic experts, these courses ensure a high-quality education experience. Interactive learning environments, including live lectures, webinars, and group discussions, foster a collaborative and engaging experience. Data science applications, including deep learning, computer vision, and natural language processing, are integral to the market's growth. Data analysis, a crucial application, is gaining traction due to the increasing demand for data-driven decision making.

  13. Gambia, The - Africa Program for Education Impact Evaluation 2011 - Dataset...

    • waterdata3.staging.derilinx.com
    Updated Mar 16, 2020
    + more versions
    Cite
    (2020). Gambia, The - Africa Program for Education Impact Evaluation 2011 - Dataset - waterdata [Dataset]. https://waterdata3.staging.derilinx.com/dataset/gambia-africa-program-education-impact-evaluation-2011
    Explore at:
    Dataset updated
    Mar 16, 2020
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Area covered
    Africa, The Gambia
    Description

    This impact evaluation was designed to evaluate Whole School Development (WSD) program, a comprehensive school management and capacity building program in The Gambia. WSD provided a grant and management training to principals, teachers, and community representatives in a set of schools. In order to be able to separate the impact of the capacity building component from the grant, the second intervention group received the grant but did not receive the training. These two interventions were compared to a control group that received neither the grant nor the training. Each of 273 Gambian primary schools were randomized to one of the three groups. A grant of US$500 was given to all the schools in the WSD and the grant-only groups after a school development plan was presented. The schools were required to spend the funds on activities pertaining broadly to learning and teaching. This study is part of the broader World Bank's Africa Program for Education Impact Evaluation. The Gambia Bureau of Statistics, under the supervision of the research team, collected the data for this study. The baseline data was collected in 2008 at the onset of the study, the first round of follow-up data was collected in 2009, the second round of follow-up data was collected in 2010, and the endline data was collected in 2011. The endline survey is documented here. All other rounds of this impact evaluation are published in the Microdata Library.

  14. College and Career Outcomes of High School Graduates

    • educationtocareer.data.mass.gov
    application/rdfxml +5
    Updated Jul 13, 2023
    Cite
    Executive Office of Education (2023). College and Career Outcomes of High School Graduates [Dataset]. https://educationtocareer.data.mass.gov/w/vj54-j4q3/default?cur=EaWn9f8yXXE&from=c-1ssdelCFR
    Explore at:
    Available download formats: application/rdfxml, csv, tsv, application/rssxml, json, xml
    Dataset updated
    Jul 13, 2023
    Dataset authored and provided by
    Executive Office of Education
    Description

    See notice below about this dataset

    This dataset provides the number of graduates who enrolled in each type of postsecondary education per district.

    Wage records are obtained from the Massachusetts Department of Unemployment Assistance (DUA) using a secure, anonymized matching process with limitations. For details on the process and suppression rules, please visit the Employment and Earnings of High School Graduates dashboard.

    This dataset is one of three containing the same data that is also published in the Employment and Earnings of High School Graduates dashboard:

    • Average Earnings by Student Group
    • Average Earnings by Industry
    • College and Career Outcomes

    List of Outcomes

    • Total Postsecondary Enrollment
    • In-State Public 2-Year
    • In-State Public 4-Year
    • In-State Private
    • Out-of-State
    • Total Employed
    • Total Missing
    2025 Update on DESE Data on Employment and Earnings 

    The data link between high school graduates and future earnings makes it possible to follow students beyond high school and college into the workforce, enabling long-term evaluation of educational programs using workforce outcomes.

    While DESE has published these data in the past, as of June 2025 we are temporarily pausing updates due to an issue in conducting the link, which was brought to our attention in 2023 by a team of researchers. The issue impacts the earnings information for students who never attended a postsecondary institution or who only attended private or out-of-state colleges or universities, beginning with the 2017 high school graduation cohort, with growing impact in each successive cohort.

    The issue does not impact the earnings information for students who attended a Massachusetts public institution of higher education, and earnings data for those students will continue to be updated.

    Once a solution is found, the past cohorts of data with low match rates will be updated. DESE and partner agencies are exploring linking strategies to maximize the utility of the information.

    More detailed information can be found in the attached memo provided by the research team from the Annenberg Institute. We thank them for calling this issue to our attention.

  15. Data from: The impact of mother literacy and participation programs on child...

    • search.dataone.org
    • dataverse.harvard.edu
    Updated Nov 21, 2023
    Cite
    Banerji, Rukmini; Berry, James; Shotland, Marc (2023). The impact of mother literacy and participation programs on child learning: evidence from a randomized evaluation in India [Dataset]. http://doi.org/10.7910/DVN/WE0LSW
    Explore at:
    Dataset updated
    Nov 21, 2023
    Dataset provided by
    Harvard Dataverse
    Authors
    Banerji, Rukmini; Berry, James; Shotland, Marc
    Area covered
    India
    Description

    This deposit contains only the data used in the analysis published in the Final Report to 3ie for the project "The impact of mother literacy and participation programs on child learning: evidence from a randomized evaluation in India" (project code OW2.153). This project was funded as part of the Open Window Round 2. The data and analysis have not been verified by 3ie, as the authors have not submitted the statistical code files to 3ie.

  16. Data from: The Impact of Maternal Literacy and Participation Programs:...

    • search.dataone.org
    • dataverse.harvard.edu
    Updated Nov 21, 2023
    + more versions
    Cite
    Banerji, Rukmini; Berry, James; Shotland, Marc (2023). The Impact of Maternal Literacy and Participation Programs: Evidence from a Randomized Evaluation in India [Dataset]. http://doi.org/10.7910/DVN/19PPE7
    Explore at:
    Dataset updated
    Nov 21, 2023
    Dataset provided by
    Harvard Dataverse
    Authors
    Banerji, Rukmini; Berry, James; Shotland, Marc
    Time period covered
    Jan 1, 2011 - Jan 1, 2012
    Area covered
    India
    Description

    Using a randomized field experiment in India, we evaluate the effectiveness of adult literacy and parental involvement interventions in improving children's learning. Households were assigned to receive either adult literacy (language and math) classes for mothers, training for mothers on how to enhance their children's learning at home, or a combination of the two programs. All three interventions had significant but modest impacts on children's math scores. The interventions also increased mothers' test scores in both language and math, as well as a range of other outcomes reflecting greater involvement of mothers in their children's education.

  17. Online Credit Recovery Study: Effects on High School Students' Proximal and...

    • openicpsr.org
    Updated May 6, 2024
    Cite
    Jordan Rickles; Sarah Peko-Spicer; Kyle Neering (2024). Online Credit Recovery Study: Effects on High School Students' Proximal and Distal Outcomes [Dataset]. http://doi.org/10.3886/E202181V1
    Explore at:
    Dataset updated
    May 6, 2024
    Dataset provided by
    American Institutes for Research
    Authors
    Jordan Rickles; Sarah Peko-Spicer; Kyle Neering
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Area covered
    Los Angeles, California
    Description

    The American Institutes for Research conducted a multisite randomized study that tested an online learning model for credit recovery at 24 high schools in Los Angeles, California in 2018 and 2019. The study focused on first-year high school students who failed Algebra 1 or English 9 (their ninth-grade English course) and retook the course during the summer before their second year of high school. Within each participating school, we used a lottery to determine whether each student was placed in either the school's typical teacher-directed class (business-as-usual control condition) or a class that used an online learning model (treatment condition). For the online learning model, an online provider supplied the main course content, and the school provided a subject-appropriate, credentialed in-class teacher who could supplement the digital content with additional instruction.

    The study compared outcomes of students assigned to the treatment condition to outcomes of students assigned to the control condition. Analyses focused both on proximal outcomes (e.g., student course experiences, content knowledge, and credit recovery rates) and distal outcomes (e.g., on-time graduation and cumulative credits earned by the end of the 4th year of high school). We estimated average treatment effects for the intent-to-treat sample using regression models that control for student characteristics and randomization blocks. We conducted separate analyses for students who failed Algebra 1 and students who failed at least one semester of their English 9 course.

    This ICPSR data deposit includes our final analytical dataset and three supplemental files. Data come from three sources: (1) extant district data on student information and academic outcomes, (2) end-of-course surveys of students' and teachers' experiences, and (3) an end-of-course test of students' content knowledge.
    Data fields include:

    • Sample information: term, school (anonymized), teacher (anonymized), course, randomization block, student cohort, treatment status
    • Demographics: sex, race/ethnicity, National School Lunch Program status, inclusion in the Gifted/Talented program, Special Education status, and English language learner status
    • Pre-treatment information (treatment group only): 9th grade GPA, 9th grade attendance rate, number of 9th grade courses failed, 8th grade test scores
    • Online course engagement information: percentage of online course completed, average score on online activities, minutes spent in online platform
    • Student survey data: responses to a survey administered at the end of the course for treatment and control students. Questions cover degree of student engagement with the course, perceptions of teacher support and course difficulty, and clarity of course expectations.
    • End-of-course test data: answers and scores on an end-of-course assessment administered to treatment and control students to evaluate content knowledge (Algebra 1 or English 9). The test did not count toward the final course grade and included 17-20 multiple choice questions.
    • Academic outcomes: grade in credit recovery course, credits attempted/earned in each year of high school, GPA in each year of high school, credits/GPA in math and ELA in each year of high school, indicator for on-time high school graduation, 10th grade PSAT scores
    • Teacher survey and logs: teacher-reported logs on the use of different instructional activities and responses to surveys about course pacing, content, goals, and degree of student support
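The intent-to-treat estimation described above (a regression of each outcome on treatment status with randomization-block controls) can be sketched with plain numpy on toy data. Everything below is illustrative; the variable names are invented, and the published analysis also controls for student characteristics:

```python
import numpy as np

def itt_estimate(y, treat, block):
    """OLS of an outcome on a treatment indicator plus randomization-block
    dummies; returns the coefficient on treatment (the ITT effect)."""
    blocks = sorted(set(block))
    X = np.column_stack(
        [np.ones(len(y)), np.asarray(treat, float)]
        + [(np.asarray(block) == b).astype(float) for b in blocks[1:]]
    )
    beta, *_ = np.linalg.lstsq(X, np.asarray(y, float), rcond=None)
    return beta[1]

# Toy data: block 1 has a higher baseline; treatment adds 0.5 in both blocks.
rng = np.random.default_rng(0)
block = [0] * 50 + [1] * 50
treat = [0, 1] * 50
y = [b + 0.5 * t + rng.normal(0, 0.1) for b, t in zip(block, treat)]
effect = itt_estimate(y, treat, block)  # close to 0.5
```

Dropping the first block dummy avoids collinearity with the intercept, so the treatment coefficient is identified within blocks.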

  18. Data from: Longitudinal impact of a youth tobacco education program

    • res1catalogd-o-tdatad-o-tgov.vcapture.xyz
    • data.virginia.gov
    • +1more
    Updated Jul 24, 2025
    Cite
    National Institutes of Health (2025). Longitudinal impact of a youth tobacco education program [Dataset]. https://res1catalogd-o-tdatad-o-tgov.vcapture.xyz/dataset/longitudinal-impact-of-a-youth-tobacco-education-program
    Explore at:
    Dataset updated
    Jul 24, 2025
    Dataset provided by
    National Institutes of Health
    Description

    Background: Information on the effectiveness of elementary-school-level tobacco-use prevention programs is generally limited. This study assessed the impact of a structured, one-time intervention that was designed to modify attitudes and knowledge about tobacco. Participants were fifth-grade students from schools in western New York State.

    Methods: Twenty-eight schools, which were in relatively close geographic proximity, were randomized into three groups. Group 1 was used to assess whether attitudes/knowledge were changed in the hypothesized direction by the intervention, and whether those changes were retained four months later. Groups 2 and 3 were used as comparison groups to assess possible test-retest bias and historical effects. Groups 1 and 3 were pooled to assess whether attitudes/knowledge were changed by the intervention as measured by an immediate post-test. The non-parametric techniques of the Wilcoxon matched-pairs/signed-ranks and Mann-Whitney-Wilcoxon rank-sum tests were used to compare proportions of correct responses at each of the schools.

    Results: Pooled analyses showed that short-term retention on most items was achieved. It was also found that retention on two knowledge items, 'recognition that smokers have yellow teeth and fingers' and 'smoking one pack of cigarettes a day costs several hundred dollars per year', was maintained for four months.

    Conclusions: The findings suggest that inexpensive, one-time interventions for tobacco-use prevention can be of value. Changes in attitudes and knowledge conducive to the goal of tobacco-use prevention can be achieved for short-term retention, and some relevant knowledge items can be retained for several months.
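The rank-based tests named in the Methods can be approximated with a small permutation test on the rank-sum statistic. This is a generic sketch on invented data, not the study's analysis; ties are broken arbitrarily here rather than midranked as in the textbook Mann-Whitney-Wilcoxon procedure:

```python
import numpy as np

def rank_sum_p(x, y, n_perm=5000, seed=0):
    """Two-sided permutation p-value for the rank-sum of group x
    within the pooled sample."""
    rng = np.random.default_rng(seed)
    pooled = np.concatenate([np.asarray(x, float), np.asarray(y, float)])
    ranks = pooled.argsort().argsort() + 1.0  # 1-based ranks, ties broken arbitrarily
    n = len(x)
    observed = ranks[:n].sum()
    expected = ranks.mean() * n  # rank sum expected under the null
    hits = sum(
        abs(rng.permutation(ranks)[:n].sum() - expected) >= abs(observed - expected)
        for _ in range(n_perm)
    )
    return hits / n_perm

# Proportions of correct responses at intervention vs. comparison schools (invented).
p = rank_sum_p([0.90, 0.85, 0.88, 0.95, 0.92, 0.87],
               [0.50, 0.45, 0.52, 0.48, 0.55, 0.40])
```

Because the statistic depends only on ranks, the test makes no normality assumption about the school-level proportions, which is why such tests suit small cluster-level samples like the 28 schools here.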

  19. "Data and Code for: The Effect of Charter Schools on School Segregation"

    • openicpsr.org
    delimited, stata
    Updated Feb 8, 2021
    Cite
    Tomas Monarrez; Brian Kisida; Matthew Chingos (2021). "Data and Code for: The Effect of Charter Schools on School Segregation" [Dataset]. http://doi.org/10.3886/E131961V1
    Explore at:
    Available download formats: delimited, stata
    Dataset updated
    Feb 8, 2021
    Dataset provided by
    American Economic Association
    Authors
    Tomas Monarrez; Brian Kisida; Matthew Chingos
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Time period covered
    1998 - 2018
    Area covered
    and metropolitan areas, census places, counties, US school districts
    Description

    We examine the impact of the charter school movement on racial segregation in public schools, defined using varying measures of racial sorting and isolation. We identify impacts using between-grade differences in charter expansion within school systems, and an instrumental variables approach leveraging event variation from school openings. Charter schools modestly increase school segregation for Black, Hispanic, Asian, and White students. Charters on average have driven a 6% increase in sorting of Black and Hispanic students in public schools. Analysis across varied geographies reveals countervailing forces. In metropolitan areas, charters reduce demographic differences between districts, especially in areas fragmented into many jurisdictions.
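The instrumental-variables strategy mentioned above can be illustrated with a hand-rolled two-stage least squares on synthetic data. This is a generic 2SLS sketch, not the paper's specification (which instruments charter exposure with event variation from school openings):

```python
import numpy as np

def two_stage_ls(y, x_endog, z):
    """Just-identified 2SLS: regress the endogenous regressor on the
    instrument, then regress the outcome on the fitted values."""
    Z = np.column_stack([np.ones(len(z)), z])
    gamma, *_ = np.linalg.lstsq(Z, x_endog, rcond=None)  # first stage
    x_hat = Z @ gamma
    X = np.column_stack([np.ones(len(y)), x_hat])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)         # second stage
    return beta[1]

# Synthetic data: u confounds x and y, so OLS is biased; z shifts x but
# affects y only through x, so IV recovers the true coefficient of 2.
rng = np.random.default_rng(1)
n = 5000
z = rng.normal(size=n)
u = rng.normal(size=n)
x = z + u + rng.normal(size=n)
y = 2.0 * x + 3.0 * u + rng.normal(size=n)
beta_iv = two_stage_ls(y, x, z)    # close to 2
beta_ols = np.polyfit(x, y, 1)[0]  # biased upward by the confounder
```

The gap between the naive OLS slope and the IV estimate on this toy data mirrors why the authors need an instrument: charter expansion is not randomly assigned across neighborhoods.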

  20. Data from: Head Start Impact Study (HSIS), 2002-2008 with Center Analysis...

    • childandfamilydataarchive.org
    Updated Apr 3, 2018
    + more versions
    Share
    FacebookFacebook
    TwitterTwitter
    Email
    Click to copy link
    Link copied
    Close
    Cite
    Puma, Michael; Bell, Stephen; Cook, Ronna; Heid, Camilla A. (2018). Head Start Impact Study (HSIS), 2002-2008 with Center Analysis File [United States] [Dataset]. http://doi.org/10.3886/ICPSR36968.v2
    Explore at:
    Dataset updated
    Apr 3, 2018
    Dataset provided by
    Inter-university Consortium for Political and Social Research (https://www.icpsr.umich.edu/web/pages/)
    Authors
    Puma, Michael; Bell, Stephen; Cook, Ronna; Heid, Camilla A.
    License

    https://www.icpsr.umich.edu/web/ICPSR/studies/36968/terms

    Time period covered
    2000 - 2008
    Area covered
    United States
    Description

    Since its beginning in 1965 as a part of the War on Poverty, Head Start's goal has been to boost the school readiness of low income children. Based on a "whole child" model, the program provides comprehensive services that include preschool education; medical, dental, and mental health care; nutrition services; and efforts to help parents foster their child's development. Head Start services are designed to be responsive to each child's and family's ethnic, cultural, and linguistic heritage. In the 1998 reauthorization of Head Start, Congress mandated that the United States Department of Health and Human Services determine, on a national level, the impact of Head Start on the children it serves. This legislative mandate required that the impact study address two main research questions: What difference does Head Start make to key outcomes of development and learning (and in particular, the multiple domains of school readiness) for low-income children? What difference does Head Start make to parental practices that contribute to children's school readiness? Under what circumstances does Head Start achieve the greatest impact? What works for which children? What Head Start services are most related to impact? The Head Start Impact Study addresses these questions by reporting on the impacts of Head Start on children and families during the children's preschool, kindergarten, and first grade years. It was conducted with a nationally representative sample of nearly 5,000 three- and four-year old preschool children across 84 nationally representative grantee/delegate agencies in communities where there are more eligible children and families than can be served by the program. The children participating were randomly assigned to either a treatment group (which had access to Head Start services) or a comparison group (which did not have access to Head Start services, but could receive other community resources). 
Data collection began in fall 2002 and ended in spring 2006, following children through the spring of their first grade year. Baseline data were collected through parent interviews and child assessments in fall 2002. The annual spring data collection included child assessments, parent interviews, teacher surveys, and teacher-child ratings. In addition, during the preschool years only, data collection included classroom and family day care observations, center director interviews, care provider interviews, and care provider-child ratings. The study examined differences in outcomes in several domains related to school readiness: children's cognitive, social-emotional, health, and parenting outcomes (e.g., reading to the child, use of spanking and time out, exposing children to cultural enrichment activities, safety practices, parent-child relationships). It also examined whether impacts differed based on characteristics of the children and their families, including the child's pre-academic skills at the beginning of the study; the child's primary language; whether the child has special needs; the mother's race/ethnicity; the primary caregiver's level of depressive symptoms; household risk; and urban or rural location.

The Head Start Impact Study differs from other evaluations of early childhood programs in that it: represents children from the majority of Head Start programs; represents a scaled-up federal program; represents the full range of quality within the national program; employs a randomized control design, the strongest design for testing impacts; examines all domains of children's school readiness, as well as parenting outcomes; follows children through their early years of elementary school; and compares children who have access to Head Start to a control group that includes many children in center-based and other forms of early childhood education programs.
The Third Grade Follow-up to the Head Start Impact Study builds upon the existing randomized control design of the Head Start Impact Study (HSIS) to determine the longer-term impact of the Head Start program on the well-being of children and families through the end of third grade. Data collection for the Third Grade Follow-up was conducted during the spring of the children's third grade year (2007 and 2008) and included the child assessments, parent interviews, teacher surveys, and teacher-child ratings.

Cite
Perry, Claire (2023). Vocational and Educational Programs: Impacts on Recidivism [Dataset]. http://doi.org/10.7910/DVN/28791

Data from: Vocational and Educational Programs: Impacts on Recidivism

Related Article
Explore at:
Dataset updated
Nov 21, 2023
Dataset provided by
Harvard Dataverse
Authors
Perry, Claire
Description

This paper examines the relationship between educational and vocational program participation and recidivism. Using a cohort of individuals released from state prisons in 1994 across five states and tracked for three years, it considers both re-arrest and re-confinement. It finds that vocational programs in particular are associated with significant reductions in both re-arrest and re-confinement; the results are driven primarily by programs in Illinois and New York. The paper builds on the extensive literature on the topic by accounting for varying levels of program completion, including different types of recidivism, controlling for state-to-state variation, and examining both educational and vocational programs. In addition to predicting recidivism based on program participation, the study seeks to control for the motivation of people who choose to participate in educational and vocational programs using instrumental variable analysis, and it examines time until recidivism using proportional hazard models.
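    To make the proportional-hazards idea concrete: under that assumption, program participation multiplies the baseline recidivism hazard by a constant ratio. The sketch below simulates exponential times to re-arrest with a hypothetical hazard ratio of 0.7 for program participants, right-censors at the study's three-year follow-up window, and recovers the ratio from events per person-year. All rates and sample sizes are assumptions for illustration, not estimates from the paper, and the events-per-person-year estimator is a simplification of the paper's full hazard models.

    ```python
    import random

    random.seed(1)

    FOLLOW_UP_YEARS = 3.0
    BASELINE_HAZARD = 0.4   # assumed re-arrests per person-year (comparison group)
    HAZARD_RATIO = 0.7      # assumed multiplicative effect of program participation

    def simulate(hazard: float, n: int):
        """Return (events, person_years) with right-censoring at 3 years."""
        events, person_years = 0, 0.0
        for _ in range(n):
            t = random.expovariate(hazard)  # time to re-arrest
            if t < FOLLOW_UP_YEARS:
                events += 1
                person_years += t
            else:  # still recidivism-free at end of follow-up: censored
                person_years += FOLLOW_UP_YEARS
        return events, person_years

    ev_p, py_p = simulate(BASELINE_HAZARD * HAZARD_RATIO, 5000)  # participants
    ev_c, py_c = simulate(BASELINE_HAZARD, 5000)                 # comparison

    # For exponential times, events / person-years estimates each group's hazard,
    # so the ratio of the two rates estimates the hazard ratio.
    estimated_ratio = (ev_p / py_p) / (ev_c / py_c)
    ```

    Censoring matters here: simply comparing mean observed times would be biased, because people who stay recidivism-free contribute only their truncated three years of exposure.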
