Attribution 4.0 (CC BY 4.0) https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Missing data is an inevitable aspect of empirical research. Researchers have developed several techniques to handle missing data in order to avoid information loss and bias. Over the past 50 years, these methods have become more efficient but also more complex. Building on previous review studies, this paper analyzes which missing data handling methods are used across various scientific disciplines. For the analysis, we used nearly 50,000 scientific articles published between 1999 and 2016; JSTOR provided the data in text format. We applied a text-mining approach to extract the necessary information from our corpus. Our results show that the use of advanced missing data handling methods such as Multiple Imputation or Full Information Maximum Likelihood estimation grew steadily over the examination period. At the same time, simpler methods, such as listwise and pairwise deletion, remain in widespread use.
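The abstract describes a text-mining approach for detecting mentions of missing data handling methods in article full texts. Below is a minimal sketch of how such detection could work, assuming a simple keyword-matching strategy; the phrase list and matching rules are illustrative only and are not the authors' actual pipeline.

```python
import re
from collections import Counter

# Illustrative phrase list; the authors' real dictionary of method
# mentions and matching rules are not reproduced here.
METHOD_PATTERNS = {
    "multiple imputation": r"\bmultiple imputation\b",
    "full information maximum likelihood": r"\bfull information maximum likelihood\b|\bFIML\b",
    "listwise deletion": r"\blistwise deletion\b|\bcomplete[- ]case analysis\b",
    "pairwise deletion": r"\bpairwise deletion\b",
}

def count_method_mentions(article_text):
    """Count mentions of each missing-data handling method in one article."""
    counts = Counter()
    for method, pattern in METHOD_PATTERNS.items():
        counts[method] = len(re.findall(pattern, article_text, flags=re.IGNORECASE))
    return counts

# Tally mentions across a toy corpus of article texts.
corpus = [
    "We applied multiple imputation to handle item nonresponse.",
    "Cases with missing values were excluded via listwise deletion.",
]
totals = sum((count_method_mentions(doc) for doc in corpus), Counter())
print(totals)
```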
Attribution 4.0 (CC BY 4.0) https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Norway Imports: 2-Digit: France: Office Machines and Data Processing Machines data was reported at 7,577.000 NOK th in Jun 2018. This records a decrease from the previous figure of 11,463.000 NOK th for May 2018. Norway Imports: 2-Digit: France: Office Machines and Data Processing Machines data is updated monthly, averaging 24,472.500 NOK th (median) from Jan 1988 to Jun 2018, with 366 observations. The data reached an all-time high of 87,234.000 NOK th in May 2001 and a record low of 5,450.000 NOK th in Jan 2018. Norway Imports: 2-Digit: France: Office Machines and Data Processing Machines data remains in active status in CEIC and is reported by Statistics Norway. The data is categorized under Global Database’s Norway – Table NO.JA022: Imports: by SITC 2-Digit: France and Germany.
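As a minimal sketch of how the reported summary figures could be reproduced from a monthly extract of this series; the file name and column labels below are hypothetical, since the actual CEIC export layout is not specified here.

```python
import pandas as pd

# Hypothetical file name and column labels for a monthly extract of the series.
series = (
    pd.read_csv("norway_imports_france_office_machines.csv",
                parse_dates=["date"], index_col="date")["value_nok_th"]
)

summary = {
    "observations": series.count(),   # 366 monthly points, Jan 1988 - Jun 2018
    "median": series.median(),        # reported as 24,472.500 NOK th
    "all_time_high": series.max(),    # 87,234.000 NOK th (May 2001)
    "record_low": series.min(),       # 5,450.000 NOK th (Jan 2018)
    "latest": series.iloc[-1],        # 7,577.000 NOK th (Jun 2018)
}
print(summary)
```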
Attribution 4.0 (CC BY 4.0) https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
The Whitehall II data sharing agreement does, however, prevent us from supplying the data ourselves. The Whitehall II data are available to bona fide researchers for research purposes; please refer to the Whitehall II data sharing policy at http://www.ucl.ac.uk/whitehallII/data-sharing. It is, however, possible to provide the Stata syntax we used in order to give some insight into our data handling.
This study aims to assess the impacts of two types of interventions to be implemented by the Guinea Ministry of Pre-University and Civic Education (MEPU-EC): performance-based rewards for teachers as a motivation-enhancing strategy, and simple guidance on effective classroom management strategies to encourage their use. In the context of the incentive scheme, teacher performance will be measured using an objective and comparable performance indicator.
The evaluation is designed to answer the following research questions:
- What is the effect of performance-based incentives for teachers on teaching practices and behaviors (absenteeism, time on task) and on student learning outcomes?
- To what extent do recognition rewards trigger different (better or worse) changes in teachers’ behavior and practices compared to those triggered by in-kind rewards?
- What is the effect of providing guidance on effective classroom management on teaching practices and student learning outcomes?
- What is the effect of performance-based incentives when teachers are also provided with guidance on effective classroom management practices?
Moreover, the cost effectiveness of the different treatment arms will be investigated.
The proposed evaluation strategy is a randomized controlled trial that spans two academic years (2012-2013 and 2013-2014) and targets all grade 3 and 4 teachers in a nationally representative sample of 420 schools. The first year of the impact evaluation focused on assessing the impact of performance-based teacher incentives only and on comparing the two types of incentives: in-kind and recognition. The second year adds the intervention of delivering guidance on classroom management. Data on schools, teachers, and students are collected through (unannounced) attendance checks, time-on-task and general classroom observations (carried out in person and using video), official inspection visits, administration of curriculum-based standardized Math and French tests to students, teacher surveys and content-knowledge tests, as well as student and principal questionnaires. Costing data will be collected through the financial reports of the IDA project and of the government budget (for the second-year incentives and guidance interventions), since all expenditures related to the evaluated interventions are paid through these channels.
Baseline data is documented here.
National
Sample survey data [ssd]
The sample was designed to be representative, at the national level, of the target grades' teacher population in public French-speaking schools.
The sampling process for schools took place as follows:
- The population of public Francophone schools was extracted from the 2011-12 Education Management Information System database and used as the sampling frame.
- Schools were split into 15 strata defined to capture school location (8 regions and 2 zones, namely rural and urban).
- Assuming that the numbers of teachers and students in grades 2 and 3 in 2011-12 indicate the numbers of teachers to be expected in grades 3 and 4 in 2012-13, the number of grade 2 and 3 teachers/classes was calculated for each school.
- The number of schools to be selected per stratum was established using the Markwardt protocol (the average between selecting a proportional and an equal number of schools per stratum). The selection probability of each individual school was established using the number of targeted teachers in the school and the number of schools in the stratum.
- Using a random starting point and the selection probabilities, 450 schools and 75 replacement schools were selected (see the sketch below).
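The selection step described above amounts to systematic sampling with probability proportional to size (PPS), where a school's size measure is its number of targeted teachers. Below is a minimal sketch of that general technique for a single stratum; the school identifiers and teacher counts are made up, and the Markwardt allocation step is not reproduced.

```python
import random

def systematic_pps_sample(schools, n_sample, seed=None):
    """Systematic probability-proportional-to-size selection within one stratum.

    `schools` is a list of (school_id, n_target_teachers) pairs. This is a
    sketch of the general technique, not the exact field protocol.
    """
    rng = random.Random(seed)
    total_size = sum(size for _, size in schools)
    step = total_size / n_sample                 # sampling interval on the size scale
    start = rng.uniform(0, step)                 # random starting point
    targets = [start + k * step for k in range(n_sample)]

    selected, cumulative, t = [], 0.0, 0
    for school_id, size in schools:
        cumulative += size
        while t < n_sample and targets[t] <= cumulative:
            selected.append(school_id)
            t += 1
    return selected

# Hypothetical stratum: (school_id, number of grade 2/3 teachers).
stratum = [("SCH-%03d" % i, random.randint(1, 6)) for i in range(1, 101)]
print(systematic_pps_sample(stratum, n_sample=10, seed=42))
```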
This sample was completed by adding the 16 pre-identified schools where the instruments were piloted in the 2011-12 academic year. Therefore, before launching the baseline fieldwork, a sample of 466 schools was targeted. Within each school, all grade 3 and 4 teachers and all of their students were targeted.
Randomization is at the school level, but the target beneficiaries are the teachers: 420 schools, all grade 3 and 4 teachers, and all grade 3 and 4 students.
While in the field at baseline, the teams were unable to locate some of the schools, and some of the located schools turned out not to have the targeted grades and thus had to be removed from the sample. The final sample contains 420 schools; no replacement schools were used. The final sample therefore differs from the targeted sample, and national representativeness is uncertain. It is important to note that this does not reduce the internal validity of the impact evaluation design, since the random assignment of schools to the different experiment arms was carried out once the realized sample of schools was stabilized.
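As a minimal sketch of school-level random assignment to experiment arms, assuming equal allocation; the actual arm names, allocation ratios, and any stratification used by the evaluation team are assumptions here, not documented above.

```python
import random

def assign_schools_to_arms(school_ids, arms, seed=2012):
    """Randomly assign schools (the unit of randomization) to experiment arms.

    Round-robin after a shuffle keeps arm sizes within one school of each other.
    """
    rng = random.Random(seed)
    shuffled = list(school_ids)
    rng.shuffle(shuffled)
    return {school: arms[i % len(arms)] for i, school in enumerate(shuffled)}

# The 420 realized schools; identifiers and arm labels are hypothetical.
schools = ["SCH-%03d" % i for i in range(1, 421)]
assignment = assign_schools_to_arms(schools, ["control", "in-kind", "recognition"])
print(sum(arm == "control" for arm in assignment.values()))  # 140
```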
Sample of teachers and students: once in the schools in May 2012, the aim was to administer a standardized test to all grade 2 and 3 students (who would be grade 3 and 4 students in 2012-13, the first year of the intervention). Furthermore, in October 2012, the aim was to survey/inspect all grade 3 and 4 teachers for the 2012-13 academic year. At baseline, a total of 416 principals and 1,177 teachers were surveyed, 1,214 teachers were inspected, and 23,183 students participated in the test.
Within each of the targeted teachers’ classes, the objective was for all students to take part in the test. However, once in the field, the teams were faced with larger schools (in terms of the number of students) than expected and thus did not have enough printed copies of the tests to administer them to all students. In about one third of the visited schools, instead of randomly selecting students within each classroom, only a subset of the classrooms (with all of their students) was selected to participate in the test. Furthermore, when a class was too large, a random selection of students within the selected classes was carried out. There is no reason to believe that the selection of students within a class was not random, but there is also no certainty that it was. Finally, because of teacher absenteeism and logistical difficulties, the tests were administered in only 353 of the 420 targeted schools.
Face-to-face [f2f]
The following survey instruments were used: (i) a questionnaire administered to the school’s principal, (ii) a teacher questionnaire administered to targeted teachers (grade 3 and 4), (iii) a Math test and a French test (each with a few parallel booklets) administered to students in the targeted teachers’ classrooms, and (iv) an inspection bulletin administered to targeted teachers in the context of two lessons, one in French and one in Math.
A more detailed description of the various instruments is presented below.
Principal questionnaire (May and October 2012)
A. School and principal identification
B. Demographics
C. Education and professional training
D. Work experience and training needs
E. Pedagogical practices and languages
F. School basic characteristics
G. School environment
H. Interaction with colleagues (subordinates, supervisors, etc.)
I. Support and monitoring of teachers
J. Motivation
Teacher questionnaire (May and October 2012)
A. Class and teacher identification
B. Demographics
C. Class characteristics
D. Education and professional training
E. Work experience and training needs
F. Pedagogical practices and languages
G. Interaction with colleagues
H. Motivation
I. Absenteeism and events disturbing teaching
J. Remuneration
K. Perception of key factors influencing student learning
L. Performance recognition or punishment
Student tests (May 2012)
1.A. Identification of school, class, and teachers
1.B. School-related student characteristics
1.C. Student environmental and familial backgrounds
2. French test questions
3. Math test questions
Inspection bulletin (October 2012)
I. Class and inspector identification
II. Teacher identification
III. Summary of scores
IV. General material and spatial classroom arrangement
V.1 Lesson 1 – Identification of the lesson
V.2 Lesson 1 – Teaching and learning material preparation
V.3 Lesson 1 – Lesson planning (according to the Competency-based approach)
V.4 Lesson 1 – Delivery of the lesson
V.5 Lesson 1 – Analysis of one’s own performance
VI.1 Lesson 2 – Identification of the lesson
VI.2 Lesson 2 – Teaching and learning material preparation
VI.3 Lesson 2 – Lesson planning (according to the Competency-based approach)
VI.4 Lesson 2 – Delivery of the lesson
VI.5 Lesson 2 – Analysis of one’s own performance
Response rates vary from high for the inspections and questionnaires to somewhat lower for the tests. Balance analysis indicates that these response rates were orthogonal to treatment.
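One simple way to run such an orthogonality check is to regress a response indicator on treatment-arm dummies, clustering standard errors at the school level (the unit of randomization). The sketch below assumes a hypothetical teacher-level file with `arm`, `responded`, and `school_id` columns; it is not necessarily the procedure the evaluation team used.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical teacher-level file: one row per targeted teacher with the
# assigned arm, a 0/1 response indicator, and the teacher's school.
df = pd.read_csv("teacher_response_status.csv")  # columns: teacher_id, arm, responded, school_id

# Regress the response indicator on treatment-arm dummies, clustering
# standard errors at the school level.
model = smf.ols("responded ~ C(arm)", data=df).fit(
    cov_type="cluster", cov_kwds={"groups": df["school_id"]}
)
print(model.summary())
print(model.f_pvalue)  # joint test that response rates do not differ across arms
```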
Attribution 4.0 (CC BY 4.0) https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Data Package: Exploring the opportunities and challenges of implementing open research strategies within development institutions
DOI for this package: https://doi.org/10.5281/zenodo.844394
Project Description: https://doi.org/10.3897/rio.2.e8880
Data Management Plan: https://doi.org/10.3897/rio.3.e14672
Other Related Documents and Reports: https://riojournal.com/collection/18/
Funder: International Development Research Centre/Centre de recherches pour le développement international, https://doi.org/10.13039/501100000193
Abstract
========
This is the Data Package for the project "Exploring the opportunities and challenges of implementing open research strategies within development institutions", the proposal for which was published as https://doi.org/10.3897/rio.2.e8880. The research project conducted open data pilot case studies with seven IDRC grantees to develop and implement open data management and sharing plans. The results of the case studies served to refine guidelines for the implementation of development research funders’ open research data policies.
Contents
========
The Data Package contains all the public data generated by the project. The package was curated and metadata was generated, including an HTML catalog, using the Calcyte tool (https://codeine.research.uts.edu.au/eresearch/calcyte) developed at the University of Technology Sydney.
The project had two major phases:
1. A review, based on desk work and interviews with data management experts
2. Case studies, based on implementing open data practices within seven IDRC funded research projects
Review
------
The review, published at https://riojournal.com/article/14673/, was supported by desk work and interviews. The materials related to the interviews can be found in the directory:
* Policy and Implementation Review Interviews
Case Studies
------------
Seven IDRC-funded projects contributed to the pilot project.
The materials generated by the case studies and used to support the final report (to be published with the collection at https://riojournal.com/collection/18/) are found in the following directories.
* Introductory_Data_Workshop_Materials
* Introductory_Workshop_Presentations
* Data Management Planning
* SciDataCon Presentations
* Final_Project_Workshop_Materials
* Final_Project_Workshop_Presentations
The files are encoded with a three-letter code that identifies the relevant contributing project in each case (see the lookup sketch after the list below). The contributing projects were:
* Crowd Sourcing Data to fight Social Crimes: Harassmap, Egypt (HMP)
* The Brazilian Virtual Herbarium: CRIA, Brazil (BVH)
* Strengthening the Economic Committee of the National Assembly in Vietnam: Centre for Analysis and Forecasting, Vietnam (ECV)
* The Impact of Copyright User Rights: Derechos Digitales, Colombia (DED)
* Establishing a clearinghouse for tobacco economic data in Africa: DataFirst, South Africa (TED)
* Les problèmes négligés des systèmes de santé en Afrique : une incitation aux réformes (Neglected problems of health systems in Africa: an incentive for reform): LASDEL, Niger (NDF)
* Indigenous Knowledge in Climate Change: Natural Justice, South Africa (IKC)
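As a small illustration of how the contributor codes could be resolved to project names when processing the package, here is a hypothetical lookup; where exactly the code appears within each file name is an assumption made for this sketch.

```python
import re

# Three-letter contributor codes, as listed above.
PROJECT_CODES = {
    "HMP": "Crowd Sourcing Data to fight Social Crimes: Harassmap, Egypt",
    "BVH": "The Brazilian Virtual Herbarium: CRIA, Brazil",
    "ECV": "Strengthening the Economic Committee of the National Assembly in Vietnam",
    "DED": "The Impact of Copyright User Rights: Derechos Digitales",
    "TED": "Establishing a clearinghouse for tobacco economic data in Africa: DataFirst",
    "NDF": "Les problèmes négligés des systèmes de santé en Afrique: LASDEL, Niger",
    "IKC": "Indigenous Knowledge in Climate Change: Natural Justice, South Africa",
}

def contributing_project(filename):
    """Return the contributing project for a file, based on its embedded code."""
    tokens = re.split(r"[_\-. ]+", filename.upper())
    for code, project in PROJECT_CODES.items():
        if code in tokens:
            return project
    return None

# Hypothetical file name for illustration.
print(contributing_project("Introductory_Workshop_Presentation_BVH.pdf"))
```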
More details can be found in the Case Studies and in the Final Report (forthcoming at https://riojournal.com/collection/18/).
References
==========
* Neylon C, Chan L (2016) Exploring the opportunities and challenges of implementing open research strategies within development institutions. Research Ideas and Outcomes 2: e8880. https://doi.org/10.3897/rio.2.e8880
* Neylon C (2017) Data Management Plan: IDRC Data Sharing Pilot Project. Research Ideas and Outcomes 3: e14672. https://doi.org/10.3897/rio.3.e14672
* Neylon C, Chan L (2016-17) Exploring the opportunities and challenges of implementing open research strategies within development institutions: A project of the International Development Research Center, Research Ideas and Outcomes Collection, https://riojournal.com/collection/18/