The Learning Resources Database is a catalog of interactive tutorials, videos, online classes, finding aids, and other instructional resources on National Library of Medicine (NLM) products and services. Resources may be available for immediate use via a browser or downloadable for use in course management systems.
This database is provided by ASL Marketing and covers the United States of America. With ASL Marketing, reaching Gen Z has never been easier. Current high school student data can be customized by: Class Year, Date of Birth, Gender, GPA, Geo, Household Income, Ethnicity, Hobbies, College-bound Interests, College Intent, and Email.
https://creativecommons.org/publicdomain/zero/1.0/
If this dataset is useful, an upvote is appreciated. These data approach student achievement in secondary education at two Portuguese schools. The data attributes include student grades and demographic, social, and school-related features, and they were collected using school reports and questionnaires. Two datasets are provided regarding performance in two distinct subjects: Mathematics (mat) and Portuguese language (por). In [Cortez and Silva, 2008], the two datasets were modeled under binary/five-level classification and regression tasks. Important note: the target attribute G3 has a strong correlation with attributes G2 and G1. This occurs because G3 is the final year grade (issued at the 3rd period), while G1 and G2 correspond to the 1st- and 2nd-period grades. It is more difficult to predict G3 without G2 and G1, but such prediction is much more useful (see the source paper for more details).
Database holding records of all applicants and learners who have applied for Further Education and Training (FET) courses, and details of course providers.
The 2021-2022 School Learning Modalities dataset provides weekly estimates of school learning modality (in-person, remote, or hybrid learning) for U.S. K-12 public and independent charter school districts for the 2021-2022 school year and the Fall 2022 semester, from August 2021 to December 2022. These data were modeled using multiple sources of input data (see below) to infer the most likely learning modality of a school district for a given week. These data should be considered district-level estimates and may not always reflect true learning modality, particularly for districts in which data are unavailable. If a district reports multiple modality types within the same week, the modality offered for the majority of those days is reflected in the weekly estimate. All school district metadata are sourced from the National Center for Education Statistics (NCES) for 2020-2021. School learning modality types are defined as follows:
In-Person: All schools within the district offer face-to-face instruction 5 days per week to all students at all available grade levels.
Remote: Schools within the district do not offer face-to-face instruction; all learning is conducted online/remotely for all students at all available grade levels.
Hybrid: Schools within the district offer a combination of in-person and remote learning; face-to-face instruction is offered less than 5 days per week, or only to a subset of students.
Data Information: School learning modality data provided here are model estimates using combined input data and are not guaranteed to be 100% accurate. This learning modality dataset was generated by combining data from four different sources: Burbio [1], MCH Strategic Data [2], the AEI/Return to Learn Tracker [3], and state dashboards [4-20].
These data were combined using a Hidden Markov model which infers the sequence of learning modalities (In-Person, Hybrid, or Remote) for each district that is most likely to produce the modalities reported by these sources. This model was trained using data from the 2020-2021 school year. Metadata describing the location, number of schools and number of students in each district comes from NCES [21]. You can read more about the model in the CDC MMWR: COVID-19–Related School Closures and Learning Modality Changes — United States, August 1–September 17, 2021. The metrics listed for each school learning modality reflect totals by district and the number of enrolled students per district for which data are available. School districts represented here exclude private schools and include the following NCES subtypes: Public school district that is NOT a component of a supervisory union Public school district that is a component of a supervisory union Independent charter district “BI” in the state column refers to school districts funded by the Bureau of Indian Education. Technical Notes Data from August 1, 2021 to June 24, 2022 correspond to the 2021-2022 school year. During this time frame, data from the AEI/Return to Learn Tracker and most state dashboards were not available. Inferred modalities with a probability below 0.6 were deemed inconclusive and were omitted. During the Fall 2022 semester, modalities for districts with a school closure reported by Burbio were updated to either “Remote”, if the closure spanned the entire week, or “Hybrid”, if the closure spanned 1-4 days of the week. Data from August
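To make the Hidden Markov model idea above concrete, here is a minimal Viterbi-decoding sketch in NumPy. The transition and emission probabilities are illustrative assumptions, not the CDC's fitted parameters; the point is only to show how a single noisy source report can be smoothed out when it conflicts with the surrounding weeks.

```python
import numpy as np

# Minimal Viterbi sketch (hypothetical probabilities, not the fitted model):
# infer the most likely weekly modality sequence from noisy source reports.
STATES = ["In-Person", "Hybrid", "Remote"]

# Transition matrix: districts tend to keep the same modality week to week.
trans = np.array([[0.90, 0.07, 0.03],
                  [0.10, 0.80, 0.10],
                  [0.03, 0.07, 0.90]])
# Emission matrix: probability a source reports each modality given the true state.
emit = np.array([[0.85, 0.10, 0.05],
                 [0.15, 0.70, 0.15],
                 [0.05, 0.10, 0.85]])
start = np.array([1 / 3, 1 / 3, 1 / 3])

def viterbi(observations):
    """observations: list of state indices reported by a source, one per week."""
    n, k = len(observations), len(STATES)
    logp = np.full((n, k), -np.inf)
    back = np.zeros((n, k), dtype=int)
    logp[0] = np.log(start) + np.log(emit[:, observations[0]])
    for t in range(1, n):
        for j in range(k):
            scores = logp[t - 1] + np.log(trans[:, j]) + np.log(emit[j, observations[t]])
            back[t, j] = int(np.argmax(scores))
            logp[t, j] = scores[back[t, j]]
    # Trace the best path backwards from the most likely final state.
    path = [int(np.argmax(logp[-1]))]
    for t in range(n - 1, 0, -1):
        path.append(int(back[t, path[-1]]))
    return [STATES[i] for i in reversed(path)]

# A single week of "Hybrid" sandwiched between "In-Person" reports is smoothed
# out: one discrepant report is more likely noise than a real modality switch.
reports = [0, 0, 1, 0, 0]  # In-Person, In-Person, Hybrid, In-Person, In-Person
print(viterbi(reports))
```

With these assumed probabilities, the decoded sequence is all In-Person, which mirrors how a model of this kind can disagree with any one source's weekly report.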
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Every student has a varied level of mathematical proficiency. Therefore, it is important to provide them with questions accordingly. Owing to advances in technology and artificial intelligence, the Learning Management System (LMS) has become a popular application for conducting online learning for students. The LMS can store multiple pieces of information on students through an online database, enabling it to recommend appropriate questions for each student based on an analysis of their previous responses. In particular, the LMS manages learners and provides an online platform that can evaluate their skills. Questions need to be classified according to their difficulty level so that the LMS can recommend them to learners appropriately and thereby increase their learning efficiency. In this study, we classified large-scale mathematical test items provided by ABLE Tech, which supports LMS-based online mathematical education platforms, using various machine learning techniques according to the difficulty levels of the questions. First, through t-test analysis, we identified the variables significantly correlated with difficulty level. The t-test results showed that answer rate, question type, and solution time were correlated with question difficulty. Second, items were classified according to their difficulty level using various machine learning models, such as logistic regression (LR), random forest (RF), and extreme gradient boosting (xgboost). Accuracy, precision, recall, F1 score, the area under the receiver operating characteristic curve (AUC-ROC), Cohen's Kappa, and the Matthews correlation coefficient (MCC) were used as the evaluation metrics. The correct answer rate, question type, and time for solving a question correlated significantly with the difficulty level. The xgboost model outperformed the statistical machine learning models, with 85.7% accuracy and an 85.8% F1 score.
These results can be used as an auxiliary tool in recommending suitable mathematical questions to various learners based on their difficulty level.
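The item-difficulty classification described above can be sketched as follows. Since the ABLE Tech item bank is not public, this example uses synthetic data with the three correlated features the study names (answer rate, solution time, question type), and scikit-learn's GradientBoostingClassifier stands in for xgboost; all values here are illustrative assumptions.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import accuracy_score, f1_score
from sklearn.model_selection import train_test_split

# Synthetic stand-in for the item bank (the real data are not public):
# each item has a correct-answer rate, a mean solution time, and a question type.
rng = np.random.default_rng(0)
n = 400
hard = rng.integers(0, 2, n)  # 1 = hard item, 0 = easy item
answer_rate = np.where(hard, rng.normal(0.35, 0.08, n), rng.normal(0.75, 0.08, n))
solve_time = np.where(hard, rng.normal(180, 30, n), rng.normal(60, 20, n))
q_type = rng.integers(0, 3, n)  # e.g. multiple-choice / short answer / essay

X = np.column_stack([answer_rate, solve_time, q_type])
X_tr, X_te, y_tr, y_te = train_test_split(X, hard, test_size=0.25, random_state=0)

# Gradient boosting plays the role of xgboost in the study's model comparison.
clf = GradientBoostingClassifier(random_state=0).fit(X_tr, y_tr)
pred = clf.predict(X_te)
print(f"accuracy={accuracy_score(y_te, pred):.3f}  F1={f1_score(y_te, pred):.3f}")
```

On real item data, the same pipeline would be compared against LR and RF baselines using the full metric set (AUC-ROC, Kappa, MCC) rather than accuracy and F1 alone.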
The DART for English Learners (ELs) dashboard consolidates state data on English learners into a single tool. This tool can be used to understand EL students' demographics, progress, and outcomes, as well as to compare districts with each other and with the state. The information contained in this report is closely tied to the Office of Language Acquisition's program review requirements.
The DARTs provide a gauge of the overall condition of a district or school, but do not have all available information. They should be treated as a good starting point for exploring the data and identifying areas of focus for further inquiry. Please see the Info tab on the dashboard for detailed data analysis considerations.
Attribution-ShareAlike 2.0 (CC BY-SA 2.0): https://creativecommons.org/licenses/by-sa/2.0/
License information was derived automatically
Subject: Education
Specific: Online Learning and Fun
Type: Questionnaire survey data (csv / excel)
Date: February - March 2020
Content: Students' views about online learning and fun
Data Source: Project OLAF
Value: These data provide students' beliefs about how learning occurs and correlations with fun. Participants were 206 students from the OU.
This dataset contains the Country Learning Outcomes (CLO) of harmonized learning assessments, which include PISA, TIMSS, PIRLS, LLECE, PASEC, SEA-PLM, AMPL-b, and SACMEQ. The country-level estimates are also disaggregated by sex, urban/rural, and wealth quintile. Eligible assessments are also used to generate the Learning Deprivation component of the latest Learning Poverty estimates. The June 2022 release of Learning Poverty estimates involves several changes to the data underlying the country-level Learning Poverty figures. Some country-level estimates have changed or become available for the first time due to new learning data from recent assessments: AMPL-b 2021, TIMSS 2019, LLECE 2019, PASEC 2019, and SEA-PLM 2019. In the June 2022 release, country-level estimates of Learning Poverty are available for 122 countries. A new global aggregate was also created, and the accompanying Global Learning Poverty Database includes the measures of GAP and SEVERITY for both Learning Deprivation and Learning Poverty, as introduced by Azevedo (2020).
Background, Methodology:
Local Law 102 enacted in 2015 requires the Department of Education of the New York City School District to submit to the Council an annual report concerning physical education for the prior school year.
This report provides information about average frequency and average total minutes per week of physical education as defined in Local Law 102 as reported through the 2015-2016 STARS database. It is important to note that schools self-report their scheduling information in STARS. The report also includes information regarding the number and ratio of certified physical education instructors and designated physical education instructional space.
This report consists of six tabs:
Supplemental Programs
PE Instruction Borough-Level
This tab includes the average frequency and average total minutes per week of physical education by borough, disaggregated by grade, race and ethnicity, gender, special education status and English language learner status. This report only includes students who were enrolled in the same school across all academic terms in the 2015-16 school year. Data on students with disabilities and English language learners are as of the end of the 2015-16 school year. Data on adaptive PE is based on individualized education programs (IEP) finalized on or before 05/31/2016.
This tab includes the average frequency and average total minutes per week of physical education by district, disaggregated by grade, race and ethnicity, gender, special education status and English language learner status. This report only includes students who were enrolled in the same school across all academic terms in the 2015-16 school year. Data on students with disabilities and English language learners are as of the end of the 2015-16 school year. Data on adaptive PE is based on individualized education programs (IEP) finalized on or before 05/31/2016.
This tab includes the average frequency and average total minutes per week of physical education by school, disaggregated by grade, race and ethnicity, gender, special education status and English language learner status. This report only includes students who were enrolled in the same school across all academic terms in the 2015-16 school year. Data on students with disabilities and English language learners are as of the end of the 2015-16 school year. Data on adaptive PE is based on individualized education programs (IEP) finalized on or before 05/31/2016.
This tab provides the number of designated full-time and part-time certified physical education instructors. It does not include elementary, early childhood, and K-8 physical education teachers who provide physical education instruction under a common branches license. It also includes the ratio of full-time instructors teaching under a physical education license to students, by school. Data reported are for the 2015-2016 school year as of 10/31/2015.
This tab provides information on all designated indoor, outdoor and off-site spaces used by the school for physical education as reported through the Principal Annual Space Survey and the Outdoor Yard Report. It is important to note that information on each room category is self-reported by principals, and principals determine how each room is classified. Data captures if the PE space is co-located, used by another school or used for another purpose. Includes gyms, athletic fields, auxiliary exercise spaces, dance rooms, field houses, multipurpose spaces, outdoor yards, off-site locations, playrooms, swimming pools and weight rooms as designated PE Space.
This tab provides information on the department's supplemental physical education program and a list of schools that use it. It includes all Move-to-Improve (MTI) supplemental programs for the 2015-2016 school year.
Link to NY State PE Regulations: http://www.p12.nysed.gov/ciai/pe/documents/title8part135.pdf
Any questions regarding this report should be directed to: Nnennaya Okezie, Executive Director NYC Department of Education, Office of Intergovernmental Affairs Phone: 212-374-4947
Idiosyncrasies or limitations of the data to be aware of:
12,085 students (5.96% of the 10th-12th grade base student population in our analysis) were permitted a substitution by the department in the 2015-16 school year.
By UCI [source]
This dataset provides an intimate look into student performance and engagement. It grants researchers access to numerous salient metrics of academic performance which illuminate a broad spectrum of student behaviors: how students interact with online learning material; quantitative indicators reflecting their academic outcomes; as well as demographic data such as age group, gender, prior education level among others.
The main objective of this dataset is to equip analysts and educators alike with empirical insights underpinning individualized learning experiences, specifically in identifying cases where students may be 'at risk'. Given that early preventive interventions have been shown to significantly reduce the chances of course or program withdrawal among struggling students, accurate predictive measures such as these can steer pedagogical strategies toward being more success-oriented.
One unique feature of this dataset is its intricate detailing. Not only does it provide overarching per-student summaries for each course presentation, but it also furnishes data related to assessments (scores and submission dates) along with information on individuals' interactions within VLEs (virtual learning environments), spanning different types such as forums, content pages, etc. Such comprehensive collation across multiple contextual layers helps paint an encompassing portrayal of the student experience that can guide better instructional design.
Due credit must be given when using this dataset for research. Specifically, citing Kuzilek et al. (2015), "OU Analyse: Analysing At-Risk Students at The Open University," published in Learning Analytics Review, is required, as the analysis methodologies stem from that seminal work.
Finally, it is important to note that protection of student privacy is paramount under this dataset's terms and conditions. Stringent anonymization techniques have been applied to sensitive variables: while detailed, profiles cannot be traced back to the original respondents.
How To Use This Dataset:
Understanding Your Objectives: Ideal objectives for using this dataset could be to identify at-risk students before they drop out of a class or program, improving course design by analyzing how assignments contribute to final grades, or simply examining relationships between different variables and student performance.
Set up Your Analytical Environment: Before starting any analysis, make sure you have an analytical environment set up where you can load the CSV files included in this dataset. You can use Python notebooks (Jupyter), RStudio, or Tableau-based software if you also want visual representations.
Explore Data Individually: There are seven separate datasets available: Assessments; Courses; Student Assessment; Student Info; VLE (Virtual Learning Environment); Student Registration; and Student VLE. Load these CSVs separately into your environment and do an initial exploration of each one: find out what kind of data they contain (numerical/categorical), whether they have missing values, etc.
Merge Datasets: As the core idea is to track a student's journey through multiple courses over time, combining these datasets will provide insights from wider perspectives. One way is to merge them on common key columns such as 'code_module', 'code_presentation', and 'id_student'. How you merge, however, should depend on the question you're trying to answer.
Identify Key Metrics: Your key metrics will depend on your objectives but might include: overall grade averages per course or assessment type/student/region/gender/age group, number of clicks in the virtual learning environment, student registration status, etc.
Run Your Analysis: Now you can run queries to analyze the data relevant to your objectives. Try questions like: What factors most strongly predict whether a student will fail an assessment? How does course difficulty, or the number of allotments per week, change students' scores?
Visualization: Visualizing your data can be crucial for understanding patterns and relationships between variables. Use graphs like bar plots, heatmaps, and histograms to represent different aspects of your analyses.
Actionable Insights: The final step is interpreting these results in ways that are meaningful and actionable.
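The merge step described above can be sketched with pandas. The key columns ('code_module', 'code_presentation', 'id_student') come from the text; the tiny in-memory frames and their non-key columns are illustrative stand-ins rather than the exact OULAD schema.

```python
import pandas as pd

# Tiny in-memory stand-ins for the Student Info and Student Assessment CSVs
# (non-key column names here are illustrative, not the exact file schema).
student_info = pd.DataFrame({
    "code_module": ["AAA", "AAA", "BBB"],
    "code_presentation": ["2013J", "2013J", "2014B"],
    "id_student": [11391, 28400, 30268],
    "final_result": ["Pass", "Withdrawn", "Fail"],
})
assessments = pd.DataFrame({
    "code_module": ["AAA", "AAA", "BBB"],
    "code_presentation": ["2013J", "2013J", "2014B"],
    "id_student": [11391, 28400, 30268],
    "score": [78, 40, 55],
})

# Merge on the shared key columns; a left join keeps every student row
# even when no matching assessment record exists.
merged = student_info.merge(
    assessments,
    on=["code_module", "code_presentation", "id_student"],
    how="left",
)
print(merged[["id_student", "final_result", "score"]])
```

Choosing `how="left"` versus `how="inner"` is exactly the "depends on your question" point: a left join preserves students with no assessment activity, which matters if withdrawal is the outcome of interest.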
https://www.icpsr.umich.edu/web/ICPSR/studies/4283/terms
The National Center for Early Development and Learning (NCEDL) Multi-State Study of Pre-Kindergarten examined the pre-kindergarten programs of six states: California, Illinois, New York, Ohio, Kentucky, and Georgia. For this study, pre-kindergarten (pre-k) included center-based programs for four-year-olds that are fully or partially funded by state education agencies and that are operated in schools or under the direction of state and local education agencies. The study had two primary purposes: To describe the variations of experiences for children in pre-kindergarten and kindergarten programs in school-related settings (public schools and state-funded pre-k classrooms in community-based settings), and To examine the relationships between variations in pre-kindergarten/kindergarten experiences and children's outcomes in early elementary school. The study addressed six primary groups of research questions: What is the nature and distribution of education and experience of teachers and teacher assistants in pre-k public school programs? What is the nature and distribution of global quality and specific practices in key areas such as literacy, math, and teacher-child relationships in a diverse sample of pre-k public school programs for four-year-olds as well as in a similarly diverse sample of kindergarten classes? How do quality and practices vary as a result of child and teacher characteristics (e.g., child gender, race, home language, family income, and teacher's years of education) and classroom, program, community, and state structural variables (e.g., teacher-child ratio, funding base of the program, teacher salary, and degree of state regulation) for children with different demographic characteristics (e.g., race, gender, home language, and family income)? Do quality and practice vary in relation to combinations of these variables? For example, are quality and practice a function of family poverty and teacher pay or education? 
Can children's outcomes at the end of their pre-kindergarten year be predicted by the children's experiences in pre-k programs? Are the various dimensions of quality and/or practice differentially related to outcomes? Are these relationships constant across a population of children with different characteristics (e.g., race, gender, home language, and family income)? Do pre-kindergarten program quality and practices predict children's transitions to kindergarten and children's skills at the end of the kindergarten year? Are these transitions moderated by children's characteristics, like race, gender, and family income? The six states in the study were selected based on the significant amount of resources they have committed to pre-k initiatives. States were also selected to maximize the diversity in geography, program settings (public school or community), program intensity (full day versus part day), and educational requirements for teachers. Within each state, a random sample of 40 centers/schools was selected. One classroom in each center/school was selected at random for observation, and four children in each classroom were selected for individual assessment. The children were followed from the beginning of pre-k through the end of kindergarten. In five of the six states, families were also visited in their homes. Classroom Services and Specific Instructional Practices Within the 40 classrooms in each participating state, carefully trained data collectors conducted classroom observations twice each year, while additional surveys were used to gather information from administrators/principals, teachers, and parents. Data were gathered on program services, (e.g., healthcare, meals, and transportation), program curriculum, teacher training and education, teachers' opinions of child development, and their instructional practices on subjects such as language, literacy, mathematics concepts, and social-emotional competencies. 
Data were also collected as to what types of steps were taken to aid children in their transitions from pre-k to kindergarten. Children Within each participating pre-k classroom, four randomly selected children were assessed using a battery of individual instruments to measure language, literacy, mathematics, and related concept development, as well as social competence. A panel of expert reviewers aided the researchers in selecting a variety of standardized and nonstandardized instruments.
The Department of Education's South African School Administration and Management System, SA-SAMS, is made available to all schools free of charge for uploading data to the Learner Unit Record Information and Tracking System, LURITS, and other education information management systems. LURITS includes unit-record data for each learner in South Africa, from Grade R through to Grade 12. The system also tracks each learner's movement from school to school. The LURITS system is dependent on receiving data from computerised school administration systems. Schools without computerised school administration systems provide paper-based reports to districts for uploading to LURITS. The dataset provided by DataFirst is the LURITS data, prepared as research-ready.
The data has national coverage
Individuals and establishments
The data covers Grade R to Grade 12 learners, as well as educators and schools in South Africa
Administrative records data [adm]
Data are entered into the LURITS Module of the South African School Administration and Management System, SA-SAMS, four times per year.
Other [other]
Dataset Name: Fictional Student Performance Dataset
Description: The "Fictional Student Performance Dataset" is a comprehensive collection of fictional student records designed for educational and analytical purposes. This dataset comprises 500 student profiles and their associated attributes, making it a valuable resource for exploring various aspects of student performance and data analysis.
Attributes:
StudentID: A unique identifier for each student, facilitating individual tracking and analysis.
Name: The name of each student, ensuring the dataset's personalization.
Age: The age of each student, providing demographic information.
Gender: The gender of each student, offering insights into gender-based performance trends.
Grade: A continuous variable representing the academic performance of students, which can be used for regression and prediction tasks.
Attendance: A percentage value denoting the attendance rate of each student, enabling attendance-related analyses.
FinalExamScore: A continuous variable indicating the final exam score achieved by each student, making it suitable for evaluation and prediction tasks.
Use Cases:
Educational Research: This dataset is ideal for educational institutions and researchers to analyze student performance and identify factors that influence academic outcomes.
Machine Learning Practice: It is an excellent resource for data science enthusiasts and students looking to practice various machine learning techniques, such as regression, classification, and clustering.
Predictive Modeling: The "Grade" and "FinalExamScore" attributes can be used to develop predictive models to forecast student performance.
Gender-Based Analysis: Explore gender-based trends in student performance and attendance.
Attendance Impact: Investigate the correlation between attendance and academic success.
Disclaimer: Please note that this dataset is entirely fictional and created for educational and practice purposes. Any resemblance to real individuals or institutions is purely coincidental.
Citation: If you use this dataset in your research or projects, kindly acknowledge its source as the "Fictional Student Performance Dataset"
Data Generation: The dataset was generated using a combination of randomization and scripting to ensure that it does not contain any real or personally identifiable information.
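A generation process of this kind can be sketched with the standard library alone. The field names follow the attribute list above; the value ranges, the name format, and the loose coupling between grade, attendance, and exam score are assumptions for illustration, not the original script's parameters.

```python
import csv
import random

# Minimal sketch of generating a fictional dataset via randomization and
# scripting; ranges and the score formula below are illustrative assumptions.
random.seed(42)

rows = []
for i in range(1, 501):  # 500 student profiles
    grade = round(random.uniform(0, 100), 1)
    attendance = round(random.uniform(50, 100), 1)
    rows.append({
        "StudentID": i,
        "Name": f"Student_{i}",  # synthetic names, no real identities
        "Age": random.randint(14, 19),
        "Gender": random.choice(["Female", "Male"]),
        "Grade": grade,
        "Attendance": attendance,
        # Tie the exam score loosely to grade and attendance so the
        # correlation-based use cases above have something to find.
        "FinalExamScore": round(
            min(100.0, max(0.0, 0.6 * grade + 0.3 * attendance + random.gauss(0, 5))), 1
        ),
    })

with open("fictional_student_performance.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=rows[0].keys())
    writer.writeheader()
    writer.writerows(rows)

print(len(rows), "rows written")
```

Seeding the generator makes the output reproducible, which is useful when the dataset is meant to be shared for practice exercises.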
Feel free to explore and utilize this dataset for educational purposes, data analysis, or machine learning exercises. It is intended to foster learning and experimentation in data science.
Attribution-NonCommercial 4.0 (CC BY-NC 4.0): https://creativecommons.org/licenses/by-nc/4.0/
License information was derived automatically
The Cambridge Structural Database (CSD) offers a wealth of knowledge that can be tapped to enhance the undergraduate learning experience. With over 1.25 million structures, the CSD can be incorporated into several courses and at all levels of the chemistry curriculum, including general, organic, and inorganic chemistry, in addition to course-based undergraduate research experience (CURE). Herein, we feature examples that demonstrate the use of the CSD in the chemistry curriculum at a primarily undergraduate institution (PUI) in organic chemistry and a special topic course on crystal engineering.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Will all children be able to read by 2030? The ability to read with comprehension is a foundational skill that every education system around the world strives to impart by late in primary school—generally by age 10. Moreover, attaining the ambitious Sustainable Development Goals (SDGs) in education requires first achieving this basic building block, and so does improving countries’ Human Capital Index scores. Yet past evidence from many low- and middle-income countries has shown that many children are not learning to read with comprehension in primary school. To understand the global picture better, we have worked with the UNESCO Institute for Statistics (UIS) to assemble a new dataset with the most comprehensive measures of this foundational skill yet developed, by linking together data from credible cross-national and national assessments of reading. This dataset covers 115 countries, accounting for 81% of children worldwide and 79% of children in low- and middle-income countries. The new data allow us to estimate the reading proficiency of late-primary-age children, and we also provide what are among the first estimates (and the most comprehensive, for low- and middle-income countries) of the historical rate of progress in improving reading proficiency globally (for the 2000-17 period). The results show that 53% of all children in low- and middle-income countries cannot read age-appropriate material by age 10, and that at current rates of improvement, this “learning poverty” rate will have fallen only to 43% by 2030. Indeed, we find that the goal of all children reading by 2030 will be attainable only with historically unprecedented progress. The high rate of “learning poverty” and slow progress in low- and middle-income countries is an early warning that all the ambitious SDG targets in education (and likely of social progress) are at risk. 
Based on this evidence, we suggest a new medium-term target to guide the World Bank's work in low- and middle-income countries: cut learning poverty by at least half by 2030. This target, together with improved measurement of learning, can serve as an evidence-based tool to accelerate progress toward getting all children reading by age 10.
For further details, please refer to https://thedocs.worldbank.org/en/doc/e52f55322528903b27f1b7e61238e416-0200022022/original/Learning-poverty-report-2022-06-21-final-V7-0-conferenceEdition.pdf
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
This data set aims at the study of English as a second language (L2) in learners studying specific academic domains.
Included are 671 texts written by students from various academic domains at a French university. All learners responded to the same task prompt, designed to elicit language related to their specific domain, and had their CEFR level assessed with the DIALANG test. The data set includes structured textual data with rich Universal Dependencies linguistic annotation and metadata.
This data set can be used in several types of NLP tasks, to gain insight into the learning of English as an L2.
This data is collected as part of the Analytics for Language Learning project (A4LL) – ANR-22-CE38-0015-01
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
This file includes Report Card English Learner Assessment data from historical school years, excluding the currently available school year, which is posted separately. Data are disaggregated at the school, district, and state levels and include counts of students by grade level.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Framing the investigation of diverse cancers as a machine learning problem has recently shown significant potential in multi-omics analysis and cancer research. Empowering these successful machine learning models are high-quality training datasets with sufficient data volume and adequate preprocessing. However, while there exist several public data portals, including The Cancer Genome Atlas (TCGA) multi-omics initiative, and open databases such as LinkedOmics, these resources are not off-the-shelf for existing machine learning models. We propose MLOmics, an open cancer multi-omics database aiming to better serve the development and evaluation of bioinformatics and machine learning models. MLOmics contains 8,314 patient samples covering all 32 cancer types with four omics types, stratified features, and extensive baselines. Complementary support for downstream analysis and bio-knowledge linking is also included to support interdisciplinary analysis.
Custom license: https://lida.dataverse.lt/api/datasets/:persistentId/versions/3.2/customlicense?persistentId=hdl:21.12137/7DZBLT
The purpose of the study: to provide impartial information for the school, its students, and their parents (caregivers, foster parents) about achievements, so that decisions can be made on further improving teaching and learning at the student, teacher, class, school, municipality, and national levels. The objectives of the National Survey of Student Achievement (NASA): to collect information for monitoring national students’ achievements, planning novelties, and implementing novelties for monitoring success; to evaluate the educational content and substantiate students’ achievement criteria based on the collected data; to prepare the necessary tools (i.e., standardized tests, etc.) for students and teachers for the impartial evaluation of their work results; and to prepare the necessary tools for the municipalities’ education subdivisions and school principals for collecting the data required for work result assessments and activity planning. The National Survey of Student Achievement, first implemented in 2002, became the responsibility of the Education Supply Centre. For economic reasons, the assessments were not carried out from 2009 to 2011. In 2012, the renewed assessment implementation was assigned to the National Examination Centre. On 2 September 2019, the National Agency for Education took over the activities of the National Examination Centre and continues to carry them out to this day. In 2012, five NASA surveys were carried out. One row in the SPSS Statistics file from the 2012 National Survey of Student Achievement corresponds to the achievements or questionnaire answers of one particular student or teacher. The information provided in the databases is anonymized: a student or a teacher is identified by a code, without the class or school name. Each school that participated in the 2012 National Survey of Student Achievement received a unique five-digit school code.
The code used for identifying the schools of both grade 4 and grade 8 students and teachers consists of a school code and numbers identifying a class and a student. The class code in the student database coincides with the code in the teacher database; to connect these databases, the variable named “ID_klase” should be used as the identifier. This dataset contains data from a survey of 4th grade primary school teachers. All questionnaire answers provided by teachers appear in the teacher databases from the 2012 National Survey of Student Achievement. The same questionnaire was given to all the teachers. The teacher questionnaire consisted of general questions (to analyse the educational context), as well as personal questions and questions about the subject field. The dataset “NSSA 2012: 4th Grade Teachers Study, 2012” metadata and data were prepared while implementing the project “Disparities in School Achievement from a Person and Variable-Oriented Perspective: A Prototype of a Learning Analytics Tool NO-GAP” from 2020 to 2023. The project leader is chief research fellow Rasa Erentaitė. The project is funded by the European Regional Development Fund according to the 2014–2020 Operational Programme for the European Union Funds’ Investments, under measure No. 01.2.2-LMT-K-718 activity “Research Projects Implemented by World-class Researcher Groups to develop R&D activities relevant to economic sectors, which could later be commercialized”, under a grant agreement with the Lithuanian Research Council (LMTLT).
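As described above, the student and teacher databases share the class identifier “ID_klase”. A minimal sketch of joining the two tables with pandas (the variable name “ID_klase” comes from the dataset description; all other column names and values here are hypothetical):

```python
import pandas as pd

# Hypothetical student records: one row per student, keyed by the
# class identifier "ID_klase" (the actual linking variable).
students = pd.DataFrame({
    "ID_klase": ["10001-01", "10001-01", "10002-01"],
    "student_id": ["10001-01-01", "10001-01-02", "10002-01-01"],
    "score": [78, 85, 62],
})

# Hypothetical teacher records: one row per class teacher, same key.
teachers = pd.DataFrame({
    "ID_klase": ["10001-01", "10002-01"],
    "teacher_id": ["T-10001-01", "T-10002-01"],
})

# Attach each student's class teacher via the shared identifier.
merged = students.merge(teachers, on="ID_klase", how="left")
print(merged.shape)  # → (3, 4): one row per student, teacher columns attached
```

In practice the SPSS files would first be read (e.g., with a reader such as pyreadstat) before merging on “ID_klase”.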