Attribution 4.0 (CC BY 4.0)https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
This dataset comprises data from 77 college students (55% women) enrolled in the 2nd and 3rd years of a private medical school in the state of Minas Gerais, Brazil. They completed 12 psychological or educational tests: 1) Inductive Reasoning Developmental Test (TDRI), 2) Metacognitive Control Test (TCM), 3) TDRI Self-Appraisal scale (SA_TDRI), 4) TCM Self-Appraisal scale (SA_TCM), 5) Brazilian High School Exam (ENEM), 6) Processing Speed Test (SP), 7) Perceptual Discrimination Test (DIS), 8) Perceptual Control Test (PC), 9) Conceptual Control Test (CC), 10) Short-term Memory Test (STM), 11) Working Memory Test (WM), and 12) the Brazilian Learning Approaches Scale (DeepAp).
This is an MD iMAP hosted service layer. Find more information at http://imap.maryland.gov. Maryland has 200+ higher education facilities located throughout the State. Maryland boasts a highly educated workforce, with 300,000+ graduates from higher education institutions every year. Higher education opportunities range from two-year public and private institutions to four-year public and private institutions and regional education centers. Collectively, Maryland's higher education facilities offer every kind of educational experience, whether for traditional college students or for students who have already begun a career and are working to learn new skills. Maryland is proud that nearly one-third of its residents 25 and older have a bachelor's degree or higher, ranking in the top 5 among all states. Maryland's economic diversity and educational vitality make it one of the best states in the nation in which to live, learn, work, and raise a family. Last Updated: 06/2013. Feature Service Layer Link: https://mdgeodata.md.gov/imap/rest/services/Education/MD_EducationFacilities/FeatureServer ADDITIONAL LICENSE TERMS: The Spatial Data and the information therein (collectively "the Data") are provided "as is," without warranty of any kind, either expressed, implied, or statutory. The user assumes the entire risk as to the quality and performance of the Data. No guarantee of accuracy is granted, nor is any responsibility for reliance thereon assumed. In no event shall the State of Maryland be liable for direct, indirect, incidental, consequential, or special damages of any kind. The State of Maryland does not accept liability for any damages or misrepresentation caused by inaccuracies in the Data or as a result of changes to the Data, nor is responsibility assumed to maintain the Data in any manner or form. The Data may be freely distributed as long as the metadata entry is not modified or deleted.
Any data derived from the Data must acknowledge the State of Maryland in the metadata.
Attribution 4.0 (CC BY 4.0)https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Medical Doctors in Turkey increased to 2.18 per 1000 people in 2021 from 2.05 per 1000 people in 2020. This dataset includes a chart with historical data for Turkey Medical Doctors.
CC0 1.0 Universal Public Domain Dedicationhttps://creativecommons.org/publicdomain/zero/1.0/
License information was derived automatically
The dataset contains answers from a questionnaire distributed to all medical students at UiT, as well as first-year graduates, from November 2019 to February 2020. The purpose of the questionnaire was to investigate how UiT medical students acquire practical competence in emergency medicine-related skills, and whether students with extracurricular healthcare-related (ECHR) work experience had more training and confidence in such skills than students without such experience. Data such as ECHR work experience (yes, no) and workplace, work length (<6 months, 6 months-1 year, 1-3 years, >3 years), work hours (<10h, 10-100h, 101-200h, 201-300h, 301-500h, >500h) and number of workplaces (1, >1), as well as year of study (years 1-6 and first-year graduates), previous healthcare-related education (no, commenced but unfinished, finished), previous military medic training (no, basic, advanced), and number of TAMS events participated in (0, 1, 2-5, 6-10, >10), were also recorded and included in the data analysis as predictors and confounders. Several Likert-based items probing amount of training and confidence level for the respective procedures were also created. The response alternatives for training amount were 0, 1-5, 6-10, 11-30, >30 times for most items; for some, however, training amount in practice (0, 1-5, 6-10, 11-30, >30 times) and in real-life situations (0, 1, 2-5, 6-10, >10) were probed separately. Confidence level was probed as degree of agreement, from strongly disagree to strongly agree. At the bottom of the dataset, variables calculated from the data are included, such as the median, mean and sum of the variables addressing training amount and confidence level, respectively. These composite scores were used for the statistical analyses.
Abstract Objectives: To study the association between medical students' extracurricular healthcare-related (ECHR) work experience and their self-reported practical experience and confidence in selected emergency medicine procedures. Study design: Cross-sectional study. Materials and methods: Medical students and first-year graduates were invited to answer a Likert-based questionnaire probing self-reported practical experience and confidence with selected emergency medicine procedures. Participants also reported ECHR work experience, year of study, previous healthcare-related education, military medic training and participation in the local student association for emergency medicine (TAMS). Differences within the variables were analyzed with independent-samples t-tests, and the correlation between training and confidence was calculated. Analysis of covariance and mixed models were applied to study associations between training and confidence and work experience (primary outcomes) and the other reported factors (secondary outcomes), respectively. Cohen's d was applied to better illustrate the strength of association for the primary outcomes. Results: 539 participants responded (70%). Among these, 81% had ECHR work experience. There was a strong correlation (r=0.878) between training and confidence. Work experience accounted for 5.9% and 3.5% of the total variance in training and confidence (primary outcomes), and respondents with work experience scored significantly higher than respondents without work experience. Year of study, previous education, military medic training and TAMS participation accounted for 49.3% and 58.5%, 8.7% and 5.1%, 6.8% and 4.7%, and 23.6% and 12.3% of the total variance in training and confidence, respectively (secondary outcomes). Cohen's d was 0.48 for training amount and 0.32 for confidence level, suggesting medium and small-to-medium sized associations with work experience, respectively.
Conclusions: ECHR work experience is common among medical students, and was associated with more training and higher confidence in the investigated procedures. Significant associations were also seen between training and confidence, and year of study, previous healthcare-related education and TAMS participation, but military medic-training showed no association.
Attribution 4.0 (CC BY 4.0)https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Survey questions and unprocessed raw answers for two cross-sectional surveys on the impact of the COVID-19 pandemic on the education of medical and biomedical graduates based at US and Swedish universities. This dataset relates to two manuscripts by the authors published at "BMC Medical Education" and "Biochemistry and Molecular Biology Education". The survey was assessed by the Swedish Ethical Review Authority (Dnr 2021-00481) and the Institutional Review Board of the University of California San Diego (Project #201972XX), and found to be exempt by both. Participants provided informed consent to publication of the anonymous survey results and we followed the general principles and recommendations provided by the Helsinki Declaration and the Belmont Report. The dataset is from anonymous participants and does not contain any personally identifiable information.
Attribution 4.0 (CC BY 4.0)https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
This dataset tracks the annual distribution of students across grade levels at Crane Medical Prep High School.
Attribution 2.5 (CC BY 2.5)https://creativecommons.org/licenses/by/2.5/
License information was derived automatically
The Distribution Priority Area (DPA) classification system for GPs is a mechanism used by the Government to encourage a more equitable distribution of GPs who are restricted under s19AB of the Health Insurance Act 1973, including International Medical Graduates (IMGs) and Foreign Graduates of Accredited Medical Schools (FGAMS).
• The DPA classification system is also used to distribute Australian-trained bonded doctors who have a return-of-service obligation.
• The DPA replaced the Districts of Workforce Shortage (DWS) system for GPs in 2019. DWS remains in use for non-GP specialists.
• DPA considers the demographics of a region, including age, gender and socio-economic groupings, along with MBS activity data, to determine a benchmark figure that reflects community need for GP services.
• DPA is calculated for 824 distinct, non-overlapping GP catchments throughout Australia. Catchments assessed below the benchmark are classified DPA.
• Areas classified MM 2-7 under the Modified Monash Model (MMM) geographical remoteness classification system are automatically DPA. MM 1 inner metropolitan locations are automatically non-DPA. Areas that held DPA status prior to this update continue to hold DPA status under a No Losers policy.
The attached file for the DPA is as at March 2025 (the most recent annual update). As the files may be updated at short notice for program purposes, please use the Department of Health and Aged Care Health Workforce Locator (https://www.health.gov.au/resources/apps-and-tools/health-workforce-locator) for the most up-to-date and official DPA location status.
Attribution 4.0 (CC BY 4.0)https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Data sources: World Health Organization [5]; 2002 AMA Physician Masterfile as per Hagopian et al. [19]; American Medical Association [115].a2002 data were reported by Hagopian et al. [19] except for the numbers of IMGs trained in Cameroon, Tanzania, and Sudan. Their numbers are included in brackets because they are not part of the total counts reported in the last row of the table. These migrants were identified among SSA-IMGs in the 2011 AMA Physician Masterfile who completed residency by 2002. But the number of physicians available in Cameroon, Sudan, and Tanzania in 2002 came from the Hagopian et al. paper. In their dataset, “other” includes 12 countries with “at least one graduate in the US.” In our 2011 dataset, except otherwise specified, “other” refers to the 16 sub-Saharan African countries with fewer than 15 SSA-IMGs each in the 2011 AMA Physician Masterfile. The numbers of physicians in source countries for the year 2011 are from the Global Health Workforce Statistics of the World Health Organization [5]. “Active” emigration rate is the emigration rate among potentially active physicians. We defined all migrant physicians age ≤70 as potentially active.
CC0 1.0 Universal Public Domain Dedicationhttps://creativecommons.org/publicdomain/zero/1.0/
License information was derived automatically
We sent out the survey via institutional email to 125 M1 students, the entire class except the 5 students (Jonathan, Charles, me, and 2 others) who were involved in setting up the survey. Each survey collected data for 7 days after it was sent out, and the first 13 participants to respond in each round were given an electronic $5 Amazon gift card from funds provided by CUSM's Student Scholars Presentation and Dissemination Initiative committee. The 4 rounds of surveys were sent out on 12/12/22, 1/3/23, 1/17/23, and 1/31/23. All survey round questionnaires were identical and consisted of survey items from two instruments: the Copenhagen Burnout Inventory (CBI) and the Patient Health Questionnaire-9 (PHQ-9). All three sections of the CBI (personal, work-related, and client-related burnout) were used, and the order of the questions was randomized. The order of the PHQ-9 questions was also randomized, but the CBI and PHQ-9 questions were delivered separately. Due to privacy concerns about the stigmatization of mental health, no demographic questions, such as race or age, were included.
Attribution 4.0 (CC BY 4.0)https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
This dataset comprises physician-level entries from the 1906 American Medical Directory, the first in a series of semi-annual directories of all practicing physicians published by the American Medical Association [1]. Physicians are consistently listed by city, county, and state. Most records also include details about the place and date of medical training. From 1906-1940, Directories also identified the race of black physicians [2]. This dataset comprises physician entries for a subset of US states and the District of Columbia, including all of the South and several adjacent states (Alabama, Arkansas, Delaware, Florida, Georgia, Kansas, Kentucky, Louisiana, Maryland, Mississippi, Missouri, North Carolina, Oklahoma, South Carolina, Tennessee, Texas, Virginia, West Virginia). Records were extracted via manual double-entry by a professional data management company [3], and place names were matched to latitude/longitude coordinates. The main source for geolocating physician entries was the US Census. Historical Census records were sourced from the IPUMS National Historical Geographic Information System [4]. Additionally, a public database of historical US Post Office locations was used to match locations that could not be found using Census records [5].

Fuzzy matching algorithms were also used to match misspelled place or county names [6]. The source of each geocoding match is described in the "match.source" field (census_YEAR = matched to NHGIS census place-county-state for the given year; census_fuzzy_YEAR = matched to NHGIS place-county-state with a fuzzy matching algorithm; dc = matched to centroid for Washington, DC; post_places = place-county-state matched to Blevins & Helbock's post office dataset; post_fuzzy = matched to post office dataset with a fuzzy matching algorithm; post_simp = place/state matched to post office dataset; post_confimed_missing = post office dataset confirms place and county, but coordinates could not be found; osm = matched using the Open Street Map geocoder; hand-match = matched by research assistants reviewing web archival sources; unmatched/hand_match_missing = place coordinates could not be found). For records where place names could not be matched but county names could, coordinates for county centroids were used. Overall, 40,964 records were matched to places (match.type=place_point) and 931 to county centroids (match.type=county_centroid); 76 records could not be matched (match.type=NA). Most records include information about the physician's medical training, including the year of graduation and a code linking to a school. A key to these codes is given on Directory pages 26-27, and at the beginning of each state's section [1]. The OSM geocoder was used to assign coordinates to each school by its listed location. Straight-line distances between physicians' places of training and practice were calculated using the sf package in R [7], and are given in the "school.dist.km" field. Additionally, the Directory identified a handful of schools that were "fraudulent" (school.fraudulent=1), and institutions set up to train black physicians (school.black=1). The AMA identified black physicians in the directory with the signifier "(col.)" following the physician's name (race.black=1).

Additionally, a number of physicians attended schools identified by the AMA as serving black students but were not otherwise identified as black; an expanded racial identifier (race.black.prob=1) was therefore generated, including both physicians who attended these schools and those directly identified (race.black=1). Approximately 10% of dataset entries were audited by trained research assistants, in addition to 100% of black physician entries. These audits demonstrated a high degree of accuracy between the original Directory and extracted records. Still, given the complexity of matching across multiple archival sources, it is possible that some errors remain; any identified errors will be periodically rectified in the dataset, with a log kept of these updates. For further information about this dataset, or to report errors, please contact Dr Ben Chrisinger (Benjamin.Chrisinger@tufts.edu). Future updates to this dataset, including additional states and Directory years, will be posted here: https://dataverse.harvard.edu/dataverse/amd.

References:
1. American Medical Association, 1906. American Medical Directory. American Medical Association, Chicago. Retrieved from: https://catalog.hathitrust.org/Record/000543547.
2. Baker, Robert B., Harriet A. Washington, Ololade Olakanmi, Todd L. Savitt, Elizabeth A. Jacobs, Eddie Hoover, and Matthew K. Wynia. "African American physicians and organized medicine, 1846-1968: origins of a racial divide." JAMA 300, no. 3 (2008): 306-313. doi:10.1001/jama.300.3.306.
3. GABS Research Consult Limited Company, https://www.gabsrcl.com.
4. Steven Manson, Jonathan Schroeder, David Van Riper, Tracy Kugler, and Steven Ruggles. IPUMS National Historical Geographic Information System: Version 17.0 [GNIS, TIGER/Line & Census Maps for US Places and Counties: 1900, 1910, 1920, 1930, 1940, 1950; 1910_cPHA: ds37]. Minneapolis, MN: IPUMS. 2022. http://doi.org/10.18128/D050.V17.0.
5. Blevins, Cameron; Helbock, Richard W., 2021, "US Post Offices", https://doi.org/10.7910/DVN/NUKCNA, Harvard Dataverse, V1, UNF:6:8ROmiI5/4qA8jHrt62PpyA== [fileUNF].
6. fedmatch: Fast, Flexible, and User-Friendly Record Linkage Methods. https://cran.r-project.org/web/packages/fedmatch/index.html
7. sf: Simple Features for R. https://cran.r-project.org/web/packages/sf/index.html
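The distances in the "school.dist.km" field were computed with the sf package in R, as noted above. As a rough illustration of the underlying idea only (this sketch is not the authors' code, and a haversine great-circle distance may differ slightly from sf's geodesic calculations), the computation can be approximated like this:

```python
import math

def haversine_km(lat1, lon1, lat2, lon2):
    """Approximate great-circle distance in km between two (lat, lon) points."""
    r = 6371.0  # mean Earth radius in km
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(phi1) * math.cos(phi2) * math.sin(dlmb / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

# Hypothetical example: distance between a physician's school and practice location
print(round(haversine_km(41.88, -87.63, 36.16, -86.78)))  # Chicago to Nashville, roughly 640 km
```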
https://data.gov.sg/open-data-licence
Dataset from Health Promotion Board. For more information, visit https://data.gov.sg/datasets/d_8f51207e4426b9e734d58a1d836f0770/view
https://spdx.org/licenses/CC0-1.0.html
Objective: There is a gap in research on gender-based discrimination (GBD) in medical education and practice in Germany. This study therefore examines the extent and forms of GBD among female medical students and physicians in Germany. Causes, consequences and possible interventions for GBD are discussed. Methods: Female medical students (n=235) and female physicians (n=157) from five university hospitals in northern Germany were asked about their personal experiences with GBD via an open-ended question in an online survey on self-efficacy expectations and individual perceptions of the "glass ceiling effect". The answers were analyzed by content analysis using inductive category formation and relative category frequencies. Results: In both groups, approximately 75% of respondents reported having experienced GBD. Their experiences fell into five main categories: sexual harassment (with verbal and physical subcategories), discrimination based on existing or possible motherhood (with structural and verbal subcategories), direct preference for men, direct neglect of women, and derogatory treatment based on gender. Conclusion: The study contributes to filling the aforementioned research gap. At the hospitals studied, GBD is a common phenomenon among both female medical students and physicians, manifesting itself in multiple forms. Transferability of the results beyond the hospitals studied to all of Germany seems plausible. Much is known about the causes, consequences and effective countermeasures against GBD. Those responsible for training, and employers in hospitals, should fulfill their responsibility by implementing measures from the set of empirically evaluated interventions. Methods: Female medical students and physicians from five university hospitals in northern Germany were given an online open-ended question concerning their personal experiences with gender-based discrimination.
The answers were evaluated by qualitative content analysis (Mayring) and by relative frequencies.
CC0 1.0 Universal Public Domain Dedicationhttps://creativecommons.org/publicdomain/zero/1.0/
License information was derived automatically
This study examined how students' academic performance changed after the transition to online learning during the coronavirus disease 2019 (COVID-19) pandemic, based on the test results of 16 integrated courses conducted over 3 semesters. The study was conducted at Hanyang University College of Medicine (HYUCM), a private medical school in Seoul, South Korea. The average number of students per year is about 100. At HYUCM, the transition to online teaching was first implemented after COVID-19. Almost all face-to-face classroom lectures were replaced by recorded online videos, while fewer than 5% of classes were conducted as live online lectures. Raw scores on the major examinations were collected for each student. Because the total score differed for each examination, percent-correct scores were used in subsequent analyses. For courses with more than 1 major examination, student achievement was calculated as the average of the percent-correct scores obtained across those examinations.
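The score normalization described above amounts to rescaling each exam to a common percent-correct scale and then averaging per course. A minimal sketch (the function names are ours, for illustration only, not from the study):

```python
def percent_correct(raw_score, total_score):
    """Normalize a raw exam score to a 0-100 percent-correct scale."""
    return 100.0 * raw_score / total_score

def course_achievement(exam_results):
    """Average percent-correct over a course's major examinations.

    exam_results: list of (raw_score, total_score) pairs, one per exam.
    """
    scores = [percent_correct(raw, total) for raw, total in exam_results]
    return sum(scores) / len(scores)

# Example: two exams with different total scores (45/50 = 90%, 70/100 = 70%)
print(course_achievement([(45, 50), (70, 100)]))  # -> 80.0
```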
This dataset includes survey data from clerkship-year medical students at the NYU School of Medicine as well as test score data from the Objective Structured Clinical Experience (OSCE), which tests clerkship-year students’ ability to counsel standardized actor-patients with obesity effectively. The OSCE is administered after a three-day interclerkship intensive program entitled, “Fostering Change in Our Patients,” which provides instruction on nutrition, obesity physiology, weight management, and disordered eating. The OSCE includes students interviewing and counseling a standardized patient-actor about weight management, and the student’s communication and counseling proficiency was evaluated by the standardized patient.
Only data from students who consented to include their data in the Medical Education Research Registry (117 out of 151 students in the Class of 2019) were included. The survey included questions on obesity attitudes and physician competency on counseling patients with obesity. Questions elicited student beliefs about the causes of obesity and their attitudes towards people with obesity. Of the 117 students who consented to the medical student registry, 71 (61%) responded to the survey.
Attribution 4.0 (CC BY 4.0)https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Purpose: Point-of-care ultrasound (POCUS) is a sensitive, safe, and efficient tool used in many clinical settings and is an essential part of medical education in the United States. Numerous studies report improved diagnostic performance and positive clinical outcomes among POCUS users. However, others stress the degree to which the modality is user-dependent, making high-quality POCUS training necessary in medical education. In this study, the authors aimed to investigate the potential of an artificial intelligence (AI) based quality indicator tool as a teaching device for cardiac POCUS performance. Methods: The authors integrated the quality indicator tool into the pre-clinical cardiac ultrasound course for 4th-year medical students and analyzed their performances. The analysis included 60 students who were assigned to one of two groups: the intervention group, which used the AI-based quality indicator tool, and the control group. Quality indicator users utilized the tool during both the course and the final test. At the end of the course, the authors tested the standard echocardiographic views, and an experienced clinician blindly graded the recorded clips. Results were analyzed and compared between the groups. Results: The results showed an advantage in quality indicator users' median overall scores (P = 0.002), with a relative risk of 2.3 (95% CI: 1.10, 4.93, P = 0.03) for obtaining correct cardiac views. In addition, quality indicator users had a statistically significant advantage in overall image quality in various cardiac views. Conclusions: The AI-based quality indicator improved cardiac ultrasound performance among medical students trained with it compared to the control group, even in cardiac views in which the indicator was inactive. Performance scores, as well as image quality, were better in the AI-based group.
Such tools can potentially enhance ultrasound training, warranting the expansion of the application to more views and prompting further studies on long-term learning effects.
LLM Health Benchmarks Dataset

The Health Benchmarks Dataset is a specialized resource for evaluating large language models (LLMs) across medical specialties. It provides structured question-answer pairs designed to test the performance of AI models in understanding and generating domain-specific knowledge.

Primary Purpose

This dataset is built to:
- Benchmark LLMs in medical specialties and subfields.
- Assess the accuracy and contextual understanding of AI in healthcare.
- Serve as a standardized evaluation suite for AI systems designed for medical applications.

Key Features

- Covers 50+ medical and health-related topics, including both clinical and non-clinical domains.
- Includes ~7,500 structured question-answer pairs.
- Designed for fine-grained performance evaluation in medical specialties.

Applications

- LLM Evaluation: Benchmarking AI models for domain-specific performance.
- Healthcare AI Research: Standardized testing for AI in healthcare.
- Medical Education AI: Testing AI systems designed for tutoring medical students.

Dataset Structure

The dataset is organized by medical specialties and subfields, each represented as a split. Below is a snapshot:
| Specialty | Number of Rows |
|---|---|
| Lab Medicine | 158 |
| Ethics | 174 |
| Dermatology | 170 |
| Gastroenterology | 163 |
| Internal Medicine | 178 |
| Oncology | 180 |
| Orthopedics | 177 |
| General Surgery | 178 |
| Pediatrics | 180 |
| ...(and more) | ... |
Each split contains:
- Questions: The medical questions for the specialty.
- Answers: Corresponding high-quality answers.
Usage Instructions

Here's how you can load and use the dataset:

```python
from datasets import load_dataset

# Load the dataset
dataset = load_dataset("yesilhealth/Health_Benchmarks")

# Access specific specialty splits
oncology = dataset["Oncology"]
internal_medicine = dataset["Internal_Medicine"]

# View sample data
print(oncology[:5])
```
Evaluation Workflow
1. Model Input: Provide the questions from each split to the LLM.
2. Model Output: Collect the AI-generated answers.
3. Scoring: Compare model answers to ground-truth answers using metrics such as:
   - Exact Match (EM)
   - F1 Score
   - Semantic Similarity
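The scoring step can be sketched as follows. This is a minimal illustration of normalized exact match and token-level F1 (the function names and normalization choices are ours, not part of the dataset; semantic similarity would additionally need an embedding model):

```python
import re
from collections import Counter

def normalize(text):
    """Lowercase, strip punctuation, and collapse whitespace."""
    return " ".join(re.sub(r"[^a-z0-9 ]", " ", text.lower()).split())

def exact_match(prediction, reference):
    """1.0 if the normalized strings are identical, else 0.0."""
    return float(normalize(prediction) == normalize(reference))

def f1(prediction, reference):
    """Token-level F1 between a predicted and a reference answer."""
    pred, ref = normalize(prediction).split(), normalize(reference).split()
    common = Counter(pred) & Counter(ref)
    overlap = sum(common.values())
    if overlap == 0:
        return 0.0
    precision = overlap / len(pred)
    recall = overlap / len(ref)
    return 2 * precision * recall / (precision + recall)

print(exact_match("Metformin.", "metformin"))  # -> 1.0
print(round(f1("first-line therapy is metformin", "metformin is first-line"), 2))  # -> 0.89
```

Scores would then be averaged over all question-answer pairs in a split to produce a per-specialty result.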
Citation

If you use this dataset for research or development, please cite:

```bibtex
@dataset{yesilhealth_health_benchmarks,
  title={Health Benchmarks Dataset},
  author={Yesil Health AI},
  year={2024},
  url={https://huggingface.co/datasets/yesilhealth/Health_Benchmarks}
}
```

License

This dataset is licensed under the Apache 2.0 License.

Feedback

For questions, suggestions, or feedback, feel free to contact us via email at hello@yesilhealth.com.
Attribution 4.0 (CC BY 4.0)https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Abstract: Introduction: Clinical reasoning is considered one of the main skills that must be developed by medical students, as it allows the establishment of diagnostic hypotheses and directs investigative and diagnostic strategies using a rational approach. Although educators have traditionally focused the teaching method on the analytical model, many medical professors face the challenge in their daily lives of finding new strategies to help their students develop clinical reasoning. Objective: To carry out an integrative literature review to identify the strategies used in the teaching-learning process of clinical reasoning in Brazilian medical schools. Method: The methodology used consists of six steps: 1. creation of the research question; 2. definition of inclusion and exclusion criteria; 3. list of information to be extracted; 4. evaluation of included studies; 5. interpretation of results and 6. presentation of the review. Results: Most studies indicate that the teaching of clinical reasoning is carried out through discussions of clinical cases, incidentally, in different disciplines or through the use of active methodologies such as PBL, TBL and CBL. Only three studies presented at conferences disclosed experiences related to the implementation of a mandatory curricular discipline specifically aimed at teaching clinical reasoning. The teaching of clinical reasoning is prioritized in internships in relation to the clinical and pre-clinical phases. Final considerations: There are few studies that analyze how clinical reasoning is taught to medical students in Brazilian medical schools. Although more studies are needed, we can observe the lack of theoretical knowledge about clinical reasoning as one of the main causes of the students’ difficulty in developing clinical reasoning.
Attribution-NonCommercial-NoDerivs 4.0 (CC BY-NC-ND 4.0)https://creativecommons.org/licenses/by-nc-nd/4.0/
License information was derived automatically
This is the dataset from the publication "Effect of head-mounted displays on students' acquisition of surgical suturing techniques compared to an e-learning and tutor-led course: A randomized controlled trial" by Peters et al. (2023). The dataset has been described and evaluated in detail with respect to its usefulness for the development of AI-based assessment models for open suturing in "AIxSuture: vision-based assessment of open suturing skills" by Hoffmann et al. (2024). It contains 314 5-minute videos showing 157 students performing surgical suturing before and after a 1-hour training course. In addition, the number of sutures performed within the 5 minutes is recorded. The evaluation was performed in a blinded and anonymized manner by 3 experienced oral and maxillofacial surgery residents (1 in the penultimate year and 2 in the final year). The raters all had degrees in both medicine and dentistry. The assessment was performed using the Objective Structured Assessment of Technical Skills (OSATS). Eight items were scored. Finally, the global rating score (GRS) was calculated based on these 8 items. The inter-rater variability ranged from 0.8 to 0.83.
Attribution 4.0 (CC BY 4.0)https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
This dataset tracks the annual total student count from 1987 to 2023 for Medical Lake Middle School.
Attribution 4.0 (CC BY 4.0)https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
The Historical Dataset of Crane Medical Prep High School is provided by PublicSchoolReview and contains statistics on the following metrics: Total Students Trends Over Years (2014-2023), Total Classroom Teachers Trends Over Years (2015-2023), Distribution of Students By Grade Trends, Student-Teacher Ratio Comparison Over Years (2015-2023), Asian Student Percentage Comparison Over Years (2014-2020), Hispanic Student Percentage Comparison Over Years (2014-2023), Black Student Percentage Comparison Over Years (2014-2023), White Student Percentage Comparison Over Years (2019-2023), Two or More Races Student Percentage Comparison Over Years (2014-2021), Diversity Score Comparison Over Years (2014-2023), Free Lunch Eligibility Comparison Over Years (2014-2023), Reading and Language Arts Proficiency Comparison Over Years (2015-2022), Math Proficiency Comparison Over Years (2015-2021), Overall School Rank Trends Over Years (2015-2022), and Graduation Rate Comparison Over Years (2017-2022).