89 datasets found
  1. MC-test

    • huggingface.co
    Updated May 11, 2025
    Cite
    Kewei Lian (2025). MC-test [Dataset]. https://huggingface.co/datasets/kevinLian/MC-test
    Explore at:
    Dataset updated
    May 11, 2025
    Authors
    Kewei Lian
    Description

    kevinLian/MC-test dataset hosted on Hugging Face and contributed by the HF Datasets community

  2. Mc Test Service Inc Import Shipments, Overseas Suppliers

    • volza.com
    csv
    Updated Jun 19, 2025
    Cite
    Volza FZ LLC (2025). Mc Test Service Inc Import Shipments, Overseas Suppliers [Dataset]. https://www.volza.com/us-importers/mc-test-service-inc-1801530.aspx
    Explore at:
    Available download formats: csv
    Dataset updated
    Jun 19, 2025
    Dataset provided by
    Volza
    Authors
    Volza FZ LLC
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Time period covered
    2014 - Sep 30, 2021
    Variables measured
    Count of exporters, Count of importers, Sum of export value, Count of import shipments
    Description

    Import shipment details for Mc Test Service Inc, including address, overseas suppliers, products, and shipment records.

  3. Test Dataset MC Upgrade

    • search.test.dataone.org
    Updated Dec 16, 2024
    Cite
    Dou Mok (2024). Test Dataset MC Upgrade [Dataset]. https://search.test.dataone.org/view/urn%3Auuid%3A5ac7a15b-9254-4e04-8f24-af08fa5faaee
    Explore at:
    Dataset updated
    Dec 16, 2024
    Dataset provided by
    urn:node:mnTestARCTIC
    Authors
    Dou Mok
    Time period covered
    Dec 15, 2024
    Area covered
    Description

    This is an abstract that is a bit on the shorter side but is still okay.

  4. Math-steptok-mc-test

    • huggingface.co
    + more versions
    Cite
    Anikait Singh, Math-steptok-mc-test [Dataset]. https://huggingface.co/datasets/Asap7772/Math-steptok-mc-test
    Explore at:
    Croissant (a format for machine-learning datasets; see mlcommons.org/croissant)
    Authors
    Anikait Singh
    Description

    Asap7772/Math-steptok-mc-test dataset hosted on Hugging Face and contributed by the HF Datasets community

  5. Multiple choice assessment for elimination testing and negative marking:...

    • figshare.com
    xlsx
    Updated Oct 10, 2018
    Cite
    Tinne De Laet; Jef Vanderoost (2018). Multiple choice assessment for elimination testing and negative marking: test data, student gender, student GPA, questionnaire data [Dataset]. http://doi.org/10.6084/m9.figshare.6148721.v1
    Explore at:
    Available download formats: xlsx
    Dataset updated
    Oct 10, 2018
    Dataset provided by
    Figshare (http://figshare.com/)
    Authors
    Tinne De Laet; Jef Vanderoost
    License

    CC0 1.0 Universal Public Domain Dedication: https://creativecommons.org/publicdomain/zero/1.0/
    License information was derived automatically

    Description

    The data result from a study on multiple-choice assessments, specifically a comparison of negative marking and elimination testing with respect to guessing and test anxiety. Data collection is as described by Vanderoost J. et al. (Vanderoost J., Janssen R., Eggermont J., Callens R., and De Laet T.; Elimination testing with adapted scoring reduces guessing and anxiety in multiple-choice assessments, but does not increase grade average with respect to traditional negative marking; https://journals.plos.org/plosone/article?id=10.1371/journal.pone.0203931).

    The data come from a study at KU Leuven, Flanders, Belgium. The students are 1st- and 2nd-year students in the master of Medicine. All students had prior experience with multiple-choice exams scored using negative marking (NM).

    Thanks to the exam design, the following data are available for each student: master level (1st or 2nd master), examination moment for PED and GO, exam scores for PED and GO, and, for each question, the score, answering pattern, and knowledge level. Additionally, because both gender and ability could influence the exam scores, answering patterns, and knowledge levels, the gender and grade point average (GPA) of each student were retrieved from the university database. This study uses GPA over the entire academic year (without resits) as a measure of student ability.
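
    For orientation, here is a minimal sketch of the two scoring families being compared, assuming common textbook rules; the adapted scoring actually used by Vanderoost et al. is defined in the cited paper and may differ:

    ```python
    # Illustrative textbook scoring rules, not the study's exact formulas.

    def negative_marking_score(n_correct: int, n_wrong: int, k: int = 4) -> float:
        """NM / formula scoring: +1 per correct answer, -1/(k-1) per wrong
        answer, so blind guessing has an expected score of zero."""
        return n_correct - n_wrong / (k - 1)

    def elimination_item_score(n_distractors_cut: int, key_cut: bool, k: int = 4) -> float:
        """One common elimination-testing rule per item: +1/(k-1) for each
        distractor correctly crossed out, -1 if the correct answer is cut."""
        return -1.0 if key_cut else n_distractors_cut / (k - 1)

    # 15 correct and 5 wrong on a 20-item, 4-option exam under NM:
    print(negative_marking_score(15, 5))     # 13.33...
    # One item where a student confidently eliminated 2 of 3 distractors:
    print(elimination_item_score(2, False))  # 0.67: partial credit without guessing
    ```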

  6. mctest.com - Historical whois Lookup

    • whoisdatacenter.com
    csv
    + more versions
    Cite
    AllHeart Web Inc, mctest.com - Historical whois Lookup [Dataset]. https://whoisdatacenter.com/domain/mctest.com/
    Explore at:
    Available download formats: csv
    Dataset provided by
    AllHeart Web
    Authors
    AllHeart Web Inc
    License

    https://whoisdatacenter.com/terms-of-use/

    Time period covered
    Mar 15, 1985 - Jul 13, 2025
    Description

    Explore the historical Whois records related to mctest.com (Domain). Get insights into ownership history and changes over time.

  7. QAEgo4D-MC-test

    • huggingface.co
    Updated May 9, 2025
    Cite
    QAEgo4D-MC-test [Dataset]. https://huggingface.co/datasets/Becomebright/QAEgo4D-MC-test
    Explore at:
    Croissant (a format for machine-learning datasets; see mlcommons.org/croissant)
    Dataset updated
    May 9, 2025
    Authors
    Shangzhe Di
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    This benchmark was collected by QAEgo4D and updated by GroundVQA. We conducted some processing for the experiments presented in our paper ReKV.

  8. SPSS Data 37 Intensivists 74 Novices 25 original Questions.sav

    • figshare.com
    bin
    Updated May 7, 2020
    Cite
    Morten Engberg (2020). SPSS Data 37 Intensivists 74 Novices 25 original Questions.sav [Dataset]. http://doi.org/10.6084/m9.figshare.12269294.v1
    Explore at:
    Available download formats: bin
    Dataset updated
    May 7, 2020
    Dataset provided by
    Figshare (http://figshare.com/)
    Authors
    Morten Engberg
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    SPSS data for 37 intensivists and 74 novices: their scores on the 25 original questions.

  9. Data from: Using Constructed Response Questions on High-Stakes Assessments...

    • qubeshub.org
    Updated Jun 11, 2024
    Cite
    Norris Armstrong*; Sandii Constable (2024). Using Constructed Response Questions on High-Stakes Assessments in Large Classes With Limited Support [Dataset]. https://qubeshub.org/publications/4834/?v=1
    Explore at:
    Dataset updated
    Jun 11, 2024
    Dataset provided by
    QUBES
    Authors
    Norris Armstrong*; Sandii Constable
    Description

    Large lecture courses often rely heavily, and sometimes exclusively, on Multiple Choice (MC) exams to assess student learning. Constructed Response (CR) questions, in which students generate their own answers, can provide more insight into student learning than MC questions alone, but are seldom used in large courses because they are more challenging and labor-intensive to score. We describe a strategy for using CR questions on assessments even in very large lecture courses with the support of undergraduate assistants.

    Primary Image: A partially graded exam with short answer questions.

  10. Singlemcq Dataset

    • universe.roboflow.com
    zip
    Updated Jul 21, 2022
    Cite
    new-workspace-4vus5 (2022). Singlemcq Dataset [Dataset]. https://universe.roboflow.com/new-workspace-4vus5/singlemcq
    Explore at:
    Available download formats: zip
    Dataset updated
    Jul 21, 2022
    Dataset authored and provided by
    new-workspace-4vus5
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Variables measured
    Mcq Bounding Boxes
    Description

    Here are a few use cases for this project:

    1. Educational Resource Organization: SingleMcq can be used by educators and educational platforms to categorize and organize multiple-choice questions (MCQs) based on their class types, making it easier for teachers to find relevant questions for their lesson plans, assessments, and quizzes.

    2. Personalized Learning Applications: Integration of SingleMcq into e-learning platforms can facilitate personalized learning by identifying the MCQ class types and generating study materials, quizzes, and practice tests tailored to a student's specific strengths, weaknesses, and learning preferences.

    3. Optical Mark Recognition (OMR) System Enhancement: SingleMcq can enhance existing OMR systems that process answer sheets by automatically identifying the MCQ class types, streamlining the grading process and reducing manual input required by educators.

    4. Test Data Generation: Test preparation services and assessment developers can use SingleMcq to analyze a wide range of multiple-choice questions based on their classes and generate new test materials, ensuring a diverse range of questions for practice and evaluation purposes.

    5. Accessibility and Language Services: By recognizing MCQ class types in different black and white documents, SingleMcq can help streamline the translation and creation of accessible content for visually impaired users, making educational materials more inclusive for a wider audience.
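
    Following up on use cases 3 and 4 above, evaluating a detector trained on these annotations usually comes down to matching predicted boxes against the dataset's ground-truth MCQ bounding boxes. A minimal sketch, assuming (x_min, y_min, x_max, y_max) pixel coordinates (Roboflow exports support several box formats):

    ```python
    # Intersection-over-union (IoU) between two axis-aligned boxes given as
    # (x_min, y_min, x_max, y_max). The box format is an assumption; adjust
    # to whichever export format the dataset is downloaded in.
    def iou(a, b):
        ix = max(0.0, min(a[2], b[2]) - max(a[0], b[0]))   # overlap width
        iy = max(0.0, min(a[3], b[3]) - max(a[1], b[1]))   # overlap height
        inter = ix * iy
        union = (a[2] - a[0]) * (a[3] - a[1]) + (b[2] - b[0]) * (b[3] - b[1]) - inter
        return inter / union if union else 0.0

    print(iou((10, 10, 50, 30), (12, 12, 48, 32)))  # ~0.74: a close match
    ```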

  11. MedQA Dataset

    • paperswithcode.com
    Updated Jul 29, 2022
    Cite
    Di Jin; Eileen Pan; Nassim Oufattole; Wei-Hung Weng; Hanyi Fang; Peter Szolovits (2022). MedQA Dataset [Dataset]. https://paperswithcode.com/dataset/medqa-usmle
    Explore at:
    Dataset updated
    Jul 29, 2022
    Authors
    Di Jin; Eileen Pan; Nassim Oufattole; Wei-Hung Weng; Hanyi Fang; Peter Szolovits
    Description

    Multiple-choice question answering based on the United States Medical Licensing Examination (USMLE). The dataset is collected from professional medical board exams. It covers three languages: English, simplified Chinese, and traditional Chinese, and contains 12,723, 34,251, and 14,123 questions for the three languages, respectively.

  12. Studies investigating performance differences between paper-based and...

    • plos.figshare.com
    xls
    Updated Jun 2, 2023
    Cite
    Anja J. Boevé; Rob R. Meijer; Casper J. Albers; Yta Beetsma; Roel J. Bosker (2023). Studies investigating performance differences between paper-based and computer-based tests with multiple-choice questions. [Dataset]. http://doi.org/10.1371/journal.pone.0143616.t001
    Explore at:
    Available download formats: xls
    Dataset updated
    Jun 2, 2023
    Dataset provided by
    PLOS ONE
    Authors
    Anja J. Boevé; Rob R. Meijer; Casper J. Albers; Yta Beetsma; Roel J. Bosker
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Studies investigating performance differences between paper-based and computer-based tests with multiple-choice questions.

  13. Woodchuck Science Quiz Dataset

    • opendatabay.com
    Updated Jul 5, 2025
    Cite
    Datasimple (2025). Woodchuck Science Quiz Dataset [Dataset]. https://www.opendatabay.com/data/ai-ml/5c3501b0-04b8-4de8-88ee-11e5b5eb0279
    Explore at:
    Available download formats: not specified
    Dataset updated
    Jul 5, 2025
    Dataset authored and provided by
    Datasimple
    License

    CC0 1.0 Universal Public Domain Dedication: https://creativecommons.org/publicdomain/zero/1.0/
    License information was derived automatically

    Area covered
    Education & Learning Analytics
    Description

    This dataset provides a unique opportunity for NLP researchers to develop models capable of answering multiple-choice questions based on a given context paragraph. It is particularly well-suited for the development and testing of question-answering systems that can handle real-world, noisy data. Originating from grade school science content, this dataset can be utilised to create interactive tools such as a question-answering chatbot, a multiple-choice quiz game, or systems that generate multiple-choice questions for students.

    Columns

    The dataset is primarily composed of three files: validation.csv, train.csv, and test.csv. Each file contains the following columns:

    • id: A unique identifier for each question record.
    • question: The text of the question (String).
    • choices: A list of multiple-choice answers for the question (List of Strings).
    • answerKey: The integer index corresponding to the correct answer within the choices list.
    • fact1: The first piece of supporting information (String).
    • fact2: The second piece of supporting information (String).
    • combinedfact: A combined piece of supporting information (String).
    • formatted_question: The question text with the multiple-choice answers inserted into it (String).
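
    A minimal loading sketch based on the column listing above; how `choices` is serialized on disk (e.g. a stringified Python list) is an assumption and may need adjusting:

    ```python
    # Hedged sketch: inspect one record of the described test split.
    import ast
    import pandas as pd

    df = pd.read_csv("test.csv")
    print(df.columns.tolist())  # expect: id, question, choices, answerKey, ...

    row = df.iloc[0]
    choices = ast.literal_eval(row["choices"])  # assumes a list stored as text
    answer = choices[int(row["answerKey"])]     # answerKey is an integer index per the listing
    print(row["question"], "->", answer)
    ```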

    Distribution

    The data files are typically provided in CSV format. For the test.csv file, there are 920 unique records for the id, question, choices, answerKey, and formatted_question columns. The fact1, fact2, and combinedfact columns are noted as having 100% null values in some distributions. This is a free dataset, listed on a data marketplace with a quality rating of 5 out of 5 and is available globally. The current version is 1.0.

    Usage

    This dataset is ideal for:

    • Developing and evaluating Natural Language Processing (NLP) models for question answering.
    • Creating question-answering chatbots that can respond to science-based queries.
    • Designing multiple-choice quiz games for educational purposes.
    • Generating multiple-choice questions to aid student learning and assessment.
    • Research into handling noisy, real-world data in Q&A systems.

    Coverage

    The dataset's scope is global in terms of availability. Its content focuses on grade school science, making it relevant for primary and secondary education contexts. While a specific time range for data collection is not provided, the dataset was listed on 16/06/2025.

    License

    CC0

    Who Can Use It

    • NLP Researchers and Data Scientists focusing on question answering, text classification, and natural language understanding.
    • Educators and Content Developers looking to create educational tools, quizzes, or automated question generation systems.
    • Game Developers interested in building educational quiz games.
    • Anyone working on AI and Machine Learning models that require structured question-answer pairs for training and testing.

    Dataset Name Suggestions

    • Grade School Science Q&A
    • Educational NLP Challenge Data
    • Multi-Choice Science Questions
    • Woodchuck Science Quiz Dataset
    • Primary/Secondary Science QA

    Attributes

    Original Data Source: Woodchuck (Grade School Science Multi-Choice Q&A)

  14. Designing multiple choice tests to support students’ basic psychological...

    • search.dataone.org
    • borealisdata.ca
    Updated Dec 4, 2024
    Cite
    Daniels, Lia (2024). Designing multiple choice tests to support students’ basic psychological needs [Dataset]. http://doi.org/10.5683/SP3/728IVW
    Explore at:
    Dataset updated
    Dec 4, 2024
    Dataset provided by
    Borealis
    Authors
    Daniels, Lia
    Description

    Five documents archived: data, codebook, survey, syntax, and syntax with notes. Ethics number Pro00141097. Project description: This was a preregistered experimental study testing the effect of three multiple-choice tests on students' psychological need fulfillment and performance. The independent variable had three levels: a control test of 20 flawed items; a high-quality test of 20 non-flawed items; and a wellbeing test of 20 non-flawed items with intentional design elements, rooted in self-determination theory, to support need fulfillment.

  15. AI2 ARC - Advanced Science Question

    • kaggle.com
    Updated Nov 30, 2023
    Share
    FacebookFacebook
    TwitterTwitter
    Email
    Click to copy link
    Link copied
    Close
    Cite
    The Devastator (2023). AI2 ARC - Advanced Science Question [Dataset]. https://www.kaggle.com/datasets/thedevastator/advanced-science-question-dataset
    Explore at:
    Croissant (a format for machine-learning datasets; see mlcommons.org/croissant)
    Dataset updated
    Nov 30, 2023
    Dataset provided by
    Kaggle
    Authors
    The Devastator
    License

    https://creativecommons.org/publicdomain/zero/1.0/

    Description

    AI2 ARC - Advanced Science Question

    Promoting research in advanced question-answering

    By ai2_arc (from Hugging Face)

    About this dataset

    The ai2_arc dataset, also known as the A Challenge Dataset for Advanced Question-Answering in Grade-School Level Science, is a comprehensive and valuable resource created to facilitate research in advanced question-answering. This dataset consists of a collection of 7,787 genuine grade-school level science questions presented in multiple-choice format.

    The primary objective behind assembling this dataset was to provide researchers with a powerful tool to explore and develop question-answering models capable of tackling complex scientific inquiries typically encountered at a grade-school level. The questions within this dataset are carefully crafted to test the knowledge and understanding of various scientific concepts in an engaging manner.

    The ai2_arc dataset is further divided into two primary sets: the Challenge Set and the Easy Set. Each set contains numerous highly curated science questions that cover a wide range of topics commonly taught at a grade-school level. These questions are designed specifically for advanced question-answering research purposes, offering an opportunity for model evaluation, comparison, and improvement.

    In terms of data structure, the ai2_arc dataset features several columns providing vital information about each question. These include columns such as question, which contains the text of the actual question being asked; choices, which presents the multiple-choice options available for each question; and answerKey, which indicates the correct answer corresponding to each specific question.

    Researchers can utilize this comprehensive dataset not only for developing advanced algorithms but also for training machine learning models that exhibit sophisticated cognitive capabilities when it comes to comprehending scientific queries from a grade-school perspective. Moreover, by leveraging these meticulously curated questions, researchers can analyze performance metrics such as accuracy or examine biases within their models' decision-making processes.

    In conclusion, the ai2_arc dataset serves as an invaluable resource for anyone involved in advanced question-answering research within grade-school level science education. With its extensive collection of genuine multiple-choice science questions spanning various difficulty levels, researchers can delve into the intricate nuances of scientific knowledge acquisition, processing, and reasoning, ultimately unlocking novel insights and innovations in the field.

    Research Ideas

    • Developing advanced question-answering models: The ai2_arc dataset provides a valuable resource for training and evaluating advanced question-answering models. Researchers can use this dataset to develop and test algorithms that can accurately answer grade-school level science questions.
    • Evaluating natural language processing (NLP) models: NLP models that aim to understand and generate human-like responses can be evaluated using this dataset. The multiple-choice format of the questions allows for objective evaluation of the model's ability to comprehend and provide correct answers.
    • Assessing human-level performance: The dataset can be used as a benchmark to measure the performance of human participants in answering grade-school level science questions. By comparing the accuracy of humans with that of AI systems, researchers can gain insights into the strengths and weaknesses of both approaches.

    Acknowledgements

    If you use this dataset in your research, please credit the original authors.

    License

    License: CC0 1.0 Universal (CC0 1.0) Public Domain Dedication. No copyright: you can copy, modify, distribute, and perform the work, even for commercial purposes, all without asking permission.

    Columns

    File: ARC-Challenge_test.csv

    • question: The text content of each question being asked. (Text)
    • choices: A list of multiple-choice options associated with each question. (List of Text)
    • answerKey: The correct answer option (choice) for a particular question. (Text)

    File: ARC-Easy_test.csv (column listing truncated in the source) ...
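
    A hedged inspection sketch for ARC-Challenge_test.csv using only the columns described above; the letter-style answerKey labels and the serialization of choices (plain list vs. a {'text': ..., 'label': ...} dict) vary between dumps, so both shapes are handled:

    ```python
    import ast
    import pandas as pd

    df = pd.read_csv("ARC-Challenge_test.csv")
    row = df.iloc[0]

    parsed = ast.literal_eval(row["choices"])                    # stored as text
    options = parsed["text"] if isinstance(parsed, dict) else parsed

    print(row["question"])
    for label, option in zip("ABCDE", options):
        marker = "*" if label == str(row["answerKey"]) else " "  # flag the key
        print(f" {marker} {label}. {option}")
    ```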

  16. Test data set for macros accompanying the publication Multi-parameter...

    • zenodo.org
    • data.niaid.nih.gov
    zip
    Updated Jan 24, 2020
    Cite
    Daphne S. Bindels; Marten Postma; Lindsay Haarbosch; Laura van Weeren; Theodorus W J Gadella Jr (2020). Test data set for macros accompanying the publication Multi-parameter screening method for developing optimized red fluorescent proteins [Dataset]. http://doi.org/10.5281/zenodo.3338264
    Explore at:
    Available download formats: zip
    Dataset updated
    Jan 24, 2020
    Dataset provided by
    Zenodo (http://zenodo.org/)
    Authors
    Daphne S. Bindels; Marten Postma; Lindsay Haarbosch; Laura van Weeren; Theodorus W J Gadella Jr
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    This bundle of test data can be used to run the macros accompanying the publication Multi-parameter screening method for developing optimized red fluorescent proteins.

    These data sets can be used to run the following macros that can be found on GitHub:

    1. https://github.com/molcyto/MC-Ratio-96-wells
    2. https://github.com/molcyto/MC-Ratio-Petri-dish
    3. https://github.com/molcyto/MC-FLIM-Petri-dish
    4. https://github.com/molcyto/MC-Bleach-96-wells
    5. https://github.com/molcyto/MC-Scatter5D
    6. https://github.com/molcyto/MC-FLIM-96-wells

    Funding:
    This work was supported by NWO CW-Echo grant 711.011.018 (M.A.H. and T.W.J.G.) and grant 12149 (T.W.J.G.) from the Foundation for Technological Sciences (STW) of the Netherlands.

  17. OpenBookQA Dataset

    • paperswithcode.com
    • opendatalab.com
    • +1 more
    Updated Feb 19, 2021
    Cite
    Todor Mihaylov; Peter Clark; Tushar Khot; Ashish Sabharwal (2021). OpenBookQA Dataset [Dataset]. https://paperswithcode.com/dataset/openbookqa
    Explore at:
    Dataset updated
    Feb 19, 2021
    Authors
    Todor Mihaylov; Peter Clark; Tushar Khot; Ashish Sabharwal
    Description

    OpenBookQA is a new kind of question-answering dataset modeled after open book exams for assessing human understanding of a subject. It consists of 5,957 multiple-choice elementary-level science questions (4,957 train, 500 dev, 500 test), which probe the understanding of a small “book” of 1,326 core science facts and the application of these facts to novel situations. For training, the dataset includes a mapping from each question to the core science fact it was designed to probe. Answering OpenBookQA questions requires additional broad common knowledge, not contained in the book. The questions, by design, are answered incorrectly by both a retrieval-based algorithm and a word co-occurrence algorithm. Additionally, the dataset includes a collection of 5,167 crowd-sourced common knowledge facts, and an expanded version of the train/dev/test questions where each question is associated with its originating core fact, a human accuracy score, a clarity score, and an anonymized crowd-worker ID.
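
    A hedged loading sketch via the Hugging Face `datasets` library; the hub id ("openbookqa") and its "main"/"additional" configurations are assumptions about the current mirror, not part of this listing:

    ```python
    from datasets import load_dataset

    ds = load_dataset("openbookqa", "main")
    print(ds)                   # expect train / validation / test splits (4957/500/500)
    ex = ds["test"][0]
    print(ex["question_stem"])  # the question text
    print(ex["choices"])        # {"text": [...], "label": [...]}
    print(ex["answerKey"])      # correct label, e.g. "B"
    ```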

  18. Mc Donald Elementary School

    • catalog.data.gov
    • data.wa.gov
    • +2 more
    Updated Jun 29, 2025
    Cite
    data.wa.gov (2025). Mc Donald Elementary School [Dataset]. https://catalog.data.gov/dataset/mc-donald-elementary-school
    Explore at:
    Dataset updated
    Jun 29, 2025
    Dataset provided by
    data.wa.gov
    Description

    Lead in Drinking Water in Schools test results for Mc Donald Elementary School.

  19. Global Test Creator App Market Research Report: By App Type (Web-Based Test...

    • wiseguyreports.com
    Updated Aug 6, 2024
    Cite
    Wiseguy Research Consultants Pvt Ltd (2024). Global Test Creator App Market Research Report: By App Type (Web-Based Test Creator Apps, Cloud-Based Test Creator Apps, Mobile Test Creator Apps), By Test Format (Multiple-Choice Tests, True/False Tests, Essay Tests, Short Answer Tests, Drag-and-Drop Tests, Matching Tests), By Subject Area for Assessments (Academic Assessments, Technical Skills, Aptitude Tests, Personality Assessments, IQ Tests, Industry-Specific Tests), By Deployment Model (SaaS (Software-as-a-Service), On-Premise, Hybrid) and By Regional (North America, Europe, South America, Asia Pacific, Middle East and Africa) - Forecast to 2032. [Dataset]. https://www.wiseguyreports.com/reports/test-creator-app-market
    Explore at:
    Dataset updated
    Aug 6, 2024
    Dataset authored and provided by
    Wiseguy Research Consultants Pvt Ltd
    License

    https://www.wiseguyreports.com/pages/privacy-policy

    Time period covered
    Jan 8, 2024
    Area covered
    Global
    Description
    BASE YEAR: 2024
    HISTORICAL DATA: 2019 - 2024
    REPORT COVERAGE: Revenue Forecast, Competitive Landscape, Growth Factors, and Trends
    MARKET SIZE 2023: 4.24 (USD Billion)
    MARKET SIZE 2024: 4.79 (USD Billion)
    MARKET SIZE 2032: 12.7 (USD Billion)
    SEGMENTS COVERED: App Type, Test Format, Subject Area for Assessments, Deployment Model, Regional
    COUNTRIES COVERED: North America, Europe, APAC, South America, MEA
    KEY MARKET DYNAMICS: Rising demand for online testing; technological advancements; growing adoption of mobile devices; increased focus on personalized learning; integration with learning management systems
    MARKET FORECAST UNITS: USD Billion
    KEY COMPANIES PROFILED: Thinkific, SurveyMonkey, Zoho Forms, Typeform, Google Forms, Teachable, Crowdsignal, MindTap, Wufoo, JotForm, Newrow, Cognitively, Kahoot!, ProProfs, ExamSoft
    MARKET FORECAST PERIOD: 2025 - 2032
    KEY MARKET OPPORTUNITIES: AI-powered test creation; integration with learning management systems; gamification and personalization; mobile-first development; accessibility and inclusion
    COMPOUND ANNUAL GROWTH RATE (CAGR): 12.98% (2025 - 2032)
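
    The forecast figures above are internally consistent, which a quick check confirms:

    ```python
    # Compounding the 2024 market size at the implied rate over the 8-year
    # 2025-2032 forecast window should roughly reproduce the 2032 size.
    size_2024, size_2032, years = 4.79, 12.7, 8
    implied_cagr = (size_2032 / size_2024) ** (1 / years) - 1
    print(f"implied CAGR: {implied_cagr:.2%}")  # ~12.97%, matching the stated 12.98%
    ```
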
  20. Data from: So far away: threshold copying distances for multiple-choice...

    • tandf.figshare.com
    docx
    Updated Mar 12, 2025
    Cite
    Andrew Carkeet; Vy T Dinh; Leo Ho; Bernice Lee; Yosef K Asfha; Hui Yee Reiko Tang; Ying Xuan Toh (2025). So far away: threshold copying distances for multiple-choice answers in different exam room settings [Dataset]. http://doi.org/10.6084/m9.figshare.28578874.v1
    Explore at:
    Available download formats: docx
    Dataset updated
    Mar 12, 2025
    Dataset provided by
    Taylor & Francis
    Authors
    Andrew Carkeet; Vy T Dinh; Leo Ho; Bernice Lee; Yosef K Asfha; Hui Yee Reiko Tang; Ying Xuan Toh
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Paper-based multiple-choice exams are commonly used to assess students. Answer sheets for these exams have a configuration which affords a potential opportunity for cheating, and a proportion of students report cheating on assessments. This research assessed the maximum distances at which multiple-choice answer sheets could be copied in different rooms and under different viewing conditions.

    Participants were 10 healthy observers. Stimuli were generated on a University standard multiple-choice answer template with 40 answer responses recorded for each sheet. Responses were recorded at a range of test distances. The method of constant stimuli and probit analysis were used to estimate the threshold copying distance at which 62.5% of responses were correctly identified. With the copied sheets flat on a desk, testing took place in a tiered lecture theatre and a flat exam room, with the exam positioned at different angles of regard: straight ahead, at 45 degrees to straight ahead (oblique), and sideways.

    Threshold distances were greater in the tiered lecture theatre than in the flat exam room, and greater in the straight-ahead position than in the oblique position, in turn greater than in the sideways viewing position. In the straight-ahead position in the tiered lecture theatre, exam answer sheets could be copied from 7.12 m, and in a flat room from 3.34 m. For the sideways viewing condition, threshold copying distances were 2.58 m (tiered lecture theatre) and 2.36 m (flat room).

    Multiple-choice answer sheets can be copied from relatively large distances, a potential opportunity for academic dishonesty. Tiered lecture rooms should not be used as venues for multiple-choice exams, and multiple-choice answer sheets can be redesigned to reduce the risk of copying. These results will be of practical and theoretical interest to educators, administrators, and students.
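
    A minimal sketch of the threshold estimate described above: fit a cumulative-normal (probit) psychometric function to proportion-correct versus viewing distance and solve for the 62.5% point. The data values here are invented for illustration; the study's observations are not reproduced:

    ```python
    import numpy as np
    from scipy.optimize import curve_fit
    from scipy.stats import norm

    distances = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 8.0])  # metres (hypothetical)
    p_correct = np.array([0.99, 0.97, 0.90, 0.75, 0.55, 0.35, 0.20])

    # Identification worsens with distance, so model p = Phi((mu - d) / sigma).
    def psychometric(d, mu, sigma):
        return norm.cdf((mu - d) / sigma)

    (mu, sigma), _ = curve_fit(psychometric, distances, p_correct, p0=[5.0, 1.0])
    threshold = mu - sigma * norm.ppf(0.625)  # distance where p_correct = 0.625
    print(f"threshold copying distance ~ {threshold:.2f} m")
    ```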
