9 datasets found
  1. Data Sheet 1_Use of multimodal glosses in teaching English vocabulary for non-English specialised undergraduates in public university in Nepal.docx

    • frontiersin.figshare.com
    docx
    Updated Jan 22, 2025
    Cite
    Pitambar Paudel (2025). Data Sheet 1_Use of multimodal glosses in teaching English vocabulary for non-English specialised undergraduates in public university in Nepal.docx [Dataset]. http://doi.org/10.3389/feduc.2025.1443803.s001
    Explore at:
    Available download formats: docx
    Dataset updated
    Jan 22, 2025
    Dataset provided by
    Frontiers
    Authors
    Pitambar Paudel
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Vocabulary knowledge is an essential aspect of language development, and many non-English specialised students hesitate to communicate in English because of a limited vocabulary. Effective vocabulary teaching and learning can be aided by multimodal glosses. On this rationale, this mixed-methods participatory action research investigates the effect of multimodal glosses on improving the English vocabulary of non-English specialised EFL students at a public university in Nepal. The study was conducted as a three-month intervention with an intact class of 60 non-English specialised undergraduates. Data were collected from tests (pre-test, progress test, and post-test) and interviews. The test data were analysed using quantitative statistics (mean, standard deviation, and t-test), and the unstructured interview data were analysed descriptively. The overall results revealed that the use of multimodal glosses led to significant improvements in students’ English vocabulary and its use. The findings suggest that the intervention was effective in improving non-English specialised undergraduates’ ability to develop, comprehend, and use English vocabulary. Students and teachers should therefore be aware of using multimodal glosses contextually to build, understand, and use English vocabulary appropriately.
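
    For readers who want to reproduce the kind of pre-test/post-test comparison described above, a paired (dependent-samples) t-test matches the design, since each of the 60 students is tested before and after the intervention. The sketch below uses simulated, purely hypothetical scores rather than the study's data sheet; variable names and values are illustrative only.

        # Hypothetical sketch of a paired pre-test/post-test comparison with a t-test.
        # The scores below are simulated; the study's actual data are in the data sheet.
        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(42)
        pre = rng.normal(loc=12.0, scale=3.0, size=60)         # hypothetical pre-test scores
        post = pre + rng.normal(loc=4.0, scale=2.0, size=60)   # hypothetical post-test scores

        print(f"pre:  mean={pre.mean():.2f}, sd={pre.std(ddof=1):.2f}")
        print(f"post: mean={post.mean():.2f}, sd={post.std(ddof=1):.2f}")

        result = stats.ttest_rel(post, pre)                    # paired t-test
        print(f"t={result.statistic:.2f}, p={result.pvalue:.4f}")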

  2. Data from: Multimodal Immersion in English Language Learning in Higher Education: A Systematic Review

    • data.mendeley.com
    Updated Sep 16, 2024
    Cite
    Eka Rahmanu (2024). Multimodal Immersion in English Language Learning in Higher Education: A Systematic Review [Dataset]. http://doi.org/10.17632/rnf4m4dg58.2
    Explore at:
    Dataset updated
    Sep 16, 2024
    Authors
    Eka Rahmanu
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    This systematic review examines 34 research articles published from 2013 to 2024. The primary focus of the study is to explore the application of multimodal pedagogies in higher education, the methods and materials used to assist learners in acquiring English language skills, the English language skills acquired through the use of multimodality, and the main outcomes of using multiple modes. The review follows the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) standards and adopts a thorough search strategy across electronic databases, including Web of Science and Scopus.

  3. Table_1_Multimodality and English for Special Purposes: Signification and Transduction in Architecture and Civil Engineering Models.docx

    • frontiersin.figshare.com
    docx
    Updated May 31, 2023
    Cite
    Anne F. J. Hellwig; Pauline T. Jones; Erika Matruglio; Helen Georgiou (2023). Table_1_Multimodality and English for Special Purposes: Signification and Transduction in Architecture and Civil Engineering Models.docx [Dataset]. http://doi.org/10.3389/fcomm.2022.901719.s001
    Explore at:
    Available download formats: docx
    Dataset updated
    May 31, 2023
    Dataset provided by
    Frontiers
    Authors
    Anne F. J. Hellwig; Pauline T. Jones; Erika Matruglio; Helen Georgiou
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    The applied disciplines of architecture and civil engineering require students to communicate multimodally and to manipulate meaning across media and modes, such as image, writing or moving image. In their disciplinary studies, for example, students must be able to transform the language of lectures and textbooks into models and diagrams. In their future workplaces, they will commonly be required to transform reports and legal documents into floor plans and digital and physical 3D models. Such multimodal literacy, however, is not typically reflected in their related subject-specific English language courses, especially in Germany, where a text-centric approach is favored. To better reflect the demands placed upon them, students in two courses of English for Architecture and Civil Engineering were tasked with creating digital, multimodal artifacts to explain a concept from either of these fields to a lay audience. The resultant artifacts used a wide variety of semiotic resources to make meaning, including a total of 26 separate architectural and civil engineering models. This quantity is large enough to invite closer examination and also reflects the important role models play in the fields of architecture and civil engineering, both at university and in the workplace. This paper suggests that models of this kind exist within a system of signs, in which meaning is created in the relationships between the signs. The process of transforming one resource into another also invites consideration of the artifacts in terms of the notion of “transduction”, to discern how meaning changes between contexts, practices and modes, and to contribute to the existing literature on multimodal texts in tertiary education, particularly within a language-learning context.

  4. Simple Multimodal Algorithmic Reasoning Task Dataset (SMART-101)

    • explore.openaire.eu
    • data.niaid.nih.gov
    Updated Mar 23, 2023
    + more versions
    Cite
    Anoop Cherian; Kuan-Chuan Peng; Suhas Lohit; Kevin A. Smith; Joshua B. Tenenbaum (2023). Simple Multimodal Algorithmic Reasoning Task Dataset (SMART-101) [Dataset]. http://doi.org/10.5281/zenodo.7761799
    Explore at:
    Dataset updated
    Mar 23, 2023
    Authors
    Anoop Cherian; Kuan-Chuan Peng; Suhas Lohit; Kevin A. Smith; Joshua B. Tenenbaum
    Description

    Introduction
    Recent times have witnessed an increasing number of applications of deep neural networks to tasks that require superior cognitive abilities, e.g., playing Go, generating art, ChatGPT, etc. Such dramatic progress raises the question: how generalizable are neural networks in solving problems that demand broad skills? To answer this question, we propose SMART: a Simple Multimodal Algorithmic Reasoning Task (and the associated SMART-101 dataset) for evaluating the abstraction, deduction, and generalization abilities of neural networks in solving visuo-linguistic puzzles designed specifically for children of younger age (6-8). Our dataset consists of 101 unique puzzles; each puzzle comprises a picture and a question, and their solution needs a mix of several elementary skills, including pattern recognition, algebra, and spatial reasoning, among others. To train deep neural networks, we programmatically augment each puzzle to 2,000 new instances; each instance varies in appearance, associated natural-language question, and solution. To foster research and make progress in the quest for artificial general intelligence, we are publicly releasing our SMART-101 dataset, consisting of the full set of programmatically generated instances of the 101 puzzles and their solutions. The dataset was introduced in our paper "Are Deep Neural Networks SMARTer than Second Graders?" by Anoop Cherian, Kuan-Chuan Peng, Suhas Lohit, Kevin A. Smith, and Joshua B. Tenenbaum, CVPR 2023.

    Files in the unzipped folder:
    ./README.md: This Markdown file
    ./SMART101-Data: Folder containing all the puzzle data (see below for details)
    ./puzzle_type_info.csv: Puzzle categorization (into 8 skill classes)

    Dataset Organization
    The size of the unzipped dataset is ~12 GB. The dataset consists of 101 folders (numbered 1-101); each folder corresponds to one distinct puzzle (root puzzle). There are 2,000 puzzle instances programmatically created for each root puzzle, numbered 1-2000. Every root puzzle folder (index in [1, 101]) contains (i) img/ and (ii) puzzle_<index>.csv. The folder img/ is where the puzzle instance images are stored, and puzzle_<index>.csv contains the non-image part of a puzzle. Specifically, a row of puzzle_<index>.csv is the tuple <id, Question, image, A, B, C, D, E, Answer>, where id is the puzzle instance id (in [1, 2000]), Question is the puzzle question associated with the instance, image is the name of the image (in the img/ folder) corresponding to this instance id, A, B, C, D, E are the five answer candidates, and Answer is the correct answer to the question.

    Other Details
    In our paper "Are Deep Neural Networks SMARTer than Second Graders?", we provide four dataset splits for evaluation: (i) Instance Split (IS), (ii) Answer Split (AS), (iii) Puzzle Split (PS), and (iv) Few-shot Split (FS). The details of each split are given below to enable fair comparisons with the results reported in the paper.

    Puzzle Split (PS): The Test set consists of root puzzle ids {94, 95, 96, 97, 98, 99, 101, 61, 62, 65, 66, 67, 69, 70, 71, 72, 73, 74, 75, 76, 77}; the Train set is {1, 2, ..., 101} \ Test. Evaluation is done on all the Test puzzles and their accuracies are averaged, using instance indices 1701-2000 of the Test puzzles.

    Few-shot Split (FS): We randomly select k instances (e.g., k = 100) from the Test puzzles of the PS split for training. These k few-shot samples are taken from instance indices 1-1600 of the respective puzzles, and evaluation is conducted on instance ids 1701-2000.

    Instance Split (IS): We split the instances under every root puzzle as Train = 1-1600, Val = 1601-1700, Test = 1701-2000. We train the neural network models on the Train split instances from all root puzzles together and evaluate on the Test split of all puzzles.

    Answer Split (AS): We find the median answer value among all the...
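
    As a concrete illustration of the folder layout and the Instance Split described above, here is a minimal Python sketch that reads one root puzzle's CSV and slices it into the IS ranges. It assumes puzzle_<index>.csv has a header row with the columns listed above; treat it as an illustrative sketch, not the authors' official loading code.

        # Minimal sketch: read one SMART-101 root puzzle and apply the Instance Split (IS).
        # Assumes puzzle_<index>.csv has a header row with the columns listed above.
        import csv
        from pathlib import Path

        root = Path("SMART101-Data")   # unzipped dataset folder
        puzzle_id = 1                  # root puzzle index, in [1, 101]

        csv_path = root / str(puzzle_id) / f"puzzle_{puzzle_id}.csv"
        with csv_path.open(newline="", encoding="utf-8") as f:
            rows = list(csv.DictReader(f))   # columns: id, Question, image, A, B, C, D, E, Answer

        # Instance Split: instances 1-1600 train, 1601-1700 val, 1701-2000 test.
        train = [r for r in rows if 1 <= int(r["id"]) <= 1600]
        val   = [r for r in rows if 1601 <= int(r["id"]) <= 1700]
        test  = [r for r in rows if 1701 <= int(r["id"]) <= 2000]

        sample = test[0]
        image_path = root / str(puzzle_id) / "img" / sample["image"]
        print(sample["Question"])
        print("candidates:", [sample[c] for c in "ABCDE"], "answer:", sample["Answer"])
        print("image file:", image_path)

    The PS, FS, and AS splits can be built in the same way by filtering on the root puzzle ids and instance-index ranges listed above.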

  5. ChatGPT's responses to TUG-K 4.0 survey (October 2023)

    • zenodo.org
    pdf
    Updated Jun 20, 2024
    Cite
    Giulia Polverini; Bor Gregorcic (2024). ChatGPT's responses to TUG-K 4.0 survey (October 2023) [Dataset]. http://doi.org/10.5281/zenodo.12180242
    Explore at:
    Available download formats: pdf
    Dataset updated
    Jun 20, 2024
    Dataset provided by
    Zenodo (http://zenodo.org/)
    Authors
    Giulia Polverini; Bor Gregorcic
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Time period covered
    Dec 24, 2023
    Description

    This dataset was created in October 2023, after the initial public release of ChatGPT vision. It contains 60 completed Test of Understanding Graphs in Kinematics 4.0 (TUG-K 4.0) surveys. The responses are sorted by item number. An analysis of the responses was published in https://doi.org/10.1103/PhysRevPhysEducRes.20.010109

    v2 of the dataset corrects some accidentally duplicated responses present in v1. Because the analysis in the paper cited above used data taken directly from the chatbot, the update does not affect the analysis or findings of the paper.

  6. Peer-to-peer Deaf Multiliteracies, 2017-2020

    • datacatalogue.cessda.eu
    • beta.ukdataservice.ac.uk
    Updated Jun 4, 2025
    Cite
    Zeshan, U (2025). Peer-to-peer Deaf Multiliteracies, 2017-2020 [Dataset]. http://doi.org/10.5255/UKDA-SN-854728
    Explore at:
    Dataset updated
    Jun 4, 2025
    Dataset provided by
    University of Central Lancashire
    Authors
    Zeshan, U
    Time period covered
    Jul 1, 2017 - Dec 31, 2020
    Area covered
    Ghana, Uganda, United Kingdom, India
    Variables measured
    Individual, Group
    Measurement technique
    The Peer to Peer Deaf Multiliteracies project undertook interventions on language and literacy learning, with classes including primary school children as well as adults. The data that the project generated from both the child and adult learners include: (a) language and literacy testing results, based on the A1/A2 levels of the Common European Framework of Reference for second-language learning for adults and on the English Language Ladder for children; (b) multimedia portfolios compiled by the tutors showing monthly samples of the learners’ work; and (c) text-based records of classes, namely reports written by tutors (monthly) and by research assistants (periodically). In addition, the data collection includes multimedia learning materials for language and literacy generated by the groups of learners with the tutors and posted on an online learning platform, as well as anonymous user statistics generated by the platform software.

    For working with children, the only sampling condition was an age range of 6-12 years. Within this range, the project worked with groups of children as determined by the schools hosting the interventions. For young adults (with a maximum age limit of 35 years, though the large majority were in their 20s), prospective deaf candidates were selected by the local project teams based on interviews that established fluency in their local sign language and familiarity with the English alphabet.
    Description

    This project on multiliteracies involved groups of deaf learners in India, Uganda, and Ghana, both in primary schools and with young adult learners. The Peer-to-Peer Deaf Multiliteracies project examined how some of the dynamics that contribute to learners’ marginalisation can be changed by involving deaf individuals in the design of new teaching approaches, and by using children and young people's lived experiences and existing multilingual-multimodal skills as the starting point for theme-based learning. The aim was for participants to develop not only English literacy, but "multiliteracies", i.e. skills in sign languages, ICT, written English, creative expression through drawing and acting, and other forms of multimodal communication. The data collection includes reports from classroom settings compiled by tutors and by research assistants, pre- and post-tests on language and literacy abilities with learners, samples from an online learning platform, and multimedia portfolios collected from learners. A total of 124 young deaf adults and 79 deaf primary school children took part in the research.

    The exclusion of deaf children and young adults from access to school systems in the developing world results in individuals and communities being denied quality education; this not only leads to unemployment, underemployment, low income, and a high risk of poverty, but also represents a needless waste of human talent and potential. To target this problem, this project extends work conducted under a pilot project addressing issues of literacy education with young deaf people in the Global South. Creating, implementing and evaluating our innovative intervention based on the peer teaching of English literacy through sign language-based tutoring, everyday real-life texts such as job application forms, and the use of a bespoke online resource, enabled us to generate a sustainable, cost-effective and learner-directed way to foster literacy learning amongst deaf individuals. To reach further target groups and conduct more in-depth research, the present project extends our work to new groups of learners in India, Uganda, Ghana, Rwanda and Nepal, both in primary schools (ca 60 children in India, Ghana, and Uganda) and with young adult learners (ca 100 learners in interventions, plus ca 60 young adults in scoping workshops in Nepal and Rwanda).

    In the targeted countries, marginalisation begins in schools, since many have no resources for teaching through sign language, even though this is the only fully accessible language to a deaf child. This project intends to examine how we can change some of the dynamics that contribute to this, by involving deaf individuals in the design of new teaching approaches, and by using children and young people's everyday experiences and existing literacy practices as the basis for their learning. Participants in such a programme not only develop English literacy, but "multiliteracies", i.e. skills in sign languages, technology, written English, gesture, mouthing, and other forms of multimodal communication. Developing a multilingual toolkit is an essential element of multiliteracies. Being 'literate' in the modern world involves a complex set of practices and competencies and engagement with various modes (e.g. face-to-face, digital, remote), increasing one's abilities to act independently. Our emphases on active learning, contextualised assessments and building portfolios to document progress increase the benefit to deaf learners in terms of their on-going educational and employment capacity.

    Apart from the actual teaching and interventions, the research also investigates factors in existing systems of educational provision for deaf learners and how these may systematically undermine and isolate deaf communities and their sign languages. Our analyses identify the local dynamics of cultural contexts that our programmes and future initiatives need to address and evaluate in order to be sustainable. One challenge we encountered in the pilot was the lack of trained deaf peer tutors. There is a need for investment in local capacity building and for the creation of opportunities and pathways for deaf people to obtain formal qualifications. Therefore, we develop training in literacy teaching and in research methods for all deaf project staff. We also develop and adapt appropriate assessment tools and metrics to confirm what learning has taken place and how, with both children and young adults. This includes adapting the Common European Framework of Reference for Languages (CEFR) for young deaf adult learners and the 'Language Ladder' for deaf children so that we use locally-valid test criteria. To document progress in more detail and in relation to authentic, real-life literacy demands, we need to create our own metrics, which we do by using portfolio-based assessments that are learner-centred and closely linked to the local curricula.

  7. Global Ai Voice Generator Market Research Report: By Deployment...

    • wiseguyreports.com
    Updated Jun 10, 2024
    + more versions
    Cite
    Wiseguy Research Consultants Pvt Ltd (2024). Global Ai Voice Generator Market Research Report: By Deployment (Cloud-based, On-premises), By Application (Customer Service, Marketing and Sales, Healthcare, Education, Gaming), By Input (Text-to-speech, Speech-to-text, Multimodal), By Language (English, Chinese, Spanish, Japanese, Other) and By Regional (North America, Europe, South America, Asia Pacific, Middle East and Africa) - Forecast to 2032. [Dataset]. https://www.wiseguyreports.com/reports/ai-voice-generator-market
    Explore at:
    Dataset updated
    Jun 10, 2024
    Dataset authored and provided by
    Wiseguy Research Consultants Pvt Ltd
    License

    https://www.wiseguyreports.com/pages/privacy-policy

    Time period covered
    Jan 6, 2024
    Area covered
    Global
    Description
    BASE YEAR: 2024
    HISTORICAL DATA: 2019 - 2024
    REPORT COVERAGE: Revenue Forecast, Competitive Landscape, Growth Factors, and Trends
    MARKET SIZE 2023: 7.44 (USD Billion)
    MARKET SIZE 2024: 10.21 (USD Billion)
    MARKET SIZE 2032: 128.3 (USD Billion)
    SEGMENTS COVERED: Deployment, Application, Input, Language, Regional
    COUNTRIES COVERED: North America, Europe, APAC, South America, MEA
    KEY MARKET DYNAMICS: Technological Advancements; Growing Adoption in Content Creation; Increasing Demand from Enterprises; Integration with Conversational AI; Voice Assistants Prevalence
    MARKET FORECAST UNITS: USD Billion
    KEY COMPANIES PROFILED: Deepgram, Sonantic, Baidu, Adobe, Nuance, Amazon, Murf, Cepstral, Readspeaker, Microsoft, Google, Veritone, IBM, Acapela, Cereproc
    MARKET FORECAST PERIOD: 2024 - 2032
    KEY MARKET OPPORTUNITIES: 1. Personalized voice assistants; 2. Enhanced customer service; 3. Improved accessibility; 4. Content creation; 5. Language learning
    COMPOUND ANNUAL GROWTH RATE (CAGR): 37.21% (2024 - 2032)
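
    The reported CAGR is consistent with the 2024 and 2032 market-size figures above: growing 10.21 USD billion at about 37.21% per year for the eight years to 2032 yields roughly 128.3 USD billion. The short sketch below checks this using only the numbers from the table; it is an arithmetic cross-check, not part of the report.

        # Sanity-check the reported CAGR against the 2024 and 2032 market sizes above.
        size_2024, size_2032, years = 10.21, 128.3, 8
        implied_cagr = (size_2032 / size_2024) ** (1 / years) - 1
        print(f"implied CAGR: {implied_cagr:.2%}")                        # ~37.2%, matching the report
        print(f"2032 size at 37.21%: {size_2024 * 1.3721 ** years:.1f}")  # ~128.3 USD billion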
  8. Data from: Linguistic Repertoires as Biographical Indexes: Indigenous Students’ Multimodal (Self)Representations Through Linguistic Portraits

    • scielo.figshare.com
    jpeg
    Updated May 30, 2023
    Cite
    André Marques do Nascimento (2023). Linguistic Repertoires as Biographical Indexes: Indigenous Students’ Multimodal (Self)Representations Through Linguistic Portraits [Dataset]. http://doi.org/10.6084/m9.figshare.14328524.v1
    Explore at:
    Available download formats: jpeg
    Dataset updated
    May 30, 2023
    Dataset provided by
    SciELO journals
    Authors
    André Marques do Nascimento
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    ABSTRACT This paper adopts a conception of language as repertoire, conceived as an emergent set of semiotic resources that reflects life trajectories located in specific times and spaces. From this theoretical perspective, it analyzes dimensions of the configuration of the communicative repertoires of Indigenous individuals in the postcolonial contemporaneity. The empirical data under analysis was generated in a linguistic education context and consists of oral, written and multimodal registers of emerging interactions on the production and presentation of linguistic portraits. The analysis aims to highlight the pedagogical relevance of (self)representation of linguistic repertoires as a starting point for language education and as a research tool on linguistic resources, practices and ideologies.

  9. The research data collected for the dissertation "The impact of multimodal language teaching on learner autonomy, motivation, and free time language usage"

    • jyx.jyu.fi
    Updated Feb 20, 2025
    Cite
    Marjo Markkanen (2025). The research data collected for the dissertation "The impact of multimodal language teaching on learner autonomy, motivation, and free time language usage" (Finnish title: Väitöstutkimusta "Monimediaisen kieltenopetuksen vaikutuksia oppijan autonomiaan, motivaatioon ja vapaa-ajan kielen käyttöön" varten kerätty aineisto) [Dataset]. http://doi.org/10.17011/jyx/dataset/79166
    Explore at:
    Dataset updated
    Feb 20, 2025
    Authors
    Marjo Markkanen
    License

    https://rightsstatements.org/page/InC/1.0/

    Description

    The research data were collected for the dissertation "The impact of multimodal language teaching on learner autonomy, motivation, and free time language usage". The data were collected from a group of students in a B2 German class during Years 8 and 9 at a comprehensive school (N = 14) and comprise four questionnaires, two interviews, eight learning tasks with related outputs and feedback questionnaires, and the learners' visual narratives.
