100+ datasets found
  1. School Learning Modalities, 2021-2022

    • catalog.data.gov
    • datahub.hhs.gov
    • +4more
    Updated Mar 26, 2025
    Cite
    Centers for Disease Control and Prevention (2025). School Learning Modalities, 2021-2022 [Dataset]. https://catalog.data.gov/dataset/school-learning-modalities
    Explore at:
    Dataset updated
    Mar 26, 2025
    Dataset provided by
    Centers for Disease Control and Prevention
    Description

    The 2021-2022 School Learning Modalities dataset provides weekly estimates of school learning modality (including in-person, remote, or hybrid learning) for U.S. K-12 public and independent charter school districts for the 2021-2022 school year and the Fall 2022 semester, from August 2021 – December 2022. These data were modeled using multiple sources of input data (see below) to infer the most likely learning modality of a school district for a given week. These data should be considered district-level estimates and may not always reflect true learning modality, particularly for districts in which data are unavailable. If a district reports multiple modality types within the same week, the modality offered for the majority of those days is reflected in the weekly estimate. All school district metadata are sourced from the National Center for Educational Statistics (NCES) for 2020-2021.

    School learning modality types are defined as follows:
    • In-Person: All schools within the district offer face-to-face instruction 5 days per week to all students at all available grade levels.
    • Remote: Schools within the district do not offer face-to-face instruction; all learning is conducted online/remotely to all students at all available grade levels.
    • Hybrid: Schools within the district offer a combination of in-person and remote learning; face-to-face instruction is offered less than 5 days per week, or only to a subset of students.

    Data Information
    School learning modality data provided here are model estimates using combined input data and are not guaranteed to be 100% accurate. This learning modality dataset was generated by combining data from four different sources: Burbio [1], MCH Strategic Data [2], the AEI/Return to Learn Tracker [3], and state dashboards [4-20]. These data were combined using a Hidden Markov model which infers the sequence of learning modalities (In-Person, Hybrid, or Remote) for each district that is most likely to produce the modalities reported by these sources. This model was trained using data from the 2020-2021 school year. Metadata describing the location, number of schools and number of students in each district comes from NCES [21]. You can read more about the model in the CDC MMWR: COVID-19–Related School Closures and Learning Modality Changes — United States, August 1–September 17, 2021. The metrics listed for each school learning modality reflect totals by district and the number of enrolled students per district for which data are available.

    School districts represented here exclude private schools and include the following NCES subtypes:
    • Public school district that is NOT a component of a supervisory union
    • Public school district that is a component of a supervisory union
    • Independent charter district
    “BI” in the state column refers to school districts funded by the Bureau of Indian Education.

    Technical Notes
    Data from August 1, 2021 to June 24, 2022 correspond to the 2021-2022 school year. During this time frame, data from the AEI/Return to Learn Tracker and most state dashboards were not available. Inferred modalities with a probability below 0.6 were deemed inconclusive and were omitted. During the Fall 2022 semester, modalities for districts with a school closure reported by Burbio were updated to either “Remote”, if the closure spanned the entire week, or “Hybrid”, if the closure spanned 1-4 days of the week. Data from August
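
    The weekly estimates can be summarized directly from the published table. Below is a minimal pandas sketch, assuming the dataset has been downloaded locally as school_learning_modalities_2021_2022.csv and that it exposes week and learning_modality columns; the file name and column names are assumptions for illustration, not confirmed by this listing.

    ```python
    # Minimal sketch: weekly share of districts by learning modality.
    # The CSV path and the "week"/"learning_modality" column names are
    # assumed placeholders; adjust them to match the actual download.
    import pandas as pd

    df = pd.read_csv("school_learning_modalities_2021_2022.csv")

    # Count districts per modality for each week, then convert to percentages.
    weekly = (
        df.groupby(["week", "learning_modality"])
          .size()
          .unstack(fill_value=0)
    )
    weekly_pct = weekly.div(weekly.sum(axis=1), axis=0) * 100

    print(weekly_pct.round(1).head())
    ```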

  2. School Learning Modalities, 2020-2021

    • s.cnmilf.com
    • datahub.hhs.gov
    • +3more
    Updated Mar 26, 2025
    Cite
    Centers for Disease Control and Prevention (2025). School Learning Modalities, 2020-2021 [Dataset]. https://s.cnmilf.com/user74170196/https/catalog.data.gov/dataset/school-learning-modalities-2020-2021
    Explore at:
    Dataset updated
    Mar 26, 2025
    Dataset provided by
    Centers for Disease Control and Prevention
    Description

    The 2020-2021 School Learning Modalities dataset provides weekly estimates of school learning modality (including in-person, remote, or hybrid learning) for U.S. K-12 public and independent charter school districts for the 2020-2021 school year, from August 2020 – June 2021. These data were modeled using multiple sources of input data (see below) to infer the most likely learning modality of a school district for a given week. These data should be considered district-level estimates and may not always reflect true learning modality, particularly for districts in which data are unavailable. If a district reports multiple modality types within the same week, the modality offered for the majority of those days is reflected in the weekly estimate. All school district metadata are sourced from the National Center for Educational Statistics (NCES) for 2020-2021.

    School learning modality types are defined as follows:
    • In-Person: All schools within the district offer face-to-face instruction 5 days per week to all students at all available grade levels.
    • Remote: Schools within the district do not offer face-to-face instruction; all learning is conducted online/remotely to all students at all available grade levels.
    • Hybrid: Schools within the district offer a combination of in-person and remote learning; face-to-face instruction is offered less than 5 days per week, or only to a subset of students.

    Data Information
    School learning modality data provided here are model estimates using combined input data and are not guaranteed to be 100% accurate. This learning modality dataset was generated by combining data from four different sources: Burbio [1], MCH Strategic Data [2], the AEI/Return to Learn Tracker [3], and state dashboards [4-20]. These data were combined using a Hidden Markov model which infers the sequence of learning modalities (In-Person, Hybrid, or Remote) for each district that is most likely to produce the modalities reported by these sources. This model was trained using data from the 2020-2021 school year. Metadata describing the location, number of schools and number of students in each district comes from NCES [21]. You can read more about the model in the CDC MMWR: COVID-19–Related School Closures and Learning Modality Changes — United States, August 1–September 17, 2021. The metrics listed for each school learning modality reflect totals by district and the number of enrolled students per district for which data are available.

    School districts represented here exclude private schools and include the following NCES subtypes:
    • Public school district that is NOT a component of a supervisory union
    • Public school district that is a component of a supervisory union
    • Independent charter district
    “BI” in the state column refers to school districts funded by the Bureau of Indian Education.

    Technical Notes
    Data from September 1, 2020 to June 25, 2021 correspond to the 2020-2021 school year. During this timeframe, all four sources of data were available. Inferred modalities with a probability below 0.75 were deemed inconclusive and were omitted. Data for the month of July may show “In Person” status although most school districts are effectively closed during this time for summer break. Users may wish to exclude July data from use for this reason where applicable.

    Sources
    K-12 School Opening Tracker. Burbio 2021; https

  3. full-modality-data

    • huggingface.co
    Updated Mar 23, 2025
    Cite
    Nguyen Quang Trung (2025). full-modality-data [Dataset]. https://huggingface.co/datasets/ngqtrung/full-modality-data
    Explore at:
    Dataset updated
    Mar 23, 2025
    Authors
    Nguyen Quang Trung
    Description

    ngqtrung/full-modality-data dataset hosted on Hugging Face and contributed by the HF Datasets community

  4. School Learning Modalities, 2020-2021 | gimi9.com

    • gimi9.com
    Updated Mar 7, 2023
    Cite
    (2023). School Learning Modalities, 2020-2021 | gimi9.com [Dataset]. https://gimi9.com/dataset/data-gov_school-learning-modalities-2020-2021/
    Explore at:
    Dataset updated
    Mar 7, 2023
    Description

    The 2020-2021 School Learning Modalities dataset provides weekly estimates of school learning modality (including in-person, remote, or hybrid learning) for U.S. K-12 public and independent charter school districts for the 2020-2021 school year, from August 2020 – June 2021. These data were modeled using multiple sources of input data (see below) to infer the most likely learning modality of a school district for a given week. These data should be considered district-level estimates and may not always reflect true learning modality, particularly for districts in which data are unavailable. If a district reports multiple modality types within the same week, the modality offered for the majority of those days is reflected in the weekly estimate. All school district metadata are sourced from the National Center for Educational Statistics (NCES) for 2020-2021. School learning modality types are defined as follows: In-Person: All schools within the district offer face-to-face instruction 5 days per week to all students at all available grade levels.

  5. Student Enrollment and Attendance Data by Teaching Modality - 2020 - 2021

    • opendata.winchesterva.gov
    • data.virginia.gov
    xlsx
    Updated Jul 23, 2024
    Cite
    Virginia State Data (2024). Student Enrollment and Attendance Data by Teaching Modality - 2020 - 2021 [Dataset]. https://opendata.winchesterva.gov/dataset/student-enrollment-and-attendance-data-by-teaching-modality-2020-2021
    Explore at:
    Available download formats: xlsx
    Dataset updated
    Jul 23, 2024
    Dataset provided by
    United States Department of Education (http://ed.gov/)
    Authors
    Virginia State Data
    Description

    Student enrollment data disaggregated by students from low-income families, students from each racial and ethnic group, gender, English learners, children with disabilities, children experiencing homelessness, children in foster care, and migratory students for each mode of instruction.

  6. Modality-based Multitasking and Practice - fMRI

    • openneuro.org
    Updated Mar 21, 2024
    Cite
    Marie Mueckstein; Kai Görgen; Stephan Heinzel; Urs Granacher; A. Michael Rapp; Christine Stelzel (2024). Modality-based Multitasking and Practice - fMRI [Dataset]. http://doi.org/10.18112/openneuro.ds005038.v1.0.2
    Explore at:
    Dataset updated
    Mar 21, 2024
    Dataset provided by
    OpenNeuro (https://openneuro.org/)
    Authors
    Marie Mueckstein; Kai Görgen; Stephan Heinzel; Urs Granacher; A. Michael Rapp; Christine Stelzel
    License

    CC0 1.0 Universal Public Domain Dedication (https://creativecommons.org/publicdomain/zero/1.0/)
    License information was derived automatically

    Description

    This dataset contains the raw fMRI data of a preregistered study. The dataset includes:

    session pre
    1. anat/: anatomical scans (T1-weighted images) for each subject
    2. func/: whole-brain EPI data from all task runs (8x single task, 2x dual task, 1x resting state and 2x localizer task)
    3. fmap/: fieldmaps with magnitude1, magnitude2 and phasediff

    session post
    1. func/: whole-brain EPI data from all task runs (8x single task, 2x dual task)
    2. fmap/: fieldmaps with magnitude1, magnitude2 and phasediff

    Please note, some participants did not complete the post session. We updated our consent form to obtain explicit permission to publish individual data, but not all participants signed the new version. Those participants are excluded here but are part of the t-maps on NeuroVault (compare participants.tsv).

    Tasks always included visual and/or auditory input and required manual and/or vocal responses (visual+manual and auditory+vocal are modality compatible; visual+vocal and auditory+manual are modality incompatible). Tasks were presented either as single tasks or as dual tasks. Participants completed a practice intervention prior to the post session in which one group worked for 80 minutes outside the scanner on modality-incompatible dual tasks, one on modality-compatible dual tasks, and the third paused for 80 minutes.

    For the exact task descriptions, materials, and scripts, please see the preregistration: https://osf.io/whpz8
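
    As a quick orientation to the folder layout described above, the following is a minimal nibabel sketch for inspecting one anatomical scan and one functional run; the subject, session, and file names are hypothetical BIDS-style placeholders, not paths confirmed by this listing.

    ```python
    # Minimal sketch: inspect one anatomical scan and one functional run
    # from the BIDS-style layout described above. Paths are hypothetical
    # placeholders; substitute a real subject/session/run from the dataset.
    import nibabel as nib

    anat = nib.load("sub-01/ses-pre/anat/sub-01_ses-pre_T1w.nii.gz")
    func = nib.load("sub-01/ses-pre/func/sub-01_ses-pre_task-single_run-01_bold.nii.gz")

    print("T1w shape:", anat.shape)              # spatial dimensions
    print("BOLD shape:", func.shape)             # spatial dimensions + volumes
    print("T1w voxel sizes:", anat.header.get_zooms())
    ```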

  7. Learning modalities reported by source.

    • plos.figshare.com
    xls
    Updated Oct 4, 2023
    Cite
    Mark J. Panaggio; Mike Fang; Hyunseung Bang; Paige A. Armstrong; Alison M. Binder; Julian E. Grass; Jake Magid; Marc Papazian; Carrie K. Shapiro-Mendoza; Sharyn E. Parks (2023). Learning modalities reported by source. [Dataset]. http://doi.org/10.1371/journal.pone.0292354.t001
    Explore at:
    Available download formats: xls
    Dataset updated
    Oct 4, 2023
    Dataset provided by
    PLOS ONE
    Authors
    Mark J. Panaggio; Mike Fang; Hyunseung Bang; Paige A. Armstrong; Alison M. Binder; Julian E. Grass; Jake Magid; Marc Papazian; Carrie K. Shapiro-Mendoza; Sharyn E. Parks
    License

    Attribution 4.0 (CC BY 4.0) (https://creativecommons.org/licenses/by/4.0/)
    License information was derived automatically

    Description

    During the COVID-19 pandemic, many public schools across the United States shifted from fully in-person learning to alternative learning modalities such as hybrid and fully remote learning. In this study, data from 14,688 unique school districts from August 2020 to June 2021 were collected to track changes in the proportion of schools offering fully in-person, hybrid and fully remote learning over time. These data were provided by Burbio, MCH Strategic Data, the American Enterprise Institute’s Return to Learn Tracker and individual state dashboards. Because the modalities reported by these sources were incomplete and occasionally misaligned, a model was needed to combine and deconflict these data to provide a more comprehensive description of modalities nationwide. A hidden Markov model (HMM) was used to infer the most likely learning modality for each district on a weekly basis. This method yielded higher spatiotemporal coverage than any individual data source and higher agreement with three of the four data sources than any other single source. The model output revealed that the percentage of districts offering fully in-person learning rose from 40.3% in September 2020 to 54.7% in June of 2021 with increases across 45 states and in both urban and rural districts. This type of probabilistic model can serve as a tool for fusion of incomplete and contradictory data sources in order to obtain more reliable data in support of public health surveillance and research efforts.
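
    To make the fusion step concrete, here is a small, self-contained Viterbi sketch in the spirit of the HMM described above; the three-state setup, the transition and emission probabilities, and the observation sequence are illustrative assumptions, not the parameters fitted in the paper.

    ```python
    # Illustrative Viterbi decoding over three learning-modality states.
    # All probabilities below are made-up placeholders, not the fitted model.
    import numpy as np

    states = ["In-Person", "Hybrid", "Remote"]
    start = np.array([0.5, 0.3, 0.2])                 # initial state probabilities
    trans = np.array([[0.90, 0.07, 0.03],             # P(next state | current state)
                      [0.10, 0.80, 0.10],
                      [0.05, 0.15, 0.80]])
    emit = np.array([[0.85, 0.10, 0.05],              # P(reported modality | true state)
                     [0.20, 0.70, 0.10],
                     [0.05, 0.15, 0.80]])

    # Weekly modality reports from a single source, encoded as state indices.
    obs = [0, 0, 1, 1, 2, 2, 1, 0]

    # Forward pass: best log-probability of any path ending in each state.
    logp = np.log(start) + np.log(emit[:, obs[0]])
    back = []
    for o in obs[1:]:
        scores = logp[:, None] + np.log(trans)        # best predecessor per state
        back.append(scores.argmax(axis=0))
        logp = scores.max(axis=0) + np.log(emit[:, o])

    # Backtrack the most likely state sequence.
    path = [int(logp.argmax())]
    for ptr in reversed(back):
        path.append(int(ptr[path[-1]]))
    path.reverse()

    print([states[i] for i in path])
    ```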

  8. Data underlying the publication: Robust Multi-Modal Density Estimation

    • data.4tu.nl
    zip
    Cite
    Anna Mészáros; Julian Schumann, Data underlying the publication: Robust Multi-Modal Density Estimation [Dataset]. http://doi.org/10.4121/61f283ae-c30c-42d1-9a7c-89b454e013b3.v1
    Explore at:
    Available download formats: zip
    Dataset provided by
    4TU.ResearchData
    Authors
    Anna Mészáros; Julian Schumann
    License

    CC0 1.0 Universal Public Domain Dedication (https://creativecommons.org/publicdomain/zero/1.0/)
    License information was derived automatically

    Dataset funded by
    NWO
    Description

    This dataset contains three folders related to the samples used for validating the approach proposed in "ROME: Robust Multi-Modal Density Estimation". The folder "Fitted Distributions" contains the distributions obtained by using both ROME and the other methods we compare against in the publication. "Log Likelihoods" contains the likelihood values for the samples that make up the previously mentioned distributions. Lastly, "Results" contains the metric values which provide the basis for the final data analysis and results reported in the publication.


    This data is to be used in conjunction with the code available at https://github.com/anna-meszaros/ROME/tree/main, and contains the results obtained using this code.

  9. Monthly Modal Time Series

    • catalog.data.gov
    • data.transportation.gov
    • +3more
    Updated Jul 8, 2025
    Cite
    Federal Transit Administration (2025). Monthly Modal Time Series [Dataset]. https://catalog.data.gov/dataset/monthly-modal-time-series
    Explore at:
    Dataset updated
    Jul 8, 2025
    Dataset provided by
    Federal Transit Administration
    Description

    Modal Service data and Safety & Security (S&S) public transit time series data delineated by transit/agency/mode/year/month. Includes all Full Reporters--transit agencies operating modes with more than 30 vehicles in maximum service--to the National Transit Database (NTD). This dataset will be updated monthly. The monthly ridership data is released one month after the month in which the service is provided. Records with null monthly service data reflect late reporting. The S&S statistics provided include both Major and Non-Major Events where applicable. Events occurring in the past three months are excluded from the corresponding monthly ridership rows in this dataset while they undergo validation. This dataset is the only NTD publication in which all Major and Non-Major S&S data are presented without any adjustment for historical continuity.

  10. Mode of travel

    • gov.uk
    Updated Apr 16, 2025
    Cite
    Department for Transport (2025). Mode of travel [Dataset]. https://www.gov.uk/government/statistical-data-sets/nts03-modal-comparisons
    Explore at:
    Dataset updated
    Apr 16, 2025
    Dataset provided by
    GOV.UK (http://gov.uk/)
    Authors
    Department for Transport
    Description

    Accessible Tables and Improved Quality

    As part of the Analysis Function Reproducible Analytical Pipeline Strategy, processes to create all National Travel Survey (NTS) statistics tables have been improved to follow the principles of Reproducible Analytical Pipelines (RAP). This has improved the efficiency and quality of NTS tables, and as a result some historical estimates have seen very minor changes, at the fifth decimal place or beyond.

    All NTS tables have also been redesigned in an accessible format so that they can be used by as many people as possible, including people with impaired vision, motor difficulties, cognitive impairments or learning disabilities, and deafness or impaired hearing.

    If you wish to provide feedback on these changes then please email national.travelsurvey@dft.gov.uk.

    Revision to table NTS9919

    On 16 April 2025, the figures in table NTS9919 were revised and recalculated to include only day 1 of the travel diary, where short walks of less than a mile are recorded (from 2017 onwards), whereas previous versions included all days. This change more accurately captures the proportion of trips which include short walks before a surface rail stage. The revision has resulted in fewer available breakdowns than previously published due to the smaller sample sizes.

    Trips, stages, distance and time spent travelling

    NTS0303: Average number of trips, stages, miles and time spent travelling by mode: England, 2002 onwards (ODS, 53.9 KB) - https://assets.publishing.service.gov.uk/media/66ce0f118e33f28aae7e1f75/nts0303.ods

    NTS0308: Average number of trips and distance travelled by trip length and main mode; England, 2002 onwards (ODS, 191 KB) - https://assets.publishing.service.gov.uk/media/66ce0f128e33f28aae7e1f76/nts0308.ods

    NTS0312: Walks of 20 minutes or more by age and frequency: England, 2002 onwards (ODS, 35.1 KB) - https://assets.publishing.service.gov.uk/media/66ce0f12bc00d93a0c7e1f71/nts0312.ods

    NTS0313: Frequency of use of different transport modes: England, 2003 onwards (ODS, 27.1 KB) - https://assets.publishing.service.gov.uk/media/66ce0f12bc00d93a0c7e1f72/nts0313.ods

    NTS0412: Commuter trips and distance by employment status and main mode: England, 2002 onwards (ODS, 53.8 KB) - https://assets.publishing.service.gov.uk/media/66ce0f1325c035a11941f653/nts0412.ods

    NTS0504: Average number of trips by day of the week or month and purpose or main mode: England, 2002 onwards (ODS, 141 KB) - https://assets.publishing.service.gov.uk/media/66ce0f141aaf41b21139cf7d/nts0504.ods


  11. Replication Data for: The Choice of Aspect in the Russian Modal Construction...

    • search.dataone.org
    • dataverse.no
    Updated Jan 5, 2024
    Cite
    Bernasconi, Beatrice (2024). Replication Data for: The Choice of Aspect in the Russian Modal Construction with prixodit'sja/prijtis' [Dataset]. http://doi.org/10.18710/KR5RRK
    Explore at:
    Dataset updated
    Jan 5, 2024
    Dataset provided by
    DataverseNO
    Authors
    Bernasconi, Beatrice
    Time period covered
    Jan 1, 1950 - Jan 1, 2020
    Description

    This dataset includes all the data files that were used for the studies in my Master Thesis: "The Choice of Aspect in the Russian Modal Construction with prixodit'sja/prijtis'". The data files are numbered so that they are shown in the same order as they are presented in the thesis. They include the database and the code used for the statistical analysis. Their contents are described in the ReadMe files. The core of the work is a quantitative and empirical study on the choice of aspect by Russian native speakers in the modal construction prixodit’sja/prijtis’ + inf. The hypothesis is that in the modal construction prixodit’sja/prijtis’ + inf the aspect of the infinitive is not fully determined by grammatical context but is, to some extent, open to construal.

    A preliminary analysis was carried out on data gathered from the Russian National Corpus (www.ruscorpora.ru). Four hundred and forty-seven examples with the verb prijtis' were annotated manually for several factors and a statistical test (CART) was run. Results demonstrated that no grammatical factor plays a big role in the use of one aspect rather than the other. Data for this study can be consulted in the files from 01 to 03 and include a ReadMe file, the database in .csv format and the code used for the statistical test.

    An experiment with native speakers was then carried out. A hundred and ten native speakers of Russian were surveyed and asked to evaluate the acceptability of the infinitive in examples with prixodit’sja/prijtis’ delat’/sdelat’ šag/vid/vybor. The survey presented seventeen examples from the Russian National Corpus that were submitted two times: the first time with the same aspect as in the original version, the second time with the other aspect. Participants had to evaluate each case by choosing among “Impossible”, “Acceptable” and “Excellent” ratings. They were also allowed to give their opinion about the difference between aspects in each example. A logistic regression with mixed effects was run on the answers. Data for this study can be consulted in the files from 04 to 010 and include a ReadMe file, the text and the answers of the questionnaire, the database in .csv, .txt and .pdf formats, and the code used for the statistical test.

    Results showed that prijtis’ often admits both aspects in the infinitive, while prixodit’sja is more restrictive and prefers the imperfective. Overall, “Acceptable” and “Excellent” responses were more frequent than “Impossible” responses for both aspects, even when the aspect evaluated did not match the original. Personal opinions showed that the choice of aspect often depends on the meaning the speaker wants to convey; only in very few cases was the grammatical context considered to be a constraint on the choice.
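
    To reproduce the first analysis in spirit, the sketch below fits a CART-style decision tree with scikit-learn, assuming the annotated database has been exported to a CSV with one row per corpus example; the file name and the factor and target column names are hypothetical, since the actual variables are documented in the dataset's ReadMe files, and the thesis itself ran CART in a different environment.

    ```python
    # Minimal CART sketch on manually annotated corpus examples.
    # The file name and column names are hypothetical placeholders; see the
    # dataset's ReadMe files for the actual variables.
    import pandas as pd
    from sklearn.tree import DecisionTreeClassifier, export_text

    df = pd.read_csv("prijtis_annotated_examples.csv")

    factors = ["negation", "tense", "temporal_adverbial", "infinitive_lemma"]
    X = pd.get_dummies(df[factors])      # one-hot encode categorical factors
    y = df["aspect"]                     # e.g. "perfective" / "imperfective"

    tree = DecisionTreeClassifier(max_depth=3, min_samples_leaf=20, random_state=0)
    tree.fit(X, y)

    print(export_text(tree, feature_names=list(X.columns)))
    ```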

  12. Data_Sheet_1.docx

    • frontiersin.figshare.com
    docx
    Updated Jun 1, 2023
    Cite
    Xiujun Li; Xudong Zhao; Wendian Shi; Yang Lu; Christopher M. Conway (2023). Data_Sheet_1.docx [Dataset]. http://doi.org/10.3389/fpsyg.2018.00146.s001
    Explore at:
    Available download formats: docx
    Dataset updated
    Jun 1, 2023
    Dataset provided by
    Frontiers
    Authors
    Xiujun Li; Xudong Zhao; Wendian Shi; Yang Lu; Christopher M. Conway
    License

    Attribution 4.0 (CC BY 4.0) (https://creativecommons.org/licenses/by/4.0/)
    License information was derived automatically

    Description

    A current controversy in the area of implicit statistical learning (ISL) is whether this process consists of a single, central mechanism or multiple modality-specific ones. To provide insight into this question, the current study involved three ISL experiments to explore whether multimodal input sources are processed separately in each modality or are integrated together across modalities. In Experiment 1, visual and auditory ISL were measured under unimodal conditions, with the results providing a baseline level of learning for subsequent experiments. Visual and auditory sequences were presented separately, and the underlying grammar used for both modalities was the same. In Experiment 2, visual and auditory sequences were presented simultaneously with each modality using the same artificial grammar to investigate whether redundant multisensory information would result in a facilitative effect (i.e., increased learning) compared to the baseline. In Experiment 3, visual and auditory sequences were again presented simultaneously but this time with each modality employing different artificial grammars to investigate whether an interference effect (i.e., decreased learning) would be observed compared to the baseline. Results showed that there was neither a facilitative learning effect in Experiment 2 nor an interference effect in Experiment 3. These findings suggest that participants were able to track simultaneously and independently two sets of sequential regularities under dual-modality conditions. These findings are consistent with the theories that posit the existence of multiple, modality-specific ISL mechanisms rather than a single central one.

  13. Data from: Towards Cross-Modality Modeling for Time Series Analytics: A...

    • researchdata.ntu.edu.sg
    Updated May 13, 2025
    Cite
    DR-NTU (Data) (2025). Towards Cross-Modality Modeling for Time Series Analytics: A Survey in the LLM Era [Dataset]. http://doi.org/10.21979/N9/I0HOYZ
    Explore at:
    Dataset updated
    May 13, 2025
    Dataset provided by
    DR-NTU (Data)
    License

    https://researchdata.ntu.edu.sg/api/datasets/:persistentId/versions/1.0/customlicense?persistentId=doi:10.21979/N9/I0HOYZ

    Description

    The proliferation of edge devices has generated an unprecedented volume of time series data across different domains, motivating various well-customized methods. Recently, Large Language Models (LLMs) have emerged as a new paradigm for time series analytics by leveraging the shared sequential nature of textual data and time series. However, a fundamental cross-modality gap between time series and LLMs exists, as LLMs are pre-trained on textual corpora and are not inherently optimized for time series. Many recent proposals are designed to address this issue. In this survey, we provide an up-to-date overview of LLMs-based cross-modality modeling for time series analytics. We first introduce a taxonomy that classifies existing approaches into four groups based on the type of textual data employed for time series modeling. We then summarize key cross-modality strategies, e.g., alignment and fusion, and discuss their applications across a range of downstream tasks. Furthermore, we conduct experiments on multimodal datasets from different application domains to investigate effective combinations of textual data and cross-modality strategies for enhancing time series analytics. Finally, we suggest several promising directions for future research. This survey is designed for a range of professionals, researchers, and practitioners interested in LLM-based time series modeling.

  14. Data from: MMIFR: Multimodal Industry Focused Data Repository

    • zenodo.org
    • data.niaid.nih.gov
    Updated Jun 16, 2023
    Cite
    Li Shiqi; Wei Xujun; Fang Zhijun; Chen Mingxuan; Li Shiqi; Wei Xujun; Fang Zhijun; Chen Mingxuan (2023). MMIFR: Multimodal Industry Focused Data Repository [Dataset]. http://doi.org/10.5281/zenodo.8045588
    Explore at:
    Dataset updated
    Jun 16, 2023
    Dataset provided by
    Zenodo (http://zenodo.org/)
    Authors
    Li Shiqi; Wei Xujun; Fang Zhijun; Chen Mingxuan; Li Shiqi; Wei Xujun; Fang Zhijun; Chen Mingxuan
    Description

    The MMIFR data repository consists of three distinct components: MMIFR-D, MMIFR-FS, and MMIFR-P. The MMIFR-D dataset comprises a comprehensive assemblage of 5907 images accompanied by corresponding textual descriptions, notably facilitating the application of industrial equipment classification. In contrast, the MMIFR-FS dataset serves as an alternative variant characterized by the inclusion of 129 distinct classes and 5907 images, specifically catering to the task of few-shot learning within the industrial domain. Additionally, the MMIFR-P dataset consists of 142 textual-visual information pairs, making it suitable for detecting pairs of industrial equipment.

  15. Multi-Modal Imaging Data-Integration Market Research Report 2033

    • growthmarketreports.com
    csv, pdf, pptx
    Updated Jul 5, 2025
    Cite
    Growth Market Reports (2025). Multi-Modal Imaging Data-Integration Market Research Report 2033 [Dataset]. https://growthmarketreports.com/report/multi-modal-imaging-data-integration-market
    Explore at:
    Available download formats: pptx, csv, pdf
    Dataset updated
    Jul 5, 2025
    Dataset authored and provided by
    Growth Market Reports
    Time period covered
    2024 - 2032
    Area covered
    Global
    Description

    Multi-Modal Imaging Data-Integration Market Outlook




    According to our latest research, the global Multi-Modal Imaging Data-Integration market size reached USD 1.67 billion in 2024. The market is expected to expand at a robust CAGR of 9.5% during the forecast period, reaching a projected value of USD 3.78 billion by 2033. This impressive growth is driven by the increasing demand for integrated imaging solutions in clinical diagnostics and research, as well as technological advancements in imaging modalities and data analytics platforms. As per our detailed analysis, the integration of multiple imaging modalities is revolutionizing the way healthcare professionals diagnose and treat complex diseases, offering comprehensive insights that single-modality imaging cannot provide.




    One of the primary growth factors propelling the Multi-Modal Imaging Data-Integration market is the rising prevalence of chronic diseases such as cancer, cardiovascular disorders, and neurological conditions. These diseases often require precise and multifaceted diagnostic approaches, which multi-modal imaging excels at delivering. By combining data from modalities like MRI, CT, PET, and ultrasound, clinicians can achieve a more holistic view of patient pathology, leading to improved treatment planning and patient outcomes. Moreover, the increasing adoption of personalized medicine is further driving the need for integrated imaging data, as tailored therapeutic strategies rely heavily on accurate, multi-dimensional diagnostic information.




    Another significant driver is the rapid technological evolution in both imaging hardware and software. Innovations such as artificial intelligence (AI) and machine learning are enabling more effective integration and interpretation of complex imaging datasets. Advanced integration techniques, including software-based and hybrid solutions, are making it feasible to seamlessly combine anatomical, functional, and molecular information from various imaging platforms. This technological leap is not only enhancing diagnostic precision but also reducing the time and cost associated with traditional, single-modality imaging workflows. The ongoing investment in research and development by both public and private sectors is ensuring a steady pipeline of improvements in multi-modal imaging data-integration.




    The growing adoption of digital health solutions, including cloud-based imaging data repositories and telemedicine platforms, is also contributing to market expansion. Healthcare institutions are increasingly recognizing the value of integrated imaging data in facilitating remote consultations, multidisciplinary team discussions, and collaborative research. The shift toward value-based care models emphasizes outcomes and efficiency, making multi-modal data-integration an attractive proposition for hospitals, diagnostic centers, and research institutes. Additionally, regulatory support for interoperability and data standardization is gradually lowering barriers to adoption, fostering a more conducive environment for market growth.




    From a regional perspective, North America continues to dominate the Multi-Modal Imaging Data-Integration market, accounting for the largest revenue share in 2024. This leadership is attributed to the region’s advanced healthcare infrastructure, high adoption rates of cutting-edge imaging technologies, and significant investments in healthcare IT. Europe follows closely, benefiting from robust government initiatives and a strong focus on research collaborations. The Asia Pacific region is emerging as the fastest-growing market, driven by expanding healthcare access, rising investments in medical technology, and an increasing burden of chronic diseases. Latin America and the Middle East & Africa, while currently holding smaller shares, are expected to witness steady growth due to improving healthcare systems and rising awareness of integrated imaging benefits.





    Imaging Modality Analysis




    The Imaging Modality segment forms the b

  16. Data from: Typical Concept-Driven Modality-missing Deep Cross-Modal...

    • scidb.cn
    Updated Apr 24, 2025
    Cite
    Xia Xinyu; Zhu Lei; Nie Xiushan; Dong Guohua; Zhang Huaxiang (2025). Typical Concept-Driven Modality-missing Deep Cross-Modal Retrieval [Dataset]. http://doi.org/10.57760/sciencedb.24176
    Explore at:
    Croissant. Croissant is a format for machine-learning datasets; learn more about this at mlcommons.org/croissant.
    Dataset updated
    Apr 24, 2025
    Dataset provided by
    Science Data Bank
    Authors
    Xia Xinyu; Zhu Lei; Nie Xiushan; Dong Guohua; Zhang Huaxiang
    License

    https://api.github.com/licenses/mit

    Description

    Cross-modal retrieval takes data in one modality as a query and retrieves semantically relevant data in another modality. Most existing cross-modal retrieval methods are designed for scenarios with complete modality data; however, in real-world applications incomplete modality data is common, and these methods struggle to handle it effectively. In this paper, we propose a typical concept-driven modality-missing deep cross-modal retrieval model. Specifically, we first propose a multi-modal Transformer integrated with multi-modal pretraining networks, which fully captures the fine-grained multi-modal semantic interactions in incomplete modality data, extracts multi-modal fusion semantics, constructs a cross-modal subspace, and at the same time supervises the learning process to generate typical concepts. In addition, the typical concepts are used as the cross-attention key and value to drive the training of the modality mapping network, so that it adaptively preserves the implicit multi-modal semantic concepts of the query modality data, generates cross-modal retrieval features, and fully preserves the pre-extracted multi-modal fusion semantics. More information and the source code are available at: https://gitee.com/MrSummer123/CPCMR
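
    The role of the typical concepts as cross-attention keys and values can be illustrated with a small PyTorch fragment; this is a schematic sketch with assumed dimensions, not the authors' implementation (their source code is available at the Gitee link above).

    ```python
    # Schematic sketch: query-modality features attend over "typical concepts"
    # supplied as key and value. Dimensions and module choices are illustrative.
    import torch
    import torch.nn as nn

    embed_dim, n_heads, n_concepts, batch = 256, 4, 32, 8

    cross_attn = nn.MultiheadAttention(embed_dim, n_heads, batch_first=True)

    query_feats = torch.randn(batch, 1, embed_dim)          # one feature vector per query item
    concepts = torch.randn(n_concepts, embed_dim)           # learned typical concepts
    concepts = concepts.unsqueeze(0).expand(batch, -1, -1)  # shared across the batch

    # Query-modality features act as attention queries; the typical concepts
    # supply both keys and values, yielding cross-modal retrieval features.
    retrieval_feats, attn_weights = cross_attn(query_feats, concepts, concepts)
    print(retrieval_feats.shape)   # torch.Size([8, 1, 256])
    print(attn_weights.shape)      # torch.Size([8, 1, 32])
    ```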

  17. Replication Data for: When modality and tense meet. The future marker budet...

    • dataverse.azure.uit.no
    • dataverse.no
    • +1more
    Updated Nov 22, 2023
    Cite
    Elmira Zhamaletdinova; Elmira Zhamaletdinova (2023). Replication Data for: When modality and tense meet. The future marker budet ‘will’ in impersonal constructions with the modal adverb možno ‘be possible’ [Dataset]. http://doi.org/10.18710/MOJBDK
    Explore at:
    Available download formats: text/comma-separated-values (657010), txt (10575), text/comma-separated-values (54088)
    Dataset updated
    Nov 22, 2023
    Dataset provided by
    DataverseNO
    Authors
    Elmira Zhamaletdinova; Elmira Zhamaletdinova
    License

    https://dataverse.no/api/datasets/:persistentId/versions/1.0/customlicense?persistentId=doi:10.18710/MOJBDK

    Time period covered
    1826 - 2015
    Area covered
    Russian Federation
    Description

    Dataset description: This is a study of examples of Russian impersonal constructions with the modal word možno ‘can, be possible’ with and without the future copula budet ‘will be,’ i.e., možno + budet + INF and možno + INF. The data was collected in 2020-2021 from the old version of the Russian National Corpus (ruscorpora.ru). In the spreadsheet 01DataMoznoBudet, the data merges the results of four searches conducted to extract examples of sentences with the following construction types: možno + budet + INF.PFV, možno + budet + INF.IPFV, možno + INF.PFV and možno + INF.IPFV. The results for each search were downloaded, pseudorandomized, and the first 200 examples were manually annotated, based on the syntactic analyses given in the corpus. The syntactic and morphological categories used in the corpus are explained here: https://ruscorpora.ru/corpus/main. In the spreadsheet 01DataZavtraMoznoBudet, the data merges the results of four searches conducted to extract examples of sentences with the following structure: zavtra + možno + budet + INF.PFV, zavtra + možno + budet + INF.IPFV, zavtra + možno + INF.PFV and zavtra + možno + INF.IPFV. All of the examples (103 sentences) were imported to a spreadsheet and annotated manually, based on the syntactic analyses given in the corpus. The syntactic and morphological categories used in the corpus are explained here: https://ruscorpora.ru/corpus/main. Article abstract: This paper examines Russian impersonal constructions with the modal word možno ‘can, be possible’ with and without the future copula budet ‘will be,’ i.e., možno + budet + INF and možno + INF. My contribution can be summarized as follows. First, corpus-based evidence reveals that možno + INF constructions are vastly more frequent than constructions with copula. Second, the meaning of constructions without the future copula is more flexible: while the possibility is typically located in the present, the situation denoted by the infinitive may be located in the present or the future. Third, I show that the možno + INF construction is more ambiguous and can denote present, gnomic or future situations. Fourth, I identify a number of contextual factors that unambiguously locate the situation in the future. I demonstrate that such factors are more frequently used with the future copula, and thus motivate the choice between the two constructions. Finally, I illustrate the interpretations in a straightforward manner by means of schemas of the type used in cognitive linguistics.

  18. Multi-modality medical image dataset for medical image processing in Python...

    • zenodo.org
    zip
    Updated Aug 12, 2024
    Cite
    Candace Moore; Candace Moore; Giulia Crocioni; Giulia Crocioni (2024). Multi-modality medical image dataset for medical image processing in Python lesson [Dataset]. http://doi.org/10.5281/zenodo.13305760
    Explore at:
    Available download formats: zip
    Dataset updated
    Aug 12, 2024
    Dataset provided by
    Zenodo (http://zenodo.org/)
    Authors
    Candace Moore; Candace Moore; Giulia Crocioni; Giulia Crocioni
    License

    Attribution 4.0 (CC BY 4.0) (https://creativecommons.org/licenses/by/4.0/)
    License information was derived automatically

    Description

    This dataset contains a collection of medical imaging files for use in the "Medical Image Processing with Python" lesson, developed by the Netherlands eScience Center.

    The dataset includes:

    1. SimpleITK compatible files: MRI T1 and CT scans (training_001_mr_T1.mha, training_001_ct.mha), digital X-ray (digital_xray.dcm in DICOM format), neuroimaging data (A1_grayT1.nrrd, A1_grayT2.nrrd). Data have been downloaded from here.
    2. MRI data: a T2-weighted image (OBJECT_phantom_T2W_TSE_Cor_14_1.nii in NIfTI-1 format). Data have been downloaded from here.
    3. Example images for the machine learning lesson: chest X-rays (rotatechest.png, other_op.png), cardiomegaly example (cardiomegaly_cc0.png).
    4. Additional anonymized data: TBA

    These files represent various medical imaging modalities and formats commonly used in clinical research and practice. They are intended for educational purposes, allowing students to practice image processing techniques, machine learning applications, and statistical analysis of medical images using Python libraries such as scikit-image, pydicom, and SimpleITK.
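
    As a starting point for working with these files, the snippet below is a minimal sketch that opens two of the listed images with SimpleITK and pydicom; it assumes the archive has been extracted into the working directory and is illustrative only, not part of the published lesson.

    ```python
    # Minimal sketch: open two of the files listed above.
    # Assumes the archive has been extracted into the working directory.
    import SimpleITK as sitk
    import pydicom

    # MRI T1 volume in MetaImage (.mha) format
    t1 = sitk.ReadImage("training_001_mr_T1.mha")
    t1_array = sitk.GetArrayFromImage(t1)        # numpy array, ordered z, y, x
    print("T1 size:", t1.GetSize(), "spacing:", t1.GetSpacing())

    # Digital X-ray in DICOM format
    xray = pydicom.dcmread("digital_xray.dcm")
    print("X-ray pixel array shape:", xray.pixel_array.shape)
    ```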

  19. Main concerns related to distance learning modality during COVID-19...

    • statista.com
    Updated Jan 13, 2025
    Cite
    Statista (2025). Main concerns related to distance learning modality during COVID-19 Philippines 2021 [Dataset]. https://www.statista.com/statistics/1262570/philippines-major-concerns-related-to-distance-learning-modality-during-covid-19/
    Explore at:
    Dataset updated
    Jan 13, 2025
    Dataset authored and provided by
    Statista (http://statista.com/)
    Time period covered
    Apr 4, 2021 - Apr 13, 2021
    Area covered
    Philippines
    Description

    According to a survey in 2021, 45 percent of Filipino respondents were concerned about limited or no access to gadgets or devices for distance learning during the coronavirus (COVID-19) pandemic in the Philippines. On the other hand, 42 percent were concerned about learning losses or a general decline in knowledge and skills.

  20. Modality-Interference-in-MLLMs-DATA

    • huggingface.co
    Updated May 4, 2025
    Cite
    Luis Rui (2025). Modality-Interference-in-MLLMs-DATA [Dataset]. https://huggingface.co/datasets/luisrui/Modality-Interference-in-MLLMs-DATA
    Explore at:
    Dataset updated
    May 4, 2025
    Authors
    Luis Rui
    Description

    luisrui/Modality-Interference-in-MLLMs-DATA dataset hosted on Hugging Face and contributed by the HF Datasets community
