100+ datasets found
  1. School Learning Modalities, 2021-2022

    • s.cnmilf.com
    • datahub.hhs.gov
    • +4more
    Updated Mar 26, 2025
    + more versions
    Cite
    Centers for Disease Control and Prevention (2025). School Learning Modalities, 2021-2022 [Dataset]. https://s.cnmilf.com/user74170196/https/catalog.data.gov/dataset/school-learning-modalities
    Explore at:
    Dataset updated
    Mar 26, 2025
    Dataset provided by
    Centers for Disease Control and Prevention
    Description

    The 2021-2022 School Learning Modalities dataset provides weekly estimates of school learning modality (including in-person, remote, or hybrid learning) for U.S. K-12 public and independent charter school districts for the 2021-2022 school year and the Fall 2022 semester, from August 2021 – December 2022. These data were modeled using multiple sources of input data (see below) to infer the most likely learning modality of a school district for a given week. These data should be considered district-level estimates and may not always reflect true learning modality, particularly for districts in which data are unavailable. If a district reports multiple modality types within the same week, the modality offered for the majority of those days is reflected in the weekly estimate. All school district metadata are sourced from the National Center for Educational Statistics (NCES) for 2020-2021.

    School learning modality types are defined as follows:
    • In-Person: All schools within the district offer face-to-face instruction 5 days per week to all students at all available grade levels.
    • Remote: Schools within the district do not offer face-to-face instruction; all learning is conducted online/remotely to all students at all available grade levels.
    • Hybrid: Schools within the district offer a combination of in-person and remote learning; face-to-face instruction is offered less than 5 days per week, or only to a subset of students.

    Data Information
    School learning modality data provided here are model estimates using combined input data and are not guaranteed to be 100% accurate. This learning modality dataset was generated by combining data from four different sources: Burbio [1], MCH Strategic Data [2], the AEI/Return to Learn Tracker [3], and state dashboards [4-20]. These data were combined using a Hidden Markov model which infers the sequence of learning modalities (In-Person, Hybrid, or Remote) for each district that is most likely to produce the modalities reported by these sources. This model was trained using data from the 2020-2021 school year. Metadata describing the location, number of schools and number of students in each district comes from NCES [21]. You can read more about the model in the CDC MMWR: COVID-19–Related School Closures and Learning Modality Changes — United States, August 1–September 17, 2021. The metrics listed for each school learning modality reflect totals by district and the number of enrolled students per district for which data are available. School districts represented here exclude private schools and include the following NCES subtypes:
    • Public school district that is NOT a component of a supervisory union
    • Public school district that is a component of a supervisory union
    • Independent charter district
    “BI” in the state column refers to school districts funded by the Bureau of Indian Education.

    Technical Notes
    Data from August 1, 2021 to June 24, 2022 correspond to the 2021-2022 school year. During this time frame, data from the AEI/Return to Learn Tracker and most state dashboards were not available. Inferred modalities with a probability below 0.6 were deemed inconclusive and were omitted. During the Fall 2022 semester, modalities for districts with a school closure reported by Burbio were updated to either “Remote”, if the closure spanned the entire week, or “Hybrid”, if the closure spanned 1-4 days of the week. Data from August
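    The description above states that weekly modalities are inferred with a hidden Markov model over the three states (In-Person, Hybrid, Remote). Below is a minimal Python sketch of that kind of inference using the Viterbi algorithm; the transition and emission probabilities are invented for illustration and are not the CDC model's parameters.

    # Viterbi decoding over the three modality states, given noisy weekly reports
    # from a single source. Probabilities below are illustrative only.
    import math

    STATES = ["In-Person", "Hybrid", "Remote"]
    TRANS = {s: {t: (0.90 if s == t else 0.05) for t in STATES} for s in STATES}  # modalities tend to persist week to week
    EMIT = {s: {o: (0.80 if s == o else 0.10) for o in STATES} for s in STATES}   # reports are usually, but not always, correct
    START = {s: 1.0 / len(STATES) for s in STATES}

    def viterbi(reports):
        """Return the most likely hidden modality sequence for a list of weekly reports."""
        V = [{s: math.log(START[s]) + math.log(EMIT[s][reports[0]]) for s in STATES}]
        back = []
        for obs in reports[1:]:
            row, ptr = {}, {}
            for s in STATES:
                prev = max(STATES, key=lambda p: V[-1][p] + math.log(TRANS[p][s]))
                row[s] = V[-1][prev] + math.log(TRANS[prev][s]) + math.log(EMIT[s][obs])
                ptr[s] = prev
            V.append(row)
            back.append(ptr)
        state = max(STATES, key=lambda s: V[-1][s])   # best final state
        path = [state]
        for ptr in reversed(back):                    # backtrack through the stored pointers
            state = ptr[state]
            path.append(state)
        return list(reversed(path))

    print(viterbi(["Hybrid", "Hybrid", "Remote", "Hybrid", "In-Person", "In-Person"]))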

  2. Learning modalities reported by source.

    • plos.figshare.com
    xls
    Updated Oct 4, 2023
    + more versions
    Cite
    Mark J. Panaggio; Mike Fang; Hyunseung Bang; Paige A. Armstrong; Alison M. Binder; Julian E. Grass; Jake Magid; Marc Papazian; Carrie K. Shapiro-Mendoza; Sharyn E. Parks (2023). Learning modalities reported by source. [Dataset]. http://doi.org/10.1371/journal.pone.0292354.t001
    Explore at:
    Available download formats: xls
    Dataset updated
    Oct 4, 2023
    Dataset provided by
    PLOS ONE
    Authors
    Mark J. Panaggio; Mike Fang; Hyunseung Bang; Paige A. Armstrong; Alison M. Binder; Julian E. Grass; Jake Magid; Marc Papazian; Carrie K. Shapiro-Mendoza; Sharyn E. Parks
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    During the COVID-19 pandemic, many public schools across the United States shifted from fully in-person learning to alternative learning modalities such as hybrid and fully remote learning. In this study, data from 14,688 unique school districts from August 2020 to June 2021 were collected to track changes in the proportion of schools offering fully in-person, hybrid and fully remote learning over time. These data were provided by Burbio, MCH Strategic Data, the American Enterprise Institute’s Return to Learn Tracker and individual state dashboards. Because the modalities reported by these sources were incomplete and occasionally misaligned, a model was needed to combine and deconflict these data to provide a more comprehensive description of modalities nationwide. A hidden Markov model (HMM) was used to infer the most likely learning modality for each district on a weekly basis. This method yielded higher spatiotemporal coverage than any individual data source and higher agreement with three of the four data sources than any other single source. The model output revealed that the percentage of districts offering fully in-person learning rose from 40.3% in September 2020 to 54.7% in June of 2021 with increases across 45 states and in both urban and rural districts. This type of probabilistic model can serve as a tool for fusion of incomplete and contradictory data sources in order to obtain more reliable data in support of public health surveillance and research efforts.
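    The abstract above reports modality percentages by week (for example, 40.3% of districts fully in-person in September 2020). A small pandas sketch of that tabulation is shown below; the column names ("week", "district_id", "modality") are assumptions for illustration, not the published schema of the model output.

    # Compute the share of districts in each modality per week from a district-week table.
    import pandas as pd

    df = pd.DataFrame({
        "week": ["2020-09-07"] * 3 + ["2021-06-07"] * 3,
        "district_id": [1, 2, 3, 1, 2, 3],
        "modality": ["Remote", "Hybrid", "In-Person", "In-Person", "In-Person", "Hybrid"],
    })

    shares = (
        df.groupby("week")["modality"]
          .value_counts(normalize=True)   # fraction of districts per modality, per week
          .rename("share")
          .reset_index()
    )
    print(shares)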

  3. School Learning Modalities, 2020-2021

    • s.cnmilf.com
    • datahub.hhs.gov
    • +3more
    Updated Mar 26, 2025
    + more versions
    Cite
    Centers for Disease Control and Prevention (2025). School Learning Modalities, 2020-2021 [Dataset]. https://s.cnmilf.com/user74170196/https/catalog.data.gov/dataset/school-learning-modalities-2020-2021
    Explore at:
    Dataset updated
    Mar 26, 2025
    Dataset provided by
    Centers for Disease Control and Prevention
    Description

    The 2020-2021 School Learning Modalities dataset provides weekly estimates of school learning modality (including in-person, remote, or hybrid learning) for U.S. K-12 public and independent charter school districts for the 2020-2021 school year, from August 2020 – June 2021. These data were modeled using multiple sources of input data (see below) to infer the most likely learning modality of a school district for a given week. These data should be considered district-level estimates and may not always reflect true learning modality, particularly for districts in which data are unavailable. If a district reports multiple modality types within the same week, the modality offered for the majority of those days is reflected in the weekly estimate. All school district metadata are sourced from the National Center for Educational Statistics (NCES) for 2020-2021.

    School learning modality types are defined as follows:
    • In-Person: All schools within the district offer face-to-face instruction 5 days per week to all students at all available grade levels.
    • Remote: Schools within the district do not offer face-to-face instruction; all learning is conducted online/remotely to all students at all available grade levels.
    • Hybrid: Schools within the district offer a combination of in-person and remote learning; face-to-face instruction is offered less than 5 days per week, or only to a subset of students.

    Data Information
    School learning modality data provided here are model estimates using combined input data and are not guaranteed to be 100% accurate. This learning modality dataset was generated by combining data from four different sources: Burbio [1], MCH Strategic Data [2], the AEI/Return to Learn Tracker [3], and state dashboards [4-20]. These data were combined using a Hidden Markov model which infers the sequence of learning modalities (In-Person, Hybrid, or Remote) for each district that is most likely to produce the modalities reported by these sources. This model was trained using data from the 2020-2021 school year. Metadata describing the location, number of schools and number of students in each district comes from NCES [21]. You can read more about the model in the CDC MMWR: COVID-19–Related School Closures and Learning Modality Changes — United States, August 1–September 17, 2021. The metrics listed for each school learning modality reflect totals by district and the number of enrolled students per district for which data are available. School districts represented here exclude private schools and include the following NCES subtypes:
    • Public school district that is NOT a component of a supervisory union
    • Public school district that is a component of a supervisory union
    • Independent charter district
    “BI” in the state column refers to school districts funded by the Bureau of Indian Education.

    Technical Notes
    Data from September 1, 2020 to June 25, 2021 correspond to the 2020-2021 school year. During this timeframe, all four sources of data were available. Inferred modalities with a probability below 0.75 were deemed inconclusive and were omitted. Data for the month of July may show “In Person” status although most school districts are effectively closed during this time for summer break. Users may wish to exclude July data from use for this reason where applicable.

    Sources
    K-12 School Opening Tracker. Burbio 2021; https

  4. Data from: MedMNIST Classification Decathlon: A Lightweight AutoML Benchmark...

    • data.niaid.nih.gov
    • explore.openaire.eu
    Updated Apr 19, 2023
    Cite
    Jiancheng Yang (2023). MedMNIST Classification Decathlon: A Lightweight AutoML Benchmark for Medical Image Analysis [Dataset]. https://data.niaid.nih.gov/resources?id=zenodo_4269851
    Explore at:
    Dataset updated
    Apr 19, 2023
    Dataset provided by
    Bingbing Ni
    Jiancheng Yang
    Rui Shi
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    This data repository for MedMNIST v1 is out of date! Please check the latest version of MedMNIST v2.

    Abstract

    We present MedMNIST, a collection of 10 pre-processed medical open datasets. MedMNIST is standardized to perform classification tasks on lightweight 28x28 images, which requires no background knowledge. Covering the primary data modalities in medical image analysis, it is diverse in data scale (from 100 to 100,000) and tasks (binary/multi-class, ordinal regression and multi-label). MedMNIST could be used for educational purposes, rapid prototyping, multi-modal machine learning or AutoML in medical image analysis. Moreover, the MedMNIST Classification Decathlon is designed to benchmark AutoML algorithms on all 10 datasets; we have compared several baseline methods, including open-source and commercial AutoML tools. The datasets, evaluation code and baseline methods for MedMNIST are publicly available at https://medmnist.github.io/.

    Please note that this dataset is NOT intended for clinical use.

    We recommend our official code to download, parse and use the MedMNIST dataset:

    pip install medmnist
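    A hedged usage sketch for the medmnist package is shown below. It follows the class-based API documented at https://medmnist.github.io/; treat the exact names as assumptions and check the official repository for the authoritative interface.

    # Download one MedMNIST subset and inspect a sample.
    import medmnist
    from medmnist import INFO

    info = INFO["pathmnist"]                               # metadata for one subset
    DataClass = getattr(medmnist, info["python_class"])    # e.g. medmnist.PathMNIST
    train_set = DataClass(split="train", download=True)    # fetches the .npz archive
    img, label = train_set[0]                              # 28x28 image and its label
    print(info["task"], len(train_set), label)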

    Citation and Licenses

    If you find this project useful, please cite our ISBI'21 paper as: Jiancheng Yang, Rui Shi, Bingbing Ni. "MedMNIST Classification Decathlon: A Lightweight AutoML Benchmark for Medical Image Analysis," arXiv preprint arXiv:2010.14925, 2020.

    or using bibtex: @article{medmnist, title={MedMNIST Classification Decathlon: A Lightweight AutoML Benchmark for Medical Image Analysis}, author={Yang, Jiancheng and Shi, Rui and Ni, Bingbing}, journal={arXiv preprint arXiv:2010.14925}, year={2020} }

    Besides, please cite the corresponding paper if you use any subset of MedMNIST. Each subset uses the same license as that of the source dataset.

    PathMNIST

    Jakob Nikolas Kather, Johannes Krisam, et al., "Predicting survival from colorectal cancer histology slides using deep learning: A retrospective multicenter study," PLOS Medicine, vol. 16, no. 1, pp. 1–22, 01 2019.

    License: CC BY 4.0

    ChestMNIST

    Xiaosong Wang, Yifan Peng, et al., "Chestx-ray8: Hospital-scale chest x-ray database and benchmarks on weakly-supervised classification and localization of common thorax diseases," in CVPR, 2017, pp. 3462–3471.

    License: CC0 1.0

    DermaMNIST

    Philipp Tschandl, Cliff Rosendahl, and Harald Kittler, "The ham10000 dataset, a large collection of multisource dermatoscopic images of common pigmented skin lesions," Scientific data, vol. 5, pp. 180161, 2018.

    Noel Codella, Veronica Rotemberg, Philipp Tschandl, M. Emre Celebi, Stephen Dusza, David Gutman, Brian Helba, Aadi Kalloo, Konstantinos Liopyris, Michael Marchetti, Harald Kittler, and Allan Halpern: “Skin Lesion Analysis Toward Melanoma Detection 2018: A Challenge Hosted by the International Skin Imaging Collaboration (ISIC)”, 2018; arXiv:1902.03368.

    License: CC BY-NC 4.0

    OCTMNIST/PneumoniaMNIST

    Daniel S. Kermany, Michael Goldbaum, et al., "Identifying medical diagnoses and treatable diseases by image-based deep learning," Cell, vol. 172, no. 5, pp. 1122 – 1131.e9, 2018.

    License: CC BY 4.0

    RetinaMNIST

    DeepDR Diabetic Retinopathy Image Dataset (DeepDRiD), "The 2nd diabetic retinopathy – grading and image quality estimation challenge," https://isbi.deepdr.org/data.html, 2020.

    License: CC BY 4.0

    BreastMNIST

    Walid Al-Dhabyani, Mohammed Gomaa, Hussien Khaled, and Aly Fahmy, "Dataset of breast ultrasound images," Data in Brief, vol. 28, pp. 104863, 2020.

    License: CC BY 4.0

    OrganMNIST_{Axial,Coronal,Sagittal}

    Patrick Bilic, Patrick Ferdinand Christ, et al., "The liver tumor segmentation benchmark (lits)," arXiv preprint arXiv:1901.04056, 2019.

    Xuanang Xu, Fugen Zhou, et al., "Efficient multiple organ localization in ct image using 3d region proposal network," IEEE Transactions on Medical Imaging, vol. 38, no. 8, pp. 1885–1898, 2019.

    License: CC BY 4.0

  5. 3D Microvascular Image Data and Labels for Machine Learning

    • rdr.ucl.ac.uk
    bin
    Updated Apr 30, 2024
    Cite
    Natalie Holroyd; Claire Walsh; Emmeline Brown; Emma Brown; Yuxin Zhang; Carles Bosch Pinol; Simon Walker-Samuel (2024). 3D Microvascular Image Data and Labels for Machine Learning [Dataset]. http://doi.org/10.5522/04/25715604.v1
    Explore at:
    Available download formats: bin
    Dataset updated
    Apr 30, 2024
    Dataset provided by
    University College London
    Authors
    Natalie Holroyd; Claire Walsh; Emmeline Brown; Emma Brown; Yuxin Zhang; Carles Bosch Pinol; Simon Walker-Samuel
    License

    Attribution-NonCommercial-ShareAlike 4.0 (CC BY-NC-SA 4.0): https://creativecommons.org/licenses/by-nc-sa/4.0/
    License information was derived automatically

    Description

    These images and associated binary labels were collected from collaborators across multiple universities to serve as a diverse representation of biomedical images of vessel structures, for use in the training and validation of machine learning tools for vessel segmentation. The dataset contains images from a variety of imaging modalities, at different resolutions, using different sources of contrast and featuring different organs/pathologies. These data were used to train, test and validate a foundational model for 3D vessel segmentation, tUbeNet, which can be found on GitHub. The paper describing the training and validation of the model can be found here.

    Filenames are structured as follows:
    • Data - [Modality]_[species Organ]_[resolution].tif
    • Labels - [Modality]_[species Organ]_[resolution]_labels.tif
    • Sub-volumes of larger dataset - [Modality]_[species Organ]_subvolume[dimensions in pixels].tif

    Manual labelling of blood vessels was carried out using Amira (2020.2, Thermo-Fisher, UK).

    Training data:
    • opticalHREM_murineLiver_2.26x2.26x1.75um.tif: A high resolution episcopic microscopy (HREM) dataset, acquired in house by staining a healthy mouse liver with Eosin B and imaged using a standard HREM protocol. NB: 25% of this image volume was withheld from training, for use as test data.
    • CT_murineTumour_20x20x20um.tif: X-ray microCT images of a microvascular cast, taken from a subcutaneous mouse model of colorectal cancer (acquired in house). NB: 25% of this image volume was withheld from training, for use as test data.
    • RSOM_murineTumour_20x20um.tif: Raster-Scanning Optoacoustic Mesoscopy (RSOM) data from a subcutaneous tumour model (provided by Emma Brown, Bohndiek Group, University of Cambridge). The image data has undergone filtering to reduce the background (Brown et al., 2019).
    • OCTA_humanRetina_24x24um.tif: retinal angiography data obtained using Optical Coherence Tomography Angiography (OCT-A) (provided by Dr Ranjan Rajendram, Moorfields Eye Hospital).

    Test data:
    • MRI_porcineLiver_0.9x0.9x5mm.tif: T1-weighted Balanced Turbo Field Echo Magnetic Resonance Imaging (MRI) data from a machine-perfused porcine liver, acquired in-house.
    • MFHREM_murineTumourLectin_2.76x2.76x2.61um.tif: a subcutaneous colorectal tumour mouse model imaged in house using Multi-fluorescence HREM, with Dylight 647 conjugated lectin staining the vasculature (Walsh et al., 2021). The image data has been processed using an asymmetric deconvolution algorithm described by Walsh et al., 2020. NB: A sub-volume of 480x480x640 voxels was manually labelled (MFHREM_murineTumourLectin_subvolume480x480x640.tif).
    • MFHREM_murineBrainLectin_0.85x0.85x0.86um.tif: an MF-HREM image of the cortex of a mouse brain, stained with Dylight-647 conjugated lectin, acquired in house (Walsh et al., 2021). The image data has been downsampled and processed using an asymmetric deconvolution algorithm described by Walsh et al., 2020. NB: A sub-volume of 1000x1000x99 voxels was manually labelled. This sub-volume is provided at full resolution and without preprocessing (MFHREM_murineBrainLectin_subvol_0.57x0.57x0.86um.tif).
    • 2Photon_murineOlfactoryBulbLectin_0.2x0.46x5.2um.tif: two-photon data of mouse olfactory bulb blood vessels, labelled with sulforhodamine 101, kindly provided by Yuxin Zhang at the Sensory Circuits and Neurotechnology Lab, the Francis Crick Institute (Bosch et al., 2022). NB: A sub-volume of 500x500x79 voxels was manually labelled (2Photon_murineOlfactoryBulbLectin_subvolume500x500x79.tif).
    References:
    Bosch, C., Ackels, T., Pacureanu, A., Zhang, Y., Peddie, C. J., Berning, M., Rzepka, N., Zdora, M. C., Whiteley, I., Storm, M., Bonnin, A., Rau, C., Margrie, T., Collinson, L., & Schaefer, A. T. (2022). Functional and multiscale 3D structural investigation of brain tissue through correlative in vivo physiology, synchrotron microtomography and volume electron microscopy. Nature Communications, 13(1), 1–16. https://doi.org/10.1038/s41467-022-30199-6
    Brown, E., Brunker, J., & Bohndiek, S. E. (2019). Photoacoustic imaging as a tool to probe the tumour microenvironment. DMM Disease Models and Mechanisms, 12(7). https://doi.org/10.1242/DMM.039636
    Walsh, C., Holroyd, N. A., Finnerty, E., Ryan, S. G., Sweeney, P. W., Shipley, R. J., & Walker-Samuel, S. (2021). Multifluorescence High-Resolution Episcopic Microscopy for 3D Imaging of Adult Murine Organs. Advanced Photonics Research, 2(10), 2100110. https://doi.org/10.1002/ADPR.202100110
    Walsh, C., Holroyd, N., Shipley, R., & Walker-Samuel, S. (2020). Asymmetric Point Spread Function Estimation and Deconvolution for Serial-Sectioning Block-Face Imaging. Communications in Computer and Information Science, 1248 CCIS, 235–249. https://doi.org/10.1007/978-3-030-52791-4_19
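    The filename convention described above ([Modality]_[species Organ]_[resolution].tif, with "_labels" and "subvolume" variants) lends itself to simple programmatic parsing. The regex below is an illustrative sketch, not part of the dataset; the "_labels" example filename is constructed from the stated convention rather than taken from the file listing.

    # Parse dataset filenames into modality, specimen, and resolution/sub-volume fields.
    import re

    PATTERN = re.compile(
        r"^(?P<modality>[^_]+)_(?P<species_organ>[^_]+)_"
        r"(?:subvolume(?P<subvolume>[\dx]+)|(?P<resolution>[^_]+?))"
        r"(?P<labels>_labels)?\.tif$"
    )

    for name in [
        "opticalHREM_murineLiver_2.26x2.26x1.75um.tif",
        "CT_murineTumour_20x20x20um_labels.tif",
        "2Photon_murineOlfactoryBulbLectin_subvolume500x500x79.tif",
    ]:
        m = PATTERN.match(name)
        print(name, "->", m.groupdict() if m else "no match")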

  6. Modality-based Multitasking and Practice - fMRI

    • openneuro.org
    Updated Mar 21, 2024
    + more versions
    Cite
    Marie Mueckstein; Kai Görgen; Stephan Heinzel; Urs Granacher; A. Michael Rapp; Christine Stelzel (2024). Modality-based Multitasking and Practice - fMRI [Dataset]. http://doi.org/10.18112/openneuro.ds005038.v1.0.2
    Explore at:
    Dataset updated
    Mar 21, 2024
    Dataset provided by
    OpenNeuro (https://openneuro.org/)
    Authors
    Marie Mueckstein; Kai Görgen; Stephan Heinzel; Urs Granacher; A. Michael Rapp; Christine Stelzel
    License

    CC0 1.0 Universal Public Domain Dedication: https://creativecommons.org/publicdomain/zero/1.0/
    License information was derived automatically

    Description

    This dataset contains the raw fMRI data of a preregistered study. The dataset includes:

    Session pre:
    1. anat/ anatomical scans (T1-weighted images) for each subject
    2. func/ whole-brain EPI data from all task runs (8x single task, 2x dual task, 1x resting state and 2x localizer task)
    3. fmap/ fieldmaps with magnitude1, magnitude2 and phasediff

    Session post:
    1. func/ whole-brain EPI data from all task runs (8x single task, 2x dual task)
    2. fmap/ fieldmaps with magnitude1, magnitude2 and phasediff

    Please note, some participants did not complete the post session. We updated our consent form to get explicit permission to publish the individual data, but not all participants re-signed the new version. Those participants are excluded here but are included in the t-maps on NeuroVault (see participants.tsv).

    Tasks always included visual and/or auditory input and required manual and/or vocal responses (visual+manual and auditory+vocal are modality compatible; visual+vocal and auditory+manual are modality incompatible). Tasks were presented as either single tasks or dual tasks. Participants completed a practice intervention prior to the post session, in which one group worked for 80 minutes outside the scanner on modality-incompatible dual tasks, one on modality-compatible dual tasks, and the third paused for 80 minutes.

    For exact task descriptions, materials, and scripts, please see the preregistration: https://osf.io/whpz8
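    A small sketch of how one might inventory the functional runs described above is shown below. The directory and file naming (ses-pre/ses-post, *_bold.nii.gz) are assumptions based on the listing and on common BIDS conventions, not verified against the actual OpenNeuro tree.

    # Count functional runs per subject and session in a local copy of the dataset.
    from pathlib import Path

    root = Path("ds005038")  # assumed local download location
    for ses in ["ses-pre", "ses-post"]:
        for sub in sorted(root.glob("sub-*")):
            runs = list((sub / ses / "func").glob("*_bold.nii.gz"))
            print(sub.name, ses, f"{len(runs)} functional runs")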

  7. full-modality-data

    • huggingface.co
    Updated Mar 23, 2025
    Cite
    Nguyen Quang Trung (2025). full-modality-data [Dataset]. https://huggingface.co/datasets/ngqtrung/full-modality-data
    Explore at:
    Dataset updated
    Mar 23, 2025
    Authors
    Nguyen Quang Trung
    Description

    ngqtrung/full-modality-data dataset hosted on Hugging Face and contributed by the HF Datasets community
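    A minimal, hedged sketch for loading this repository with the Hugging Face datasets library is shown below; the repository id is taken from the citation above, while split names and fields are not documented here and would need to be inspected after loading.

    # Load the dataset and list its splits and features.
    from datasets import load_dataset

    ds = load_dataset("ngqtrung/full-modality-data")
    print(ds)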

  8. VGG16 Metrics, Average AUC = 0.9987707721217087.

    • plos.figshare.com
    xls
    Updated Dec 13, 2023
    Cite
    Craig Macfadyen; Ajay Duraiswamy; David Harris-Birtill (2023). VGG16 Metrics, Average AUC = 0.9987707721217087. [Dataset]. http://doi.org/10.1371/journal.pdig.0000191.t005
    Explore at:
    Available download formats: xls
    Dataset updated
    Dec 13, 2023
    Dataset provided by
    PLOS Digital Health
    Authors
    Craig Macfadyen; Ajay Duraiswamy; David Harris-Birtill
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Algorithms that classify hyper-scale multi-modal datasets, comprising millions of images, into constituent modality types can help researchers quickly retrieve and classify diagnostic imaging data, accelerating clinical outcomes. This research aims to demonstrate that a deep neural network trained on a hyper-scale dataset (4.5 million images) composed of heterogeneous multi-modal data can be used to obtain significant modality classification accuracy (96%). By combining 102 medical imaging datasets, a dataset of 4.5 million images was created. A ResNet-50, ResNet-18, and VGG16 were trained to classify these images by the imaging modality used to capture them (Computed Tomography (CT), Magnetic Resonance Imaging (MRI), Positron Emission Tomography (PET), and X-ray) across many body locations. The classification accuracy of the models was then tested on unseen data. The best performing model achieved a classification accuracy of 96% on unseen data, which is on par with, or exceeds, the accuracy of more complex implementations using EfficientNets or Vision Transformers (ViTs). The model achieved a balanced accuracy of 86%. This research shows it is possible to train Deep Learning (DL) Convolutional Neural Networks (CNNs) with hyper-scale multimodal datasets, composed of millions of images. Such models can find use in real-world applications with volumes of image data in the hyper-scale range, such as medical imaging repositories, or national healthcare institutions. Further research can expand this classification capability to include 3D scans.
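    As an illustration of the approach described above (and not the authors' code), the sketch below adapts a pretrained ResNet-50 to a 4-way modality classification head for CT, MRI, PET and X-ray, and runs one training step on a random stand-in batch.

    # Transfer learning for imaging-modality classification.
    import torch
    import torch.nn as nn
    from torchvision import models

    NUM_MODALITIES = 4  # CT, MRI, PET, X-ray

    model = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)   # ImageNet-pretrained backbone
    model.fc = nn.Linear(model.fc.in_features, NUM_MODALITIES)         # replace the classifier head

    criterion = nn.CrossEntropyLoss()
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

    images = torch.randn(8, 3, 224, 224)                 # stand-in for a real image batch
    labels = torch.randint(0, NUM_MODALITIES, (8,))
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
    print(f"loss: {loss.item():.3f}")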

  9. Distributed Anomaly Detection Using Satellite Data From Multiple Modalities

    • catalog.data.gov
    • datasets.ai
    • +3more
    Updated Apr 11, 2025
    + more versions
    Cite
    Dashlink (2025). Distributed Anomaly Detection Using Satellite Data From Multiple Modalities [Dataset]. https://catalog.data.gov/dataset/distributed-anomaly-detection-using-satellite-data-from-multiple-modalities-cf764
    Explore at:
    Dataset updated
    Apr 11, 2025
    Dataset provided by
    Dashlink
    Description

    There has been a tremendous increase in the volume of Earth Science data over the last decade from modern satellites, in-situ sensors and different climate models. All these datasets need to be co-analyzed for finding interesting patterns or for searching for extremes or outliers. Information extraction from such rich data sources using advanced data mining methodologies is a challenging task not only due to the massive volume of data, but also because these datasets are physically stored at different geographical locations. Moving these petabytes of data over the network to a single location may waste a lot of bandwidth, and can take days to finish. To solve this problem, in this paper, we present a novel algorithm which can identify outliers in the global data without moving all the data to one location. The algorithm is highly accurate (close to 99%) and requires centralizing less than 5% of the entire dataset. We demonstrate the performance of the algorithm using data obtained from the NASA MODerate-resolution Imaging Spectroradiometer (MODIS) satellite images.
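    The sketch below illustrates the general idea in the description above (each site forwards only a small set of candidate outliers, so only a few percent of records are centralized); it is a generic toy example, not the paper's algorithm.

    # Distributed outlier detection with local candidate selection.
    import numpy as np

    rng = np.random.default_rng(0)
    sites = [rng.normal(0, 1, size=2000) for _ in range(4)]   # data held at 4 separate locations
    sites[2][:5] += 12                                        # inject a few true outliers at one site

    CANDIDATE_FRACTION = 0.02                                 # each site ships roughly 2% of its records

    candidates = []
    for data in sites:
        scores = np.abs(data - data.mean()) / data.std()      # local z-scores
        k = max(1, int(CANDIDATE_FRACTION * data.size))
        candidates.append(data[np.argsort(scores)[-k:]])      # keep only the top-k local candidates

    pooled = np.concatenate(candidates)                       # the only data that crosses the network
    global_mean = np.mean([d.mean() for d in sites])          # cheap summary statistics from each site
    global_std = np.mean([d.std() for d in sites])
    outliers = pooled[np.abs(pooled - global_mean) / global_std > 4]

    print(f"centralized {pooled.size} of {sum(d.size for d in sites)} records, flagged {outliers.size} outliers")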

  10. DISTRIBUTED ANOMALY DETECTION USING SATELLITE DATA FROM MULTIPLE MODALITIES

    • catalog.data.gov
    • gimi9.com
    • +3more
    Updated Apr 11, 2025
    Cite
    Dashlink (2025). DISTRIBUTED ANOMALY DETECTION USING SATELLITE DATA FROM MULTIPLE MODALITIES [Dataset]. https://catalog.data.gov/dataset/distributed-anomaly-detection-using-satellite-data-from-multiple-modalities
    Explore at:
    Dataset updated
    Apr 11, 2025
    Dataset provided by
    Dashlink
    Description

    DISTRIBUTED ANOMALY DETECTION USING SATELLITE DATA FROM MULTIPLE MODALITIES. KANISHKA BHADURI, KAMALIKA DAS, AND PETR VOTAVA. Abstract. There has been a tremendous increase in the volume of Earth Science data over the last decade from modern satellites, in-situ sensors and different climate models. All these datasets need to be co-analyzed for finding interesting patterns or for searching for extremes or outliers. Information extraction from such rich data sources using advanced data mining methodologies is a challenging task not only due to the massive volume of data, but also because these datasets are physically stored at different geographical locations. Moving these petabytes of data over the network to a single location may waste a lot of bandwidth, and can take days to finish. To solve this problem, in this paper, we present a novel algorithm which can identify outliers in the global data without moving all the data to one location. The algorithm is highly accurate (close to 99%) and requires centralizing less than 5% of the entire dataset. We demonstrate the performance of the algorithm using data obtained from the NASA MODerate-resolution Imaging Spectroradiometer (MODIS) satellite images.

  11. MedMNIST: Standardized Biomedical Images

    • kaggle.com
    Updated Feb 2, 2024
    + more versions
    Cite
    Möbius (2024). MedMNIST: Standardized Biomedical Images [Dataset]. https://www.kaggle.com/datasets/arashnic/standardized-biomedical-images-medmnist
    Explore at:
    Croissant (a format for machine-learning datasets; learn more at mlcommons.org/croissant)
    Dataset updated
    Feb 2, 2024
    Dataset provided by
    Kaggle (http://kaggle.com/)
    Authors
    Möbius
    License

    Apache License, v2.0: https://www.apache.org/licenses/LICENSE-2.0
    License information was derived automatically

    Description

    MedMNIST v2 - A large-scale lightweight benchmark for 2D and 3D biomedical image classification: https://www.nature.com/articles/s41597-022-01721-8

    A large-scale MNIST-like collection of standardized biomedical images, including 12 datasets for 2D and 6 datasets for 3D. All images are pre-processed into 28x28 (2D) or 28x28x28 (3D) with the corresponding classification labels, so that no background knowledge is required for users. Covering primary data modalities in biomedical images, MedMNIST is designed to perform classification on lightweight 2D and 3D images with various data scales (from 100 to 100,000) and diverse tasks (binary/multi-class, ordinal regression and multi-label). The resulting dataset, consisting of approximately 708K 2D images and 10K 3D images in total, could support numerous research and educational purposes in biomedical image analysis, computer vision and machine learning. The providers benchmark several baseline methods on MedMNIST, including 2D / 3D neural networks and open-source / commercial AutoML tools.

    MedMNIST Landscape:


    About the MedMNIST Landscape figure: The horizontal axis denotes the base-10 logarithm of the dataset scale, and the vertical axis denotes the base-10 logarithm of imaging resolution. The upward and downward triangles are used to distinguish between 2D datasets and 3D datasets, and the 4 different colors represent different tasks.

    Key Features


    Diverse: It covers diverse data modalities, dataset scales (from 100 to 100,000), and tasks (binary/multi-class, multi-label, and ordinal regression). It is as diverse as the VDD and MSD to fairly evaluate the generalizable performance of machine learning algorithms in different settings, but both 2D and 3D biomedical images are provided.

    Standardized: Each sub-dataset is pre-processed into the same format, which requires no background knowledge for users. As an MNIST-like dataset collection to perform classification tasks on small images, it primarily focuses on the machine learning part rather than the end-to-end system. Furthermore, we provide standard train-validation-test splits for all datasets in MedMNIST, therefore algorithms could be easily compared.

    User-Friendly: The small size of 28×28 (2D) or 28×28×28 (3D) is lightweight and ideal for evaluating machine learning algorithms. We also offer a larger-size version, MedMNIST+: 64x64 (2D), 128x128 (2D), 224x224 (2D), and 64x64x64 (3D). Serving as a complement to the 28-size MedMNIST, this could be a standardized resource for developing medical foundation models. All these datasets are accessible via the same API.

    Educational: As an interdisciplinary research area, biomedical image analysis is difficult for researchers from other communities to get hands-on experience with, as it requires background knowledge from computer vision, machine learning, biomedical imaging, and clinical science. Our data with the Creative Commons (CC) License is easy to use for educational purposes.

    Refer to the paper to learn more about data : https://www.nature.com/articles/s41597-022-01721-8

    Starter Code: download more data and training

    Github Page: https://github.com/MedMNIST/MedMNIST

    My Kaggle Starter Notebook: https://www.kaggle.com/code/arashnic/medmnist-download-and-use-data?scriptVersionId=161421937

    Acknowledgements

    Jiancheng Yang, Rui Shi, Donglai Wei, Zequan Liu, Lin Zhao, Bilian Ke, Hanspeter Pfister, Bingbing Ni. Shanghai Jiao Tong University, Shanghai, China; Boston College, Chestnut Hill, MA; RWTH Aachen University, Aachen, Germany; Fudan Institute of Metabolic Diseases, Zhongshan Hospital, Fudan University, Shanghai, China; Shanghai General Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai, China; Harvard University, Cambridge, MA

    License and Citation

    The code is under Apache-2.0 License.

    The MedMNIST dataset is licensed under Creative Commons Attribution 4.0 International (CC BY 4.0)...

  12. Multi-modality medical image dataset for medical image processing in Python...

    • zenodo.org
    zip
    Updated Aug 12, 2024
    Cite
    Candace Moore; Giulia Crocioni (2024). Multi-modality medical image dataset for medical image processing in Python lesson [Dataset]. http://doi.org/10.5281/zenodo.13305760
    Explore at:
    Available download formats: zip
    Dataset updated
    Aug 12, 2024
    Dataset provided by
    Zenodo (http://zenodo.org/)
    Authors
    Candace Moore; Giulia Crocioni
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    This dataset contains a collection of medical imaging files for use in the "Medical Image Processing with Python" lesson, developed by the Netherlands eScience Center.

    The dataset includes:

    1. SimpleITK compatible files: MRI T1 and CT scans (training_001_mr_T1.mha, training_001_ct.mha), digital X-ray (digital_xray.dcm in DICOM format), neuroimaging data (A1_grayT1.nrrd, A1_grayT2.nrrd). Data have been downloaded from here.
    2. MRI data: a T2-weighted image (OBJECT_phantom_T2W_TSE_Cor_14_1.nii in NIfTI-1 format). Data have been downloaded from here.
    3. Example images for the machine learning lesson: chest X-rays (rotatechest.png, other_op.png), cardiomegaly example (cardiomegaly_cc0.png).
    4. Additional anonymized data: TBA

    These files represent various medical imaging modalities and formats commonly used in clinical research and practice. They are intended for educational purposes, allowing students to practice image processing techniques, machine learning applications, and statistical analysis of medical images using Python libraries such as scikit-image, pydicom, and SimpleITK.
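    A hedged sketch of reading a couple of the files listed above with SimpleITK and pydicom (two of the libraries the lesson names) is shown below; the filenames come from the listing, and the paths are assumed to point at the extracted archive.

    # Read a CT volume and a digital X-ray, then inspect their shapes.
    import SimpleITK as sitk
    import pydicom

    ct = sitk.ReadImage("training_001_ct.mha")       # .mha, .nrrd and .nii files are all supported
    ct_array = sitk.GetArrayFromImage(ct)            # numpy array ordered (z, y, x)
    print("CT volume:", ct_array.shape, "spacing:", ct.GetSpacing())

    xray = pydicom.dcmread("digital_xray.dcm")       # DICOM header plus pixel data
    print("X-ray:", xray.Modality, xray.pixel_array.shape)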

  13. MedSegBench: A Comprehensive Benchmark for Medical Image Segmentation in...

    • zenodo.org
    bin
    Updated Jun 27, 2025
    Cite
    MUSA AYDIN; ZEKİ KUŞ (2025). MedSegBench: A Comprehensive Benchmark for Medical Image Segmentation in Diverse Data Modalities [Dataset]. http://doi.org/10.5281/zenodo.13359660
    Explore at:
    Available download formats: bin
    Dataset updated
    Jun 27, 2025
    Dataset provided by
    Zenodo (http://zenodo.org/)
    Authors
    MUSA AYDIN; ZEKİ KUŞ
    License

    Attribution-NonCommercial 4.0 (CC BY-NC 4.0): https://creativecommons.org/licenses/by-nc/4.0/
    License information was derived automatically

    Description

    We split the dataset into two parts due to the maximum upload file size limit. The remaining data can be found in Version 1.

    This dataset accompanies an article that is currently under evaluation.

    You can access the article on Nature.

    Trained model weights and detailed prediction results for each dataset (Zenodo)

  14. Data from: Typical Concept-Driven Modality-missing Deep Cross-Modal...

    • scidb.cn
    Updated Apr 24, 2025
    Cite
    Xia Xinyu; Zhu Lei; Nie Xiushan; Dong Guohua; Zhang Huaxiang (2025). Typical Concept-Driven Modality-missing Deep Cross-Modal Retrieval [Dataset]. http://doi.org/10.57760/sciencedb.24176
    Explore at:
    Croissant (a format for machine-learning datasets; learn more at mlcommons.org/croissant)
    Dataset updated
    Apr 24, 2025
    Dataset provided by
    Science Data Bank
    Authors
    Xia Xinyu; Zhu Lei; Nie Xiushan; Dong Guohua; Zhang Huaxiang
    License

    MIT License: https://api.github.com/licenses/mit

    Description

    Cross-modal retrieval takes one modality data as a query and retrieves semantically relevant data in another modality. Most existing cross-modal retrieval methods are designed for scenarios with complete modality data. However, in real-world applications, incomplete modality data often exists, which these methods struggle to handle effectively. In this paper, we propose a typical concept-driven modality-missing deep cross-modal retrieval model. Specifically, we first propose a multi-modal Transformer integrated with multi-modal pretraining networks, which can fully capture the multi-modal fine-grained semantic interaction in the incomplete modality data, extract multi-modal fusion semantics and construct cross-modal subspace, and at the same time supervise the learning process to generate typical concepts. In addition, the typical concepts are used as the cross-attention key and value to drive the training of the modal mapping network, so that it can adaptively preserve the implicit multi-modal semantic concepts of the query modality data, generate cross-modal retrieval features, and fully preserve the pre-extracted multi-modal fusion semantics. More information about the source code: https://gitee.com/MrSummer123/CPCMR

  15. Multi-Modal Imaging Data-Integration Market Research Report 2033

    • growthmarketreports.com
    csv, pdf, pptx
    Updated Jul 5, 2025
    Cite
    Growth Market Reports (2025). Multi-Modal Imaging Data-Integration Market Research Report 2033 [Dataset]. https://growthmarketreports.com/report/multi-modal-imaging-data-integration-market
    Explore at:
    Available download formats: pptx, csv, pdf
    Dataset updated
    Jul 5, 2025
    Dataset authored and provided by
    Growth Market Reports
    Time period covered
    2024 - 2032
    Area covered
    Global
    Description

    Multi-Modal Imaging Data-Integration Market Outlook




    According to our latest research, the global Multi-Modal Imaging Data-Integration market size reached USD 1.67 billion in 2024. The market is expected to expand at a robust CAGR of 9.5% during the forecast period, reaching a projected value of USD 3.78 billion by 2033. This impressive growth is driven by the increasing demand for integrated imaging solutions in clinical diagnostics and research, as well as technological advancements in imaging modalities and data analytics platforms. As per our detailed analysis, the integration of multiple imaging modalities is revolutionizing the way healthcare professionals diagnose and treat complex diseases, offering comprehensive insights that single-modality imaging cannot provide.
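    A quick arithmetic check of the projection above, assuming the 9.5% CAGR is compounded annually on the USD 1.67 billion 2024 base through 2033 (9 years):

    # Verify the stated 2033 projection from the 2024 base and the quoted CAGR.
    base_2024 = 1.67                                   # USD billion
    cagr = 0.095
    years = 2033 - 2024
    projected_2033 = base_2024 * (1 + cagr) ** years
    print(f"{projected_2033:.2f} billion USD")         # ~3.78, matching the stated figure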




    One of the primary growth factors propelling the Multi-Modal Imaging Data-Integration market is the rising prevalence of chronic diseases such as cancer, cardiovascular disorders, and neurological conditions. These diseases often require precise and multifaceted diagnostic approaches, which multi-modal imaging excels at delivering. By combining data from modalities like MRI, CT, PET, and ultrasound, clinicians can achieve a more holistic view of patient pathology, leading to improved treatment planning and patient outcomes. Moreover, the increasing adoption of personalized medicine is further driving the need for integrated imaging data, as tailored therapeutic strategies rely heavily on accurate, multi-dimensional diagnostic information.




    Another significant driver is the rapid technological evolution in both imaging hardware and software. Innovations such as artificial intelligence (AI) and machine learning are enabling more effective integration and interpretation of complex imaging datasets. Advanced integration techniques, including software-based and hybrid solutions, are making it feasible to seamlessly combine anatomical, functional, and molecular information from various imaging platforms. This technological leap is not only enhancing diagnostic precision but also reducing the time and cost associated with traditional, single-modality imaging workflows. The ongoing investment in research and development by both public and private sectors is ensuring a steady pipeline of improvements in multi-modal imaging data-integration.




    The growing adoption of digital health solutions, including cloud-based imaging data repositories and telemedicine platforms, is also contributing to market expansion. Healthcare institutions are increasingly recognizing the value of integrated imaging data in facilitating remote consultations, multidisciplinary team discussions, and collaborative research. The shift toward value-based care models emphasizes outcomes and efficiency, making multi-modal data-integration an attractive proposition for hospitals, diagnostic centers, and research institutes. Additionally, regulatory support for interoperability and data standardization is gradually lowering barriers to adoption, fostering a more conducive environment for market growth.




    From a regional perspective, North America continues to dominate the Multi-Modal Imaging Data-Integration market, accounting for the largest revenue share in 2024. This leadership is attributed to the region’s advanced healthcare infrastructure, high adoption rates of cutting-edge imaging technologies, and significant investments in healthcare IT. Europe follows closely, benefiting from robust government initiatives and a strong focus on research collaborations. The Asia Pacific region is emerging as the fastest-growing market, driven by expanding healthcare access, rising investments in medical technology, and an increasing burden of chronic diseases. Latin America and the Middle East & Africa, while currently holding smaller shares, are expected to witness steady growth due to improving healthcare systems and rising awareness of integrated imaging benefits.





    Imaging Modality Analysis




    The Imaging Modality segment forms the b

  16. School Learning Modalities, 2020-2021 | gimi9.com

    • gimi9.com
    Updated Mar 7, 2023
    + more versions
    Cite
    (2023). School Learning Modalities, 2020-2021 | gimi9.com [Dataset]. https://gimi9.com/dataset/data-gov_school-learning-modalities-2020-2021/
    Explore at:
    Dataset updated
    Mar 7, 2023
    Description

    The 2020-2021 School Learning Modalities dataset provides weekly estimates of school learning modality (including in-person, remote, or hybrid learning) for U.S. K-12 public and independent charter school districts for the 2020-2021 school year, from August 2020 – June 2021. These data were modeled using multiple sources of input data (see below) to infer the most likely learning modality of a school district for a given week. These data should be considered district-level estimates and may not always reflect true learning modality, particularly for districts in which data are unavailable. If a district reports multiple modality types within the same week, the modality offered for the majority of those days is reflected in the weekly estimate. All school district metadata are sourced from the National Center for Educational Statistics (NCES) for 2020-2021. School learning modality types are defined as follows: In-Person: All schools within the district offer face-to-face instruction 5 days per week to all students at all available grade levels.

  17. Student Enrollment and Attendance Data by Teaching Modality - 2020 - 2021

    • opendata.winchesterva.gov
    • data.virginia.gov
    xlsx
    Updated Jul 23, 2024
    Cite
    Virginia State Data (2024). Student Enrollment and Attendance Data by Teaching Modality - 2020 - 2021 [Dataset]. https://opendata.winchesterva.gov/dataset/student-enrollment-and-attendance-data-by-teaching-modality-2020-2021
    Explore at:
    Available download formats: xlsx
    Dataset updated
    Jul 23, 2024
    Dataset provided by
    United States Department of Education (http://ed.gov/)
    Authors
    Virginia State Data
    Description

    Student enrollment data disaggregated by students from low-income families, students from each racial and ethnic group, gender, English learners, children with disabilities, children experiencing homelessness, children in foster care, and migratory students for each mode of instruction.

  18. Data from: CMShipReID: A Cross-Modality Ship Dataset for the...

    • scidb.cn
    Updated Apr 29, 2025
    Cite
    Xu Congan; Gao Long; Liu Yu; Zhang Qi; Su Nan; Zhang Shaoxuan; Li Tianyu; Zheng Xiaomei (2025). CMShipReID: A Cross-Modality Ship Dataset for the Re-IDentification Task [Dataset]. http://doi.org/10.57760/sciencedb.radars.00051
    Explore at:
    Croissant (a format for machine-learning datasets; learn more at mlcommons.org/croissant)
    Dataset updated
    Apr 29, 2025
    Dataset provided by
    Science Data Bank
    Authors
    Xu Congan; Gao Long; Liu Yu; Zhang Qi; Su Nan; Zhang Shaoxuan; Li Tianyu; Zheng Xiaomei
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Image-based ship target analysis is an important task in the field of ship monitoring. Previous studies have achieved remarkable results in ship detection and recognition tasks. However, these studies mainly rely on unimodal datasets, and no individual ship re-identification dataset has been publicly released, which restricts research on cross-modal individual re-identification of ship targets. To address this issue, we have constructed the first cross-modal ship re-identification dataset, CMShipReID. This dataset contains data from three modalities, namely visible light, near-infrared, and thermal infrared, collected by drones. It covers 10 categories, approximately 138 individual ships, and 8,337 images, providing data support for research on cross-modal individual re-identification of ships. We have tested mainstream re-identification algorithms as performance benchmarks for this dataset, which can serve as a fundamental reference for relevant scholars.
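    As an illustration of how such a benchmark is typically scored (not the dataset's official evaluation code), the sketch below computes rank-1 accuracy for a cross-modality query/gallery split using random stand-in embeddings.

    # Rank-1 accuracy for cross-modality re-identification with stand-in features.
    import numpy as np

    rng = np.random.default_rng(0)
    query_feats = rng.normal(size=(20, 128))      # e.g. thermal-infrared query embeddings
    gallery_feats = rng.normal(size=(100, 128))   # e.g. visible-light gallery embeddings
    query_ids = rng.integers(0, 10, size=20)      # individual-ship identities
    gallery_ids = rng.integers(0, 10, size=100)

    dists = np.linalg.norm(query_feats[:, None, :] - gallery_feats[None, :, :], axis=-1)
    nearest = gallery_ids[np.argmin(dists, axis=1)]            # closest gallery match per query
    rank1 = float(np.mean(nearest == query_ids))
    print(f"rank-1 accuracy: {rank1:.2%}")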

  19. Dataset summary providing data modality, sequencing platform, and number of...

    • plos.figshare.com
    xls
    Updated Jun 10, 2023
    Cite
    Sumeer Ahmad Khan; Robert Lehmann; Xabier Martinez-de-Morentin; Alberto Maillo; Vincenzo Lagani; Narsis A. Kiani; David Gomez-Cabrero; Jesper Tegner (2023). Dataset summary providing data modality, sequencing platform, and number of cells employed for integration after pre-processing. [Dataset]. http://doi.org/10.1371/journal.pone.0281315.t001
    Explore at:
    Available download formats: xls
    Dataset updated
    Jun 10, 2023
    Dataset provided by
    PLOS ONE
    Authors
    Sumeer Ahmad Khan; Robert Lehmann; Xabier Martinez-de-Morentin; Alberto Maillo; Vincenzo Lagani; Narsis A. Kiani; David Gomez-Cabrero; Jesper Tegner
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Dataset summary providing data modality, sequencing platform, and number of cells employed for integration after pre-processing.

  20. Multimodal AI Report

    • marketreportanalytics.com
    doc, pdf, ppt
    Updated Apr 10, 2025
    + more versions
    Cite
    Market Report Analytics (2025). Multimodal AI Report [Dataset]. https://www.marketreportanalytics.com/reports/multimodal-al-75262
    Explore at:
    Available download formats: doc, ppt, pdf
    Dataset updated
    Apr 10, 2025
    Dataset authored and provided by
    Market Report Analytics
    License

    https://www.marketreportanalytics.com/privacy-policy

    Time period covered
    2025 - 2033
    Area covered
    Global
    Variables measured
    Market Size
    Description

    The Multimodal AI market is experiencing rapid growth, driven by the increasing need for sophisticated AI systems capable of understanding and interpreting information from multiple sources simultaneously. This convergence of data modalities—like text, images, audio, and video—enables more nuanced and comprehensive insights, leading to advancements across various sectors. The market's Compound Annual Growth Rate (CAGR) is projected to be robust, reflecting the escalating demand for applications like enhanced customer service via AI-powered chatbots incorporating voice and visual cues, improved fraud detection through multimodal analysis of transactional data and user behavior, and more effective medical diagnostics leveraging image analysis alongside patient history. Key players, including established tech giants like AWS, Microsoft, and Google, alongside innovative startups such as OpenAI and Jina AI, are heavily invested in this space, fostering innovation and competition. The market segmentation reveals significant opportunities across diverse applications, with the BFSI (Banking, Financial Services, and Insurance) and Retail & eCommerce sectors showing particularly strong adoption. Cloud-based deployments dominate, reflecting the scalability and accessibility benefits. While the on-premises segment retains relevance in specific industries demanding high security and control, cloud adoption is expected to accelerate further. Geographic distribution reveals a strong North American presence currently, but rapid growth is anticipated in the Asia-Pacific region, particularly India and China, driven by increasing digitalization and investment in AI technologies. The restraints to market expansion include the high initial investment costs associated with developing and deploying multimodal AI systems, the complexity involved in integrating diverse data sources, and the need for robust data annotation and model training processes. Furthermore, addressing concerns about data privacy and security within the context of multimodal data analysis remains crucial. Despite these challenges, the long-term outlook for the Multimodal AI market remains highly optimistic. As technological advancements reduce deployment costs and improve model efficiency, the accessibility and applicability of multimodal AI will broaden across industries and geographies, fueling further market expansion. The continuous innovation in underlying technologies, coupled with the ever-increasing volume of multimodal data generated across the digital landscape, positions Multimodal AI for sustained and significant growth over the forecast period (2025-2033).
