100+ datasets found
  1. SlimageNet64

    • zenodo.org
    bin, bz2
    Updated Feb 24, 2020
    Cite
    Anonymous (2020). SlimageNet64 [Dataset]. http://doi.org/10.5281/zenodo.3672132
    Explore at:
    Available download formats: bz2, bin
    Dataset updated
    Feb 24, 2020
    Dataset provided by
    Zenodo (http://zenodo.org/)
    Authors
    Anonymous
    Description

    SlimageNet64 is a new variant of ImageNet64×64 (Chrabaszcz et al., 2017); the name is derived from "Slim" and "ImageNet". SlimageNet64 is well suited to few-shot learning, continual learning, and meta-learning research. It consists of 200 instances from each of the 1000 object categories of the ILSVRC-2012 dataset (Krizhevsky et al., 2012; Russakovsky et al., 2015), for a total of 200K RGB images with a resolution of 64 × 64 × 3 pixels. We created this dataset from ImageNet64x64, the downscaled version of ILSVRC-2012 reported in (Chrabaszcz et al., 2017), using box downsampling from the Pillow library.

  2. ViLCo: VIdeo Language COntinual learning Benchmark

    • zenodo.org
    bin, zip
    Updated Jul 2, 2024
    Cite
    Tianqi Tang; Shohreh Deldari; Hao Xue; Celso De Melo; Flora Salim (2024). ViLCo: VIdeo Language COntinual learning Benchmark [Dataset]. http://doi.org/10.5281/zenodo.11560095
    Explore at:
    Available download formats: bin, zip
    Dataset updated
    Jul 2, 2024
    Dataset provided by
    Zenodo (http://zenodo.org/)
    Authors
    Tianqi Tang; Shohreh Deldari; Hao Xue; Celso De Melo; Flora Salim
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    We introduce the first VIdeo Language COntinual learning Benchmark (ViLCo-Bench). Video language continual learning involves continuously adapting to information from video and text inputs, enhancing a model's ability to handle new tasks while retaining prior knowledge. This area is relatively under-explored, and establishing appropriate datasets is crucial for facilitating communication and research in the field. In this study, we present the first dedicated benchmark, ViLCo-Bench, designed to evaluate continual learning models across a range of video-text tasks. The dataset comprises ten-minute-long videos and corresponding language queries collected from publicly available datasets.

    Additionally, we introduce a novel memory-efficient framework that incorporates self-supervised learning and mimics long-term and short-term memory effects. This framework addresses challenges including memory complexity from long video clips, natural language complexity from open queries, and text-video misalignment. We posit that ViLCo-Bench, with greater complexity compared to existing continual learning benchmarks, would serve as a critical tool for exploring the video-language domain, extending beyond conventional class-incremental tasks, and addressing complex and limited annotation issues.

    More detailed information can be found at: https://github.com/cruiseresearchgroup/ViLCo

  3. AbdomenCT-1K: Continual Learning Benchmark

    • data.niaid.nih.gov
    • explore.openaire.eu
    Updated Jan 26, 2022
    Cite
    Liu, Shangqing (2022). AbdomenCT-1K: Continual Learning Benchmark [Dataset]. https://data.niaid.nih.gov/resources?id=zenodo_5903985
    Explore at:
    Dataset updated
    Jan 26, 2022
    Dataset provided by
    Wang, Yunpeng
    Wang, Congcong
    Li, Yuhui
    Zhang, Qi
    An, Xingle
    Gu, Song
    Zhang, Yao
    Wang, Qiyuan
    Liu, Xin
    He, Jian
    Zhang, Yichi
    Ma, Jun
    Liu, Shangqing
    Yang, Xiaoping
    Ge, Cheng
    Cao, Shucheng
    Zhu, Cheng
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    This is the dataset of AbdomenCT-1K: Continual Learning Benchmark.

    Related paper: https://ieeexplore.ieee.org/document/9497733/

    Benchmark homepage: https://abdomenct-1k-continual-learning.grand-challenge.org/

  4. Data_Sheet_1_Privacy-preserving continual learning methods for medical image...

    • frontiersin.figshare.com
    pdf
    Updated Aug 14, 2023
    Cite
    Tanvi Verma; Liyuan Jin; Jun Zhou; Jia Huang; Mingrui Tan; Benjamin Chen Ming Choong; Ting Fang Tan; Fei Gao; Xinxing Xu; Daniel S. Ting; Yong Liu (2023). Data_Sheet_1_Privacy-preserving continual learning methods for medical image classification: a comparative analysis.PDF [Dataset]. http://doi.org/10.3389/fmed.2023.1227515.s001
    Explore at:
    Available download formats: pdf
    Dataset updated
    Aug 14, 2023
    Dataset provided by
    Frontiers
    Authors
    Tanvi Verma; Liyuan Jin; Jun Zhou; Jia Huang; Mingrui Tan; Benjamin Chen Ming Choong; Ting Fang Tan; Fei Gao; Xinxing Xu; Daniel S. Ting; Yong Liu
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Background: The implementation of deep learning models for medical image classification poses significant challenges, including gradual performance degradation and limited adaptability to new diseases. However, frequent retraining of models is unfeasible and raises healthcare privacy concerns due to the retention of prior patient data. To address these issues, this study investigated privacy-preserving continual learning methods as an alternative solution.

    Methods: We evaluated twelve privacy-preserving, non-storage continual learning algorithms based on deep learning models for classifying retinal diseases from public optical coherence tomography (OCT) images in a class-incremental learning scenario. The OCT dataset comprises 108,309 images across four classes: normal (47.21%), drusen (7.96%), choroidal neovascularization (CNV) (34.35%), and diabetic macular edema (DME) (10.48%), with 250 testing images per class. For continual training, the first task involved the CNV and normal classes, the second task the DME class, and the third task the drusen class. All selected algorithms were further tested with different training-sequence combinations, and the final model's average class accuracy was measured. Performance was compared against a joint model obtained through retraining and against the original finetuned model trained without continual learning algorithms. Additionally, a publicly available medical dataset for colon cancer detection based on histology slides was selected as a proof of concept, while the CIFAR10 dataset was included as a continual learning benchmark.

    Results: Among the continual learning algorithms, Brain-inspired replay (BIR) outperformed the others in the continual learning-based classification of retinal diseases from OCT images, achieving an accuracy of 62.00% (95% confidence interval: 59.36-64.64%), with consistent top performance across different training sequences. For colon cancer histology classification, Efficient Feature Transformations (EFT) attained the highest accuracy of 66.82% (95% confidence interval: 64.23-69.42%). In comparison, the joint model achieved accuracies of 90.76% and 89.28%, respectively. The finetuned model demonstrated catastrophic forgetting on both datasets.

    Conclusion: Although the joint retraining model exhibited superior performance, continual learning holds promise for mitigating catastrophic forgetting and facilitating continual model updates while preserving privacy in healthcare deep learning models. Thus, it presents a highly promising solution for the long-term clinical deployment of such models.
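
    As a rough illustration of the protocol described above (task and class names are taken from the description; the helper itself is our own sketch, not the study's code), the class-incremental sequence and the average-class-accuracy metric can be written as:

```python
# Task sequence from the description: task 1 = CNV + normal,
# task 2 = DME, task 3 = drusen (250 test images per class).
TASK_SEQUENCE = [["CNV", "normal"], ["DME"], ["drusen"]]

def average_class_accuracy(correct: dict, total: dict) -> float:
    # Mean of per-class accuracies, so every class weighs equally
    # regardless of its share of the (imbalanced) training data.
    return sum(correct[c] / total[c] for c in total) / len(total)
```

    Averaging per class rather than per image prevents the dominant classes (normal, CNV) from masking forgetting of the rarer ones.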

  5. Data from: Tiny Robotics Dataset and Benchmark for Continual Object...

    • zenodo.org
    • data.niaid.nih.gov
    zip
    Updated Sep 25, 2024
    Cite
    Pasti Francesco (2024). Tiny Robotics Dataset and Benchmark for Continual Object Detection [Dataset]. http://doi.org/10.5281/zenodo.13834550
    Explore at:
    Available download formats: zip
    Dataset updated
    Sep 25, 2024
    Dataset provided by
    Zenodo (http://zenodo.org/)
    Authors
    Pasti Francesco
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Dataset for TiROD: Tiny Robotics Dataset and Benchmark for Continual Object Detection

    Official Website -> https://pastifra.github.io/TiROD/

    Code -> https://github.com/pastifra/TiROD_code

    Video -> https://www.youtube.com/watch?v=e76m3ol1i4I

    Paper -> https://arxiv.org/abs/2409.16215

  6. CLRS

    • huggingface.co
    Updated Apr 27, 2023
    Cite
    Jonathan Roberts (2023). CLRS [Dataset]. https://huggingface.co/datasets/jonathan-roberts1/CLRS
    Explore at:
    Croissant (a format for machine-learning datasets; learn more at mlcommons.org/croissant)
    Dataset updated
    Apr 27, 2023
    Authors
    Jonathan Roberts
    License

    https://choosealicense.com/licenses/other/

    Description

    Dataset Card for "CLRS"

      Licensing Information
    

    For academic purposes.

      Citation Information
    

    CLRS: Continual Learning Benchmark for Remote Sensing Image Scene Classification. @article{s20041226, title = {CLRS: Continual Learning Benchmark for Remote Sensing Image Scene Classification}, author = {Li, Haifeng and Jiang, Hao and Gu, Xin and Peng, Jian and Li, Wenbo and Hong, Liang and Tao, Chao}, year = 2020, journal = {Sensors}… See the full description on the dataset page: https://huggingface.co/datasets/jonathan-roberts1/CLRS.

  7. Data from: Continual Learning for Segment Anything Model Adaptation

    • ieee-dataport.org
    Updated Mar 31, 2025
    Cite
    Jinglong Yang (2025). Continual Learning for Segment Anything Model Adaptation [Dataset]. https://ieee-dataport.org/documents/continual-learning-segment-anything-model-adaptation
    Explore at:
    Dataset updated
    Mar 31, 2025
    Authors
    Jinglong Yang
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    medical imaging

  8. Data_Sheet_1_Avoiding Catastrophe: Active Dendrites Enable Multi-Task...

    • frontiersin.figshare.com
    pdf
    Updated May 31, 2023
    Cite
    Abhiram Iyer; Karan Grewal; Akash Velu; Lucas Oliveira Souza; Jeremy Forest; Subutai Ahmad (2023). Data_Sheet_1_Avoiding Catastrophe: Active Dendrites Enable Multi-Task Learning in Dynamic Environments.pdf [Dataset]. http://doi.org/10.3389/fnbot.2022.846219.s001
    Explore at:
    Available download formats: pdf
    Dataset updated
    May 31, 2023
    Dataset provided by
    Frontiers
    Authors
    Abhiram Iyer; Karan Grewal; Akash Velu; Lucas Oliveira Souza; Jeremy Forest; Subutai Ahmad
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    A key challenge for AI is to build embodied systems that operate in dynamically changing environments. Such systems must adapt to changing task contexts and learn continuously. Although standard deep learning systems achieve state-of-the-art results on static benchmarks, they often struggle in dynamic scenarios. In these settings, error signals from multiple contexts can interfere with one another, ultimately leading to a phenomenon known as catastrophic forgetting. In this article, we investigate biologically inspired architectures as solutions to these problems. Specifically, we show that the biophysical properties of dendrites and local inhibitory systems enable networks to dynamically restrict and route information in a context-specific manner. Our key contributions are as follows: first, we propose a novel artificial neural network architecture that incorporates active dendrites and sparse representations into the standard deep learning framework. Next, we study the performance of this architecture on two separate benchmarks requiring task-based adaptation: Meta-World, a multi-task reinforcement learning environment where a robotic agent must learn to solve a variety of manipulation tasks simultaneously; and a continual learning benchmark in which the model's prediction task changes throughout training. Analysis of both benchmarks demonstrates the emergence of overlapping but distinct and sparse subnetworks, allowing the system to fluidly learn multiple tasks with minimal forgetting. Our neural implementation marks the first time a single architecture has achieved competitive results in both multi-task and continual learning settings. Our research sheds light on how biological properties of neurons can inform deep learning systems to address dynamic scenarios that are typically impossible for traditional ANNs to solve.
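
    A minimal sketch of the gating idea described above (our own simplification, not the paper's implementation): each unit's feedforward activation is modulated by its best-matching dendritic segment applied to a context vector, and a k-winner-take-all step keeps the representation sparse.

```python
import math

def active_dendrites_layer(x, context, W, b, segments, k):
    # Feedforward pass: pre-activation of each unit.
    pre = [sum(wij * xj for wij, xj in zip(W[i], x)) + b[i] for i in range(len(W))]
    gated = []
    for i in range(len(W)):
        # Each unit picks its best-matching dendritic segment for this context...
        best = max(sum(sj * cj for sj, cj in zip(seg, context)) for seg in segments[i])
        # ...and the segment's (sigmoid) response gates the unit's activation.
        gated.append(pre[i] * (1.0 / (1.0 + math.exp(-best))))
    # k-winner-take-all: keep the k strongest units, silence the rest.
    winners = set(sorted(range(len(W)), key=lambda i: gated[i], reverse=True)[:k])
    return [g if i in winners else 0.0 for i, g in enumerate(gated)]
```

    Because different contexts excite different segments, different sparse subnetworks become active per task, which is the mechanism the paper credits for reduced interference.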

  9. ViLCo Dataset

    • library.toponeai.link
    • paperswithcode.com
    Updated Apr 29, 2025
    Cite
    Tianqi Tang; Shohreh Deldari; Hao Xue; Celso de Melo; Flora D. Salim (2025). ViLCo Dataset [Dataset]. https://library.toponeai.link/dataset/vilco
    Explore at:
    Dataset updated
    Apr 29, 2025
    Authors
    Tianqi Tang; Shohreh Deldari; Hao Xue; Celso de Melo; Flora D. Salim
    Description

    We propose the first standardized benchmark in multimodal continual learning for video data, defining protocols for training and metrics for evaluation. This standardized framework allows researchers to effectively compare models, driving advancements in AI systems that can continuously learn from diverse data sources.

    We define the setup for three recent multimodal tasks in a continual learning setting: Moment Query (MQ), Natural Language Query (NLQ), and Visual Query (VQ). We also provide systematic insights into the challenges, gaps, and limitations of each video-text continual learning task.

  10. Multi-Label Continual Learning for Medical Imaging: A Novel Benchmark

    • service.tib.eu
    Updated Dec 17, 2024
    Cite
    Marina Ceccon; Alessandro Fabris; Davide Dalle Pezze; Gian Antonio Susto (2024). Multi-Label Continual Learning for Medical Imaging: A Novel Benchmark [Dataset]. https://doi.org/10.57702/zg6g3g3y. https://service.tib.eu/ldmservice/dataset/multi-label-continual-learning-for-medical-imaging--a-novel-benchmark
    Explore at:
    Dataset updated
    Dec 17, 2024
    Description

    A novel benchmark for multi-label image classification in medical imaging, combining new classes and domains into a challenging scenario.

  11. MLLM-CL

    • huggingface.co
    Updated May 29, 2025
    Cite
    Fei Zhu (2025). MLLM-CL [Dataset]. https://huggingface.co/datasets/Impression2805/MLLM-CL
    Explore at:
    Dataset updated
    May 29, 2025
    Authors
    Fei Zhu
    License

    Apache License, v2.0: https://www.apache.org/licenses/LICENSE-2.0
    License information was derived automatically

    Description

    MLLM-CL Benchmark Description

    MLLM-CL is a novel benchmark encompassing domain and ability continual learning, where the former focuses on independently and identically distributed (IID) evaluation across evolving mainstream domains, whereas the latter evaluates non-IID scenarios with emerging model abilities. For more details, please refer to: MLLM-CL: Continual Learning for Multimodal Large Language Models [paper]. Hongbo Zhao, Fei Zhu, Rundong Wang, Gaofeng Meng, Zhaoxiang… See the full description on the dataset page: https://huggingface.co/datasets/Impression2805/MLLM-CL.

  12. Data_Sheet_1_Examining the Use of Temporal-Difference Incremental...

    • frontiersin.figshare.com
    txt
    Updated May 31, 2023
    Cite
    Johannes Günther; Nadia M. Ady; Alex Kearney; Michael R. Dawson; Patrick M. Pilarski (2023). Data_Sheet_1_Examining the Use of Temporal-Difference Incremental Delta-Bar-Delta for Real-World Predictive Knowledge Architectures.CSV [Dataset]. http://doi.org/10.3389/frobt.2020.00034.s001
    Explore at:
    Available download formats: txt
    Dataset updated
    May 31, 2023
    Dataset provided by
    Frontiers
    Authors
    Johannes Günther; Nadia M. Ady; Alex Kearney; Michael R. Dawson; Patrick M. Pilarski
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Predictions and predictive knowledge have seen recent success in improving not only robot control but also other applications ranging from industrial process control to rehabilitation. A property that makes these predictive approaches well-suited for robotics is that they can be learned online and incrementally through interaction with the environment. However, a remaining challenge for many prediction-learning approaches is an appropriate choice of prediction-learning parameters, especially parameters that control the magnitude of a learning machine's updates to its predictions (the learning rates or step sizes). Typically, these parameters are chosen based on an extensive parameter search, an approach that neither scales well nor is well-suited for tasks that require changing step sizes due to non-stationarity. To begin to address this challenge, we examine the use of online step-size adaptation using the Modular Prosthetic Limb: a sensor-rich robotic arm intended for use by persons with amputations. Our method of choice, Temporal-Difference Incremental Delta-Bar-Delta (TIDBD), learns and adapts step sizes on a feature level; importantly, TIDBD allows step-size tuning and representation learning to occur at the same time. As a first contribution, we show that TIDBD is a practical alternative to classic Temporal-Difference (TD) learning tuned via an extensive parameter search. Both approaches perform comparably in terms of predicting future aspects of a robotic data stream, but TD only achieves comparable performance with a carefully hand-tuned learning rate, while TIDBD uses a robust meta-parameter and tunes its own learning rates. Secondly, our results show that for this particular application TIDBD allows the system to automatically detect patterns characteristic of sensor failures common to a number of robotic applications.
As a third contribution, we investigate the sensitivity of classic TD and TIDBD with respect to the initial step-size values on our robotic data set, reaffirming the robustness of TIDBD as shown in previous papers. Together, these results promise to improve the ability of robotic devices to learn from interactions with their environments in a robust way, providing key capabilities for autonomous agents and robots.
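
    For intuition, here is a simplified sketch of the core idea: TD(0) with IDBD-style per-feature step-size adaptation. It omits eligibility traces, the parameter names and defaults are ours, and it is not the authors' code:

```python
import math

def tidbd_td0(features, rewards, gamma=0.0, theta=0.01, init_alpha=0.05):
    # Linear TD(0) whose per-feature step sizes are themselves learned online.
    n = len(features[0])
    w = [0.0] * n                      # value-function weights
    beta = [math.log(init_alpha)] * n  # log step sizes, adapted online
    h = [0.0] * n                      # memory of recent correlated updates
    for t in range(len(features) - 1):
        x, x_next, r = features[t], features[t + 1], rewards[t]
        v = sum(wi * xi for wi, xi in zip(w, x))
        v_next = sum(wi * xi for wi, xi in zip(w, x_next))
        delta = r + gamma * v_next - v  # TD error
        for i in range(n):
            # Meta step: grow a feature's step size when successive
            # updates for that feature are correlated, shrink otherwise.
            beta[i] += theta * delta * x[i] * h[i]
            alpha = math.exp(beta[i])
            w[i] += alpha * delta * x[i]
            h[i] = h[i] * max(0.0, 1.0 - alpha * x[i] * x[i]) + alpha * delta * x[i]
    return w
```

    The single meta-parameter theta replaces the per-problem search over learning rates that the paragraph above describes.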

  13. Continual World

    • opendatalab.com
    • paperswithcode.com
    zip
    Updated Mar 24, 2023
    Cite
    Polish Academy of Sciences (2023). Continual World [Dataset]. https://opendatalab.com/OpenDataLab/Continual_World
    Explore at:
    Available download formats: zip
    Dataset updated
    Mar 24, 2023
    Dataset provided by
    Jagiellonian University
    University of Oxford
    Polish Academy of Sciences
    DeepMind
    Description

    Continual World is a benchmark consisting of realistic and meaningfully diverse robotic tasks built on top of Meta-World as a testbed.

  14. MADAR

    • huggingface.co
    Cite
    Intelligent and Quantum Secure Advanced Cyber Defense Research (IQSeC) Lab, MADAR [Dataset]. http://doi.org/10.57967/hf/5859
    Explore at:
    Dataset authored and provided by
    Intelligent and Quantum Secure Advanced Cyber Defense Research (IQSeC) Lab
    License

    MIT License: https://opensource.org/licenses/MIT
    License information was derived automatically

    Description

    MADAR: Efficient Continual Learning for Malware Analysis with Diversity-Aware Replay

    This dataset is released in support of the paper:

    MADAR: Efficient Continual Learning for Malware Analysis with Diversity-Aware Replay. Mohammad Saidur Rahman, Scott Coull, Qi Yu, Matthew Wright. arXiv preprint arXiv:2502.05760, 2025.

    MADAR is a benchmark suite for evaluating continual learning methods in malware classification. It includes realistic data distribution shifts and supports scenarios such… See the full description on the dataset page: https://huggingface.co/datasets/IQSeC-Lab/MADAR.

  15. Synthetic performance data for the KVLCC2

    • data.dtu.dk
    bin
    Updated Feb 1, 2023
    Cite
    Malte Mittendorf; Ulrik Dam Nielsen; Harry B. Bingham (2023). Synthetic performance data for the KVLCC2 [Dataset]. http://doi.org/10.11583/DTU.21750257.v1
    Explore at:
    Available download formats: bin
    Dataset updated
    Feb 1, 2023
    Dataset provided by
    Technical University of Denmark
    Authors
    Malte Mittendorf; Ulrik Dam Nielsen; Harry B. Bingham
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    The data comprise two concept-drift scenarios: sudden (SCD) and incremental (ICD) concept drift. Moreover, three datasets per scenario are available, each exhibiting different sources of uncertainty: dataset A has negligible degrees of epistemic and aleatoric uncertainty, whereas B introduces epistemic and C aleatoric uncertainty. The individual columns are: speed, draft (fore and aft), significant wave height, peak period, relative wave direction, relative wind speed, relative wind direction, shaft torque, power and rpm, water temperature, and finally the biofouling scenario. The columns include titles with the corresponding units. The files are saved in the binary Feather format for efficiency reasons. For more information, please consult the readme file or the upcoming paper.

  16. CORe50 Dataset

    • paperswithcode.com
    Updated Oct 5, 2022
    Cite
    Vincenzo Lomonaco; Davide Maltoni (2022). CORe50 Dataset [Dataset]. https://paperswithcode.com/dataset/core50
    Explore at:
    Dataset updated
    Oct 5, 2022
    Authors
    Vincenzo Lomonaco; Davide Maltoni
    Description

    CORe50 is a dataset designed for assessing Continual Learning techniques in an Object Recognition context.

  17. Meta-Album Dataset

    • paperswithcode.com
    Cite
    Ihsan Ullah; Dustin Carrión-Ojeda; Sergio Escalera; Isabelle Guyon; Mike Huisman; Felix Mohr; Jan N van Rijn; Haozhe Sun; Joaquin Vanschoren; Phan Anh Vu, Meta-Album Dataset [Dataset]. https://paperswithcode.com/dataset/meta-album
    Explore at:
    Authors
    Ihsan Ullah; Dustin Carrión-Ojeda; Sergio Escalera; Isabelle Guyon; Mike Huisman; Felix Mohr; Jan N van Rijn; Haozhe Sun; Joaquin Vanschoren; Phan Anh Vu
    Description

    Meta-Album is a meta-dataset created for few-shot learning, meta-learning, continual learning, and related research. It consists of 40 datasets from 10 unique domains, arranged in sets of 10 datasets (one from each domain). It is a continuously growing meta-dataset.

    We repurposed datasets that were generously made available by original creators. All datasets are free for use for academic purposes, provided that proper credits are given. For your convenience, you may cite our paper, which references all original creators.

    Meta-Album is released under a CC BY-NC 4.0 license permitting non-commercial use for research purposes, provided that you cite us. Additionally, redistributed datasets have their own license.

    The recommended use of Meta-Album is to conduct fundamental research on machine learning algorithms and conduct benchmarks, particularly in: few-shot learning, meta-learning, continual learning, transfer learning, and image classification.

  18. CL-MASR

    • data.niaid.nih.gov
    Updated Jun 28, 2023
    Cite
    Salah Zaiem (2023). CL-MASR [Dataset]. https://data.niaid.nih.gov/resources?id=zenodo_8065753
    Explore at:
    Dataset updated
    Jun 28, 2023
    Dataset provided by
    Mirco Ravanelli
    Cem Subakan
    Luca Della Libera
    Salah Zaiem
    Pooneh Mousavi
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    CL-MASR Dataset

    This is the dataset used in the continual learning for multilingual ASR (CL-MASR) benchmark. It is composed of speech recordings from 20 languages selected from the Common Voice 13 dataset. For each language, it includes up to 10/1/1 hours for train/dev/test, respectively.

    The CL-MASR benchmark platform is available in the SpeechBrain toolkit (see recipes/CommonVoice): https://github.com/speechbrain/speechbrain

    The original Common Voice 13 data are available at: https://commonvoice.mozilla.org/en/datasets

    List of Languages

    • English (en)
    • Chinese (zh-CN)
    • German (de)
    • Spanish (es)
    • Russian (ru)
    • French (fr)
    • Portuguese (pt)
    • Japanese (ja)
    • Turkish (tr)
    • Polish (pl)
    • Kinyarwanda (rw)
    • Esperanto (eo)
    • Kabyle (kab)
    • Luganda (lg)
    • Meadow Mari (mhr)
    • Central Kurdish (ckb)
    • Abkhaz (ab)
    • Kurmanji Kurdish (kmr)
    • Frisian (fy-NL)
    • Interlingua (ia)

  19. Adaptive Learning Tools Market Report | Global Forecast From 2025 To 2033

    • dataintelo.com
    csv, pdf, pptx
    Updated Oct 5, 2024
    Cite
    Dataintelo (2024). Adaptive Learning Tools Market Report | Global Forecast From 2025 To 2033 [Dataset]. https://dataintelo.com/report/adaptive-learning-tools-market
    Explore at:
    Available download formats: pptx, csv, pdf
    Dataset updated
    Oct 5, 2024
    Authors
    Dataintelo
    License

    https://dataintelo.com/privacy-and-policy

    Time period covered
    2024 - 2032
    Area covered
    Global
    Description

    Adaptive Learning Tools Market Outlook



    The global adaptive learning tools market size was valued at approximately USD 2.5 billion in 2023 and is projected to reach around USD 10.9 billion by 2032, growing at a robust CAGR of 17.8% during the forecast period. This growth is driven by an increasing emphasis on personalized learning experiences and the rising adoption of advanced educational technologies across various sectors.



    One of the primary growth factors for the adaptive learning tools market is the increasing demand for personalized learning experiences. As educational institutions and enterprises recognize the diverse learning needs of students and employees, they are increasingly adopting adaptive learning technologies that can tailor educational content to meet individual needs. This personalized approach enhances learning efficiency and outcomes, which drives the adoption of adaptive learning tools. Furthermore, advancements in artificial intelligence and machine learning technologies have significantly boosted the capabilities of adaptive learning systems, enabling them to provide more accurate and effective learning pathways.



    Another crucial factor contributing to market growth is the growing need for lifelong learning and continuous skill development. With the rapid pace of technological advancements and evolving job market requirements, there is a heightened need for individuals to continually upgrade their skills. Adaptive learning tools offer a flexible and efficient solution for ongoing learning and professional development. Corporate training programs, in particular, are increasingly leveraging these tools to provide customized training experiences that align with the specific needs of their workforce, thereby enhancing productivity and performance.



    The proliferation of digital learning platforms and the increasing accessibility of online education are also significant drivers for the adaptive learning tools market. The COVID-19 pandemic has accelerated the shift towards online learning, with a substantial number of educational institutions and enterprises adopting digital learning solutions. Adaptive learning tools, integrated into these digital platforms, have gained substantial traction as they offer an interactive and engaging learning experience while addressing the unique learning requirements of each user. This trend is expected to continue post-pandemic, further fueling market growth.



    Regionally, North America holds a significant share of the adaptive learning tools market, driven by substantial investments in educational technologies and the presence of key market players. The region's highly developed education infrastructure and strong focus on innovation also contribute to market growth. Additionally, Europe and Asia Pacific are anticipated to exhibit significant growth rates over the forecast period, with increasing government initiatives to enhance digital education and the rising adoption of e-learning solutions in these regions.



    Component Analysis



    The adaptive learning tools market is segmented by component into software and services. The software segment dominates the market, primarily due to the growing adoption of advanced adaptive learning platforms and applications. These software solutions leverage artificial intelligence and machine learning algorithms to provide personalized learning pathways, assessments, and feedback. The integration of analytics and data-driven insights into these platforms enhances their effectiveness, making them a preferred choice for educational institutions and enterprises alike. Continuous innovations in software development, including the incorporation of immersive technologies like virtual and augmented reality, are further driving the growth of this segment.



    The services segment, although smaller than the software segment, is witnessing considerable growth due to the increasing demand for implementation, training, and support services. As organizations and institutions adopt adaptive learning tools, they require comprehensive services to ensure the smooth integration and effective utilization of these technologies. Professional services, such as consulting, system integration, and custom content development, play a crucial role in optimizing the deployment of adaptive learning solutions. Additionally, ongoing support and maintenance services are essential for addressing technical issues and ensuring the continued performance of adaptive learning systems.



    The services segment also includes managed services, which provide a holistic approach to managing adaptive learning environments.

  20. Supplementary information files for Context meta-reinforcement learning via neuromodulation

    • repository.lboro.ac.uk
    pdf
    Updated Jun 28, 2023
    Cite
    Eseoghene Ben-Iwhiwhu; Jeffery Dick; Nicholas A Ketz; Praveen K Pilly; Andrea Soltoggio (2023). Supplementary information files for Context meta-reinforcement learning via neuromodulation [Dataset]. http://doi.org/10.17028/rd.lboro.23592483.v1
    Explore at:
    pdfAvailable download formats
    Dataset updated
    Jun 28, 2023
    Dataset provided by
    Loughborough University
    Authors
    Eseoghene Ben-Iwhiwhu; Jeffery Dick; Nicholas A Ketz; Praveen K Pilly; Andrea Soltoggio
    License

    Attribution 4.0 (CC BY 4.0)https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Supplementary files for the article "Context meta-reinforcement learning via neuromodulation". Meta-reinforcement learning (meta-RL) algorithms enable agents to adapt quickly to new tasks from few samples in dynamic environments. Such a feat is achieved through dynamic representations in an agent's policy network (obtained via reasoning about task context, model parameter updates, or both). However, obtaining rich dynamic representations for fast adaptation beyond simple benchmark problems is challenging due to the burden placed on the policy network to accommodate different policies. This paper addresses the challenge by introducing neuromodulation as a modular component that augments a standard policy network and regulates neuronal activities in order to produce efficient dynamic representations for task adaptation. The proposed extension to the policy network is evaluated across multiple discrete and continuous control environments of increasing complexity. To demonstrate the generality and benefits of the extension in meta-RL, the neuromodulated network was applied to two state-of-the-art meta-RL algorithms (CAVIA and PEARL). The results show that meta-RL augmented with neuromodulation produces significantly better results and richer dynamic representations than the baselines.
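    The description above speaks of a neuromodulatory component that regulates neuronal activities in the policy network, so that the same observation can map to different behavior depending on task context. A minimal NumPy sketch of that gating idea follows; the class name, layer sizes, and sigmoid gating scheme are illustrative assumptions, not the paper's actual architecture:

    ```python
    import numpy as np

    def sigmoid(x):
        return 1.0 / (1.0 + np.exp(-x))

    class NeuromodulatedPolicy:
        """Toy policy MLP whose hidden activations are multiplicatively
        gated by a separate neuromodulatory pathway driven by a task
        context vector (a sketch, not the paper's architecture)."""

        def __init__(self, obs_dim, ctx_dim, hidden_dim, act_dim, seed=0):
            rng = np.random.default_rng(seed)
            self.W_h = rng.normal(0.0, 0.5, (hidden_dim, obs_dim))  # standard pathway
            self.W_m = rng.normal(0.0, 0.5, (hidden_dim, ctx_dim))  # neuromodulatory pathway
            self.W_o = rng.normal(0.0, 0.5, (act_dim, hidden_dim))  # output head

        def forward(self, obs, ctx):
            h = np.tanh(self.W_h @ obs)         # ordinary hidden activations
            g = sigmoid(self.W_m @ ctx)         # per-neuron gates from context
            return np.tanh(self.W_o @ (g * h))  # modulated activities -> action

    policy = NeuromodulatedPolicy(obs_dim=4, ctx_dim=2, hidden_dim=8, act_dim=2)
    obs = np.ones(4)
    a1 = policy.forward(obs, ctx=np.array([1.0, 0.0]))
    a2 = policy.forward(obs, ctx=np.array([0.0, 1.0]))
    # same observation, different task context -> different action output
    ```

    The design point the sketch captures is that the context only reshapes how the fixed policy weights are used, rather than requiring separate weights per task.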

