SlimageNet64 is a new variant of ImageNet64×64 (Chrabaszcz et al., 2017); the name derives from Slim and ImageNet. SlimageNet64 is ideal for few-shot learning, continual learning, and meta-learning research. It consists of 200 instances of each of the 1000 object categories in the ILSVRC-2012 dataset (Krizhevsky et al., 2012; Russakovsky et al., 2015), for a total of 200K RGB images at a resolution of 64 × 64 × 3 pixels. We created this dataset from ImageNet64x64, the downscaled version of ILSVRC-2012 reported by Chrabaszcz et al. (2017), using box downsampling from the Pillow library.
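Box downsampling, as applied by Pillow's `Image.resize(..., resample=Image.Resampling.BOX)`, simply averages each block of source pixels. A pure-Python sketch of the operation for a single grayscale channel, for illustration only (real preprocessing would use Pillow itself):

```python
def box_downsample(pixels, factor):
    """Downsample a 2D grid of pixel values by averaging factor x factor blocks."""
    h, w = len(pixels), len(pixels[0])
    out = []
    for i in range(0, h, factor):
        row = []
        for j in range(0, w, factor):
            block = [pixels[i + di][j + dj]
                     for di in range(factor) for dj in range(factor)]
            row.append(sum(block) / len(block))
        out.append(row)
    return out

# A 4x4 grid reduced to 2x2 by averaging each 2x2 block.
grid = [
    [0, 0, 8, 8],
    [0, 0, 8, 8],
    [2, 2, 4, 4],
    [2, 2, 4, 4],
]
small = box_downsample(grid, 2)  # [[0.0, 8.0], [2.0, 4.0]]
```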
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
We introduce the first VIdeo Language COntinual learning Benchmark (ViLCo-Bench). Video-language continual learning involves continuously adapting to information from video and text inputs, enhancing a model's ability to handle new tasks while retaining prior knowledge. This area is relatively under-explored, and establishing appropriate datasets is crucial for facilitating communication and research in the field. In this study, we present the first dedicated benchmark, ViLCo-Bench, designed to evaluate continual learning models across a range of video-text tasks. The dataset comprises ten-minute-long videos and corresponding language queries collected from publicly available datasets.
Additionally, we introduce a novel memory-efficient framework that incorporates self-supervised learning and mimics long-term and short-term memory effects. This framework addresses challenges including memory complexity from long video clips, natural language complexity from open queries, and text-video misalignment. We posit that ViLCo-Bench, with greater complexity compared to existing continual learning benchmarks, would serve as a critical tool for exploring the video-language domain, extending beyond conventional class-incremental tasks, and addressing complex and limited annotation issues.
More detailed information can be found at: https://github.com/cruiseresearchgroup/ViLCo
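The long-/short-term memory design described above could, for example, be realized as a two-tier replay buffer: a small FIFO of recent clips plus a reservoir-sampled store of older ones. This is a hypothetical sketch, not the authors' implementation; the class name `TwoTierMemory` and the reservoir-sampling choice are assumptions:

```python
import random

class TwoTierMemory:
    """Hypothetical two-tier replay memory mimicking short- and long-term effects."""

    def __init__(self, short_cap, long_cap, seed=0):
        self.short = []            # FIFO of the most recent items (short-term)
        self.long = []             # uniform reservoir sample of older items (long-term)
        self.short_cap = short_cap
        self.long_cap = long_cap
        self.seen = 0              # number of items evicted into the long-term stream
        self.rng = random.Random(seed)

    def add(self, item):
        # Short-term: keep only the most recent `short_cap` items.
        self.short.append(item)
        if len(self.short) > self.short_cap:
            evicted = self.short.pop(0)
            # Long-term: reservoir sampling (Algorithm R) keeps a uniform
            # sample of everything that has aged out of the short-term buffer.
            self.seen += 1
            if len(self.long) < self.long_cap:
                self.long.append(evicted)
            else:
                k = self.rng.randrange(self.seen)
                if k < self.long_cap:
                    self.long[k] = evicted

    def sample(self, n):
        """Draw a replay mini-batch mixing recent and old items."""
        pool = self.short + self.long
        return self.rng.sample(pool, min(n, len(pool)))
```

The reservoir keeps the long-term store bounded regardless of stream length, which is one way to address the memory complexity of long video clips.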
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
This is the dataset of AbdomenCT-1K: Continual Learning Benchmark.
Related paper: https://ieeexplore.ieee.org/document/9497733/
Benchmark homepage: https://abdomenct-1k-continual-learning.grand-challenge.org/
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Background: The implementation of deep learning models for medical image classification poses significant challenges, including gradual performance degradation and limited adaptability to new diseases. However, frequent retraining of models is infeasible and raises healthcare privacy concerns due to the retention of prior patient data. To address these issues, this study investigated privacy-preserving continual learning methods as an alternative solution.
Methods: We evaluated twelve privacy-preserving, non-storage continual learning algorithms with deep learning models for classifying retinal diseases from public optical coherence tomography (OCT) images in a class-incremental learning scenario. The OCT dataset comprises 108,309 OCT images. Its classes are normal (47.21%), drusen (7.96%), choroidal neovascularization (CNV) (34.35%), and diabetic macular edema (DME) (10.48%). Each class contained 250 testing images. For continual training, the first task involved the CNV and normal classes, the second task the DME class, and the third task the drusen class. All selected algorithms were further tested with different training-sequence combinations. The final model's average class accuracy was measured. Performance was compared against a joint model obtained through retraining and an original finetuned model without continual learning algorithms. Additionally, a publicly available medical dataset for colon cancer detection based on histology slides was selected as a proof of concept, and the CIFAR10 dataset was included as a continual learning benchmark.
Results: Among the continual learning algorithms, Brain-Inspired Replay (BIR) outperformed the others in the continual learning-based classification of retinal diseases from OCT images, achieving an accuracy of 62.00% (95% confidence interval: 59.36-64.64%), with consistent top performance across different training sequences. For colon cancer histology classification, Efficient Feature Transformations (EFT) attained the highest accuracy of 66.82% (95% confidence interval: 64.23-69.42%). In comparison, the joint model achieved accuracies of 90.76% and 89.28%, respectively. The finetuned model exhibited catastrophic forgetting on both datasets.
Conclusion: Although the joint retraining model exhibited superior performance, continual learning holds promise for mitigating catastrophic forgetting and facilitating continual model updates while preserving privacy in healthcare deep learning models. It thus presents a highly promising solution for the long-term clinical deployment of such models.
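The three-task class-incremental split and the average-class-accuracy metric described above can be sketched as follows; the per-class correct counts below are invented purely for illustration:

```python
# Class-incremental protocol from the study: classes arrive in three tasks,
# and the final model is scored by average per-class accuracy.
tasks = [
    ["CNV", "normal"],   # task 1
    ["DME"],             # task 2
    ["drusen"],          # task 3
]

def average_class_accuracy(per_class_correct, per_class_total):
    """Mean of per-class accuracies; each class weighs equally (250 test images each)."""
    accs = [per_class_correct[c] / per_class_total[c] for c in per_class_total]
    return sum(accs) / len(accs)

seen = [c for task in tasks for c in task]
correct = {"CNV": 200, "normal": 180, "DME": 120, "drusen": 100}  # hypothetical counts
total = {c: 250 for c in seen}
acc = average_class_accuracy(correct, total)  # (0.8 + 0.72 + 0.48 + 0.4) / 4 = 0.6
```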
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Dataset for TiROD: Tiny Robotics Dataset and Benchmark for Continual Object Detection
Official Website -> https://pastifra.github.io/TiROD/
Code -> https://github.com/pastifra/TiROD_code
Video -> https://www.youtube.com/watch?v=e76m3ol1i4I
Paper -> https://arxiv.org/abs/2409.16215
https://choosealicense.com/licenses/other/
Dataset Card for "CLRS"
Licensing Information
For academic purposes.
Citation Information
CLRS: Continual Learning Benchmark for Remote Sensing Image Scene Classification
@article{s20041226,
  title   = {CLRS: Continual Learning Benchmark for Remote Sensing Image Scene Classification},
  author  = {Li, Haifeng and Jiang, Hao and Gu, Xin and Peng, Jian and Li, Wenbo and Hong, Liang and Tao, Chao},
  year    = 2020,
  journal = {Sensors}
}
… See the full description on the dataset page: https://huggingface.co/datasets/jonathan-roberts1/CLRS.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
medical imaging
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
A key challenge for AI is to build embodied systems that operate in dynamically changing environments. Such systems must adapt to changing task contexts and learn continuously. Although standard deep learning systems achieve state-of-the-art results on static benchmarks, they often struggle in dynamic scenarios. In these settings, error signals from multiple contexts can interfere with one another, ultimately leading to a phenomenon known as catastrophic forgetting. In this article, we investigate biologically inspired architectures as solutions to these problems. Specifically, we show that the biophysical properties of dendrites and local inhibitory systems enable networks to dynamically restrict and route information in a context-specific manner. Our key contributions are as follows: first, we propose a novel artificial neural network architecture that incorporates active dendrites and sparse representations into the standard deep learning framework. Next, we study the performance of this architecture on two separate benchmarks requiring task-based adaptation: Meta-World, a multi-task reinforcement learning environment where a robotic agent must learn to solve a variety of manipulation tasks simultaneously; and a continual learning benchmark in which the model's prediction task changes throughout training. Analysis of both benchmarks demonstrates the emergence of overlapping but distinct and sparse subnetworks, allowing the system to fluidly learn multiple tasks with minimal forgetting. Our neural implementation marks the first time a single architecture has achieved competitive results in both multi-task and continual learning settings. Our research sheds light on how biological properties of neurons can inform deep learning systems to address dynamic scenarios that are typically impossible for traditional ANNs to solve.
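The two mechanisms named above, dendritic gating of feedforward activations and sparsity via k-winners-take-all, can be sketched in a greatly simplified form; this is an illustration of the general idea, not the paper's exact architecture:

```python
import math

def dendritic_gate(feedforward, context, segments):
    """Modulate each unit's feedforward activation by its best-matching
    dendritic segment: a_i = f_i * sigmoid(max_j <segment_ij, context>)."""
    out = []
    for f, segs in zip(feedforward, segments):
        best = max(sum(w * c for w, c in zip(seg, context)) for seg in segs)
        out.append(f * (1.0 / (1.0 + math.exp(-best))))
    return out

def k_winners(activations, k):
    """Sparsify a layer: keep the k largest activations, zero the rest."""
    threshold = sorted(activations, reverse=True)[k - 1]
    return [a if a >= threshold else 0.0 for a in activations]
```

Because the context vector selects which dendritic segments fire, different tasks activate different sparse subnetworks, which is the property the article credits with reducing interference.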
We propose the first standardized benchmark in multimodal continual learning for video data, defining protocols for training and metrics for evaluation. This standardized framework allows researchers to effectively compare models, driving advancements in AI systems that can continuously learn from diverse data sources.
We define the setup for three recent multimodal tasks in a continual learning setting: Moment Query (MQ), Natural Language Query (NLQ), and Visual Query (VQ). We also provide systematic insights into the challenges, gaps, and limitations of each video-text continual learning task.
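As an illustration of the kind of evaluation such a protocol defines, two widely used continual-learning metrics can be computed from an accuracy matrix R, where R[i][j] is the accuracy on task j after training through task i. This is a generic sketch, not necessarily this benchmark's exact metric definitions:

```python
def average_accuracy(R):
    """Mean accuracy over all tasks after the final training stage."""
    final = R[-1]
    return sum(final) / len(final)

def average_forgetting(R):
    """Mean drop from each task's best earlier accuracy to its final accuracy."""
    T = len(R)
    drops = []
    for j in range(T - 1):  # the last task cannot have been forgotten yet
        best = max(R[i][j] for i in range(T - 1))
        drops.append(best - R[-1][j])
    return sum(drops) / len(drops)

# Toy accuracy matrix for a 3-task stream (values invented for illustration).
R = [
    [0.80, 0.00, 0.00],
    [0.70, 0.75, 0.00],
    [0.60, 0.65, 0.85],
]
# average_accuracy(R)  -> (0.60 + 0.65 + 0.85) / 3 = 0.70
# average_forgetting(R) -> ((0.80 - 0.60) + (0.75 - 0.65)) / 2 = 0.15
```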
A novel benchmark for multi-label image classification in medical imaging, combining new classes and domains into a challenging scenario.
Apache License, v2.0: https://www.apache.org/licenses/LICENSE-2.0
License information was derived automatically
MLLM-CL Benchmark Description
MLLM-CL is a novel benchmark encompassing domain and ability continual learning, where the former focuses on independently and identically distributed (IID) evaluation across evolving mainstream domains, whereas the latter evaluates non-IID scenarios with emerging model abilities. For more details, please refer to: MLLM-CL: Continual Learning for Multimodal Large Language Models [paper]. Hongbo Zhao, Fei Zhu, Rundong Wang, Gaofeng Meng, Zhaoxiang… See the full description on the dataset page: https://huggingface.co/datasets/Impression2805/MLLM-CL.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Predictions and predictive knowledge have seen recent success in improving not only robot control but also other applications ranging from industrial process control to rehabilitation. A property that makes these predictive approaches well-suited for robotics is that they can be learned online and incrementally through interaction with the environment. However, a remaining challenge for many prediction-learning approaches is an appropriate choice of prediction-learning parameters, especially the parameters that control the magnitude of a learning machine's updates to its predictions (the learning rates or step sizes). Typically, these parameters are chosen through an extensive parameter search, an approach that neither scales well nor suits tasks requiring step sizes that change due to non-stationarity. To begin to address this challenge, we examine online step-size adaptation using the Modular Prosthetic Limb: a sensor-rich robotic arm intended for use by persons with amputations. Our method of choice, Temporal-Difference Incremental Delta-Bar-Delta (TIDBD), learns and adapts step sizes at the feature level; importantly, TIDBD allows step-size tuning and representation learning to occur simultaneously. As a first contribution, we show that TIDBD is a practical alternative to classic Temporal-Difference (TD) learning tuned via an extensive parameter search. Both approaches perform comparably in predicting future aspects of a robotic data stream, but TD achieves comparable performance only with a carefully hand-tuned learning rate, while TIDBD uses a robust meta-parameter and tunes its own learning rates. Second, our results show that for this particular application TIDBD allows the system to automatically detect patterns characteristic of sensor failures common to a number of robotic applications.
As a third contribution, we investigate the sensitivity of classic TD and TIDBD with respect to the initial step-size values on our robotic data set, reaffirming the robustness of TIDBD shown in previous work. Together, these results promise to improve the ability of robotic devices to learn from interactions with their environments in a robust way, providing key capabilities for autonomous agents and robots.
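The step-size adaptation idea behind TIDBD can be illustrated with its simpler supervised ancestor, IDBD (Sutton, 1992), which meta-learns one log step size per feature online; TIDBD extends the same update to TD learning with eligibility traces. A minimal sketch of one IDBD step on a linear predictor:

```python
import math

def idbd_update(w, beta, h, x, y, theta=0.01):
    """One IDBD step on a linear predictor w.x toward target y.
    w: weights; beta: per-feature log step sizes; h: per-feature traces of
    recent weight changes; theta: the single robust meta step size."""
    delta = y - sum(wi * xi for wi, xi in zip(w, x))
    for i, xi in enumerate(x):
        beta[i] += theta * delta * xi * h[i]   # meta-learn the log step size
        alpha = math.exp(beta[i])              # per-feature learning rate
        w[i] += alpha * delta * xi             # ordinary LMS step at that rate
        # Decay the trace where the step size is large, then accumulate.
        h[i] = h[i] * max(0.0, 1.0 - alpha * xi * xi) + alpha * delta * xi
    return delta
```

Run on a stream, features that consistently predict the target grow their step sizes while irrelevant ones shrink, which is the behavior the article exploits for detecting sensor failures.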
Continual World is a benchmark consisting of realistic and meaningfully diverse robotic tasks built on top of Meta-World as a testbed.
MIT License: https://opensource.org/licenses/MIT
License information was derived automatically
MADAR: Efficient Continual Learning for Malware Analysis with Diversity-Aware Replay
This dataset is released in support of the paper:
MADAR: Efficient Continual Learning for Malware Analysis with Diversity-Aware Replay
Mohammad Saidur Rahman, Scott Coull, Qi Yu, Matthew Wright
arXiv preprint arXiv:2502.05760, 2025
MADAR is a benchmark suite for evaluating continual learning methods in malware classification. It includes realistic data distribution shifts and supports scenarios such… See the full description on the dataset page: https://huggingface.co/datasets/IQSeC-Lab/MADAR.
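To give a flavor of diversity-aware selection, one simple stand-in is round-robin sampling across malware families so the replay buffer covers the distribution evenly. This is an illustrative sketch only, not MADAR's actual algorithm; the function name and the family-based round-robin strategy are assumptions:

```python
import random

def diversity_aware_sample(samples, budget, seed=0):
    """Pick a replay set that covers malware families as evenly as possible.
    `samples` is a list of (sample_id, family) pairs; `budget` is the
    replay-buffer size."""
    rng = random.Random(seed)
    by_family = {}
    for sid, fam in samples:
        by_family.setdefault(fam, []).append(sid)
    for ids in by_family.values():
        rng.shuffle(ids)            # random order within each family
    chosen = []
    # Round-robin over families until the replay budget is exhausted.
    while len(chosen) < budget and any(by_family.values()):
        for ids in by_family.values():
            if ids and len(chosen) < budget:
                chosen.append(ids.pop())
    return chosen
```

Rare families are guaranteed representation before common ones fill the remaining budget, which is the intuition behind diversity-aware replay.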
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
The data comprise two different concept drift scenarios: sudden (SCD) and incremental (ICD) concept drift. Moreover, three datasets per scenario are available, each exhibiting a different source of uncertainty: dataset A has negligible degrees of epistemic and aleatoric uncertainty, whereas B introduces epistemic and C aleatoric uncertainty. The individual columns are: speed, draft (fore and aft), significant wave height, peak period, relative wave direction, relative wind speed, relative wind direction, shaft torque, power and rpm, water temperature, and finally the biofouling scenario. Each column title includes the corresponding units. The files are saved in the binary Feather format for efficiency reasons. For more information, please consult the readme file or the upcoming paper.
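The difference between the two scenarios can be illustrated with a toy drift function; this is only a stand-in for intuition, not the dataset's actual biofouling simulation, and the 20% drift magnitude is an arbitrary choice:

```python
def drift_factor(t, t_drift, mode, width=100):
    """Multiplicative drift applied to, e.g., required shaft power at time t."""
    if mode == "sudden":          # SCD: a step change at t_drift
        return 1.0 if t < t_drift else 1.2
    elif mode == "incremental":   # ICD: a linear ramp of the same magnitude
        progress = min(max((t - t_drift) / width, 0.0), 1.0)
        return 1.0 + 0.2 * progress
    raise ValueError(f"unknown mode: {mode}")
```

A learner that works under SCD (detect and reset) may fail under ICD (track slowly), which is why both scenarios are provided.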
CORe50 is a dataset designed for assessing Continual Learning techniques in an Object Recognition context.
Meta Album is a meta-dataset created for few-shot learning, meta-learning, continual learning and so on. Meta Album consists of 40 datasets from 10 unique domains. Datasets are arranged in sets (10 datasets, one dataset from each domain). It is a continuously growing meta-dataset.
We repurposed datasets that were generously made available by their original creators. All datasets are free for academic use, provided that proper credit is given. For your convenience, you may cite our paper, which references all original creators.
Meta-Album is released under a CC BY-NC 4.0 license permitting non-commercial use for research purposes, provided that you cite us. Additionally, redistributed datasets carry their own licenses.
The recommended use of Meta-Album is to conduct fundamental research on machine learning algorithms and conduct benchmarks, particularly in: few-shot learning, meta-learning, continual learning, transfer learning, and image classification.
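The few-shot usage recommended above revolves around sampling episodes; the standard N-way K-shot procedure can be sketched generically as follows (Meta-Album's own toolkit may implement this differently):

```python
import random
from collections import defaultdict

def sample_episode(labels, n_way, k_shot, n_query, seed=None):
    """Sample an N-way K-shot episode, the basic evaluation unit in few-shot
    learning. `labels` maps image_id -> class_name."""
    rng = random.Random(seed)
    by_class = defaultdict(list)
    for img, cls in labels.items():
        by_class[cls].append(img)
    classes = rng.sample(sorted(by_class), n_way)   # pick N classes
    support, query = {}, {}
    for cls in classes:
        imgs = rng.sample(by_class[cls], k_shot + n_query)
        support[cls] = imgs[:k_shot]    # K labeled examples per class
        query[cls] = imgs[k_shot:]      # held-out examples to classify
    return support, query
```

A model adapts on the support set and is scored on the query set; averaging over many episodes gives the benchmark score.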
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
CL-MASR Dataset
This is the dataset used in the continual learning for multilingual ASR (CL-MASR) benchmark. It is composed of speech recordings from 20 languages selected from the Common Voice 13 dataset. For each language, it includes up to 10/1/1 hours for train/dev/test, respectively.
The CL-MASR benchmark platform is available in the SpeechBrain toolkit (see recipes/CommonVoice): https://github.com/speechbrain/speechbrain
The original Common Voice 13 data are available at: https://commonvoice.mozilla.org/en/datasets
List of Languages
https://dataintelo.com/privacy-and-policy
The global adaptive learning tools market size was valued at approximately USD 2.5 billion in 2023 and is projected to reach around USD 10.9 billion by 2032, growing at a robust CAGR of 17.8% during the forecast period. This growth is driven by an increasing emphasis on personalized learning experiences and the rising adoption of advanced educational technologies across various sectors.
One of the primary growth factors for the adaptive learning tools market is the increasing demand for personalized learning experiences. As educational institutions and enterprises recognize the diverse learning needs of students and employees, they are increasingly adopting adaptive learning technologies that can tailor educational content to meet individual needs. This personalized approach enhances learning efficiency and outcomes, which drives the adoption of adaptive learning tools. Furthermore, advancements in artificial intelligence and machine learning technologies have significantly boosted the capabilities of adaptive learning systems, enabling them to provide more accurate and effective learning pathways.
Another crucial factor contributing to market growth is the growing need for lifelong learning and continuous skill development. With the rapid pace of technological advancements and evolving job market requirements, there is a heightened need for individuals to continually upgrade their skills. Adaptive learning tools offer a flexible and efficient solution for ongoing learning and professional development. Corporate training programs, in particular, are increasingly leveraging these tools to provide customized training experiences that align with the specific needs of their workforce, thereby enhancing productivity and performance.
The proliferation of digital learning platforms and the increasing accessibility of online education are also significant drivers for the adaptive learning tools market. The COVID-19 pandemic has accelerated the shift towards online learning, with a substantial number of educational institutions and enterprises adopting digital learning solutions. Adaptive learning tools, integrated into these digital platforms, have gained substantial traction as they offer an interactive and engaging learning experience while addressing the unique learning requirements of each user. This trend is expected to continue post-pandemic, further fueling market growth.
Regionally, North America holds a significant share of the adaptive learning tools market, driven by substantial investments in educational technologies and the presence of key market players. The region's highly developed education infrastructure and strong focus on innovation also contribute to market growth. Additionally, Europe and Asia Pacific are anticipated to exhibit significant growth rates over the forecast period, with increasing government initiatives to enhance digital education and the rising adoption of e-learning solutions in these regions.
The adaptive learning tools market is segmented by component into software and services. The software segment dominates the market, primarily due to the growing adoption of advanced adaptive learning platforms and applications. These software solutions leverage artificial intelligence and machine learning algorithms to provide personalized learning pathways, assessments, and feedback. The integration of analytics and data-driven insights into these platforms enhances their effectiveness, making them a preferred choice for educational institutions and enterprises alike. Continuous innovations in software development, including the incorporation of immersive technologies like virtual and augmented reality, are further driving the growth of this segment.
The services segment, although smaller in comparison to software, is witnessing considerable growth due to the increasing demand for implementation, training, and support services. As organizations and institutions adopt adaptive learning tools, they require comprehensive services to ensure the smooth integration and effective utilization of these technologies. Professional services, such as consulting, system integration, and custom content development, play a crucial role in optimizing the deployment of adaptive learning solutions. Additionally, ongoing support and maintenance services are essential for addressing technical issues and ensuring the continued performance of adaptive learning systems.
The services segment also includes managed services, which provide a holistic approach to managi
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Supplementary files for the article "Context meta-reinforcement learning via neuromodulation".
Meta-reinforcement learning (meta-RL) algorithms enable agents to adapt quickly to tasks from few samples in dynamic environments. Such a feat is achieved through dynamic representations in an agent's policy network (obtained via reasoning about task context, model parameter updates, or both). However, obtaining rich dynamic representations for fast adaptation beyond simple benchmark problems is challenging due to the burden placed on the policy network to accommodate different policies. This paper addresses the challenge by introducing neuromodulation as a modular component that augments a standard policy network, regulating neuronal activities to produce efficient dynamic representations for task adaptation. The proposed extension to the policy network is evaluated across multiple discrete and continuous control environments of increasing complexity. To demonstrate the generality and benefits of the extension in meta-RL, the neuromodulated network was applied to two state-of-the-art meta-RL algorithms (CAVIA and PEARL). The results demonstrate that meta-RL augmented with neuromodulation produces significantly better results and richer dynamic representations than the baselines.
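The core idea can be sketched as a layer whose activations are gated by a context-driven neuromodulatory signal; this is a toy simplification for intuition, not the paper's exact mechanism or its CAVIA/PEARL integration:

```python
import math

def neuromodulated_layer(x, weights, mod_weights, context):
    """A standard linear layer whose per-unit activations are gated by a
    neuromodulatory signal computed from the task-context vector."""
    # Standard feedforward activations.
    acts = [sum(w * xi for w, xi in zip(row, x)) for row in weights]
    # One sigmoid gate per unit, derived from the context.
    gates = [1.0 / (1.0 + math.exp(-sum(m * c for m, c in zip(row, context))))
             for row in mod_weights]
    return [a * g for a, g in zip(acts, gates)]
```

Changing the context vector re-gates the same weights into a different effective policy, which is how the modulatory pathway supplies dynamic representations without enlarging the policy network itself.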