
Apache License, v2.0: https://www.apache.org/licenses/LICENSE-2.0
License information was derived automatically
CPU utilization time series dataset for anomaly detection

Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
High-quality multivariate time-series datasets are significantly less accessible compared to more common data types such as images or text, due to the resource-intensive process of continuous monitoring, precise annotation, and long-term observation. This paper introduces a cost-effective solution in the form of a large-scale, curated dataset specifically designed for anomaly detection in computing systems’ performance metrics. The dataset encompasses 45 GB of multivariate time-series data collected from 66 systems, capturing key performance indicators such as CPU usage, memory consumption, disk I/O, system load, and power consumption across diverse hardware configurations and real-world usage scenarios. Annotated anomalies, including performance degradation and resource inefficiencies, provide a reliable benchmark and ground truth for evaluating anomaly detection models. By addressing the accessibility challenges associated with time-series data, this resource facilitates advancements in machine learning applications, including anomaly detection, predictive maintenance, and system optimisation. Its comprehensive and practical design makes it a foundational asset for researchers and practitioners dedicated to developing reliable and efficient computing systems.
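A common baseline against such annotated data is a rolling z-score detector over a single metric. The sketch below is illustrative only: it assumes a pandas Series of CPU usage sampled at a fixed interval, and the file and column names are hypothetical, since the dataset's layout is not spelled out here.

```python
import pandas as pd

def rolling_zscore_anomalies(series: pd.Series, window: int = 60, threshold: float = 3.0) -> pd.Series:
    """Flag points whose deviation from the rolling mean exceeds `threshold` rolling stds."""
    mean = series.rolling(window, min_periods=window).mean()
    std = series.rolling(window, min_periods=window).std()
    return ((series - mean) / std).abs() > threshold

# Hypothetical usage ("metrics.csv", "timestamp", "cpu_usage" are assumed names):
# df = pd.read_csv("metrics.csv", parse_dates=["timestamp"], index_col="timestamp")
# flags = rolling_zscore_anomalies(df["cpu_usage"])
```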

CC0 1.0 (Public Domain Dedication): https://creativecommons.org/publicdomain/zero/1.0/
By [source]
Selecting an optimal site for GlideinWMS jobs is no small feat: it means weighing many critical variables and performing sophisticated calculations to maximize the gains. Our dataset offers a helping hand: with detailed resource metrics and time-series analysis covering over 400K hours of data, it will speed your search for the right site for your jobs.
Specifically, the dataset contains three files: dataset_classification.csv, which provides information on critical elements such as disk usage and CPU cache size; dataset_time_series_analysis.csv, featuring in-depth takeaways from time-series analysis; and dataset_400k_hour.csv, gathering computation results from over 400K hours of testing. With columns such as Failure (whether or not the job failed), TotalCpus (the total number of CPUs used by the job), CpuIsBusy (whether or not the CPU is busy), and SlotType (the type of slot used by the job), it is easier than ever to plot a path to success.
This dataset can be used to help identify the most suitable site for GlideinWMS jobs. It contains resource metrics and time-series analysis, which can provide useful insight into the suitability of each potential site. The dataset consists of three sets: dataset_classification.csv, dataset_time_series_analysis.csv and dataset_400k_hour.csv.
The first set provides a high-level view of the critical resource metrics for matching a job to a site: DiskUsage, TotalCpus, TotalMemory, TotalDisk, CpuCacheSize, TotalVirtualMemory, and TotalSlots, along with whether the CpuIsBusy and the SlotType of each job at each potential site. It also records Failure, should an issue arise during matching, and Site, so that users can restrict matches to sites within their own environment where policy or business rules require it.
The second set provides detailed time-series analysis of these metrics over longer timeframes, together with LastUpdate, indicating when the analysis was generated, and ydate, mdate, and hdate, giving the year, month, and hour of the last update, so that up-to-the-minute decisions can be made during peak workloads or when anomalies in usage patterns force reallocations within existing systems.
Finally, the third set goes one step further, with detailed information from our 400K+ hours of collected analytical data, letting you select the best possible matches across multiple sites and criteria with a single tool, conveniently packaged together in this Kaggle dataset.
By taking advantage of this data, you can benefit from optimal job selection across many scenarios: maximizing efficiency, boosting throughput through real-time scaling, and strengthening accountability and system governance when moving from static scheduling strategies to more reactive ones in agile deployments, increasing stability while lowering maintenance costs over the long run.
- Use the total CPU, memory, and disk usage metrics to identify jobs that need additional resources to complete quickly, and suggest alternative sites with more optimal resource availability (a starter sketch follows this list)
- Utilize the time-series analysis, including the failure rate, the LastUpdate series, and the month/hour/year of last update, to build predictive models for job-site matching and failure avoidance on future jobs
- Identify inefficiencies in scheduling by cross-examining job types (SlotType) and CPU cache size requirements against historical data to find opportunities for optimization or new approaches to job organization
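As a concrete starting point for the first idea above, here is a minimal sketch that assumes the CSVs have been downloaded locally and that the columns carry the names listed earlier (Failure, TotalCpus, CpuIsBusy, Site); treat it as illustrative rather than a verified recipe.

```python
import pandas as pd

df = pd.read_csv("dataset_classification.csv")

# Failure rate per site, assuming Failure is boolean or 0/1.
failure_rate = df.groupby("Site")["Failure"].mean().sort_values()
print(failure_rate.head(10))  # sites with the lowest observed failure rates

# Idle capacity per site, assuming CpuIsBusy is boolean-like.
idle_cpus = df.loc[~df["CpuIsBusy"].astype(bool)].groupby("Site")["TotalCpus"].sum()
print(idle_cpus.sort_values(ascending=False).head(10))
```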
If you use this dataset in your research, please credit the original authors.
License: CC0 1.0 (Public Domain Dedication)

Apache License, v2.0: https://www.apache.org/licenses/LICENSE-2.0.html
The largest real-world dataset for multivariate time series anomaly detection (MTSAD), drawn from the AIOps system of a Real-Time Data Warehouse (RTDW) at a top cloud computing company. All the metrics and labels in the dataset are derived from real-world scenarios. All metrics were obtained from the RTDW instance monitoring system and cover a rich variety of metric types, including CPU usage, queries per second (QPS), and latency, relating to many important modules within RTDW. We obtain labels from the ticket system, which integrates three main sources of instance anomalies: user service requests, instance unavailability, and fault simulations. User service requests refer to tickets submitted directly by users, whereas instance unavailability is typically detected through existing monitoring tools or discovered by Site Reliability Engineers (SREs). Since the system is usually very stable, we augment the anomaly samples by conducting fault simulations. A fault simulation is a special type of anomaly, planned beforehand, that is introduced to the system to test its performance under extreme conditions. All records in the ticket system are subject to follow-up processing by engineers, who meticulously mark the start and end times of each ticket. This rigorous approach ensures the accuracy of the labels in our dataset.
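Because the ground truth arrives as ticket start and end times, most evaluations first expand those intervals into per-timestamp binary labels. A minimal sketch of that step, using hypothetical column names start and end (the dataset's actual schema may differ):

```python
import pandas as pd

def intervals_to_labels(index: pd.DatetimeIndex, tickets: pd.DataFrame) -> pd.Series:
    """Mark 1 at every timestamp inside any ticket's [start, end] window.

    Assumes `index` is sorted and `tickets` has 'start'/'end' timestamp columns.
    """
    labels = pd.Series(0, index=index)
    for _, t in tickets.iterrows():
        labels.loc[t["start"]:t["end"]] = 1
    return labels

# Hypothetical usage:
# idx = pd.date_range("2023-01-01", periods=1440, freq="min")
# tickets = pd.DataFrame({"start": [...], "end": [...]})
# y_true = intervals_to_labels(idx, tickets)
```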

MIT License: https://opensource.org/licenses/MIT
License information was derived automatically
harpertokenSysMon Dataset
  Dataset Summary
This open-source dataset captures real-time system metrics from macOS for time-series analysis, anomaly detection, and predictive maintenance.
  Dataset Features
OS Compatibility: macOS
Data Collection Interval: 1-5 seconds
Total Storage Limit: 4GB
File Format: CSV & Parquet
Data Fields:
timestamp: Date and time of capture
cpu_usage: CPU usage percentage per core
memory_used_mb: RAM usage in MB… See the full description on the dataset page: https://huggingface.co/datasets/harpertoken/harpertokenSysMon.
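To explore the data, something like the following should work with the standard Hugging Face datasets API, although the split name and exact field types are assumptions, not verified against the repository layout.

```python
import pandas as pd
from datasets import load_dataset

ds = load_dataset("harpertoken/harpertokenSysMon", split="train")  # split name assumed
df = ds.to_pandas()

df["timestamp"] = pd.to_datetime(df["timestamp"])
# If cpu_usage is a scalar percentage this smooths the 1-5 s samples to 1-minute means;
# if it is reported per core, aggregate across cores first.
print(df.set_index("timestamp")["cpu_usage"].resample("1min").mean().head())
```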

Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
CPU hours, institutions, and PIs by year.

Attribution-NonCommercial-NoDerivs 4.0 (CC BY-NC-ND 4.0): https://creativecommons.org/licenses/by-nc-nd/4.0/
License information was derived automatically
For full details of the data please refer to the paper "The MIT Supercloud Dataset", available at https://ieeexplore.ieee.org/abstract/document/9622850 or https://arxiv.org/abs/2108.02037
Dataset
Datacenter monitoring systems offer a variety of data streams and events. The Datacenter Challenge datasets are a combination of high-level data (e.g. Slurm Workload Manager scheduler data) and low-level job-specific time series data. The high-level data includes parameters such as the number of nodes requested, number of CPU/GPU/memory requests, exit codes, and run time data. The low-level time series data is collected on the order of seconds for each job. This granular time series data includes CPU/GPU/memory utilization, amount of disk I/O, and environmental parameters such as power drawn and temperature. Ideally, leveraging both high-level scheduler data and low-level time series data will facilitate the development of AI/ML algorithms which not only predict/detect failures, but also allow for the accurate determination of their cause.
Here I will only include the high-level data.
If you are interested in using the dataset, please cite this paper.
@INPROCEEDINGS{9773216,
 author={Li, Baolin and Arora, Rohin and Samsi, Siddharth and Patel, Tirthak and Arcand, William and Bestor, David and Byun, Chansup and Roy, Rohan Basu and Bergeron, Bill and Holodnak, John and Houle, Michael and Hubbell, Matthew and Jones, Michael and Kepner, Jeremy and Klein, Anna and Michaleas, Peter and McDonald, Joseph and Milechin, Lauren and Mullen, Julie and Prout, Andrew and Price, Benjamin and Reuther, Albert and Rosa, Antonio and Weiss, Matthew and Yee, Charles and Edelman, Daniel and Vanterpool, Allan and Cheng, Anson and Gadepally, Vijay and Tiwari, Devesh},
 booktitle={2022 IEEE International Symposium on High-Performance Computer Architecture (HPCA)}, 
 title={AI-Enabling Workloads on Large-Scale GPU-Accelerated System: Characterization, Opportunities, and Implications}, 
 year={2022},
 volume={},
 number={},
 pages={1224-1237},
 doi={10.1109/HPCA53966.2022.00093}}
References: https://dcc.mit.edu/ and https://github.com/boringlee24/HPCA22_SuperCloud
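A typical first pass over the high-level scheduler data might look like the sketch below; the file name slurm-log.csv and the columns exit_code and nodes_requested are hypothetical stand-ins, since the real schema is documented at https://dcc.mit.edu/.

```python
import pandas as pd

# Hypothetical file and column names; consult the challenge documentation for the real schema.
jobs = pd.read_csv("slurm-log.csv")

# Share of jobs with a non-zero exit code, grouped by requested node count.
jobs["failed"] = jobs["exit_code"] != 0
print(jobs.groupby("nodes_requested")["failed"].mean())
```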

Apache License, v2.0: https://www.apache.org/licenses/LICENSE-2.0
License information was derived automatically
Prior works have noted that existing public traces on anomaly detection and bottleneck localization in microservices applications only contain single, severe bottlenecks that are not representative of real-world scenarios. When such a bottleneck is introduced, the resulting latency increases by an order of magnitude (100x), making it trivial to detect that single bottleneck using a simple grid search or threshold-based approaches.
To create a more realistic dataset that includes traces with multiple bottlenecks at different intensities, we carefully benchmarked the social networking application under different interference intensities and duration of interference. We chose intensities and duration values that degrade the application performance but do not cause any faults or errors that can be trivially detected. We induced interference on different VMs at different times and also simultaneously. A single VM could be induced with different types of interference (e.g., CPU and memory), resulting in the hosted microservices experiencing a mixture of interference patterns. The resulting dataset consists of around 40 million request traces along with corresponding time series of CPU, memory, I/O, and network metrics. The dataset also includes application, VM, and Kubernetes logs.
A detailed description of the files is provided in the Data Explorer section. Please reach out to gagan at cs dot stonybrook dot edu if you have any questions or concerns.
If you find the dataset useful, please cite our WWW'24 paper "GAMMA: Graph Neural Network-Based Multi-Bottleneck Localization for Microservices Applications." Citation format (bibtex):
@inproceedings{10.1145/3589334.3645665,
author = {Somashekar, Gagan and Dutt, Anurag and Adak, Mainak and Lorido Botran, Tania and Gandhi, Anshul},
title = {GAMMA: Graph Neural Network-Based Multi-Bottleneck Localization for Microservices Applications},
year = {2024},
publisher = {Association for Computing Machinery},
address = {New York, NY, USA},
url = {https://doi.org/10.1145/3589334.3645665},
doi = {10.1145/3589334.3645665},
booktitle = {Proceedings of the ACM Web Conference 2024},
location = {Singapore},
series = {WWW '24}
}

Custom license: https://dataverse.unimi.it/api/datasets/:persistentId/versions/2.1/customlicense?persistentId=doi:10.13130/RD_UNIMI/LJ6Z8V
Dataset containing real-world and synthetic samples of legitimate and malware processes in the form of time series. The samples comprise machine-level performance metrics: CPU usage, RAM usage, and the number of bytes read and written to and from disk and network. Synthetic samples are generated using a GAN.

Dataset Card: Anomaly Detection Metrics Data
  Dataset Summary
This dataset contains system performance metrics collected over time for anomaly detection in time series data. It includes multiple system metrics such as CPU load, memory usage, and other resource utilization statistics, along with timestamps and additional attributes.
  Dataset Details
Size: ~7.3 MB (raw JSON), 345 kB (auto-converted Parquet)
Rows: 46,669
Format: JSON
Libraries: datasets, pandas
… See the full description on the dataset page: https://huggingface.co/datasets/ShreyasP123/anomaly_detection_metrics_data.
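To get started, something like the following should work with the Hugging Face datasets library; the split name is an assumption, and since the card does not spell out the field names, the sketch inspects the schema rather than assuming columns.

```python
from datasets import load_dataset

ds = load_dataset("ShreyasP123/anomaly_detection_metrics_data", split="train")  # split assumed
df = ds.to_pandas()

print(df.shape)          # expect roughly 46,669 rows
print(list(df.columns))  # inspect the actual metric fields before analysis
```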

Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Project and allocation data from XDCDB.

Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
This repository contains input and analysis scripts supporting the findings of "Thermal transport of glasses via machine learning driven simulations", by P. Pegolo and F. Grasselli. Content:
- README.md: this file, information about the repository
- SiO2: vitreous silica parent folder
  - NEP: folder with datasets and input scripts for NEP training
    - train.xyz: training dataset
    - test.xyz: validation dataset
    - nep.in: NEP input script
    - nep.txt: NEP model
    - nep.restart: NEP restart file
  - DP: folder with datasets and input scripts for DP training
    - input.json: DeePMD training input
    - dataset: DeePMD training dataset
    - validation: DeePMD validation dataset
    - frozen_model.pb: DP model
  - GKMD: scripts for the GKMD simulations
    - Tersoff: Tersoff reference simulation
      - model.xyz: initial configuration
      - run.in: GPUMD script
      - SiO2.gpumd.tersoff88: Tersoff model parameters
      - convert_movie_to_dump.py: script to convert the GPUMD XYZ trajectory to LAMMPS format for re-running the trajectory with the MLPs
    - DP: DP simulation
      - init.data: LAMMPS initial configuration
      - in.lmp: LAMMPS input to re-run the Tersoff trajectory with the DP
    - NEP: NEP simulation
      - init.data: LAMMPS initial configuration
      - in.lmp: LAMMPS input to re-run the Tersoff trajectory with the NEP. Note that this needs the NEP-CPU user package installed in LAMMPS; at the moment it is not possible to re-run a trajectory with GPUMD.
  - QHGK: scripts for the QHGK simulations
    - DP: DP data
      - second.npy: second-order interatomic force constants
      - third.npy: third-order interatomic force constants
      - replicated_atoms.xyz: configuration
      - dynmat: scripts to compute interatomic force constants with the DP model. Analogous scripts were used to compute IFCs with the other potentials.
        - initial.data: non-optimized configuration
        - in.dynmat.lmp: LAMMPS script to minimize the structure and compute second-order interatomic force constants
        - in.third.lmp: LAMMPS script to compute third-order interatomic force constants
    - Tersoff: Tersoff data
      - second.npy: second-order interatomic force constants
      - third.npy: third-order interatomic force constants
      - replicated_atoms.xyz: configuration
    - NEP: NEP data
      - second.npy: second-order interatomic force constants
      - third.npy: third-order interatomic force constants
      - replicated_atoms.xyz: configuration
    - qhgk.py: script to compute QHGK lifetimes and thermal conductivity
- Si: vitreous silicon parent folder
  - QHGK: scripts for the QHGK simulations
    - qhgk.py: script to compute QHGK lifetimes
    - [N]: folders with the calculations on an N-atom system
      - second.npy: second-order interatomic force constants
      - third.npy: third-order interatomic force constants
      - replicated_atoms.xyz: configuration
- LiSi: vitreous lithium-intercalated silicon parent folder
  - NEP: folder with datasets and input scripts for NEP training
    - train.xyz: training dataset
    - test.xyz: validation dataset
    - nep.in: NEP input script
    - nep.txt: NEP model
    - nep.restart: NEP restart file
  - EMD: folder with data on the equilibrium molecular dynamics simulations
    - 70k: data of the simulations with ~70k atoms
      - 1-45: folders with input scripts for the simulations at different Li concentrations
        - fraction.dat: Li fraction, y, as in Li_{y}Si
        - quench: scripts for the melt-quench-anneal sample preparation
          - model.xyz: initial configuration
          - restart.xyz: final configuration
          - run.in: GPUMD input
        - gk: scripts for the GKMD simulation
          - model.xyz: initial configuration
          - restart.xyz: final configuration
          - run.in: GPUMD input
          - cepstral: folder for cepstral analysis
            - analyze.py: Python script for cepstral analysis of the flux time series generated by the GKMD runs

Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
This repository contains the datasets used in Mirzadeh et al., 2022. It includes three InSAR time-series datasets from the Envisat descending orbit, ALOS-1 ascending orbit, and Sentinel-1A in ascending and descending orbits, acquired over the Abarkuh Plain, Iran, as well as the geological map of the study area and the GNSS and hydrogeological data used in this research.
Dataset 1: Envisat descending track 292
Dataset 2: ALOS-1 ascending track 569
Dataset 3: Sentinel-1 ascending track 130 and descending track 137
The time series and Mean LOS Velocity (MLV) products can be georeferenced and resampled using the maskTempCoh and geometryRadar products and the MintPy commands/functions.

Attribution-NonCommercial-ShareAlike 4.0 (CC BY-NC-SA 4.0): https://creativecommons.org/licenses/by-nc-sa/4.0/
License information was derived automatically
Dataset Description: Jetson Nano Bob Waveshare NVIDIA Performance Metrics
Overview:
This dataset contains performance metrics collected from the NVIDIA Jetson Nano development board, specifically using the Bob Waveshare module. The data captures various system parameters over time, providing insights into the performance and resource utilization of the device.
Data Structure:
The dataset is structured in CSV format with the following columns:
Purpose:
The dataset is intended for performance analysis, benchmarking, and optimization of applications running on the Jetson Nano. It can be used to monitor system health, identify bottlenecks, and evaluate the impact of different workloads on system resources.
Applications:
Researchers and developers can utilize this dataset to:
- Analyze CPU and GPU performance under various workloads.
- Monitor thermal performance and power consumption.
- Optimize software applications for better resource management.
- Conduct comparative studies with other embedded systems.
Data Collection:
The data was collected over a series of tests and benchmarks, capturing real-time performance metrics during operation. Each entry represents a snapshot of the system's state at a specific point in time.
Usage Notes:
Users should be aware of the context in which the data was collected, including the specific configurations and workloads applied during testing. This information is crucial for interpreting the results accurately. This dataset serves as a valuable resource for anyone looking to understand the performance characteristics of the NVIDIA Jetson Nano platform, particularly in the context of embedded AI and machine learning applications.

Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Summary of multiple linear regression analysis for total time prediction: Effects of vertex count and video size.

CPU and GPU time series of the cost of computing, together with a time series of the cost of cloud computing in the UK. Detailed descriptions of the series are available in the associated paper.

Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
The rapid development of Digital Twin (DT) technology has highlighted challenges on resource-constrained mobile devices, especially in extended reality (XR) applications, including Augmented Reality (AR) and Virtual Reality (VR). These challenges lead to computational inefficiencies that degrade the user experience when dealing with sizeable 3D model assets. This article applies multiple lossless compression algorithms to improve the efficiency of digital twin asset delivery in Unity's AssetBundle and Addressable asset management frameworks. The study seeks an optimal configuration that reduces both bundle size and the time required for visualization while simultaneously reducing CPU and RAM usage on mobile devices. It assesses compression methods such as LZ4, LZMA, Brotli, Fast LZ, and 7-Zip, among others, for their influence on AR performance, and builds mathematical models for predicting the resources, such as RAM and CPU time, required by AR mobile applications. Experimental results provide a detailed comparison of these compression algorithms, offering insight into choosing the best method according to compression ratio, decompression speed, and resource usage. This leads to more efficient implementations of AR digital twins on resource-constrained mobile platforms, with greater flexibility in development and a better end-user experience. Our results show that LZ4 and Fast LZ perform best in speed and resource efficiency, especially with RAM caching, while 7-Zip/LZMA achieves the highest compression ratios at the cost of slower loading. Brotli emerged as a strong option for web-based AR/VR content, striking a balance between compression efficiency and decompression speed and outperforming Gzip in WebGL contexts. The Addressable Asset system with LZ4 offers the most efficient balance for real-time AR applications. This study delivers practical guidance on selecting a compression method to improve user experience and scalability for AR digital twin implementations.
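The ratio-versus-speed trade-off the study measures can be reproduced in miniature with off-the-shelf codecs. Below is a hedged sketch using Python's standard library; the synthetic payload stands in for a real asset bundle, and the third-party LZ4 and Brotli bindings are left as optional comments.

```python
import lzma
import time
import zlib

def benchmark(name, compress, data):
    """Report compression ratio and wall-clock time for one codec."""
    t0 = time.perf_counter()
    out = compress(data)
    dt = time.perf_counter() - t0
    print(f"{name}: ratio={len(data) / len(out):.2f}, time={dt * 1000:.1f} ms")

payload = b"example asset bytes " * 50_000  # ~1 MB synthetic stand-in

benchmark("zlib (gzip-like)", lambda d: zlib.compress(d, 9), payload)
benchmark("LZMA (7-Zip family)", lambda d: lzma.compress(d, preset=6), payload)
# Optional third-party codecs, if installed:
# import lz4.frame, brotli
# benchmark("LZ4", lz4.frame.compress, payload)
# benchmark("Brotli", brotli.compress, payload)
```

Real asset bundles compress quite differently from this repetitive payload, so absolute numbers will not match the paper; the point is the measurement pattern.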

Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
CPU time for different values of α.

Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Autonomous Underwater Vehicle (AUV) Monterey Bay Time Series from Feb 2016. This data set includes CTD and fluorometer data from the Makai AUV, as context for ecogenomic sampling using an onboard Environmental Sample Processor (ESP).

Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Diluted-Average-Shares Time Series for Ingenic Semiconductor. Ingenic Semiconductor Co.,Ltd. engages in the research and development, design, and sale of integrated circuit chip products in China and internationally. It offers multi-core crossover IoT micro-processor, multi-core heterogeneous crossover micro-processor, low-power AIoT micro-processor, low power image recognition micro-processor, ultra-low-power IoT micro-processor, low power AI video processor, 4K video and AI vision application processor, balanced video processor, dual camera low power video processor, 2K HEVC video-IOT MCU, and professional security backend processor. The company also provides computing, storage, analog, and interconnect chips. Its products are used in automotive electronics, industrial and medical, communication equipment, consumer electronics, and other fields. The company was founded in 2005 and is headquartered in Beijing, China.
