Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
High-quality multivariate time-series datasets are far less accessible than more common data types such as images or text, owing to the resource-intensive process of continuous monitoring, precise annotation, and long-term observation. This paper introduces a cost-effective solution: a large-scale, curated dataset specifically designed for anomaly detection in computing systems’ performance metrics. The dataset comprises 45 GB of multivariate time-series data collected from 66 systems, capturing key performance indicators such as CPU usage, memory consumption, disk I/O, system load, and power consumption across diverse hardware configurations and real-world usage scenarios. Annotated anomalies, including performance degradation and resource inefficiencies, provide a reliable benchmark and ground truth for evaluating anomaly detection models. By addressing the accessibility challenges associated with time-series data, this resource facilitates advancements in machine learning applications, including anomaly detection, predictive maintenance, and system optimisation. Its comprehensive and practical design makes it a foundational asset for researchers and practitioners dedicated to developing reliable and efficient computing systems.
S-band Polarimetric (S-Pol) radar processor time series data from the Terrain-Influenced Monsoon Rainfall Experiment (TIMREX). The files in this data set are large (up to 4 GB each), so please note the listed file sizes when ordering. There are four different file types: scanning, vertical, stationary, and solar.
Apache License 2.0: https://www.apache.org/licenses/LICENSE-2.0.html
The largest real-world dataset for multivariate time series anomaly detection (MTSAD), collected from the AIOps system of a Real-Time Data Warehouse (RTDW) at a top cloud computing company. All the metrics and labels in our dataset are derived from real-world scenarios. All metrics were obtained from the RTDW instance monitoring system and cover a rich variety of metric types, including CPU usage, queries per second (QPS), and latency, which relate to many important modules within RTDW. We obtain labels from the ticket system, which integrates three main sources of instance anomalies: user service requests, instance unavailability, and fault simulations. User service requests refer to tickets submitted directly by users, whereas instance unavailability is typically detected through existing monitoring tools or discovered by Site Reliability Engineers (SREs). Since the system is usually very stable, we augment the anomaly samples by conducting fault simulations. A fault simulation is a special type of anomaly, planned beforehand, that is introduced to the system to test its performance under extreme conditions. All records in the ticket system are subject to follow-up processing by engineers, who meticulously mark the start and end times of each ticket. This rigorous approach ensures the accuracy of the labels in our dataset.
https://www.wiseguyreports.com/pages/privacy-policy
BASE YEAR | 2024
HISTORICAL DATA | 2019–2024
REPORT COVERAGE | Revenue Forecast, Competitive Landscape, Growth Factors, and Trends
MARKET SIZE 2023 | 18.53 (USD Billion)
MARKET SIZE 2024 | 23.47 (USD Billion)
MARKET SIZE 2032 | 155.5 (USD Billion)
SEGMENTS COVERED | Deployment Model, Application, Architecture, Memory, Regional
COUNTRIES COVERED | North America, Europe, APAC, South America, MEA
KEY MARKET DYNAMICS | Rising Artificial Intelligence (AI) Adoption; Growing Demand for High-Performance Computing; Advancements in Machine Learning Algorithms; Increasing Adoption of Cloud Computing; Government Support for AI Research and Development
MARKET FORECAST UNITS | USD Billion
KEY COMPANIES PROFILED | Movidius, Imagination Technologies, Tensilica, NVIDIA, Xilinx, Cadence Design Systems, Synopsys, NXP, Google, Analog Devices, ARM, Qualcomm, CEVA, Intel
MARKET FORECAST PERIOD | 2025–2032
KEY MARKET OPPORTUNITIES | Cloud and Edge Computing; Artificial Intelligence; Automotive Applications; Healthcare and Medical Imaging; Industrial Automation
COMPOUND ANNUAL GROWTH RATE (CAGR) | 26.66% (2025–2032)
MIT License: https://opensource.org/licenses/MIT
License information was derived automatically
The Laptop Motherboard Health Monitoring Dataset is a synthetically generated dataset designed to aid in the development and testing of machine learning models for predictive maintenance and health monitoring of laptop motherboards. The dataset includes various health metrics such as CPU usage, RAM usage, temperature, voltage, disk usage, and fan speed, along with a label indicating whether a problem was detected and the type of problem.
Dataset Columns
ModelName: The name and model of the laptop (e.g., Dell Inspiron 1234, HP Pavilion 5678). This column includes realistic combinations of popular laptop brands and model series, making the dataset relatable and practical.
CPUUsage: The CPU usage percentage, ranging from 0 to 100%. This metric indicates how much of the CPU's capacity is being utilized.
RAMUsage: The RAM usage percentage, ranging from 0 to 100%. This metric shows the proportion of RAM being used out of the total available.
Temperature: The temperature of the motherboard in degrees Celsius, ranging from 20 to 100°C. This metric is crucial for detecting overheating issues.
Voltage: The operating voltage in volts, ranging from 10 to 20V. Voltage measurements help in identifying power-related problems.
DiskUsage: The disk usage percentage, ranging from 0 to 100%. This metric indicates how much of the disk's capacity is being used.
FanSpeed: The speed of the cooling fan in revolutions per minute (RPM), ranging from 1000 to 5000 RPM. Fan speed is an important indicator of cooling performance.
ProblemDetected: The type of problem detected, if any. Possible values are: No Problem, Overheating, Power Issue, Memory Leak, Disk Failure.
Usage
This dataset can be used to train and evaluate machine learning models for the purpose of predictive maintenance (a minimal classification sketch follows). Researchers and practitioners can use the data to classify the type of problem based on the health metrics provided. The dataset is ideal for experimenting with various classification algorithms and techniques in the field of hardware health monitoring.
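As a starting point, here is a minimal sketch of such a classifier, assuming only the column names listed above (the model choice and split parameters are illustrative, not prescribed by the dataset):

```python
# Minimal sketch: predict ProblemDetected from the health metrics described
# above. Column names follow the dataset card; everything else is assumed.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

df = pd.read_csv("Laptop_Motherboard_Health_Monitoring_Dataset.csv")
features = ["CPUUsage", "RAMUsage", "Temperature", "Voltage", "DiskUsage", "FanSpeed"]

X_train, X_test, y_train, y_test = train_test_split(
    df[features], df["ProblemDetected"], test_size=0.2, random_state=42
)
clf = RandomForestClassifier(random_state=42).fit(X_train, y_train)
print(f"Test accuracy: {clf.score(X_test, y_test):.2f}")
```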
File
Laptop_Motherboard_Health_Monitoring_Dataset.csv: The main dataset file containing 2,000 rows of synthetic data.
Acknowledgements
This dataset is synthetically generated and does not represent real-world data. It is intended for educational and research purposes only.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Average CPU time (s) of all the referenced algorithms on the benchmark functions.
Link to the ScienceBase Item Summary page for the item described by this metadata record. Application Profile: Web Browser. Link Function: information
https://dataverse.unimi.it/api/datasets/:persistentId/versions/2.1/customlicense?persistentId=doi:10.13130/RD_UNIMI/LJ6Z8V
Dataset containing real-world and synthetic time-series samples of legitimate and malware activity. The samples consist of machine-level performance metrics: CPU usage, RAM usage, and the number of bytes read from and written to disk and network. Synthetic samples are generated using a GAN.
Dataset Card: Anomaly Detection Metrics Data
Dataset Summary
This dataset contains system performance metrics collected over time for anomaly detection in time series data. It includes multiple system metrics such as CPU load, memory usage, and other resource utilization statistics, along with timestamps and additional attributes.
Dataset Details
Size: ~7.3 MB (raw JSON), 345 kB (auto-converted Parquet)
Rows: 46,669
Format: JSON
Libraries: datasets, pandas… See the full description on the dataset page: https://huggingface.co/datasets/ShreyasP123/anomaly_detection_metrics_data.
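A minimal sketch of loading the data with the datasets and pandas libraries named above (the split name "train" is an assumption about the dataset's layout):

```python
# Minimal sketch: load the metrics with the Hugging Face datasets library
# and inspect them with pandas. The "train" split name is assumed.
from datasets import load_dataset

ds = load_dataset("ShreyasP123/anomaly_detection_metrics_data", split="train")
df = ds.to_pandas()
print(df.head())  # timestamps, CPU load, memory usage, and other attributes
```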
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
This repository contains input and analysis scripts supporting the findings of "Thermal transport of glasses via machine-learning-driven simulations" by P. Pegolo and F. Grasselli. Contents:
README.md: this file, information about the repository
SiO2: vitreous silica parent folder
    NEP: folder with datasets and input scripts for NEP training
        train.xyz: training dataset
        test.xyz: validation dataset
        nep.in: NEP input script
        nep.txt: NEP model
        nep.restart: NEP restart file
    DP: folder with datasets and input scripts for DP training
        input.json: DeePMD training input
        dataset: DeePMD training dataset
        validation: DeePMD validation dataset
        frozen_model.pb: DP model
    GKMD: scripts for the GKMD simulations
        Tersoff: Tersoff reference simulation
            model.xyz: initial configuration
            run.in: GPUMD script
            SiO2.gpumd.tersoff88: Tersoff model parameters
            convert_movie_to_dump.py: script to convert the GPUMD XYZ trajectory to LAMMPS format for re-running the trajectory with the MLPs
        DP: DP simulation
            init.data: LAMMPS initial configuration
            in.lmp: LAMMPS input to re-run the Tersoff trajectory with the DP
        NEP: NEP simulation
            init.data: LAMMPS initial configuration
            in.lmp: LAMMPS input to re-run the Tersoff trajectory with the NEP. Note that this needs the NEP-CPU user package installed in LAMMPS; at the moment it is not possible to re-run a trajectory with GPUMD.
    QHGK: scripts for the QHGK simulations
        DP: DP data
            second.npy: second-order interatomic force constants
            third.npy: third-order interatomic force constants
            replicated_atoms.xyz: configuration
            dynmat: scripts to compute interatomic force constants with the DP model (analogous scripts were used to compute IFCs with the other potentials)
                initial.data: non-optimized configuration
                in.dynmat.lmp: LAMMPS script to minimize the structure and compute second-order interatomic force constants
                in.third.lmp: LAMMPS script to compute third-order interatomic force constants
        Tersoff: Tersoff data
            second.npy: second-order interatomic force constants
            third.npy: third-order interatomic force constants
            replicated_atoms.xyz: configuration
        NEP: NEP data
            second.npy: second-order interatomic force constants
            third.npy: third-order interatomic force constants
            replicated_atoms.xyz: configuration
        qhgk.py: script to compute QHGK lifetimes and thermal conductivity
Si: vitreous silicon parent folder
    QHGK: scripts for the QHGK simulations
        qhgk.py: script to compute QHGK lifetimes
        [N]: folder with the calculations on an N-atom system
            second.npy: second-order interatomic force constants
            third.npy: third-order interatomic force constants
            replicated_atoms.xyz: configuration
LiSi: vitreous lithium-intercalated silicon parent folder
    NEP: folder with datasets and input scripts for NEP training
        train.xyz: training dataset
        test.xyz: validation dataset
        nep.in: NEP input script
        nep.txt: NEP model
        nep.restart: NEP restart file
    EMD: folder with data on the equilibrium molecular dynamics simulations
        70k: data of the simulations with ~70k atoms
            1-45: folders with input scripts for the simulations at different Li concentrations
                fraction.dat: Li fraction, y, as in Li_{y}Si
                quench: scripts for the melt-quench-anneal sample preparation
                    model.xyz: initial configuration
                    restart.xyz: final configuration
                    run.in: GPUMD input
                gk: scripts for the GKMD simulation
                    model.xyz: initial configuration
                    restart.xyz: final configuration
                    run.in: GPUMD input
                    cepstral: folder for cepstral analysis
                        analyze.py: Python script for cepstral analysis of the flux time series generated by the GKMD runs
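For orientation, here is a minimal, illustrative sketch of the kind of cepstral analysis a script like analyze.py performs on a Green-Kubo flux series. This is a generic NumPy reconstruction, not the repository's script: the fixed coefficient cutoff, normalization, and the white-noise test signal are all assumptions (in practice a criterion such as AIC selects the cutoff, and known bias corrections apply).

```python
# Illustrative sketch only: estimate the zero-frequency power S(0) of a flux
# time series from its low-order cepstral coefficients. S(0) is proportional
# to the Green-Kubo transport coefficient for the flux.
import numpy as np

def cepstral_s0(flux, n_coeffs=20):
    n = len(flux)
    periodogram = np.abs(np.fft.rfft(flux)) ** 2 / n  # raw power spectrum
    log_spec = np.log(periodogram[1:])                # drop the DC bin
    cepstrum = np.fft.irfft(log_spec)                 # cepstral coefficients
    # Resum the truncated cepstral series to estimate log S(0).
    log_s0 = cepstrum[0] + 2.0 * np.sum(cepstrum[1:n_coeffs])
    return np.exp(log_s0)

rng = np.random.default_rng(0)
flux = rng.standard_normal(2**14)  # stand-in for a GKMD heat-flux series
print(cepstral_s0(flux))
```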
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
This repository contains the datasets used in Mirzadeh et al., 2023. It includes three InSAR time-series datasets from the Envisat descending orbit, ALOS-1 ascending orbit, and Sentinel-1A in ascending and descending orbits, acquired over the Abarkuh Plain, Iran, as well as the geological map of the study area and the GNSS and hydrogeological data used in this research.
Dataset 1: Envisat descending track 292
Dataset 2: ALOS-1 ascending track 569
Dataset 3: Sentinel-1 ascending track 130 and descending track 137
The time series and Mean LOS Velocity (MLV) products can be georeferenced and resampled using the maskTempCoh and geometryRadar products and the MintPy commands/functions.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
This dataset contains system metrics collected from 4 virtual machines (VMs) with identical hardware specifications, tested under a load-balanced Proxmox Virtual Environment using HAProxy.
⚙️ System Configuration:
- Number of VMs: 4 (laos-1, laos-2, laos-3, laos-4)
- VM Specifications: Identical CPU, Memory, and Network configurations
- Application Stack: Each VM runs the same Laravel-based web application served via NGINX
- Load Balancer: HAProxy, using the Least Connection algorithm
- Test Scenario: 10–50 virtual users (VUs) running continuously for 30 minutes, with 9 GET requests and 3 POST requests (k6)
📈 Data Contents:
Each row represents a real-time snapshot of the VM usage during the test, with data collected once per second.
- fetch: The n-th fetch cycle (1 fetch = 1 data collection from all 4 VMs)
- update: Indicates whether a new score was computed between fetch cycles (1 = updated, 0 = no change)
- vm_id: Internal identifier of the VM
- vm_name: Hostname of the VM
- cpu_usage: CPU usage in percent (0–100)
- max_cpu: Maximum CPU capacity (set to 1.0 for normalization in this test environment)
- mem_usage: Memory usage in percent (0–100)
- max_mem: Maximum memory, set to 1 GB
- cum_netin: Cumulative incoming network traffic (RX) in bytes
- cum_netout: Cumulative outgoing network traffic (TX) in bytes
- rate_netin: Rate of incoming traffic (difference between current and previous RX)
- rate_netout: Rate of outgoing traffic (difference between current and previous TX)
- bw_usage: Total bandwidth usage, computed as rate_netin + rate_netout
- max_bw: Maximum network interface bandwidth, set to 12,500,000 bytes per second
- score: Composite score calculated as CPU% + Memory% + Bandwidth%, where a lower score indicates a better (less loaded) VM (a sketch recomputing this appears below)
- priority: Ranking based on ascending score (1 = best VM for routing)
- unix_timestamp: Unix timestamp equivalent of timestamp
- timestamp: Human-readable timestamp
🧪 This dataset may be suitable for:
- Analyzing the effectiveness of load balancing strategies
- Visualization of load distribution among VMs
- Time series forecasting of VM workload
- Building ML models for anomaly detection or auto-scaling policies
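As referenced in the column list above, a minimal sketch recomputing the composite score and per-fetch priority from the described columns (the CSV file name is hypothetical):

```python
# Minimal sketch: recompute score and priority from the columns described
# above. Column names follow the dataset description; the file name is a
# placeholder for however the dataset is exported.
import pandas as pd

df = pd.read_csv("vm_metrics.csv")  # hypothetical file name

# Bandwidth as a percentage of the 12,500,000 B/s interface cap.
df["bw_pct"] = 100 * df["bw_usage"] / df["max_bw"]

# Composite score: CPU% + Memory% + Bandwidth% (lower = less loaded).
df["score_check"] = df["cpu_usage"] + df["mem_usage"] + df["bw_pct"]

# Priority: rank by ascending score within each fetch cycle (1 = best VM).
df["priority_check"] = (
    df.groupby("fetch")["score_check"].rank(method="first").astype(int)
)
```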
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Project and allocation data from XDCDB.
No description is available. Visit https://dataone.org/datasets/sha256%3A8b9b600f61bbd7d944013b78645ce2bb2494d735129ab86e43ba55f51657d613 for complete metadata about this dataset.
CPU and GPU time series of the cost of computing, as well as time series of the cost of cloud computing in the UK. The detailed descriptions of the series are available in the associated paper.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
The rapid development of Digital Twin (DT) technology has highlighted challenges for resource-constrained mobile devices, especially in extended reality (XR) applications, which include Augmented Reality (AR) and Virtual Reality (VR). These challenges lead to computational inefficiencies that negatively impact the user experience when dealing with sizeable 3D model assets. This article applies multiple lossless compression algorithms to improve the efficiency of digital twin asset delivery in Unity’s AssetBundle and Addressable asset management frameworks, seeking an optimal configuration that reduces both bundle size and visualization time while simultaneously reducing CPU and RAM usage on mobile devices. The study assesses compression methods such as LZ4, LZMA, Brotli, Fast LZ, and 7-Zip, among others, for their influence on AR performance, and constructs mathematical models for predicting the resources, such as RAM and CPU time, required by AR mobile applications. Experimental results provide a detailed comparison of these compression algorithms, offering insights that help in choosing the best method according to compression ratio, decompression speed, and resource usage. This leads to more efficient implementations of AR digital twins on resource-constrained mobile platforms, with greater flexibility in development and a better end-user experience. Our results show that LZ4 and Fast LZ perform best in speed and resource efficiency, especially with RAM caching, while 7-Zip/LZMA achieves the highest compression ratios at the cost of slower loading. Brotli emerged as a strong option for web-based AR/VR content, striking a balance between compression efficiency and decompression speed and outperforming Gzip in WebGL contexts. The Addressable Asset system with LZ4 offers the most efficient balance for real-time AR applications. The study delivers practical guidance on selecting an optimal compression method to improve user experience and scalability for AR digital twin implementations.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
CPU hours, institutions, and PIs by year.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Summary of multiple regression for decompression time prediction: Effects of vertex count and video size.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
CPU allocation plan.