https://www.apache.org/licenses/LICENSE-2.0.html
The largest real-world dataset for multivariate time series anomaly detection (MTSAD), drawn from the AIOps system of a Real-Time Data Warehouse (RTDW) at a top cloud computing company. All metrics and labels in the dataset are derived from real-world scenarios. The metrics were obtained from the RTDW instance monitoring system and cover a rich variety of types, including CPU usage, queries per second (QPS), and latency, relating to many important modules within the RTDW. Labels come from the ticket system, which integrates three main sources of instance anomalies: user service requests, instance unavailability, and fault simulations. User service requests are tickets submitted directly by users, whereas instance unavailability is typically detected by existing monitoring tools or discovered by Site Reliability Engineers (SREs). Since the system is usually very stable, the anomaly samples are augmented by fault simulations: planned anomalies injected into the system to test its performance under extreme conditions. All records in the ticket system receive follow-up processing by engineers, who meticulously mark the start and end times of each ticket, ensuring the accuracy of the labels.
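Ticket intervals like these can be converted into per-timestamp anomaly labels for a metric series. A minimal sketch, assuming hypothetical `start`/`end` column names and a 1-minute sampling interval (neither is taken from the dataset's actual schema):

```python
# Hypothetical sketch: converting ticket intervals into per-timestamp
# anomaly labels. The "start"/"end" column names and 1-minute sampling
# are illustrative assumptions, not the dataset's actual schema.
import pandas as pd

def label_from_tickets(index: pd.DatetimeIndex, tickets: pd.DataFrame) -> pd.Series:
    """Return a 0/1 series: 1 where a timestamp falls inside any ticket."""
    labels = pd.Series(0, index=index, dtype=int)
    for _, t in tickets.iterrows():
        labels.loc[t["start"]:t["end"]] = 1  # .loc slicing is inclusive
    return labels

idx = pd.date_range("2024-01-01 00:00", periods=10, freq="min")
tickets = pd.DataFrame({"start": [pd.Timestamp("2024-01-01 00:03")],
                        "end": [pd.Timestamp("2024-01-01 00:05")]})
labels = label_from_tickets(idx, tickets)
print(int(labels.sum()))  # → 3 (minutes 00:03–00:05 fall inside the ticket)
```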
MIT License: https://opensource.org/licenses/MIT
License information was derived automatically
The Laptop Motherboard Health Monitoring Dataset is a synthetically generated dataset designed to aid in the development and testing of machine learning models for predictive maintenance and health monitoring of laptop motherboards. The dataset includes various health metrics such as CPU usage, RAM usage, temperature, voltage, disk usage, and fan speed, along with a label indicating whether a problem was detected and the type of problem.
Dataset Columns
ModelName: The name and model of the laptop (e.g., Dell Inspiron 1234, HP Pavilion 5678). This column includes realistic combinations of popular laptop brands and model series, making the dataset relatable and practical.
CPUUsage: The CPU usage percentage, ranging from 0 to 100%. This metric indicates how much of the CPU's capacity is being utilized.
RAMUsage: The RAM usage percentage, ranging from 0 to 100%. This metric shows the proportion of RAM being used out of the total available.
Temperature: The temperature of the motherboard in degrees Celsius, ranging from 20 to 100°C. This metric is crucial for detecting overheating issues.
Voltage: The operating voltage in volts, ranging from 10 to 20V. Voltage measurements help in identifying power-related problems.
DiskUsage: The disk usage percentage, ranging from 0 to 100%. This metric indicates how much of the disk's capacity is being used.
FanSpeed: The speed of the cooling fan in revolutions per minute (RPM), ranging from 1000 to 5000 RPM. Fan speed is an important indicator of cooling performance.
ProblemDetected: The type of problem detected, if any. Possible values are:
No Problem, Overheating, Power Issue, Memory Leak, Disk Failure
Usage
This dataset can be used to train and evaluate machine learning models for predictive maintenance. Researchers and practitioners can use the data to classify the type of problem based on the health metrics provided. The dataset is ideal for experimenting with various classification algorithms and techniques in the field of hardware health monitoring.
File
Laptop_Motherboard_Health_Monitoring_Dataset.csv: The main dataset file containing 2000 rows of synthetic data.
Acknowledgements
This dataset is synthetically generated and does not represent real-world data. It is intended for educational and research purposes only.
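As a starting point for the classification task above, here is a toy rule-based baseline. The thresholds are illustrative guesses, not values derived from the dataset; a real workflow would train a classifier (e.g. a decision tree) on the CSV.

```python
# Toy rule-based baseline for ProblemDetected. All thresholds are
# illustrative assumptions; fan speed is accepted but unused here.
def classify(cpu, ram, temp, volt, disk, fan):
    if temp > 90:                # Temperature column ranges 20–100 °C
        return "Overheating"
    if volt < 11 or volt > 19:   # Voltage column ranges 10–20 V
        return "Power Issue"
    if ram > 95:
        return "Memory Leak"
    if disk > 95:
        return "Disk Failure"
    return "No Problem"

print(classify(cpu=40, ram=50, temp=95, volt=15, disk=60, fan=3000))  # → Overheating
```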
Dataset Card: Anomaly Detection Metrics Data
Dataset Summary
This dataset contains system performance metrics collected over time for anomaly detection in time series data. It includes multiple system metrics such as CPU load, memory usage, and other resource utilization statistics, along with timestamps and additional attributes.
Dataset Details
Size: ~7.3 MB (raw JSON), 345 kB (auto-converted Parquet)
Rows: 46,669
Format: JSON
Libraries: datasets, pandas…
See the full description on the dataset page: https://huggingface.co/datasets/ShreyasP123/anomaly_detection_metrics_data.
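A common baseline on metrics like these is a rolling z-score. A minimal sketch on a synthetic CPU-load series (the dataset's actual column names are not shown on the card, so adapt the series accordingly):

```python
# Illustrative rolling z-score anomaly flagging; the numbers are synthetic.
import pandas as pd

def rolling_zscore_flags(s: pd.Series, window: int = 5, thresh: float = 3.0) -> pd.Series:
    # Compare each point to the mean/std of the *preceding* window so a
    # spike does not inflate its own baseline statistics.
    mu = s.shift(1).rolling(window).mean()
    sd = s.shift(1).rolling(window).std()
    z = (s - mu) / sd
    return z.abs() > thresh  # NaN comparisons evaluate to False

cpu = pd.Series([0.30, 0.31, 0.29, 0.32, 0.30, 0.31, 0.95, 0.30])
flags = rolling_zscore_flags(cpu)
print(int(flags.sum()))  # → 1 (only the 0.95 spike is flagged)
```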
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
CPU hours, institutions, and PIs by year.
Autonomous Underwater Vehicle (AUV) Monterey Bay Time Series from Feb 2016. This data set includes CTD and fluorometer data from the Makai AUV, as context for ecogenomic sampling using an onboard Environmental Sample Processor (ESP).
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Average CPU time (s) of all referenced algorithms on the benchmark functions.
No description is available. Visit https://dataone.org/datasets/sha256%3A8b9b600f61bbd7d944013b78645ce2bb2494d735129ab86e43ba55f51657d613 for complete metadata about this dataset.
https://www.archivemarketresearch.com/privacy-policy
The global neural processor market is projected to grow rapidly in the coming years, driven by increasing demand for artificial intelligence (AI) across industries. The market is expected to reach $281.4 million by 2033, expanding at a CAGR of 19.3% from 2025 to 2033. Growth is attributed to the rising adoption of AI in smartphones and tablets, autonomous vehicles, robotics, healthcare, smart home devices, cloud computing, industrial automation, and other applications. Key drivers include increasing demand for AI-powered devices, advances in AI algorithms and hardware, and government initiatives to promote AI adoption. The rising popularity of smartphones and tablets, the growing adoption of autonomous vehicles, and the increasing use of AI in healthcare and smart home devices are among the major trends influencing the market. Growth is subject to certain restraints, however, such as high hardware costs, data privacy and security concerns, and the need for skilled AI professionals. Valued at [market value] million units in 2023, the market is projected to reach [market value] million units by 2030, exhibiting a CAGR of [growth rate]%. Recent developments include: In September 2024, Intel Corporation released its Core Ultra 200V processors, the company's most power-efficient laptop chips to date. The chips include a neural processing unit optimized for running artificial intelligence models that is four times faster than the previous generation; the new architecture improves overall efficiency while maximizing computational power. In June 2024, Advanced Micro Devices Inc. introduced its artificial intelligence processors, including the MI325X accelerator, at the Computex technology trade show.
The company also detailed its new neural processing units (NPUs), designed to handle on-device AI tasks in AI PCs, as part of a broader strategy to enhance its product lineup with significant performance improvements, including the MI350 series, expected to deliver 35 times better inference performance than its predecessors. In May 2024, Apple Inc. unveiled the M4 chip for the iPad Pro, built on second-generation 3-nanometer technology to improve power efficiency and enable a thinner design. The chip features a 10-core CPU, a high-performance GPU with Dynamic Caching and ray tracing, and Apple's fastest Neural Engine, capable of 38 trillion operations per second. In February 2024, MathWorks, Inc., a developer of mathematical computing software, launched a hardware support package for the Qualcomm Hexagon Neural Processing Unit. The package enables automated code generation from Simulink and MATLAB models tailored to Qualcomm's architecture, improving data accuracy, ensuring standards compliance, and boosting developer productivity.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
This repository contains the datasets used in Mirzadeh et al., 2023. It includes three InSAR time-series datasets from the Envisat descending orbit, ALOS-1 ascending orbit, and Sentinel-1A in ascending and descending orbits, acquired over the Abarkuh Plain, Iran, as well as the geological map of the study area and the GNSS and hydrogeological data used in this research.
Dataset 1: Envisat descending track 292
Dataset 2: ALOS-1 ascending track 569
Dataset 3: Sentinel-1 ascending track 130 and descending track 137
The time series and Mean LOS Velocity (MLV) products can be georeferenced and resampled using the maskTempCoh and geometryRadar products and the MintPy commands/functions.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
XSEDE Service Provider Resources during 2011–2015.
https://www.wiseguyreports.com/pages/privacy-policy
BASE YEAR | 2024
HISTORICAL DATA | 2019 - 2024
REPORT COVERAGE | Revenue Forecast, Competitive Landscape, Growth Factors, and Trends
MARKET SIZE 2023 | 18.53 (USD Billion)
MARKET SIZE 2024 | 23.47 (USD Billion)
MARKET SIZE 2032 | 155.5 (USD Billion)
SEGMENTS COVERED | Deployment Model, Application, Architecture, Memory, Regional
COUNTRIES COVERED | North America, Europe, APAC, South America, MEA
KEY MARKET DYNAMICS | Rising artificial intelligence (AI) adoption; growing demand for high-performance computing; advancements in machine learning algorithms; increasing adoption of cloud computing; government support for AI research and development
MARKET FORECAST UNITS | USD Billion
KEY COMPANIES PROFILED | Movidius, Imagination Technologies, Tensilica, NVIDIA, Xilinx, Cadence Design Systems, Synopsys, NXP, Google, Analog Devices, ARM, Qualcomm, CEVA, Intel
MARKET FORECAST PERIOD | 2025 - 2032
KEY MARKET OPPORTUNITIES | Cloud and edge computing; artificial intelligence; automotive applications; healthcare and medical imaging; industrial automation
COMPOUND ANNUAL GROWTH RATE (CAGR) | 26.66% (2025 - 2032)
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
The rapid development of Digital Twin (DT) technology has highlighted challenges on resource-constrained mobile devices, especially in extended reality (XR) applications, which include Augmented Reality (AR) and Virtual Reality (VR). These challenges lead to computational inefficiencies that degrade the user experience when dealing with sizeable 3D model assets. This article applies multiple lossless compression algorithms to improve the efficiency of digital twin asset delivery in Unity's AssetBundle and Addressable asset management frameworks. The study seeks an optimal configuration that reduces both bundle size and visualization time while simultaneously reducing CPU and RAM usage on mobile devices. It assesses compression methods such as LZ4, LZMA, Brotli, Fast LZ, and 7-Zip for their influence on AR performance, and builds mathematical models for predicting the resources, such as RAM and CPU time, required by AR mobile applications. Experimental results provide a detailed comparison of these compression algorithms, offering insight into choosing the best method according to compression ratio, decompression speed, and resource usage, ultimately enabling more efficient AR digital twin implementations on resource-constrained mobile platforms, with greater development flexibility and a better end-user experience. Our results show that LZ4 and Fast LZ perform best in speed and resource efficiency, especially with RAM caching, while 7-Zip/LZMA achieves the highest compression ratios at the cost of slower loading. Brotli emerged as a strong option for web-based AR/VR content, striking a balance between compression efficiency and decompression speed and outperforming Gzip in WebGL contexts. The Addressable Asset system with LZ4 offers the most efficient balance for real-time AR applications.
The study delivers practical guidance on selecting an optimal compression method to improve user experience and scalability for AR digital twin implementations.
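The ratio-versus-speed trade-off the study measures can be sketched with Python's standard library; zlib, bz2, and lzma stand in here for the Unity-side codecs (LZ4, Fast LZ, Brotli, 7-Zip), which are not available in the stdlib.

```python
# Hedged stand-in benchmark: compression ratio vs. speed for stdlib codecs.
import bz2, lzma, time, zlib

payload = b"vertex 1.0 2.0 3.0\n" * 5000  # synthetic stand-in for a 3D asset

for name, compress, decompress in [
    ("zlib", zlib.compress, zlib.decompress),
    ("bz2", bz2.compress, bz2.decompress),
    ("lzma", lzma.compress, lzma.decompress),
]:
    t0 = time.perf_counter()
    blob = compress(payload)
    t1 = time.perf_counter()
    restored = decompress(blob)
    t2 = time.perf_counter()
    assert restored == payload  # lossless round trip
    print(f"{name}: ratio {len(payload) / len(blob):.1f}x, "
          f"compress {t1 - t0:.4f}s, decompress {t2 - t1:.4f}s")
```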
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Summary of multiple linear regression analysis for total time prediction: Effects of vertex count and video size.
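The form of that model can be reproduced on synthetic numbers (not the paper's data) with an ordinary least-squares fit:

```python
# Illustrative OLS fit of total_time ~ b0 + b1*vertex_count + b2*video_size.
# All numbers are synthetic; only the model form matches the analysis.
import numpy as np

vertex_count = np.array([1e4, 5e4, 1e5, 2e5, 4e5])
video_size_mb = np.array([10.0, 25.0, 60.0, 120.0, 240.0])
total_time_s = 0.5 + 2e-5 * vertex_count + 0.01 * video_size_mb  # noiseless toy

X = np.column_stack([np.ones_like(vertex_count), vertex_count, video_size_mb])
beta, *_ = np.linalg.lstsq(X, total_time_s, rcond=None)
print(beta)  # recovers approximately [0.5, 2e-5, 0.01]
```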
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Summary of multiple linear regression analysis for maximum RAM prediction: Effects of vertex count and video size.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
CPU time for different values of α.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Summary of simple linear regression analysis conducted for compressed bundle size and vertex count.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
CPU allocation plan.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Model sizes and CPU times needed to solve the model in the computational efficiency test.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Average maximum memory usage for three registration methods and two label fusion methods.