https://cdla.io/sharing-1-0/
Data collected to complete a task for an IoT course held by SIC Egypt. The dataset contains CPU metrics from my laptop, such as CPU usage, syscalls, and interrupts. It is intended for trying two different linear-regression approaches: time-series regression on lagged data, and simple regression on the other metrics to predict CPU usage.
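A minimal sketch of the two approaches with plain least squares in NumPy; the synthetic arrays below are stand-ins for the laptop metrics, since the real column names and values are not given here:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-ins for the laptop metrics; the real dataset's columns
# (CPU usage, syscalls, interrupts) would be loaded instead.
n = 200
syscalls = rng.normal(1000, 100, n)
interrupts = rng.normal(500, 50, n)
cpu = 0.02 * syscalls + 0.03 * interrupts + rng.normal(0, 1, n)

# Approach 1: time-series regression on lagged CPU usage (3 lags here).
lags = 3
X_lag = np.column_stack([cpu[i:n - lags + i] for i in range(lags)])
coef_lag, *_ = np.linalg.lstsq(
    np.column_stack([np.ones(n - lags), X_lag]), cpu[lags:], rcond=None)

# Approach 2: simple regression of CPU usage on the other metrics.
X_feat = np.column_stack([np.ones(n), syscalls, interrupts])
coef_feat, *_ = np.linalg.lstsq(X_feat, cpu, rcond=None)
pred = X_feat @ coef_feat
```

Swapping the synthetic arrays for the real CSV columns is the only change needed to run both experiments on the actual data.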
https://www.apache.org/licenses/LICENSE-2.0.html
The largest real-world dataset for multivariate time series anomaly detection (MTSAD), drawn from the AIOps system of a Real-Time Data Warehouse (RTDW) at a top cloud computing company. All the metrics and labels in our dataset are derived from real-world scenarios. All metrics were obtained from the RTDW instance monitoring system and cover a rich variety of metric types, including CPU usage, queries per second (QPS) and latency, which relate to many important modules within the RTDW. We obtain labels from the ticket system, which integrates three main sources of instance anomalies: user service requests, instance unavailability and fault simulations. User service requests refer to tickets submitted directly by users, whereas instance unavailability is typically detected through existing monitoring tools or discovered by Site Reliability Engineers (SREs). Since the system is usually very stable, we augment the anomaly samples by conducting fault simulations. A fault simulation is a special type of anomaly, planned beforehand, which is introduced to the system to test its performance under extreme conditions. All records in the ticket system are subject to follow-up processing by engineers, who meticulously mark the start and end times of each ticket. This rigorous approach ensures the accuracy of the labels in our dataset.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
CPU hours, institutions, and PIs by year.
Dataset Card: Anomaly Detection Metrics Data
Dataset Summary
This dataset contains system performance metrics collected over time for anomaly detection in time series data. It includes multiple system metrics such as CPU load, memory usage, and other resource utilization statistics, along with timestamps and additional attributes.
Dataset Details
Size: ~7.3 MB (raw JSON), 345 kB (auto-converted Parquet)
Rows: 46,669
Format: JSON
Libraries: datasets, pandas…
See the full description on the dataset page: https://huggingface.co/datasets/ShreyasP123/anomaly_detection_metrics_data.
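For illustration, a minimal rolling z-score detector over a frame with the same flavor of schema; the column names and synthetic data below are assumptions standing in for the real download via the `datasets` library:

```python
import numpy as np
import pandas as pd

# Hypothetical frame mimicking the dataset's schema (timestamp + metric
# columns); the real data would be loaded from Hugging Face instead.
rng = np.random.default_rng(1)
ts = pd.date_range("2024-01-01", periods=500, freq="min")
cpu_load = rng.normal(0.4, 0.05, 500)
cpu_load[300] = 0.95  # injected spike to detect
df = pd.DataFrame({"timestamp": ts, "cpu_load": cpu_load})

# Rolling z-score: flag points far from the recent mean.
window = 30
roll = df["cpu_load"].rolling(window, min_periods=window)
z = (df["cpu_load"] - roll.mean()) / roll.std()
df["anomaly"] = z.abs() > 4

print(df.loc[df["anomaly"], "timestamp"].tolist())
```

This is only a baseline; the dataset is intended to support more serious time-series anomaly-detection models.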
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
This repository contains input and analysis scripts supporting the findings of "Thermal transport of glasses via machine learning driven simulations", by P. Pegolo and F. Grasselli. Content:

- README.md: this file, information about the repository
- SiO2: vitreous silica parent folder
  - NEP: folder with datasets and input scripts for NEP training
    - train.xyz: training dataset
    - test.xyz: validation dataset
    - nep.in: NEP input script
    - nep.txt: NEP model
    - nep.restart: NEP restart file
  - DP: folder with datasets and input scripts for DP training
    - input.json: DeePMD training input
    - dataset: DeePMD training dataset
    - validation: DeePMD validation dataset
    - frozen_model.pb: DP model
  - GKMD: scripts for the GKMD simulations
    - Tersoff: Tersoff reference simulation
      - model.xyz: initial configuration
      - run.in: GPUMD script
      - SiO2.gpumd.tersoff88: Tersoff model parameters
      - convert_movie_to_dump.py: script to convert the GPUMD XYZ trajectory to LAMMPS format for re-running the trajectory with the MLPs
    - DP: DP simulation
      - init.data: LAMMPS initial configuration
      - in.lmp: LAMMPS input to re-run the Tersoff trajectory with the DP
    - NEP: NEP simulation
      - init.data: LAMMPS initial configuration
      - in.lmp: LAMMPS input to re-run the Tersoff trajectory with the NEP. Note that this needs the NEP-CPU user package installed in LAMMPS; at the moment it is not possible to re-run a trajectory with GPUMD.
  - QHGK: scripts for the QHGK simulations
    - DP: DP data
      - second.npy: second-order interatomic force constants
      - third.npy: third-order interatomic force constants
      - replicated_atoms.xyz: configuration
      - dynmat: scripts to compute interatomic force constants with the DP model. Analogous scripts were also used to compute IFCs with the other potentials.
        - initial.data: non-optimized configuration
        - in.dynmat.lmp: LAMMPS script to minimize the structure and compute second-order interatomic force constants
        - in.third.lmp: LAMMPS script to compute third-order interatomic force constants
    - Tersoff: Tersoff data
      - second.npy: second-order interatomic force constants
      - third.npy: third-order interatomic force constants
      - replicated_atoms.xyz: configuration
    - NEP: NEP data
      - second.npy: second-order interatomic force constants
      - third.npy: third-order interatomic force constants
      - replicated_atoms.xyz: configuration
    - qhgk.py: script to compute QHGK lifetimes and thermal conductivity
- Si: vitreous silicon parent folder
  - QHGK: scripts for the QHGK simulations
    - qhgk.py: script to compute QHGK lifetimes
    - [N]: folder with the calculations on an N-atom system
      - second.npy: second-order interatomic force constants
      - third.npy: third-order interatomic force constants
      - replicated_atoms.xyz: configuration
- LiSi: vitreous lithium-intercalated silicon parent folder
  - NEP: folder with datasets and input scripts for NEP training
    - train.xyz: training dataset
    - test.xyz: validation dataset
    - nep.in: NEP input script
    - nep.txt: NEP model
    - nep.restart: NEP restart file
  - EMD: folder with data on the equilibrium molecular dynamics simulations
    - 70k: data of the simulations with ~70k atoms
      - 1-45: folders with input scripts for the simulations at different Li concentrations
        - fraction.dat: Li fraction, y, as in Li_{y}Si
        - quench: scripts for the melt-quench-anneal sample preparation
          - model.xyz: initial configuration
          - restart.xyz: final configuration
          - run.in: GPUMD input
        - gk: scripts for the GKMD simulation
          - model.xyz: initial configuration
          - restart.xyz: final configuration
          - run.in: GPUMD input
        - cepstral: folder for cepstral analysis
          - analyze.py: Python script for cepstral analysis of the flux time series generated by the GKMD runs
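As context for the cepstral-analysis step: in Green-Kubo analysis the transport coefficient is proportional to the zero-frequency power spectral density of the flux, which cepstral analysis (e.g. as implemented in tools like SporTran) estimates robustly. A bare-bones periodogram sketch on a synthetic flux, with placeholder values throughout (this is not the repository's analyze.py):

```python
import numpy as np

rng = np.random.default_rng(2)
dt = 1.0e-3                          # sampling interval, placeholder units
flux = rng.normal(0.0, 1.0, 2**14)   # stand-in for a GKMD heat-flux series

# Periodogram: |FFT|^2 scaled so that white noise of variance sigma^2
# has a flat level of sigma^2 * dt.
n = flux.size
spectrum = np.abs(np.fft.rfft(flux)) ** 2 * dt / n
freqs = np.fft.rfftfreq(n, d=dt)

# The Green-Kubo coefficient is read off at (or extrapolated to) f = 0;
# averaging bins here is a crude stand-in for the cepstral smoothing step.
psd_estimate = spectrum[1:].mean()
```

The single-bin periodogram is a very noisy estimator at f = 0, which is exactly why the cepstral smoothing performed by analyze.py is needed on real flux series.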
Link to the ScienceBase Item Summary page for the item described by this metadata record. Application Profile: Web Browser. Link Function: information.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Code, documentation, data and a Jupyter Notebook associated with the publication "Taxonomist: Application Detection Through Rich Monitoring Data", presented at the European Conference on Parallel Processing (Euro-Par) 2018. The related study develops a technique named 'Taxonomist' to identify applications running on supercomputers, using machine learning to classify known applications and detect unknown ones. The technique uses monitoring data such as CPU and memory usage metrics and hardware counters collected from supercomputers. Its aims include providing an alternative to 'naive' application detection methods based on the names of processes and scripts, and helping prevent fraud, waste and abuse in supercomputers.

Taxonomist uses supervised learning techniques to automatically select the most relevant features that lead to reliable application identification. The process involves the following steps:
1. Monitoring data is collected from every compute node in time-series format.
2. 11 statistical features (e.g. percentiles, minimum, maximum, mean) are extracted from each time series, reducing storage and computation overhead.
3. A classifier is trained on a set of labeled applications in a 'one-versus-rest' configuration: for each application in the training set, a separate classifier is trained to differentiate that application from the rest.

The dataset consists of:
- README.pdf: user guide for the 'Taxonomist' artifact, outlining installation and instructions for using the Jupyter Notebook, as well as code omissions in the notebook relative to the process described in the Euro-Par 2018 paper.
- taxonomist.py: Python file including a basic version of the Taxonomist framework. The module contents can be imported into other projects.
- notebook.html: static HTML version of the notebook that can be viewed in a browser.
- notebook.ipynb: interactive Jupyter Notebook file; for operation see README.pdf.
- data.zip: compressed .zip file holding monitoring data collected from different applications executed on Volta:
  - metadata.csv: a CSV file listing each run, the IDs of the nodes on which it executed, which application was executed with which inputs, and the start and end times and durations of the applications.
  - timeseries.tar.bz2: a bzip2-compressed file containing the collected data. The uncompressed size is 16 GB; it is not necessary to uncompress it for most of the notebook.
  - features.hdf: an HDF5 file containing the pre-calculated features. The calculation process is included in the notebook.
- requirements.txt: list of required Python packages.
- LICENSE: the licence under which this software is released.

Files are in openly accessible formats: Python (.py and .ipynb), .html, .pdf, .csv, .txt, .zip and Hierarchical Data Format (.hdf). The experimental set-up for the experiments reported in the related publication uses Volta, a Cray XC30m supercomputer located at Sandia National Laboratories, together with the open-source monitoring tool Lightweight Distributed Metric System (LDMS).
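The feature-extraction step (step 2 above) can be sketched as follows; the exact 11-feature set is an assumption here, since the description names only percentiles, minimum, maximum and mean among them:

```python
import numpy as np

def extract_features(series):
    """Condense one metric's time series into 11 summary statistics,
    in the spirit of Taxonomist's per-series feature extraction."""
    s = np.asarray(series, dtype=float)
    centered = s - s.mean()
    std = s.std()
    return np.array([
        s.min(), s.max(), s.mean(), std,
        *np.percentile(s, [5, 25, 50, 75, 95]),
        (centered ** 3).mean() / std ** 3,   # skewness
        (centered ** 4).mean() / std ** 4,   # kurtosis
    ])

# One feature vector per (node, metric) series; one-versus-rest classifiers
# are then trained on these vectors, one per labeled application.
feats = extract_features(np.sin(np.linspace(0, 10, 1000)))
```

Reducing each time series to a fixed-length vector is what keeps the storage and computation overhead low regardless of run length.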
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
This repository contains the datasets used in Mirzadeh et al., 2022. It includes three InSAR time-series datasets from the Envisat descending orbit, ALOS-1 ascending orbit, and Sentinel-1A in ascending and descending orbits, acquired over the Abarkuh Plain, Iran, as well as the geological map of the study area and the GNSS and hydrogeological data used in this research.
Dataset 1: Envisat descending track 292
Date: 06 Oct 2003 - 05 Sep 2005 (12 acquisitions)
Processor: ISCE/stripmapStack + MintPy
Displacement time-series (in HDF-EOS5 format): timeseries_LOD_tropHgt_ramp_demErr.h5
Mean LOS Velocity (in HDF-EOS5 format): velocity.h5
Mask Temporal Coherence (in HDF-EOS5 format): maskTempCoh.h5
Geometry (in HDF-EOS5 format): geometryRadar.h5
Dataset 2: ALOS-1 ascending track 569
Date: 06 Dec 2006 - 17 Dec 2010 (14 acquisitions)
Processor: ISCE/stripmapStack + MintPy
Displacement time-series (in HDF-EOS5 format): timeseries_ERA5_ramp_demErr.h5
Mean LOS Velocity (in HDF-EOS5 format): velocity.h5
Mask Temporal Coherence (in HDF-EOS5 format): maskTempCoh.h5
Geometry (in HDF-EOS5 format): geometryRadar.h5
Dataset 3: Sentinel-1 ascending track 130 and descending track 137
Date: 14 Oct 2014 - 28 Mar 2020 (129 ascending acquisitions) + 27 Oct 2014 - 29 Mar 2020 (114 descending acquisitions)
Processor: ISCE/topsStack + MintPy
Displacement time-series (in HDF-EOS5 format): timeseries_ERA5_ramp_demErr.h5
Mean LOS Velocity (in HDF-EOS5 format): velocity.h5
Mask Temporal Coherence (in HDF-EOS5 format): maskTempCoh.h5
Geometry (in HDF-EOS5 format): geometryRadar.h5
The time series and Mean LOS Velocity products can be georeferenced and resampled using the maskTempCoh and geometryRadar products together with the MintPy commands/functions.
Autonomous Underwater Vehicle (AUV) Monterey Bay Time Series from Feb 2016. This data set includes CTD and fluorometer data from the Makai AUV, as context for ecogenomic sampling using an onboard Environmental Sample Processor (ESP).
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
The rapid development of Digital Twin (DT) technology has highlighted challenges on resource-constrained mobile devices, especially in extended reality (XR) applications, which include Augmented Reality (AR) and Virtual Reality (VR). These challenges lead to computational inefficiencies that degrade the user experience when dealing with sizeable 3D model assets. This article applies multiple lossless compression algorithms to improve the efficiency of digital twin asset delivery in Unity’s AssetBundle and Addressable asset management frameworks. The study derives an optimal model that reduces both bundle size and visualization time while simultaneously reducing CPU and RAM usage on mobile devices. It assesses compression methods such as LZ4, LZMA, Brotli, Fast LZ, and 7-Zip for their influence on AR performance, and also builds mathematical models for predicting the resources, such as RAM and CPU time, required by mobile AR applications. Experimental results provide a detailed comparison of these compression algorithms, offering insights for choosing the best method according to compression ratio, decompression speed, and resource usage, and ultimately enabling more efficient implementations of AR digital twins on resource-constrained mobile platforms, with greater flexibility in development and a better end-user experience. Our results show that LZ4 and Fast LZ perform best in speed and resource efficiency, especially with RAM caching, while 7-Zip/LZMA achieves the highest compression ratios at the cost of slower loading. Brotli emerged as a strong option for web-based AR/VR content, striking a balance between compression efficiency and decompression speed and outperforming Gzip in WebGL contexts. The Addressable asset system with LZ4 offers the most efficient balance for real-time AR applications.

This study delivers practical guidance on selecting the optimal compression method to improve user experience and scalability for AR digital twin implementations.
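The ratio-versus-speed trade-off the study reports can be reproduced in miniature with Python's standard-library codecs (zlib approximates the Deflate/Gzip family, lzma the 7-Zip/LZMA family; LZ4 and Brotli need third-party packages and are omitted here):

```python
import bz2
import lzma
import time
import zlib

# Highly repetitive stand-in for a 3D asset bundle (~950 kB).
payload = b"digital twin asset " * 50_000

for name, codec in [("zlib", zlib), ("lzma", lzma), ("bz2", bz2)]:
    t0 = time.perf_counter()
    blob = codec.compress(payload)
    t_comp = time.perf_counter() - t0
    t0 = time.perf_counter()
    out = codec.decompress(blob)
    t_dec = time.perf_counter() - t0
    assert out == payload  # lossless round trip
    print(f"{name}: ratio={len(payload) / len(blob):.1f} "
          f"compress={t_comp * 1000:.1f} ms decompress={t_dec * 1000:.1f} ms")
```

On typical inputs lzma yields the best ratio but the slowest compression, mirroring the study's 7-Zip/LZMA finding; real asset bundles compress far less than this repetitive stand-in.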
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
XSEDE Service Provider Resources during 2011–2015.
Time series of the cost of CPU and GPU computing, and of the cost of cloud computing in the UK. The detailed descriptions of the series are available in the associated paper.
https://www.archivemarketresearch.com/privacy-policy
The global neural processor market is projected to grow exponentially in the coming years, driven by the increasing demand for artificial intelligence (AI) across industries. The market is expected to reach a value of $281.4 million by 2033, expanding at a CAGR of 19.3% from 2025 to 2033. The growth is attributed to the rising adoption of AI in smartphones and tablets, autonomous vehicles, robotics, healthcare, smart home devices, cloud computing, industrial automation, and other applications. Key factors driving the market include the increasing demand for AI-powered devices, advancements in AI algorithms and hardware, and government initiatives to promote AI adoption. The rising popularity of smartphones and tablets, the growing adoption of autonomous vehicles, and the increasing use of AI in healthcare and smart home devices are among the major trends influencing the market. However, growth is subject to certain restraints, such as high hardware costs, data privacy and security concerns, and the need for skilled AI professionals. Valued at [market value] million units in 2023, the market is projected to reach [market value] million units by 2030, exhibiting a CAGR of [growth rate]%.

Recent developments include: In September 2024, Intel Corporation released its Core Ultra 200V processors, the company's most power-efficient laptop chips to date. The chips include a neural processing unit optimized for running artificial intelligence models that is four times faster than the previous generation's; the new architecture enhances overall efficiency while maximizing computational power. In June 2024, Advanced Micro Devices Inc. introduced its artificial intelligence processors, including the MI325X accelerator, at the Computex technology trade show. The company also detailed its new neural processing units (NPUs), designed to handle on-device AI tasks in AI PCs, as part of a broader strategy to enhance its product lineup with significant performance improvements, including the MI350 series, expected to achieve 35 times better inference performance than its predecessors. In May 2024, Apple Inc. unveiled the M4 chip for the iPad Pro, using second-generation 3-nanometer technology to enhance power efficiency and enable a thinner design. The chip features a 10-core CPU, a high-performance GPU with Dynamic Caching and ray tracing, and the fastest Neural Engine to date, capable of 38 trillion operations per second. In February 2024, MathWorks, Inc., a developer of mathematical computing software, launched a hardware support package for the Qualcomm Hexagon Neural Processing Unit. This package enables automated code generation from Simulink and MATLAB models tailored to Qualcomm’s architecture, improving data accuracy, ensuring standards compliance, and boosting developer productivity.
https://www.wiseguyreports.com/pages/privacy-policy
BASE YEAR | 2024 |
HISTORICAL DATA | 2019 - 2024 |
REPORT COVERAGE | Revenue Forecast, Competitive Landscape, Growth Factors, and Trends |
MARKET SIZE 2023 | 18.53 (USD Billion) |
MARKET SIZE 2024 | 23.47 (USD Billion) |
MARKET SIZE 2032 | 155.5 (USD Billion) |
SEGMENTS COVERED | Deployment Model, Application, Architecture, Memory, Regional |
COUNTRIES COVERED | North America, Europe, APAC, South America, MEA |
KEY MARKET DYNAMICS | Rising artificial intelligence (AI) adoption, growing demand for high-performance computing, advancements in machine learning algorithms, increasing adoption of cloud computing, government support for AI research and development |
MARKET FORECAST UNITS | USD Billion |
KEY COMPANIES PROFILED | Movidius, Imagination Technologies, Tensilica, NVIDIA, Xilinx, Cadence Design Systems, Synopsys, NXP, Google, Analog Devices, ARM, Qualcomm, CEVA, Intel |
MARKET FORECAST PERIOD | 2025 - 2032 |
KEY MARKET OPPORTUNITIES | Cloud and edge computing, artificial intelligence, automotive applications, healthcare and medical imaging, industrial automation |
COMPOUND ANNUAL GROWTH RATE (CAGR) | 26.66% (2025 - 2032) |
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Summary of multiple regression for decompression time prediction: Effects of vertex count and video size.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
CPU time for different values of α.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Summary of multiple linear regression analysis for total time prediction: Effects of vertex count and video size.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Average hash rate and average power consumption over time.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Hardware technical specifications of utilized testbeds.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
GRALIGN already uses all the CPU cores by default.