30 datasets found
  1. cpu_usage

    • kaggle.com
    Updated Sep 9, 2023
    Cite
    Abdelrahman Hanafy (2023). cpu_usage [Dataset]. https://www.kaggle.com/datasets/abdelrahmanhanafy/cpu-usage
    Explore at:
    Croissant (a format for machine-learning datasets; learn more at mlcommons.org/croissant)
    Dataset updated
    Sep 9, 2023
    Dataset provided by
    Kaggle (http://kaggle.com/)
    Authors
    Abdelrahman Hanafy
    License

    https://cdla.io/sharing-1-0/

    Description

    Data collected to finish a task in an IoT course held by SIC Egypt. The data contains CPU metrics from the author's laptop, such as CPU usage, syscalls, and interrupts. It is meant for trying two different ways of doing linear regression: a time-series model on lagged data, and simple regression on the other metrics to predict CPU usage.
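
    As a rough illustration of those two setups, here is a minimal Python sketch; the CSV filename and the column names (cpu_usage, syscalls, interrupts) are assumptions and would need to be adjusted to the actual file in the Kaggle dataset.

        # Hedged sketch: two linear-regression setups on the CPU metrics.
        # Filename and column names below are assumptions, not taken from the dataset.
        import pandas as pd
        from sklearn.linear_model import LinearRegression
        from sklearn.model_selection import train_test_split

        df = pd.read_csv("cpu_usage.csv")  # hypothetical filename

        # (1) Time-series style: predict cpu_usage from its own lagged values.
        for lag in (1, 2, 3):
            df[f"cpu_lag_{lag}"] = df["cpu_usage"].shift(lag)
        lagged = df.dropna()
        X_lag = lagged[[f"cpu_lag_{k}" for k in (1, 2, 3)]]
        y = lagged["cpu_usage"]
        print("lag model R^2:", LinearRegression().fit(X_lag, y).score(X_lag, y))

        # (2) Cross-metric style: predict cpu_usage from the other metrics.
        X_other = lagged[["syscalls", "interrupts"]]
        X_tr, X_te, y_tr, y_te = train_test_split(X_other, y, shuffle=False)
        print("metrics model R^2:", LinearRegression().fit(X_tr, y_tr).score(X_te, y_te))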

  2. BigDataAD Benchmark Dataset

    • figshare.com
    zip
    Updated Sep 29, 2023
    Cite
    Kingsley Pattinson (2023). BigDataAD Benchmark Dataset [Dataset]. http://doi.org/10.6084/m9.figshare.24040563.v8
    Explore at:
    Available download formats: zip
    Dataset updated
    Sep 29, 2023
    Dataset provided by
    figshare
    Authors
    Kingsley Pattinson
    License

    https://www.apache.org/licenses/LICENSE-2.0.html

    Description

    The largest real-world dataset for multivariate time series anomaly detection (MTSAD), from the AIOps system of a Real-Time Data Warehouse (RTDW) at a top cloud computing company. All the metrics and labels in our dataset are derived from real-world scenarios. All metrics were obtained from the RTDW instance monitoring system and cover a rich variety of metric types, including CPU usage, queries per second (QPS) and latency, which relate to many important modules within the RTDW. We obtain labels from the ticket system, which integrates three main sources of instance anomalies: user service requests, instance unavailability and fault simulations. User service requests refer to tickets that are submitted directly by users, whereas instance unavailability is typically detected through existing monitoring tools or discovered by Site Reliability Engineers (SREs). Since the system is usually very stable, we augment the anomaly samples by conducting fault simulations. A fault simulation is a special type of anomaly, planned beforehand, which is introduced into the system to test its performance under extreme conditions. All records in the ticket system are subject to follow-up processing by engineers, who meticulously mark the start and end times of each ticket. This rigorous approach ensures the accuracy of the labels in our dataset.
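
    For orientation only, a minimal Python sketch of how interval labels of this kind (ticket start and end times) might be expanded into a point-wise anomaly indicator for evaluation; the file names and column names are assumptions, not taken from the dataset itself.

        # Hedged sketch: expand labeled intervals into a point-wise anomaly column.
        # "metrics.csv" / "labels.csv" and their columns are hypothetical.
        import pandas as pd

        metrics = pd.read_csv("metrics.csv", parse_dates=["timestamp"])
        tickets = pd.read_csv("labels.csv", parse_dates=["start", "end"])

        metrics["anomaly"] = 0
        for _, t in tickets.iterrows():
            inside = (metrics["timestamp"] >= t["start"]) & (metrics["timestamp"] <= t["end"])
            metrics.loc[inside, "anomaly"] = 1

        print("fraction of anomalous points:", metrics["anomaly"].mean())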

  3. CPU hours, institutions, and PI's by year.

    • plos.figshare.com
    xls
    Updated Jun 2, 2023
    Cite
    Richard Knepper; Katy Börner (2023). CPU hours, institutions, and PI's by year. [Dataset]. http://doi.org/10.1371/journal.pone.0157628.t003
    Explore at:
    Available download formats: xls
    Dataset updated
    Jun 2, 2023
    Dataset provided by
    PLOS ONE
    Authors
    Richard Knepper; Katy Börner
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    CPU hours, institutions, and PI's by year.

  4. anomaly_detection_metrics_data

    • huggingface.co
    Updated Feb 5, 2025
    Cite
    Shreyas Patil (2025). anomaly_detection_metrics_data [Dataset]. https://huggingface.co/datasets/ShreyasP123/anomaly_detection_metrics_data
    Explore at:
    Croissant (a format for machine-learning datasets; learn more at mlcommons.org/croissant)
    Dataset updated
    Feb 5, 2025
    Authors
    Shreyas Patil
    Description

    Dataset Card: Anomaly Detection Metrics Data

      Dataset Summary
    

    This dataset contains system performance metrics collected over time for anomaly detection in time series data. It includes multiple system metrics such as CPU load, memory usage, and other resource utilization statistics, along with timestamps and additional attributes.

      Dataset Details
    

    Size: ~7.3 MB (raw JSON), 345 kB (auto-converted Parquet)
    Rows: 46,669
    Format: JSON
    Libraries: datasets, pandas… See the full description on the dataset page: https://huggingface.co/datasets/ShreyasP123/anomaly_detection_metrics_data.
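
    Since the card lists the datasets and pandas libraries, a minimal loading sketch might look like the following; the split name and the conversion to pandas are assumptions to check against the dataset page.

        # Hedged sketch: load the dataset with the Hugging Face datasets library.
        from datasets import load_dataset

        ds = load_dataset("ShreyasP123/anomaly_detection_metrics_data")
        print(ds)                      # shows the splits and features actually present
        df = ds["train"].to_pandas()   # the "train" split name is an assumption
        print(df.head())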

  5. Data for "Thermal transport of glasses via machine learning driven...

    • data.niaid.nih.gov
    • zenodo.org
    Updated Feb 9, 2024
    Cite
    Grasselli, Federico (2024). Data for "Thermal transport of glasses via machine learning driven simulations" [Dataset]. https://data.niaid.nih.gov/resources?id=zenodo_10225315
    Explore at:
    Dataset updated
    Feb 9, 2024
    Dataset provided by
    Grasselli, Federico
    Pegolo, Paolo
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    This repository contains input and analysis scripts supporting the findings of Thermal transport of glasses via machine learning driven simulations, by P. Pegolo and F. Grasselli. Content:

    README.md: this file, information about the repository
    SiO2: vitreous silica parent folder
      NEP: folder with datasets and input scripts for NEP training
        train.xyz: training dataset
        test.xyz: validation dataset
        nep.in: NEP input script
        nep.txt: NEP model
        nep.restart: NEP restart file
      DP: folder with datasets and input scripts for DP training
        input.json: DeePMD training input
        dataset: DeePMD training dataset
        validation: DeePMD validation dataset
        frozen_model.pb: DP model
      GKMD: scripts for the GKMD simulations
        Tersoff: Tersoff reference simulation
          model.xyz: initial configuration
          run.in: GPUMD script
          SiO2.gpumd.tersoff88: Tersoff model parameters
          convert_movie_to_dump.py: script to convert the GPUMD XYZ trajectory to LAMMPS format for re-running the trajectory with the MLPs
        DP: DP simulation
          init.data: LAMMPS initial configuration
          in.lmp: LAMMPS input to re-run the Tersoff trajectory with the DP
        NEP: NEP simulation
          init.data: LAMMPS initial configuration
          in.lmp: LAMMPS input to re-run the Tersoff trajectory with the NEP. Note that this needs the NEP-CPU user package installed in LAMMPS; at the moment it is not possible to re-run a trajectory with GPUMD.
      QHGK: scripts for the QHGK simulations
        DP: DP data
          second.npy: second-order interatomic force constants
          third.npy: third-order interatomic force constants
          replicated_atoms.xyz: configuration
          dynmat: scripts to compute interatomic force constants with the DP model. Analogous scripts were used to compute IFCs with the other potentials.
            initial.data: non-optimized configuration
            in.dynmat.lmp: LAMMPS script to minimize the structure and compute second-order interatomic force constants
            in.third.lmp: LAMMPS script to compute third-order interatomic force constants
        Tersoff: Tersoff data
          second.npy: second-order interatomic force constants
          third.npy: third-order interatomic force constants
          replicated_atoms.xyz: configuration
        NEP: NEP data
          second.npy: second-order interatomic force constants
          third.npy: third-order interatomic force constants
          replicated_atoms.xyz: configuration
        qhgk.py: script to compute QHGK lifetimes and thermal conductivity
    Si: vitreous silicon parent folder
      QHGK: scripts for the QHGK simulations
        qhgk.py: script to compute QHGK lifetimes
        [N]: folder with the calculations on an N-atoms system
          second.npy: second-order interatomic force constants
          third.npy: third-order interatomic force constants
          replicated_atoms.xyz: configuration
    LiSi: vitreous lithium-intercalated silicon parent folder
      NEP: folder with datasets and input scripts for NEP training
        train.xyz: training dataset
        test.xyz: validation dataset
        nep.in: NEP input script
        nep.txt: NEP model
        nep.restart: NEP restart file
      EMD: folder with data on the equilibrium molecular dynamics simulations
        70k: data of the simulations with ~70k atoms
          1-45: folders with input scripts for the simulations at different Li concentrations
            fraction.dat: Li fraction, y, as in Li_{y}Si
            quench: scripts for the melt-quench-anneal sample preparation
              model.xyz: initial configuration
              restart.xyz: final configuration
              run.in: GPUMD input
            gk: scripts for the GKMD simulation
              model.xyz: initial configuration
              restart.xyz: final configuration
              run.in: GPUMD input
              cepstral: folder for cepstral analysis
                analyze.py: python script for cepstral analysis of the fluxes' time series generated by the GKMD runs

  6. Data from: Approaches in highly parameterized inversion: TSPROC, a general...

    • datadiscoverystudio.org
    html, pdf
    Updated Jan 16, 2017
    Cite
    (2017). Approaches in highly parameterized inversion: TSPROC, a general time-series processor to assist in model calibration and result summarization [Dataset]. http://datadiscoverystudio.org/geoportal/rest/metadata/item/ba629376d4e04ab3901200a97be16b60/html
    Explore at:
    Available download formats: pdf, html
    Dataset updated
    Jan 16, 2017
    Description

    Link to the ScienceBase Item Summary page for the item described by this metadata record. Service Protocol: link to the ScienceBase Item Summary page. Application Profile: Web Browser. Link Function: information.

  7. Artifact for Taxonomist: Application Detection through Rich Monitoring Data

    • springernature.figshare.com
    txt
    Updated May 30, 2023
    Cite
    Emre Ates; Ozan Tuncer; Ata Turk; Vitus J. Leung; Jim Brandt; Manuel Egele; Ayse K. Coskun (2023). Artifact for Taxonomist: Application Detection through Rich Monitoring Data [Dataset]. http://doi.org/10.6084/m9.figshare.6384248.v1
    Explore at:
    Available download formats: txt
    Dataset updated
    May 30, 2023
    Dataset provided by
    Figshare (http://figshare.com/)
    Authors
    Emre Ates; Ozan Tuncer; Ata Turk; Vitus J. Leung; Jim Brandt; Manuel Egele; Ayse K. Coskun
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Code, documentation, data and Jupyter Notebook associated with the publication "Taxonomist: Application Detection Through Rich Monitoring Data" for the European Conference on Parallel Processing 2018.

    The related study develops a technique named 'Taxonomist' to identify applications running on supercomputers, using machine learning to classify known applications and detect unknown applications. The technique uses monitoring data such as CPU and memory usage metrics and hardware counters collected from supercomputers. The aims of this technique include providing an alternative to 'naive' application detection methods based on the names of processes and scripts, and helping prevent fraud, waste and abuse in supercomputers.

    Taxonomist uses supervised learning techniques to automatically select the most relevant features that lead to reliable application identification. The process involves the following steps (sketched in code after the file listing below):

    1. Monitoring data is collected from every compute node in a time series format.
    2. 11 statistical features (e.g. percentiles, minimum, maximum, mean) are extracted over each time series, reducing storage and computation overhead.
    3. A classifier is trained on a set of labeled applications in a 'one-versus-rest' configuration: effectively, for each application in the training set a separate classifier is trained to differentiate that application.

    The dataset consists of:

    • README.pdf - user guide for the 'Taxonomist' artifact outlining installation and instructions for using the Jupyter notebook, as well as code omissions in the notebook compared to the process described in Euro-Par 2018.
    • taxonomist.py - Python file including a basic version of the Taxonomist framework. The module contents can be imported for other projects.
    • notebook.html - static HTML version of the notebook that can be viewed in a browser.
    • notebook.ipynb - interactive Jupyter Notebook file; for operation see README.pdf.
    • data.zip - compressed .zip file holding monitoring data collected from different applications executed on Volta:
      - metadata.csv: a CSV file listing each run, the IDs of the nodes on which each run executed, which application was executed with which inputs, the start and end times, and the duration of the applications.
      - timeseries.tar.bz2: a bzip2-compressed file containing the data collected. The uncompressed size is 16 GB; it is not necessary to uncompress it for most of the notebook.
      - features.hdf: an HDF5 file containing the pre-calculated features. The calculation process is included in the notebook.
    • requirements.txt - list of Python packages required.
    • LICENSE - the licence under which this software is released.

    Files are in openly accessible Python (.py and .ipynb), .html, .pdf, .csv, .txt and .zip formats, and Hierarchical Data Format (.hdf).

    The experimental set-up for the experiments reported in the related publication uses Volta, a Cray XC30m supercomputer located at Sandia National Laboratories, as well as the open source monitoring tool Lightweight Distributed Metric System (LDMS).
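
    A minimal Python sketch of the feature-extraction and one-versus-rest classification idea outlined in the steps above. This is illustrative only: it is not the authors' taxonomist.py, it uses synthetic stand-in data, and it does not reproduce the exact 11 features or the classifiers evaluated in the paper.

        import numpy as np
        from sklearn.ensemble import RandomForestClassifier
        from sklearn.multiclass import OneVsRestClassifier

        def summarize(series):
            # Collapse one metric's time series into a few statistical features.
            return np.concatenate([
                np.percentile(series, [5, 25, 50, 75, 95]),
                [series.min(), series.max(), series.mean(), series.std()],
            ])

        def featurize(run_metrics):
            # One feature vector per run: concatenated per-metric summaries.
            return np.concatenate([summarize(m) for m in run_metrics])

        # Synthetic stand-in for monitoring data: 20 runs, 3 metrics, 500 samples each.
        rng = np.random.default_rng(0)
        X = np.stack([featurize(rng.normal(size=(3, 500))) for _ in range(20)])
        y = ["app_a", "app_b"] * 10  # known application labels

        clf = OneVsRestClassifier(RandomForestClassifier(random_state=0)).fit(X, y)
        print(clf.predict(X[:2]))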

  8. Transition and Drivers of Elastic-Inelastic Deformation in the Abarkuh Plain...

    • data.niaid.nih.gov
    • zenodo.org
    Updated Jul 8, 2023
    + more versions
    Cite
    Estelle Chaussard (2023). Transition and Drivers of Elastic-Inelastic Deformation in the Abarkuh Plain from InSAR Multi-Sensor Time Series and Hydrogeological Data [Dataset]. https://data.niaid.nih.gov/resources?id=zenodo_5972150
    Explore at:
    Dataset updated
    Jul 8, 2023
    Dataset provided by
    Roland Bürgmann
    Shuanggen Jin
    Estelle Chaussard
    Sayyed Mohammad Javad Mirzadeh
    Abolfazl Rezaei
    Saba Ghotbi
    Andreas Braun
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Area covered
    Abarkuh
    Description

    This repository contains the datasets used in Mirzadeh et al., 2022. It includes three InSAR time-series datasets from the Envisat descending orbit, ALOS-1 ascending orbit, and Sentinel-1A in ascending and descending orbits, acquired over the Abarkuh Plain, Iran, as well as the geological map of the study area and the GNSS and hydrogeological data used in this research.

    Dataset 1: Envisat descending track 292

    Date: 06 Oct 2003 - 05 Sep 2005 (12 acquisitions)

    Processor: ISCE/stripmapStack + MintPy

    Displacement time-series (in HDF-EOS5 format): timeseries_LOD_tropHgt_ramp_demErr.h5

    Mean LOS Velocity (in HDF-EOS5 format): velocity.h5

    Mask Temporal Coherence (in HDF-EOS5 format): maskTempCoh.h5

    Geometry (in HDF-EOS5 format): geometryRadar.h5

    Dataset 2: ALOS-1 ascending track 569

    Date: 06 Dec 2006 - 17 Dec 2010 (14 acquisitions)

    Processor: ISCE/stripmapStack + MintPy

    Displacement time-series (in HDF-EOS5 format): timeseries_ERA5_ramp_demErr.h5

    Mean LOS Velocity (in HDF-EOS5 format): velocity.h5

    Mask Temporal Coherence (in HDF-EOS5 format): maskTempCoh.h5

    Geometry (in HDF-EOS5 format): geometryRadar.h5

    Dataset 3: Sentinel-1 ascending track 130 and descending track 137

    Date: 14 Oct 2014 - 28 Mar 2020 (129 ascending acquisitions) + 27 Oct 2014 - 29 Mar 2020 (114 descending acquisitions)

    Processor: ISCE/topsStack + MintPy

    Displacement time-series (in HDF-EOS5 format): timeseries_ERA5_ramp_demErr.h5

    Mean LOS Velocity (in HDF-EOS5 format): velocity.h5

    Mask Temporal Coherence (in HDF-EOS5 format): maskTempCoh.h5

    Geometry (in HDF-EOS5 format): geometryRadar.h5

    The time series and Mean LOS Velocity (MVL) products can be georeferenced and resampled using the maskTempCoh and geometryRadar products and the MintPy commands/functions.
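
    A minimal Python sketch for inspecting one of the HDF5/HDF-EOS5 products (for example velocity.h5) with h5py; rather than assuming internal dataset names, the script first lists whatever the file actually contains.

        import h5py
        import numpy as np

        with h5py.File("velocity.h5", "r") as f:
            f.visit(print)                 # print every group/dataset path in the file
            first = list(f.keys())[0]      # then peek at the first top-level entry
            if isinstance(f[first], h5py.Dataset):
                arr = np.asarray(f[first])
                print(first, arr.shape, float(np.nanmean(arr)))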

  9. Autonomous Underwater Vehicle Monterey Bay Time Series - CTD from AUV Makai...

    • search.dataone.org
    • bco-dmo.org
    • +1more
    Updated Mar 9, 2025
    Cite
    Dr Chris Scholin (2025). Autonomous Underwater Vehicle Monterey Bay Time Series - CTD from AUV Makai on 2016-02-03 [Dataset]. http://doi.org/10.26008/1912/bco-dmo.644012.1
    Explore at:
    Dataset updated
    Mar 9, 2025
    Dataset provided by
    Biological and Chemical Oceanography Data Management Office (BCO-DMO)
    Authors
    Dr Chris Scholin
    Time period covered
    Feb 3, 2016
    Area covered
    Description

    Autonomous Underwater Vehicle (AUV) Monterey Bay Time Series from Feb 2016. This data set includes CTD and fluorometer data from the Makai AUV, as context for ecogenomic sampling using an onboard Environmental Sample Processor (ESP).

  10. First data set.

    • plos.figshare.com
    xls
    Updated Dec 19, 2024
    + more versions
    Cite
    Mohammed Hlayel; Hairulnizam Mahdin; Mohammad Hayajneh; Saleh H. AlDaajeh; Siti Salwani Yaacob; Mazidah Mat Rejab (2024). First data set. [Dataset]. http://doi.org/10.1371/journal.pone.0314691.t004
    Explore at:
    Available download formats: xls
    Dataset updated
    Dec 19, 2024
    Dataset provided by
    PLOS ONE
    Authors
    Mohammed Hlayel; Hairulnizam Mahdin; Mohammad Hayajneh; Saleh H. AlDaajeh; Siti Salwani Yaacob; Mazidah Mat Rejab
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    The rapid development of Digital Twin (DT) technology has underlined challenges in resource-constrained mobile devices, especially in the application of extended realities (XR), which includes Augmented Reality (AR) and Virtual Reality (VR). These challenges lead to computational inefficiencies that negatively impact user experience when dealing with sizeable 3D model assets. This article applies multiple lossless compression algorithms to improve the efficiency of digital twin asset delivery in Unity’s AssetBundle and Addressable asset management frameworks. In this study, an optimal model will be obtained that reduces both bundle size and time required in visualization, simultaneously reducing CPU and RAM usage on mobile devices. This study has assessed compression methods, such as LZ4, LZMA, Brotli, Fast LZ, and 7-Zip, among others, for their influence on AR performance. This study also creates mathematical models for predicting resource utilization, like RAM and CPU time, required by AR mobile applications. Experimental results show a detailed comparison among these compression algorithms, which can give insights and help choose the best method according to the compression ratio, decompression speed, and resource usage. It finally leads to more efficient implementations of AR digital twins on resource-constrained mobile platforms with greater flexibility in development and a better end-user experience. Our results show that LZ4 and Fast LZ perform best in speed and resource efficiency, especially with RAM caching. At the same time, 7-Zip/LZMA achieves the highest compression ratios at the cost of slower loading. Brotli emerged as a strong option for web-based AR/VR content, striking a balance between compression efficiency and decompression speed, outperforming Gzip in WebGL contexts. The Addressable Asset system with LZ4 offers the most efficient balance for real-time AR applications. This study will deliver practical guidance on optimal compression method selection to improve user experience and scalability for AR digital twin implementations.
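
    For a rough sense of the ratio-versus-speed trade-off the study measures, here is a minimal Python sketch using standard-library codecs as stand-ins (the engines compared in the paper, such as LZ4 and Brotli, require third-party packages); the synthetic payload is only a placeholder for a real 3D asset bundle.

        import bz2
        import lzma
        import time
        import zlib

        payload = b"vertex 1.0 2.0 3.0\n" * 200_000  # synthetic stand-in for an asset

        for name, compress, decompress in [
            ("zlib", zlib.compress, zlib.decompress),
            ("lzma", lzma.compress, lzma.decompress),
            ("bz2", bz2.compress, bz2.decompress),
        ]:
            t0 = time.perf_counter()
            blob = compress(payload)
            t1 = time.perf_counter()
            decompress(blob)
            t2 = time.perf_counter()
            print(f"{name:5s} ratio={len(payload) / len(blob):7.1f} "
                  f"compress={t1 - t0:.3f}s decompress={t2 - t1:.3f}s")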

  11. XSEDE Service Provider Resources during 2011–2015.

    • plos.figshare.com
    xls
    Updated May 31, 2023
    Cite
    Richard Knepper; Katy Börner (2023). XSEDE Service Provider Resources during 2011–2015. [Dataset]. http://doi.org/10.1371/journal.pone.0157628.t001
    Explore at:
    Available download formats: xls
    Dataset updated
    May 31, 2023
    Dataset provided by
    PLOS (http://plos.org/)
    Authors
    Richard Knepper; Katy Börner
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    XSEDE Service Provider Resources during 2011–2015.

  12. Research data supporting "21st century progress in computing".

    • repository.cam.ac.uk
    zip
    Updated Mar 31, 2025
    Cite
    Coyle, Diane; Hampton, Lucy (2025). Research data supporting "21st century progress in computing". [Dataset]. http://doi.org/10.17863/CAM.113404
    Explore at:
    Available download formats: zip (638529 bytes)
    Dataset updated
    Mar 31, 2025
    Dataset provided by
    Apollo
    University of Cambridge
    Authors
    Coyle, Diane; Hampton, Lucy
    Description

    CPU and GPU time series of the cost of computing, as well as time series of the cost of cloud computing in the UK. Detailed descriptions of the series are available in the associated paper.

  13. Neural Processor Market Report

    • archivemarketresearch.com
    doc, pdf, ppt
    Updated Feb 5, 2025
    Cite
    Archive Market Research (2025). Neural Processor Market Report [Dataset]. https://www.archivemarketresearch.com/reports/neural-processor-market-9880
    Explore at:
    Available download formats: doc, ppt, pdf
    Dataset updated
    Feb 5, 2025
    Dataset authored and provided by
    Archive Market Research
    License

    https://www.archivemarketresearch.com/privacy-policy

    Time period covered
    2025 - 2033
    Area covered
    global
    Variables measured
    Market Size
    Description

    The global neural processor market is projected to grow exponentially in the coming years, driven by the increasing demand for artificial intelligence (AI) in various industries. The market is expected to reach a value of $281.4 million by 2033, expanding at a CAGR of 19.3% from 2025 to 2033. The growth is attributed to the rising adoption of AI in smartphones and tablets, autonomous vehicles, robotics, healthcare, smart home devices, cloud computing, industrial automation, and other applications. The key factors driving market growth include the increasing demand for AI-powered devices, advancements in AI algorithms and hardware, and government initiatives to promote AI adoption. The rising popularity of smartphones and tablets, the growing adoption of autonomous vehicles, and the increasing use of AI in healthcare and smart home devices are among the major trends influencing the market. However, market growth is subject to certain restraints, such as high hardware costs, data privacy and security concerns, and the need for skilled AI professionals. The neural processor market is experiencing unprecedented growth, driven by advancements in artificial intelligence and machine learning applications. Valued at [market value] million units in 2023, the market is projected to reach [market value] million units by 2030, exhibiting a CAGR of [growth rate]%.

    Recent developments include:

    • In September 2024, Intel Corporation released its Core Ultra 200V processors, the company's most power-efficient laptop chips to date. The chips include a neural processing unit optimized for running artificial intelligence models that is four times faster than the previous generation. This new architecture enhances overall efficiency while maximizing computational power.

    • In June 2024, Advanced Micro Devices Inc. introduced its artificial intelligence processors, including the MI325X accelerator, at the Computex technology trade show. The company also detailed its new neural processing units (NPUs), designed to handle on-device AI tasks in AI PCs, as part of a broader strategy to enhance its product lineup with significant performance improvements, including the MI350 series, expected to achieve 35 times better inference capabilities compared to its predecessors.

    • In May 2024, Apple Inc. unveiled the M4 chip for the iPad Pro, utilizing second-generation 3-nanometer technology to enhance power efficiency and enable a thinner design. The chip features a 10-core CPU, a high-performance GPU with Dynamic Caching and ray tracing, and the fastest Neural Engine, capable of 38 trillion operations per second.

    • In February 2024, MathWorks, Inc., a developer of mathematical computing software, launched a hardware support package for the Qualcomm Hexagon Neural Processing Unit. This package enables automated code generation from Simulink and MATLAB models customized for Qualcomm's architecture, improving data accuracy, ensuring standards compliance, and boosting developer productivity.

  14. Global Soc Deep Learning Chip Market Research Report: By Deployment Model...

    • wiseguyreports.com
    Updated Aug 6, 2024
    Cite
    Wiseguy Research Consultants Pvt Ltd (2024). Global Soc Deep Learning Chip Market Research Report: By Deployment Model (Cloud-based, On-premises, Hybrid), By Application (Image and Video Analytics, Speech and Natural Language Processing, Computer Vision, Time Series Analysis), By Architecture (Central Processing Unit (CPU), Graphics Processing Unit (GPU), Field Programmable Gate Array (FPGA), Application Specific Integrated Circuit (ASIC), By Memory (High Bandwidth Memory (HBM), Graphics Double Data Rate (GDDR, Dynamic Random Access Memory (DRAM) and By Regional (North America, Europe, South America, Asia Pacific, Middle East and Africa) - Forecast to 2032. [Dataset]. https://www.wiseguyreports.com/cn/reports/soc-deep-learning-chip-market
    Explore at:
    Dataset updated
    Aug 6, 2024
    Dataset authored and provided by
    Wiseguy Research Consultants Pvt Ltd
    License

    https://www.wiseguyreports.com/pages/privacy-policy

    Time period covered
    Jan 8, 2024
    Area covered
    Global
    Description
    BASE YEAR: 2024
    HISTORICAL DATA: 2019 - 2024
    REPORT COVERAGE: Revenue Forecast, Competitive Landscape, Growth Factors, and Trends
    MARKET SIZE 2023: 18.53 (USD Billion)
    MARKET SIZE 2024: 23.47 (USD Billion)
    MARKET SIZE 2032: 155.5 (USD Billion)
    SEGMENTS COVERED: Deployment Model, Application, Architecture, Memory, Regional
    COUNTRIES COVERED: North America, Europe, APAC, South America, MEA
    KEY MARKET DYNAMICS: Rising Artificial Intelligence (AI) Adoption; Growing Demand for High-Performance Computing; Advancements in Machine Learning Algorithms; Increasing Adoption of Cloud Computing; Government Support for AI Research and Development
    MARKET FORECAST UNITS: USD Billion
    KEY COMPANIES PROFILED: Movidius, Imagination Technologies, Tensilica, NVIDIA, Xilinx, Cadence Design Systems, Synopsys, NXP, Google, Analog Devices, ARM, Qualcomm, CEVA, Intel
    MARKET FORECAST PERIOD: 2025 - 2032
    KEY MARKET OPPORTUNITIES: Cloud and edge computing; Artificial intelligence; Automotive applications; Healthcare and medical imaging; Industrial automation
    COMPOUND ANNUAL GROWTH RATE (CAGR): 26.66% (2025 - 2032)

  15. Summary of multiple regression for decompression time prediction: Effects of...

    • plos.figshare.com
    xls
    Updated Dec 19, 2024
    Cite
    Mohammed Hlayel; Hairulnizam Mahdin; Mohammad Hayajneh; Saleh H. AlDaajeh; Siti Salwani Yaacob; Mazidah Mat Rejab (2024). Summary of multiple regression for decompression time prediction: Effects of vertex count and video size. [Dataset]. http://doi.org/10.1371/journal.pone.0314691.t008
    Explore at:
    Available download formats: xls
    Dataset updated
    Dec 19, 2024
    Dataset provided by
    PLOS ONE
    Authors
    Mohammed Hlayel; Hairulnizam Mahdin; Mohammad Hayajneh; Saleh H. AlDaajeh; Siti Salwani Yaacob; Mazidah Mat Rejab
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Summary of multiple regression for decompression time prediction: Effects of vertex count and video size.

  16. CPU time for different values of α.

    • plos.figshare.com
    xls
    Updated Jun 21, 2023
    + more versions
    Cite
    Shakoor Ahmad; Shumaila Javeed; Saqlain Raza; Dumitru Baleanu (2023). CPU time for different values of α. [Dataset]. http://doi.org/10.1371/journal.pone.0277472.t003
    Explore at:
    Available download formats: xls
    Dataset updated
    Jun 21, 2023
    Dataset provided by
    PLOS ONE
    Authors
    Shakoor Ahmad; Shumaila Javeed; Saqlain Raza; Dumitru Baleanu
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    CPU time for different values of α.

  17. Summary of multiple linear regression analysis for total time prediction:...

    • plos.figshare.com
    xls
    Updated Dec 19, 2024
    Cite
    Mohammed Hlayel; Hairulnizam Mahdin; Mohammad Hayajneh; Saleh H. AlDaajeh; Siti Salwani Yaacob; Mazidah Mat Rejab (2024). Summary of multiple linear regression analysis for total time prediction: Effects of vertex count and video size. [Dataset]. http://doi.org/10.1371/journal.pone.0314691.t007
    Explore at:
    Available download formats: xls
    Dataset updated
    Dec 19, 2024
    Dataset provided by
    PLOS ONE
    Authors
    Mohammed Hlayel; Hairulnizam Mahdin; Mohammad Hayajneh; Saleh H. AlDaajeh; Siti Salwani Yaacob; Mazidah Mat Rejab
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Summary of multiple linear regression analysis for total time prediction: Effects of vertex count and video size.

  18. Average of Hash Rate and of Power Consumption over time.

    • plos.figshare.com
    xls
    Updated May 31, 2023
    Cite
    Luisanna Cocco; Michele Marchesi (2023). Average of Hash Rate and of Power Consumption over time. [Dataset]. http://doi.org/10.1371/journal.pone.0164603.t003
    Explore at:
    Available download formats: xls
    Dataset updated
    May 31, 2023
    Dataset provided by
    PLOS ONE
    Authors
    Luisanna Cocco; Michele Marchesi
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Average of Hash Rate and of Power Consumption over time.

  19. Hardware technical specifications of utilized testbeds.

    • plos.figshare.com
    xls
    Updated Dec 19, 2024
    Cite
    Mohammed Hlayel; Hairulnizam Mahdin; Mohammad Hayajneh; Saleh H. AlDaajeh; Siti Salwani Yaacob; Mazidah Mat Rejab (2024). Hardware technical specifications of utilized testbeds. [Dataset]. http://doi.org/10.1371/journal.pone.0314691.t001
    Explore at:
    Available download formats: xls
    Dataset updated
    Dec 19, 2024
    Dataset provided by
    PLOS ONE
    Authors
    Mohammed Hlayel; Hairulnizam Mahdin; Mohammad Hayajneh; Saleh H. AlDaajeh; Siti Salwani Yaacob; Mazidah Mat Rejab
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Hardware technical specifications of utilized testbeds.

  20. Time (in seconds) and Speedup (S) for end-to-end all-to-all analysis of the...

    • figshare.com
    xls
    Updated May 30, 2023
    Cite
    Anuj Sharma; Elias S. Manolakos (2023). Time (in seconds) and Speedup (S) for end-to-end all-to-all analysis of the Proteus_300 dataset using pyMCPSC on a multi-core PC with Intel i7 CPU having 8 cores (16 threads), 32 GB RAM, running at 3.0 GHz, under Ubuntu 14.04 Linux. [Dataset]. http://doi.org/10.1371/journal.pone.0204587.t002
    Explore at:
    Available download formats: xls
    Dataset updated
    May 30, 2023
    Dataset provided by
    PLOS ONE
    Authors
    Anuj Sharma; Elias S. Manolakos
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    GRALIGN already uses all the CPU cores by default.
