100+ datasets found
  1. Elastic Source Imaging with Sparse Data - Dataset - LDM

    • service.tib.eu
    Updated Dec 16, 2024
    Cite
    (2024). Elastic Source Imaging with Sparse Data - Dataset - LDM [Dataset]. https://service.tib.eu/ldmservice/dataset/elastic-source-imaging-with-sparse-data
    Explore at:
    Dataset updated
    Dec 16, 2024
    Description

    The dataset used in the paper on elastic source imaging with data that are very sparse in both space and time.

  2. Floreat-f2 - Sparse Point Cloud LAS - Aug 2021 - Datasets - data.wa.gov.au

    • catalogue.data.wa.gov.au
    Updated Aug 17, 2021
    + more versions
    Cite
    (2021). Floreat-f2 - Sparse Point Cloud LAS - Aug 2021 - Datasets - data.wa.gov.au [Dataset]. https://catalogue.data.wa.gov.au/dataset/floreat-f2-sparse-point-cloud-las-aug-2021
    Explore at:
    Dataset updated
    Aug 17, 2021
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Area covered
    Western Australia, Floreat
    Description

    This is the first capture of the area North of the Floreat Surf Life Saving Club; these sand dunes were captured by UAV imagery on 17 Aug 2021 for the Cambridge Coastcare beach dune modelling and monitoring project. It was created as part of an initiative to innovatively monitor coastal dune erosion and visualize these changes over time for future management and mitigation. This data includes Orthomosaic, DSM, DTM, Elevation Contours, 3D Mesh, 3D Point Cloud and LiDAR constructed from over 500 images captured from UAV (drone) and processed in Pix4D. All datasets can be freely accessed through DataWA; links are available to an animated video fly-through of this 3D data model and to the Sketchfab visualisation of the 3D textured mesh. The dataset is a Sparse 3D Point Cloud (i.e. a 3D set of points): the X, Y, Z position and colour information are stored for each point of the point cloud. This dataset covers the area North of Floreat SLSC (the 2021 Flight-2 project area).

  3. Data from: Sparse Machine Learning Methods for Understanding Large Text...

    • data.nasa.gov
    • gimi9.com
    • +3more
    Updated Mar 31, 2025
    Cite
    nasa.gov (2025). Sparse Machine Learning Methods for Understanding Large Text Corpora [Dataset]. https://data.nasa.gov/dataset/sparse-machine-learning-methods-for-understanding-large-text-corpora
    Explore at:
    Dataset updated
    Mar 31, 2025
    Dataset provided by
    NASA (http://nasa.gov/)
    Description

    Sparse machine learning has recently emerged as a powerful tool to obtain models of high-dimensional data with a high degree of interpretability, at low computational cost. This paper posits that these methods can be extremely useful for understanding large collections of text documents, without requiring user expertise in machine learning. Our approach relies on three main ingredients: (a) multi-document text summarization and (b) comparative summarization of two corpora, both using sparse regression or classification; (c) sparse principal components and sparse graphical models for unsupervised analysis and visualization of large text corpora. We validate our approach using a corpus of Aviation Safety Reporting System (ASRS) reports and demonstrate that the methods can reveal causal and contributing factors in runway incursions. Furthermore, we show that the methods automatically discover four main tasks that pilots perform during flight, which can aid in further understanding the causal and contributing factors to runway incursions and other drivers for aviation safety incidents. Citation: L. El Ghaoui, G. C. Li, V. Duong, V. Pham, A. N. Srivastava, and K. Bhaduri, “Sparse Machine Learning Methods for Understanding Large Text Corpora,” Proceedings of the Conference on Intelligent Data Understanding, 2011.

  4. CITEseq22: Sparse RAW counts data

    • kaggle.com
    zip
    Updated Nov 20, 2022
    Cite
    Antonina Dolgorukova (2022). CITEseq22: Sparse RAW counts data [Dataset]. https://www.kaggle.com/datasets/antoninadolgorukova/sparse-raw-counts-data-open-problems-multimodal
    Explore at:
    Available download formats: zip (7575907904 bytes)
    Dataset updated
    Nov 20, 2022
    Authors
    Antonina Dolgorukova
    License

    https://cdla.io/sharing-1-0/

    Description

    Sparse matrices of raw counts data for the open-problems-multimodal competition. The script for generating the sparse matrices was shared by Wei Xie and can be found here.

    A similar dataset with normalized and log1p-transformed counts for the same cells can be found here.

    Each h5 file contains 5 arrays:

    axis0 (row index from the original h5 file)

    axis1 (column index from the original h5 file)

    value_i (attribute i in dgCMatrix in R, i.e. the indices array of csc_array in scipy.sparse)

    value_p (attribute p in dgCMatrix in R, i.e. the indptr array of csc_array in scipy.sparse)

    value_x (attribute x in dgCMatrix in R, i.e. the data array of csc_array in scipy.sparse)
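    Assuming the layout described above (the concrete numbers below are illustrative, not from the actual files), the three arrays map directly onto scipy.sparse.csc_array:

    ```python
    import numpy as np
    from scipy.sparse import csc_array

    # Hypothetical stand-ins for the arrays described above:
    # value_i -> indices (row index of each non-zero entry),
    # value_p -> indptr  (offset where each column's entries start),
    # value_x -> data    (the non-zero values themselves).
    indices = np.array([0, 2, 1])
    indptr = np.array([0, 2, 3, 3])       # 3 columns; the last column is empty
    data = np.array([1.0, 3.0, 2.0])

    # Reassemble the sparse matrix without ever materializing a dense array
    mat = csc_array((data, indices, indptr), shape=(3, 3))
    print(mat.toarray())
    ```

    The same triple would be loaded from each h5 file and passed to the csc_array constructor in one call.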

  5. Data from: Sparse Solutions for Single Class SVMs: A Bi-Criterion Approach

    • catalog.data.gov
    • s.cnmilf.com
    Updated Nov 14, 2025
    + more versions
    Cite
    Dashlink (2025). Sparse Solutions for Single Class SVMs: A Bi-Criterion Approach [Dataset]. https://catalog.data.gov/dataset/sparse-solutions-for-single-class-svms-a-bi-criterion-approach
    Explore at:
    Dataset updated
    Nov 14, 2025
    Dataset provided by
    Dashlink
    Description

    In this paper we propose an innovative learning algorithm, a variation of the one-class ν-Support Vector Machine (SVM) learning algorithm, to produce sparser solutions with much reduced computational complexity. The proposed technique returns an approximate solution, nearly as good as the solution set obtained by the classical approach, by minimizing the original risk function along with a regularization term. We introduce a bi-criterion optimization that helps guide the search towards the optimal set in much reduced time. The outcome of the proposed learning technique was compared with the benchmark one-class SVM algorithm, which more often leads to solutions with redundant support vectors. Throughout the analysis, the problem size for both optimization routines was kept consistent. We have tested the proposed algorithm on a variety of data sources under different conditions to demonstrate its effectiveness. In all cases the proposed algorithm closely preserves the accuracy of standard one-class ν-SVMs while reducing both training time and test time by several factors.
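    For context, the baseline one-class ν-SVM can be sketched with scikit-learn's OneClassSVM, where ν upper-bounds the fraction of training points treated as outliers and lower-bounds the fraction of support vectors; the data and parameter choices below are illustrative assumptions, not the paper's setup:

    ```python
    import numpy as np
    from sklearn.svm import OneClassSVM

    rng = np.random.default_rng(0)
    X = rng.normal(loc=0.0, scale=0.3, size=(200, 2))  # one tight "normal" cluster

    # nu ~ 0.1: at most ~10% of training points become margin errors (outliers),
    # and at least ~10% of points end up as support vectors.
    clf = OneClassSVM(kernel="rbf", gamma="scale", nu=0.1).fit(X)

    preds = clf.predict(X)               # +1 = inlier, -1 = outlier
    n_sv = len(clf.support_vectors_)     # redundancy the paper aims to reduce
    ```

    The support-vector count n_sv is exactly the quantity the proposed bi-criterion method tries to shrink while keeping accuracy close to this baseline.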

  6. Replication Data for: Plug-and-Play: Improve Depth Estimation via Sparse...

    • dataverse.lib.nycu.edu.tw
    bin, gif, org, png +2
    Updated Jun 14, 2022
    Cite
    NYCU Dataverse (2022). Replication Data for: Plug-and-Play: Improve Depth Estimation via Sparse Data Propagation [Dataset]. http://doi.org/10.57770/CNNFTP
    Explore at:
    Available download formats: org (1165), text/x-python (819), bin (1203), text/markdown (5719), gif (11543315), png (80201)
    Dataset updated
    Jun 14, 2022
    Dataset provided by
    NYCU Dataverse
    License

    https://dataverse.lib.nycu.edu.tw/api/datasets/:persistentId/versions/1.0/customlicense?persistentId=doi:10.57770/CNNFTP

    Description

    We propose a novel plug-and-play (PnP) module for improving depth prediction that takes arbitrary patterns of sparse depths as input. Given any pre-trained depth prediction model, our PnP module updates the intermediate feature map such that the model outputs new depths consistent with the given sparse depths. Our method requires no additional training and can be applied to practical applications such as leveraging both RGB and sparse LiDAR points to robustly estimate a dense depth map. Our approach achieves consistent improvements on various state-of-the-art methods on indoor (i.e., NYU-v2) and outdoor (i.e., KITTI) datasets. Various types of LiDARs are also synthesized in our experiments to verify the general applicability of our PnP module in practice.

  7. Citation Trends for "Effect of data representation on cost of sparse matrix...

    • shibatadb.com
    Updated Oct 12, 2025
    Cite
    Yubetsu (2025). Citation Trends for "Effect of data representation on cost of sparse matrix operations" [Dataset]. https://www.shibatadb.com/article/EaMxUD28
    Explore at:
    Dataset updated
    Oct 12, 2025
    Dataset authored and provided by
    Yubetsu
    License

    https://www.shibatadb.com/license/data/proprietary/v1.0/license.txt

    Time period covered
    1982 - 1988
    Variables measured
    New Citations per Year
    Description

    Yearly citation counts for the publication titled "Effect of data representation on cost of sparse matrix operations".

  8. Data from: SparsePoser: Real-time Full-body Motion Reconstruction from...

    • data.niaid.nih.gov
    • data.europa.eu
    Updated Oct 12, 2023
    Cite
    Ponton, Jose Luis; Yun, Haoran; Aristidou, Andreas; Andujar, Carlos; Pelechano, Nuria (2023). SparsePoser: Real-time Full-body Motion Reconstruction from Sparse Data [Dataset]. https://data.niaid.nih.gov/resources?id=zenodo_8427980
    Explore at:
    Dataset updated
    Oct 12, 2023
    Dataset provided by
    Universitat Politècnica de Catalunya
    University of Cyprus and CYENS Centre of Excellence
    Authors
    Ponton, Jose Luis; Yun, Haoran; Aristidou, Andreas; Andujar, Carlos; Pelechano, Nuria
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Data used for the paper SparsePoser: Real-time Full-body Motion Reconstruction from Sparse Data

    It contains over 1GB of high-quality motion capture data recorded with an Xsens Awinda system while using a variety of VR applications in Meta Quest devices.

    Visit the paper website!

    If you find our data useful, please cite our paper:

    @article{10.1145/3625264,
      author    = {Ponton, Jose Luis and Yun, Haoran and Aristidou, Andreas and Andujar, Carlos and Pelechano, Nuria},
      title     = {SparsePoser: Real-Time Full-Body Motion Reconstruction from Sparse Data},
      year      = {2023},
      publisher = {Association for Computing Machinery},
      address   = {New York, NY, USA},
      issn      = {0730-0301},
      url       = {https://doi.org/10.1145/3625264},
      doi       = {10.1145/3625264},
      journal   = {ACM Trans. Graph.},
      month     = {oct}
    }

  9. Python Sparse Matrix - Open Problem Multimodal

    • kaggle.com
    zip
    Updated Oct 14, 2022
    Cite
    Wei Xie (2022). Python Sparse Matrix - Open Problem Multimodal [Dataset]. https://www.kaggle.com/datasets/stautxie/python-sparse-matrix-open-problem-multimodal
    Explore at:
    Available download formats: zip (10844965431 bytes)
    Dataset updated
    Oct 14, 2022
    Authors
    Wei Xie
    License

    https://cdla.io/sharing-1-0/

    Description

    Dataset

    This dataset was created by Wei Xie

    Released under Community Data License Agreement - Sharing - Version 1.0


  10. Data from: Provable Low Rank Plus Sparse Matrix Separation Via Nonconvex...

    • resodate.org
    • service.tib.eu
    Updated Dec 16, 2024
    Cite
    April Sagan; John E. Mitchell (2024). Provable Low Rank Plus Sparse Matrix Separation Via Nonconvex Regularizers [Dataset]. https://resodate.org/resources/aHR0cHM6Ly9zZXJ2aWNlLnRpYi5ldS9sZG1zZXJ2aWNlL2RhdGFzZXQvcHJvdmFibGUtbG93LXJhbmstcGx1cy1zcGFyc2UtbWF0cml4LXNlcGFyYXRpb24tdmlhLW5vbmNvbnZleC1yZWd1bGFyaXplcnM=
    Explore at:
    Dataset updated
    Dec 16, 2024
    Dataset provided by
    Leibniz Data Manager
    Authors
    April Sagan; John E. Mitchell
    Description

    This paper considers a large class of problems where we seek to recover a low rank matrix and/or sparse vector from some set of measurements.

  11. Data from: Generating fast sparse matrix vector multiplication from a high...

    • datadryad.org
    • data.niaid.nih.gov
    • +1more
    zip
    Updated Mar 19, 2020
    Cite
    Federico Pizzuti; Michel Steuwer; Christophe Dubach (2020). Generating fast sparse matrix vector multiplication from a high level generic functional IR [Dataset]. http://doi.org/10.5061/dryad.wstqjq2gs
    Explore at:
    Available download formats: zip
    Dataset updated
    Mar 19, 2020
    Dataset provided by
    Dryad
    Authors
    Federico Pizzuti; Michel Steuwer; Christophe Dubach
    Time period covered
    Mar 19, 2020
    Description

    Usage of high-level intermediate representations promises the generation of fast code from a high-level description, improving the productivity of developers while achieving the performance traditionally only reached with low-level programming approaches.

    High-level IRs come in two flavors: 1) domain-specific IRs designed to express computations only for a specific application area; or 2) generic high-level IRs that can be used to generate high-performance code across many domains. Developing generic IRs is more challenging but offers the advantage of reusing a common compiler infrastructure across various applications.

    In this paper, we extend a generic high-level IR to enable efficient computation with sparse data structures. Crucially, we encode the sparse representation using reusable dense building blocks already present in the high-level IR. We use a form of dependent types to model sparse matrices in CSR format by explicitly expressing the relationship between the multiple dense arrays that separately store ...
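    The dense building blocks behind a CSR matrix (a values array, a column-index array, and a row-pointer array; the names below are illustrative) are easy to see in a minimal reference sparse matrix-vector multiplication:

    ```python
    import numpy as np

    def csr_spmv(values, col_idx, row_ptr, x):
        """y = A @ x for A stored in CSR form (plain reference sketch)."""
        n_rows = len(row_ptr) - 1
        y = np.zeros(n_rows)
        for row in range(n_rows):
            # Non-zeros of this row live in values[start:end],
            # with their column positions in col_idx[start:end].
            start, end = row_ptr[row], row_ptr[row + 1]
            y[row] = values[start:end] @ x[col_idx[start:end]]
        return y

    # A = [[1, 0, 2],
    #      [0, 0, 3]]
    values = np.array([1.0, 2.0, 3.0])
    col_idx = np.array([0, 2, 2])
    row_ptr = np.array([0, 2, 3])
    y = csr_spmv(values, col_idx, row_ptr, np.array([1.0, 1.0, 1.0]))  # [3., 3.]
    ```

    A code generator as described in the paper would emit a fused, vectorized version of exactly this loop nest from the dense building blocks.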

  12. Sparse Models Serving Market Research Report 2033

    • growthmarketreports.com
    csv, pdf, pptx
    Updated Oct 7, 2025
    Cite
    Growth Market Reports (2025). Sparse Models Serving Market Research Report 2033 [Dataset]. https://growthmarketreports.com/report/sparse-models-serving-market
    Explore at:
    Available download formats: pdf, pptx, csv
    Dataset updated
    Oct 7, 2025
    Dataset authored and provided by
    Growth Market Reports
    Time period covered
    2024 - 2032
    Area covered
    Global
    Description

    Sparse Models Serving Market Outlook



    According to our latest research, the global sparse models serving market size reached USD 1.35 billion in 2024, reflecting the rapid adoption of efficient AI model deployment across industries. With a robust compound annual growth rate (CAGR) of 31.7% projected between 2025 and 2033, the market is forecasted to achieve a value of USD 14.52 billion by 2033. This remarkable growth trajectory is primarily driven by the increasing demand for scalable, low-latency AI inference solutions, as enterprises seek to optimize resource utilization and reduce operational costs in an era of data explosion and AI-driven digital transformation.




    A key growth factor for the sparse models serving market is the exponential increase in the deployment of artificial intelligence and machine learning solutions across diverse sectors. As organizations generate and process massive volumes of data, the need for highly efficient and scalable AI models has become paramount. Sparse models, which leverage techniques such as weight pruning and quantization, enable faster inference and reduced memory footprints without compromising accuracy. This capability is particularly valuable for real-time applications in industries such as finance, healthcare, and retail, where latency and resource efficiency are critical. The widespread adoption of edge computing and IoT devices further amplifies the demand for sparse model serving, as these environments often operate under stringent computational constraints.




    Another significant driver is the ongoing advancements in hardware accelerators and AI infrastructure. The evolution of specialized hardware, such as GPUs, TPUs, and custom AI chips, has enabled the efficient execution of sparse models at scale. Leading technology providers are investing heavily in the development of optimized software frameworks and libraries that support sparse computation, making it easier for organizations to deploy and manage these models in production environments. The integration of sparse model serving with cloud-native platforms and container orchestration systems has further streamlined the operationalization of AI workloads, allowing enterprises to achieve seamless scalability, high availability, and cost-effectiveness. This technological synergy is accelerating the adoption of sparse models serving across both on-premises and cloud deployments.




    The growing emphasis on sustainable AI and green computing is also propelling the market forward. Sparse models consume significantly less energy and computational resources compared to dense models, aligning with the global push towards environmentally responsible technology practices. Enterprises are increasingly prioritizing solutions that not only deliver high performance but also minimize their carbon footprint. Sparse model serving addresses this need by enabling efficient utilization of existing hardware, reducing the frequency of hardware upgrades, and lowering overall power consumption. This sustainability aspect is becoming a key differentiator for vendors in the sparse models serving market, as regulatory frameworks and corporate social responsibility initiatives place greater emphasis on eco-friendly AI deployments.




    From a regional perspective, North America currently dominates the sparse models serving market, accounting for the largest share in 2024. The region’s leadership can be attributed to its advanced digital infrastructure, strong presence of AI technology providers, and early adoption of cutting-edge AI solutions across sectors such as BFSI, healthcare, and IT. Europe and Asia Pacific are also witnessing rapid growth, driven by increasing investments in AI research, government initiatives supporting digital transformation, and the proliferation of data-centric industries. Emerging markets in Latin America and the Middle East & Africa are gradually catching up, as enterprises in these regions recognize the value of efficient AI model deployment in enhancing competitiveness and operational efficiency.





    Compone

  13. Related Data for: Efficient FPGA-based sparse matrix-vector multiplication...

    • researchdata.ntu.edu.sg
    Updated Sep 20, 2023
    Cite
    Shiqing Li; Shiqing Li; Di Liu; Di Liu; Weichen Liu; Weichen Liu (2023). Related Data for: Efficient FPGA-based sparse matrix-vector multiplication with data reuse-aware compression [Dataset]. http://doi.org/10.21979/N9/EXZ0Y3
    Explore at:
    Dataset updated
    Sep 20, 2023
    Dataset provided by
    DR-NTU (Data)
    Authors
    Shiqing Li; Shiqing Li; Di Liu; Di Liu; Weichen Liu; Weichen Liu
    License

    Attribution-NonCommercial 4.0 (CC BY-NC 4.0): https://creativecommons.org/licenses/by-nc/4.0/
    License information was derived automatically

    Dataset funded by
    Ministry of Education (MOE)
    Nanyang Technological University
    Description

    The code for the paper: Efficient FPGA-based sparse matrix-vector multiplication with data reuse-aware compression

  14. Related data for: Optimized Data Reuse via Reordering for Sparse...

    • researchdata.ntu.edu.sg
    Updated Mar 28, 2022
    Cite
    Shiqing Li; Shiqing Li; Weichen Liu; Weichen Liu (2022). Related data for:Optimized Data Reuse via Reordering for Sparse Matrix-Vector Multiplication on FPGAs [Dataset]. http://doi.org/10.21979/N9/ATEYFB
    Explore at:
    Dataset updated
    Mar 28, 2022
    Dataset provided by
    DR-NTU (Data)
    Authors
    Shiqing Li; Shiqing Li; Weichen Liu; Weichen Liu
    License

    Attribution-NonCommercial 4.0 (CC BY-NC 4.0): https://creativecommons.org/licenses/by-nc/4.0/
    License information was derived automatically

    Dataset funded by
    Nanyang Technological University
    Ministry of Education (MOE)
    Description

    This dataset is related to our ICCAD work "Optimized Data Reuse via Reordering for Sparse Matrix-Vector Multiplication on FPGAs".

  15. Augmented dataset

    • figshare.com
    bin
    Updated Dec 22, 2024
    Cite
    Lijun Wang (2024). Augmented dataset [Dataset]. http://doi.org/10.6084/m9.figshare.28079147.v2
    Explore at:
    Available download formats: bin
    Dataset updated
    Dec 22, 2024
    Dataset provided by
    Figshare (http://figshare.com/)
    Authors
    Lijun Wang
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Augmented non-deterministic dataset generated through MCMC and the auxiliary SWAP model.

  16. Data from: MIST: l0 Sparse Linear Regression with Momentum

    • service.tib.eu
    Updated Jan 3, 2025
    Cite
    (2025). MIST: l0 Sparse Linear Regression with Momentum [Dataset]. https://service.tib.eu/ldmservice/dataset/mist--l0-sparse-linear-regression-with-momentum
    Explore at:
    Dataset updated
    Jan 3, 2025
    Description

    The dataset used in the paper is a large linear system of equations with a sparse solution. The authors used this dataset to test their Momentumized Iterative Shrinkage Thresholding (MIST) algorithm.
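    As a hedged sketch of the general idea, a momentum-accelerated iterative hard-thresholding scheme for a sparse linear system Ax = b can look like the following; this is a generic illustration of the technique's shape, not necessarily the exact MIST update from the paper:

    ```python
    import numpy as np

    def hard_threshold(v, s):
        """Keep the s largest-magnitude entries of v, zero out the rest."""
        out = np.zeros_like(v)
        keep = np.argsort(np.abs(v))[-s:]
        out[keep] = v[keep]
        return out

    def iht_momentum(A, b, s, step=1.0, beta=0.5, n_iter=100):
        """Iterative hard thresholding with a heavy-ball momentum term."""
        x = np.zeros(A.shape[1])
        z = x.copy()
        for _ in range(n_iter):
            grad = A.T @ (A @ z - b)                 # gradient of 0.5*||Az - b||^2
            x_new = hard_threshold(z - step * grad, s)
            z = x_new + beta * (x_new - x)           # momentum extrapolation
            x = x_new
        return x

    # With A = I the sparse solution is simply the s largest entries of b.
    A = np.eye(5)
    b = np.array([0.0, 4.0, 0.0, -3.0, 0.1])
    x = iht_momentum(A, b, s=2)
    ```

    The momentum (extrapolation) step is what distinguishes this family from plain iterative shrinkage thresholding, typically improving convergence speed on large systems like the one in this dataset.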

  17. Replication Data for: Fast Sparse Grid Operations using the Unidirectional...

    • darus.uni-stuttgart.de
    Updated Mar 29, 2022
    Cite
    David Holzmüller (2022). Replication Data for: Fast Sparse Grid Operations using the Unidirectional Principle: A Generalized and Unified Framework [Dataset]. http://doi.org/10.18419/DARUS-1779
    Explore at:
    Croissant (a format for machine-learning datasets; see mlcommons.org/croissant)
    Dataset updated
    Mar 29, 2022
    Dataset provided by
    DaRUS
    Authors
    David Holzmüller
    License

    https://www.apache.org/licenses/LICENSE-2.0

    Dataset funded by
    DFG
    Description

    This dataset contains supplementary code for the paper Fast Sparse Grid Operations using the Unidirectional Principle: A Generalized and Unified Framework. The code is also provided on GitHub. Here, we additionally provide the runtime measurement data generated by the code, which was used to generate the runtime plot in the paper. For more details, we refer to the file README.md.

  18. Sparse-Matrix Compression Engine Market Research Report 2033

    • growthmarketreports.com
    csv, pdf, pptx
    Updated Aug 29, 2025
    Cite
    Growth Market Reports (2025). Sparse-Matrix Compression Engine Market Research Report 2033 [Dataset]. https://growthmarketreports.com/report/sparse-matrix-compression-engine-market
    Explore at:
    Available download formats: pdf, pptx, csv
    Dataset updated
    Aug 29, 2025
    Dataset authored and provided by
    Growth Market Reports
    Time period covered
    2024 - 2032
    Area covered
    Global
    Description

    Sparse-Matrix Compression Engine Market Outlook



    According to our latest research, the global Sparse-Matrix Compression Engine market size reached USD 1.42 billion in 2024, reflecting robust adoption across high-performance computing and advanced analytics sectors. The market is poised for substantial expansion, with a projected CAGR of 15.8% during the forecast period. By 2033, the market is forecasted to achieve a value of USD 5.18 billion, driven by escalating data complexity, the proliferation of machine learning applications, and the imperative for efficient storage and computational solutions. The surge in demand for real-time analytics and the growing penetration of artificial intelligence across industries are primary factors fueling this remarkable growth trajectory.




    One of the key growth drivers for the Sparse-Matrix Compression Engine market is the exponential increase in data generation and the corresponding need for efficient data processing and storage. As organizations in sectors such as scientific computing, finance, and healthcare grapple with large-scale, high-dimensional datasets, the requirement for optimized storage solutions becomes paramount. Sparse-matrix compression engines enable significant reduction in data redundancy, leading to lower storage costs and faster data retrieval. This efficiency is particularly crucial in high-performance computing environments where memory bandwidth and storage limitations can hinder computational throughput. The adoption of these engines is further propelled by advancements in hardware accelerators and software algorithms that enhance compression ratios without compromising data integrity.




    Another significant factor contributing to market growth is the rising adoption of machine learning and artificial intelligence across diverse industry verticals. Modern AI and ML algorithms often operate on sparse datasets, especially in areas such as natural language processing, recommendation systems, and scientific simulations. Sparse-matrix compression engines play a pivotal role in minimizing memory footprint and optimizing computational resources, thereby accelerating model training and inference. The integration of these engines into cloud-based and on-premises solutions allows enterprises to scale their AI workloads efficiently, driving widespread deployment in both research and commercial applications. Additionally, the ongoing evolution of lossless and lossy compression techniques is expanding the applicability of these engines to new and emerging use cases.




    The market is also benefiting from the increasing emphasis on cost optimization and energy efficiency in data centers and enterprise IT infrastructure. As organizations strive to reduce operational expenses and carbon footprints, the adoption of compression technologies that minimize data movement and storage requirements becomes a strategic imperative. Sparse-matrix compression engines facilitate this by enabling higher data throughput and lower energy consumption, making them attractive for deployment in large-scale analytics, telecommunications, and industrial automation. Furthermore, the growing ecosystem of service providers and solution integrators is making these technologies more accessible to small and medium enterprises, contributing to broader market penetration.



    The development of High-Speed Hardware Compression Chip technology is revolutionizing the Sparse-Matrix Compression Engine market. These chips are designed to accelerate data compression processes, significantly enhancing the performance of high-performance computing systems. By integrating these chips, organizations can achieve faster data processing speeds, which is crucial for handling large-scale datasets in real-time analytics and AI applications. The chips offer a unique advantage by reducing latency and improving throughput, making them an essential component in modern data centers. As the demand for efficient data management solutions grows, the adoption of high-speed hardware compression chips is expected to rise, driving further innovation and competitiveness in the market.




    From a regional perspective, North America continues to dominate the Sparse-Matrix Compression Engine market, accounting for the largest revenue share in 2024 owing to the presence of leading technology companies, advanced research institutions, and

  19. Data from: Direction matching for sparse movement data sets: determining...

    • datasetcatalog.nlm.nih.gov
    • search.dataone.org
    Updated Dec 4, 2024
    + more versions
    Cite
    Barrett, Louise; Bonnell, Tyler R.; Bonnell, Tyler R.; Henzi, S. Peter; Henzi, S. Peter; Barrett, Louise (2024). Data from: Direction matching for sparse movement data sets: determining interaction rules in social groups [Dataset]. http://doi.org/10.5683/SP3/UBPQB8
    Explore at:
    Dataset updated
    Dec 4, 2024
    Authors
    Barrett, Louise; Bonnell, Tyler R.; Bonnell, Tyler R.; Henzi, S. Peter; Henzi, S. Peter; Barrett, Louise
    Description

    Abstract: It is generally assumed that high-resolution movement data are needed to extract meaningful decision-making patterns of animals on the move. Here we propose a modified version of force matching (referred to here as direction matching), whereby sparse movement data (i.e., collected over minutes instead of seconds) can be used to test hypothesized forces acting on a focal animal based on their ability to explain observed movement. We first test the direction matching approach using simulated data from an agent-based model, and then go on to apply it to a sparse movement data set collected on a troop of baboons in the DeHoop Nature Reserve, South Africa. We use the baboon data set to test the hypothesis that an individual’s motion is influenced by the group as a whole or, alternatively, whether it is influenced by the location of specific individuals within the group. Our data provide support for both hypotheses, with stronger support for the latter. The focal animal showed consistent patterns of movement toward particular individuals when distance from these individuals increased beyond 5.6 m. Although the focal animal was also sensitive to the group movement on those occasions when the group as a whole was highly clustered, these conditions of isolation occurred infrequently. We suggest that specific social interactions may thus drive overall group cohesion. The results of the direction matching approach suggest that relatively sparse data, with low technical and economic costs, can be used to test between hypotheses on the factors driving movement decisions.

  20. Data for "Enhanced Gain Extrapolation Technique: a third-order scattering...

    • nist.gov
    • data.nist.gov
    • +2more
    Updated Apr 12, 2024
    Cite
    National Institute of Standards and Technology (2024). Data for "Enhanced Gain Extrapolation Technique: a third-order scattering approach for high-accuracy antenna gain, sparse sampling, at Fresnel distances" [Dataset]. http://doi.org/10.18434/mds2-3222
    Explore at:
    Dataset updated
    Apr 12, 2024
    Dataset provided by
    National Institute of Standards and Technology (http://www.nist.gov/)
    License

    https://www.nist.gov/open/license

    Description

    In this paper we describe an enhanced three-antenna gain extrapolation technique that allows one to determine antenna gain with significantly fewer data points and at closer distances than with the well-established traditional three-antenna gain extrapolation technique that has been in use for over five decades. As opposed to the traditional gain extrapolation technique, where high-order scattering is purposely ignored so as to isolate only the direct antenna-to-antenna coupling, we show that by incorporating third-order scattering the enhanced gain extrapolation technique can be obtained. The theoretical foundation using third-order scattering is developed and experimental results are presented comparing the enhanced technique and traditional technique for two sets of internationally recognized NIST reference standard gain horn antennas at X-band and Ku-band. We show that with the enhanced technique gain values for these antennas are readily obtained to within stated uncertainties of ±0.07 dB using as few as 10 data points per antenna pair, as opposed to the approximately 4000-to-8000 data points per antenna pair needed with the traditional technique. Furthermore, with the described enhanced technique, antenna-to-antenna distances can be reduced by a factor of three, and up to a factor of six in some cases, compared to the traditional technique, a significant reduction in the overall size requirement of facilities used to perform gain extrapolation measurements.
