100+ datasets found
  1. Sparse Basic Linear Algebra Subprograms

    • catalog.data.gov
    • datasets.ai
    • +1 more
    Updated Jul 29, 2022
    Cite
    National Institute of Standards and Technology (2022). Sparse Basic Linear Algebra Subprograms [Dataset]. https://catalog.data.gov/dataset/sparse-basic-linear-algebra-subprograms-6885a
    Explore at:
    Dataset updated
    Jul 29, 2022
    Dataset provided by
    National Institute of Standards and Technology (http://www.nist.gov/)
    Description

    Sparse Basic Linear Algebra Subprograms (BLAS) comprise computational kernels for operations on sparse vectors and matrices.
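The kind of kernel Sparse BLAS standardizes can be sketched with a minimal CSR (compressed sparse row) matrix-vector multiply. This is an illustrative plain-Python sketch, not NIST's reference implementation:

```python
def csr_matvec(indptr, indices, data, x):
    """y = A @ x for a matrix A stored in CSR form.

    indptr[row]..indptr[row+1] delimits row's entries in `indices`/`data`,
    so only the nonzeros are ever touched."""
    y = [0.0] * (len(indptr) - 1)
    for row in range(len(indptr) - 1):
        for k in range(indptr[row], indptr[row + 1]):
            y[row] += data[k] * x[indices[k]]
    return y

# CSR encoding of A = [[1, 0, 2],
#                      [0, 0, 3],
#                      [4, 5, 0]]
indptr  = [0, 2, 3, 5]
indices = [0, 2, 2, 0, 1]
data    = [1.0, 2.0, 3.0, 4.0, 5.0]
print(csr_matvec(indptr, indices, data, [1.0, 1.0, 1.0]))  # [3.0, 3.0, 9.0]
```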

  2. Dataset of subjects of Sparse matrix technology

    • workwithdata.com
    Updated Jul 13, 2024
    Cite
    Work With Data (2024). Dataset of subjects of Sparse matrix technology [Dataset]. https://www.workwithdata.com/datasets/book-subjects?f=1&fcol0=j0-book&fop0=%3D&fval0=Sparse+matrix+technology&j=1&j0=books
    Explore at:
    Dataset updated
    Jul 13, 2024
    Dataset authored and provided by
    Work With Data
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    This dataset is about book subjects. It has 1 row and is filtered to the book Sparse matrix technology. It features 10 columns including book subject, number of authors, number of books, earliest publication date, and latest publication date. The preview is ordered by number of books (descending).

  3. Data from: Sparse Machine Learning Methods for Understanding Large Text Corpora

    • catalog.data.gov
    • data.nasa.gov
    • +1 more
    Updated Apr 10, 2025
    + more versions
    Cite
    Dashlink (2025). Sparse Machine Learning Methods for Understanding Large Text Corpora [Dataset]. https://catalog.data.gov/dataset/sparse-machine-learning-methods-for-understanding-large-text-corpora
    Explore at:
    Dataset updated
    Apr 10, 2025
    Dataset provided by
    Dashlink
    Description

    Sparse machine learning has recently emerged as a powerful tool to obtain models of high-dimensional data with a high degree of interpretability, at low computational cost. This paper posits that these methods can be extremely useful for understanding large collections of text documents, without requiring user expertise in machine learning. Our approach relies on three main ingredients: (a) multi-document text summarization and (b) comparative summarization of two corpora, both using sparse regression or classification; (c) sparse principal components and sparse graphical models for unsupervised analysis and visualization of large text corpora. We validate our approach using a corpus of Aviation Safety Reporting System (ASRS) reports and demonstrate that the methods can reveal causal and contributing factors in runway incursions. Furthermore, we show that the methods automatically discover four main tasks that pilots perform during flight, which can aid in further understanding the causal and contributing factors to runway incursions and other drivers of aviation safety incidents. Citation: L. El Ghaoui, G. C. Li, V. Duong, V. Pham, A. N. Srivastava, and K. Bhaduri, “Sparse Machine Learning Methods for Understanding Large Text Corpora,” Proceedings of the Conference on Intelligent Data Understanding, 2011.
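The sparse-regression ingredient can be illustrated (this is a generic coordinate-descent LASSO on hypothetical toy data, not the authors' code): the L1 penalty drives most coefficients exactly to zero, which is what makes the resulting text models interpretable.

```python
import numpy as np

def lasso_cd(X, y, lam, n_sweeps=100):
    """Coordinate-descent LASSO: min_w 0.5*||y - Xw||^2 + lam*||w||_1."""
    w = np.zeros(X.shape[1])
    for _ in range(n_sweeps):
        for j in range(X.shape[1]):
            # residual with feature j's current contribution removed
            r = y - X @ w + X[:, j] * w[j]
            rho = X[:, j] @ r
            # soft-thresholding: small correlations are zeroed out
            w[j] = np.sign(rho) * max(abs(rho) - lam, 0.0) / (X[:, j] @ X[:, j])
    return w

# Toy "documents x terms" design: only term 0 truly predicts the response.
rng = np.random.default_rng(0)
X = rng.normal(size=(50, 10))
y = 3.0 * X[:, 0]
w = lasso_cd(X, y, lam=5.0)
print(np.nonzero(w)[0])  # only feature 0 survives the L1 penalty
```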

  4. SparseBeads Dataset

    • zenodo.org
    • data.niaid.nih.gov
    zip
    Updated Jan 24, 2020
    Cite
    J. S. Jørgensen; S. B. Coban; W. R. B. Lionheart; S. A. McDonald; P. J. Withers (2020). SparseBeads Dataset [Dataset]. http://doi.org/10.5281/zenodo.290117
    Explore at:
    Available download formats: zip
    Dataset updated
    Jan 24, 2020
    Dataset provided by
    Zenodo (http://zenodo.org/)
    Authors
    J. S. Jørgensen; S. B. Coban; W. R. B. Lionheart; S. A. McDonald; P. J. Withers
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    The presented data set, inspired by the SophiaBeads Dataset Project for X-ray Computed Tomography, is collected for studies involving sparsity-regularised reconstruction. The aim is to provide tomographic data for various samples where the sparsity in the image varies.

    This dataset is made available as part of the publication

    "SparseBeads Data: Benchmarking Sparsity-Regularized Computed Tomography", Jakob S Jørgensen et al, 2017. Meas. Sci. Technol. 28 124005.

    Direct link: https://doi.org/10.1088/1361-6501/aa8c29.

    This manuscript is published as part of the Special Feature on Advanced X-ray Tomography (open access). We refer users to this publication for extensive detail on the experimental planning and data acquisition.

    Each zipped data folder includes

    • The meta data for data acquisition and geometry parameters of the scan (.xtekct and .ctprofile.xml).

    • A sinogram of the central slice (CentreSlice > Sinograms > .tif) along with meta data for the 2D slice (.xtek2dct and .ct2dprofile.xml),

    • List of projection angles (.ang)

    • and a 2D FDK reconstruction using the CTPro reconstruction suite (RECON2D > .vol) with volume visualisation parameters (.vgi), added as a reference.

    We also include an extra script for those who wish to use the SophiaBeads Dataset Project Codes; it essentially replaces the main script provided, sophiaBeads.m (visit https://zenodo.org/record/16539). Please note that the sparseBeads.m script will have to be placed in the same folder as the project codes. The latest version of this script can be found here: https://github.com/jakobsj/SparseBeads_code

    For more information, please contact

    • jakj [at] dtu.dk
    • jakob.jorgensen [at] manchester.ac.uk
  5. requests

    • huggingface.co
    Updated Apr 27, 2024
    + more versions
    Cite
    sparse-generative-ai (2024). requests [Dataset]. https://huggingface.co/datasets/sparse-generative-ai/requests
    Explore at:
    Croissant (a format for machine-learning datasets; learn more at mlcommons.org/croissant)
    Dataset updated
    Apr 27, 2024
    Dataset authored and provided by
    sparse-generative-ai
    License

    Apache License, v2.0: https://www.apache.org/licenses/LICENSE-2.0
    License information was derived automatically

    Description

    sparse-generative-ai/requests dataset hosted on Hugging Face and contributed by the HF Datasets community

  6. National Forest and Sparse Woody Vegetation Data (Version 3, 2018 Release)

    • data.gov.au
    • devweb.dga.links.com.au
    .pdf, wms, zip
    Updated Apr 7, 2022
    + more versions
    Cite
    Australian Government Department of Climate Change, Energy, the Environment and Water (2022). National Forest and Sparse Woody Vegetation Data (Version 3, 2018 Release) [Dataset]. https://data.gov.au/data/dataset/national-forest-and-sparse-woody-vegetation-data-version-3-2018-release
    Explore at:
    Available download formats: zip (921336367), zip, zip (1542892812), zip (293806346), zip (79186384), .pdf (466616), wms
    Dataset updated
    Apr 7, 2022
    Dataset provided by
    Australian Government (http://www.australia.gov.au/)
    Authors
    Australian Government Department of Climate Change, Energy, the Environment and Water
    License

    Attribution-ShareAlike 4.0 (CC BY-SA 4.0): https://creativecommons.org/licenses/by-sa/4.0/
    License information was derived automatically

    Description

    Landsat satellite imagery is used to derive woody vegetation extent products that discriminate between forest, sparse woody and non-woody land cover across a time series from 1988 to 2018. A forest is defined as woody vegetation with a minimum 20 per cent canopy cover, potentially reaching 2 metres high and a minimum area of 0.2 hectares. Sparse woody is defined as woody vegetation with a canopy cover between 5-19 per cent.

    The three-class classification (forest, sparse woody and non-woody) supersedes the two class classification (forest and non-forest) from 2016. The new classification is produced using the same approach in terms of time series processing (conditional probability networks) as the two-class method, to detect woody vegetation cover. The three-class algorithm better encompasses the different types of woody vegetation across the Australian landscape.
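The class thresholds above amount to a simple rule on canopy cover. A minimal sketch (illustrative only: the real product also applies height and minimum-area criteria, and uses time-series processing rather than a per-pixel rule):

```python
def woody_class(canopy_cover_pct):
    """Three-class woody vegetation label from canopy cover percentage,
    per the thresholds stated in the dataset description."""
    if canopy_cover_pct >= 20:
        return "forest"        # minimum 20 per cent canopy cover
    if canopy_cover_pct >= 5:
        return "sparse woody"  # canopy cover between 5 and 19 per cent
    return "non-woody"

print(woody_class(25), woody_class(12), woody_class(2))
# forest sparse woody non-woody
```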

  7. Repository URL

    • datadiscoverystudio.org
    resource url
    Updated 2011
    Cite
    (2011). Repository URL [Dataset]. http://datadiscoverystudio.org/geoportal/rest/metadata/item/622eb040ec0441ddbf47bc22b5cdec92/html
    Explore at:
    Available download formats: resource url
    Dataset updated
    2011
    Area covered
    Description

    Link Function: information

  8. Data from: Sparse Solutions for Single Class SVMs: A Bi-Criterion Approach

    • catalog.data.gov
    Updated Dec 6, 2023
    Cite
    Dashlink (2023). Sparse Solutions for Single Class SVMs: A Bi-Criterion Approach [Dataset]. https://catalog.data.gov/dataset/sparse-solutions-for-single-class-svms-a-bi-criterion-approach
    Explore at:
    Dataset updated
    Dec 6, 2023
    Dataset provided by
    Dashlink
    Description

    In this paper we propose an innovative learning algorithm - a variation of the one-class ν-Support Vector Machine (SVM) learning algorithm - that produces sparser solutions with much reduced computational complexity. The proposed technique returns an approximate solution, nearly as good as the solution set obtained by the classical approach, by minimizing the original risk function along with a regularization term. We introduce a bi-criterion optimization that helps guide the search towards the optimal set in much reduced time. The outcome of the proposed learning technique was compared with the benchmark one-class SVM algorithm, which more often leads to solutions with redundant support vectors. Throughout the analysis, the problem size for both optimization routines was kept consistent. We have tested the proposed algorithm on a variety of data sources under different conditions to demonstrate its effectiveness. In all cases the proposed algorithm closely preserves the accuracy of standard one-class ν-SVMs while reducing both training time and test time by several factors.
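For orientation, this is the baseline the paper compares against: a standard one-class SVM via scikit-learn, on hypothetical toy data (the paper's sparser bi-criterion variant is not reproduced here). The support-vector count is the sparsity measure at issue.

```python
import numpy as np
from sklearn.svm import OneClassSVM

# Toy stand-in data; the paper's evaluation datasets are not included here.
rng = np.random.default_rng(0)
X_train = rng.normal(size=(200, 2))           # the "normal" class
X_test = np.array([[6.0, 6.0], [0.1, -0.2]])  # a far outlier, a near inlier

clf = OneClassSVM(nu=0.05, kernel="rbf", gamma=0.5).fit(X_train)
print(clf.predict(X_test))  # -1 = outlier, +1 = inlier

# Sparsity: the decision function is defined by a subset of training points.
print(len(clf.support_vectors_), "of", len(X_train), "points are support vectors")
```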

  9. intermediate-netflix-sparse-matrix

    • kaggle.com
    Updated Sep 9, 2021
    Cite
    Yash Gupta (2021). intermediate-netflix-sparse-matrix [Dataset]. https://www.kaggle.com/datasets/eryash15/intermediatenetflixsparsematrix
    Explore at:
    Croissant (a format for machine-learning datasets; learn more at mlcommons.org/croissant)
    Dataset updated
    Sep 9, 2021
    Dataset provided by
    Kaggle (http://kaggle.com/)
    Authors
    Yash Gupta
    Description

    Dataset

    This dataset was created by Yash Gupta

    Contents

  10. Dataset for On the Sparsity of XORs in Approximate Model Counting (SAT-20 Paper)

    • data.niaid.nih.gov
    Updated May 13, 2020
    Cite
    Agrawal, Durgesh (2020). Dataset for On the Sparsity of XORs in Approximate Model Counting (SAT-20 Paper) [Dataset]. https://data.niaid.nih.gov/resources?id=zenodo_3792747
    Explore at:
    Dataset updated
    May 13, 2020
    Dataset provided by
    Agrawal, Durgesh
    Bhavishya
    Meel, Kuldeep S.
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    The artifact consists of the necessary data to reproduce the results reported in the SAT-20 Paper titled "On the Sparsity of XORs in Approximate Model Counting".

    In particular, the artifact consists of the binaries, the log files generated by our computing cluster, and scripts to generate tables and the plots used in the paper.

  11. sparse

    • huggingface.co
    Updated Sep 14, 2024
    Cite
    Diksha Shrivastava (2024). sparse [Dataset]. https://huggingface.co/datasets/diksha-shrivastava13/sparse
    Explore at:
    Croissant (a format for machine-learning datasets; learn more at mlcommons.org/croissant)
    Dataset updated
    Sep 14, 2024
    Authors
    Diksha Shrivastava
    License

    MIT License: https://opensource.org/licenses/MIT
    License information was derived automatically

    Description

    diksha-shrivastava13/sparse dataset hosted on Hugging Face and contributed by the HF Datasets community

  12. Efficient and Robust Classification for Sparse Attacks - Dataset - LDM

    • service.tib.eu
    Updated Dec 16, 2024
    Cite
    (2024). Efficient and Robust Classification for Sparse Attacks - Dataset - LDM [Dataset]. https://service.tib.eu/ldmservice/dataset/ef-cient-and-robust-classi-cation-for-sparse-attacks
    Explore at:
    Dataset updated
    Dec 16, 2024
    Description

    The MNIST and CIFAR datasets are used to test the robustness of neural networks against sparse attacks.

  13. NEST

    • huggingface.co
    Cite
    Vincent Maillou, NEST [Dataset]. https://huggingface.co/datasets/vincent-maillou/NEST
    Explore at:
    Authors
    Vincent Maillou
    Description

    NEST: NEw Sparse maTrix dataset

    NEST is a new sparse matrix dataset. Its purpose is to define a modern set of sparse matrices arising in relevant, current scientific applications, in order to further improve sparse numerical methods. NEST can be seen as a continuation of the Sparse Matrix Market datasets and contains some curated sparse matrices from it as legacy references. The matrices are stored as COO sparse matrices in scipy.sparse .npz archive format. Conversion utils to/from the… See the full description on the dataset page: https://huggingface.co/datasets/vincent-maillou/NEST.
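The storage format described (COO matrices in scipy's .npz archive format) can be sketched as a round trip with scipy.sparse; the filename below is illustrative, not an actual NEST archive:

```python
import numpy as np
import scipy.sparse as sp

# Build a small COO (coordinate-format) sparse matrix from triplets.
rows = np.array([0, 1, 2, 2])
cols = np.array([1, 0, 0, 2])
vals = np.array([3.0, 4.0, 5.0, 6.0])
A = sp.coo_matrix((vals, (rows, cols)), shape=(3, 3))

# Save to / load from the .npz archive format scipy.sparse provides.
sp.save_npz("toy_matrix.npz", A)
B = sp.load_npz("toy_matrix.npz")
print(B.toarray())
```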

  14. SparsePoser: Real-time Full-body Motion Reconstruction from Sparse Data

    • data.niaid.nih.gov
    Updated Oct 12, 2023
    Cite
    Ponton, Jose Luis (2023). SparsePoser: Real-time Full-body Motion Reconstruction from Sparse Data [Dataset]. https://data.niaid.nih.gov/resources?id=zenodo_8427980
    Explore at:
    Dataset updated
    Oct 12, 2023
    Dataset provided by
    Pelechano, Nuria
    Yun, Haoran
    Ponton, Jose Luis
    Andujar, Carlos
    Aristidou, Andreas
    License

    Attribution 4.0 (CC BY 4.0)https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Data used for the paper SparsePoser: Real-time Full-body Motion Reconstruction from Sparse Data

    It contains over 1GB of high-quality motion capture data recorded with an Xsens Awinda system while using a variety of VR applications in Meta Quest devices.

    Visit the paper website!

    If you find our data useful, please cite our paper:

    @article{10.1145/3625264,
      author    = {Ponton, Jose Luis and Yun, Haoran and Aristidou, Andreas and Andujar, Carlos and Pelechano, Nuria},
      title     = {SparsePoser: Real-Time Full-Body Motion Reconstruction from Sparse Data},
      year      = {2023},
      publisher = {Association for Computing Machinery},
      address   = {New York, NY, USA},
      issn      = {0730-0301},
      url       = {https://doi.org/10.1145/3625264},
      doi       = {10.1145/3625264},
      journal   = {ACM Trans. Graph.},
      month     = {oct}
    }

  15. Supporting data for: "Iterative modal reconstruction for sparse particle tracking data"

    • data.4tu.nl
    zip
    Updated Jul 2, 2024
    Cite
    Adrian Grille Guerra; Andrea Sciacchitano; Fulvio Scarano (2024). Supporting data for: "Iterative modal reconstruction for sparse particle tracking data" [Dataset]. http://doi.org/10.4121/caa059d2-7657-4301-a805-767e9ca98eab.v1
    Explore at:
    Available download formats: zip
    Dataset updated
    Jul 2, 2024
    Dataset provided by
    4TU.ResearchData
    Authors
    Adrian Grille Guerra; Andrea Sciacchitano; Fulvio Scarano
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    The dataset in this repository complements the publication: Adrian Grille Guerra, Andrea Sciacchitano, Fulvio Scarano; "Iterative modal reconstruction for sparse particle tracking data." Physics of Fluids, 1 July 2024; 36 (7): 075107. https://doi.org/10.1063/5.0209527. The dataset contains the electronic supplementary material also available in the online version of the journal (three videos), digital versions of the publication's figures in Matlab figure format, the full dataset discussed in the publication, and sample code for the proposed methodology.

  16. Assessment and Improvement of Statistical Tools for Comparative Proteomics Analysis of Sparse Data Sets with Few Experimental Replicates

    • acs.figshare.com
    txt
    Updated Jun 3, 2023
    Cite
    Veit Schwämmle; Ileana Rodríguez León; Ole Nørregaard Jensen (2023). Assessment and Improvement of Statistical Tools for Comparative Proteomics Analysis of Sparse Data Sets with Few Experimental Replicates [Dataset]. http://doi.org/10.1021/pr400045u.s002
    Explore at:
    Available download formats: txt
    Dataset updated
    Jun 3, 2023
    Dataset provided by
    ACS Publications
    Authors
    Veit Schwämmle; Ileana Rodríguez León; Ole Nørregaard Jensen
    License

    Attribution-NonCommercial 4.0 (CC BY-NC 4.0): https://creativecommons.org/licenses/by-nc/4.0/
    License information was derived automatically

    Description

    Large-scale quantitative analyses of biological systems are often performed with few replicate experiments, leading to multiple nonidentical data sets due to missing values. For example, mass spectrometry driven proteomics experiments are frequently performed with few biological or technical replicates due to sample-scarcity or due to duty-cycle or sensitivity constraints, or limited capacity of the available instrumentation, leading to incomplete results where detection of significant feature changes becomes a challenge. This problem is further exacerbated for the detection of significant changes on the peptide level, for example, in phospho-proteomics experiments. In order to assess the extent of this problem and the implications for large-scale proteome analysis, we investigated and optimized the performance of three statistical approaches by using simulated and experimental data sets with varying numbers of missing values. We applied three tools, including standard t test, moderated t test, also known as limma, and rank products for the detection of significantly changing features in simulated and experimental proteomics data sets with missing values. The rank product method was improved to work with data sets containing missing values. Extensive analysis of simulated and experimental data sets revealed that the performance of the statistical analysis tools depended on simple properties of the data sets. High-confidence results were obtained by using the limma and rank products methods for analyses of triplicate data sets that exhibited more than 1000 features and more than 50% missing values. The maximum number of differentially represented features was identified by using limma and rank products methods in a complementary manner. We therefore recommend combined usage of these methods as a novel and optimal way to detect significantly changing features in these data sets. 
This approach is suitable for large quantitative data sets from stable isotope labeling and mass spectrometry experiments and should be applicable to large data sets of any type. An R script that implements the improved rank products algorithm and the combined analysis is available.
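As a minimal illustration of the rank-product statistic the paper builds on (without reproducing its missing-value extension or the limma moderated t-test), each feature is ranked within every replicate and the ranks are combined by their geometric mean; consistently extreme features get extreme rank products:

```python
import numpy as np

def rank_product(fold_changes):
    """Rank-product statistic: geometric mean, across replicates, of each
    feature's within-replicate rank (rank 1 = most down-regulated).
    Minimal sketch with complete data and no ties."""
    # argsort-of-argsort converts values to ranks within each column.
    ranks = fold_changes.argsort(axis=0).argsort(axis=0) + 1
    return np.exp(np.log(ranks).mean(axis=1))

# Rows = features, columns = triplicate log fold changes.
fc = np.array([[-2.0, -1.8, -2.2],   # consistently down-regulated
               [ 0.1,  0.0, -0.1],   # unchanged
               [ 1.5,  1.2,  1.0]])  # consistently up-regulated
print(rank_product(fc))  # smallest value flags the down-regulated feature
```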

  17. Data from: Sparse Solutions for Single Class SVMs: A Bi-Criterion Approach

    • data.amerigeoss.org
    pdf
    Updated Jul 19, 2018
    + more versions
    Cite
    United States (2018). Sparse Solutions for Single Class SVMs: A Bi-Criterion Approach [Dataset]. https://data.amerigeoss.org/id/dataset/sparse-solutions-for-single-class-svms-a-bi-criterion-approach
    Explore at:
    Available download formats: pdf
    Dataset updated
    Jul 19, 2018
    Dataset provided by
    United States
    License

    U.S. Government Works: https://www.usa.gov/government-works
    License information was derived automatically

    Description

    In this paper we propose an innovative learning algorithm - a variation of the one-class Support Vector Machine (SVM) learning algorithm - that produces sparser solutions with much reduced computational complexity. The proposed technique returns an approximate solution, nearly as good as the solution set obtained by the classical approach, by minimizing the original risk function along with a regularization term. We introduce a bi-criterion optimization that helps guide the search towards the optimal set in much reduced time. The outcome of the proposed learning technique was compared with the benchmark one-class SVM algorithm, which more often leads to solutions with redundant support vectors. Throughout the analysis, the problem size for both optimization routines was kept consistent. We have tested the proposed algorithm on a variety of data sources under different conditions to demonstrate its effectiveness. In all cases the proposed algorithm closely preserves the accuracy of standard one-class SVMs while reducing both training time and test time by several factors.

  18. The feature dimension of the sparse feature subsets and the full features.

    • figshare.com
    • plos.figshare.com
    xls
    Updated May 30, 2023
    Cite
    Shihong Yao; Tao Wang; Weiming Shen; Shaoming Pan; Yanwen Chong; Fei Ding (2023). The feature dimension of the sparse feature subsets and the full features. [Dataset]. http://doi.org/10.1371/journal.pone.0134242.t003
    Explore at:
    Available download formats: xls
    Dataset updated
    May 30, 2023
    Dataset provided by
    PLOS ONE
    Authors
    Shihong Yao; Tao Wang; Weiming Shen; Shaoming Pan; Yanwen Chong; Fei Ding
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    The feature dimension of the sparse feature subsets and the full features.

  19. Experimental data for the paper "Using Constraints to Discover Sparse and Alternative Subgroup Descriptions"

    • service.tib.eu
    Updated Nov 28, 2024
    + more versions
    Cite
    (2024). Experimental data for the paper "using constraints to discover sparse and alternative subgroup descriptions" [Dataset]. https://service.tib.eu/ldmservice/dataset/rdr-doi-10-35097-cakkjctokqgxyvqg
    Explore at:
    Dataset updated
    Nov 28, 2024
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Abstract: These are the experimental data for the paper Bach, Jakob. "Using Constraints to Discover Sparse and Alternative Subgroup Descriptions", published on arXiv in 2024. The paper and code are available online; see the README for details. The datasets used in the study (which are also provided here) originate from PMLB. The corresponding GitHub repository is MIT-licensed ((c) 2016 Epistasis Lab at UPenn); see the file LICENSE in the folder datasets/ for the license text.

  20. National Forest and Sparse Woody Vegetation Data (Version 5.0 - 2020 Release)

    • data.gov.au
    zip
    Updated Aug 3, 2022
    Cite
    Australian Government Department of Climate Change, Energy, the Environment and Water (2022). National Forest and Sparse Woody Vegetation Data (Version 5.0 - 2020 Release) [Dataset]. https://data.gov.au/data/dataset/national-forest-and-sparse-woody-vegetation-data-version-5-2020-release
    Explore at:
    Available download formats: zip (620676416), zip (160811143), zip (991618693)
    Dataset updated
    Aug 3, 2022
    Authors
    Australian Government Department of Climate Change, Energy, the Environment and Water
    Description

    Landsat satellite imagery is used to derive woody vegetation extent products that discriminate between forest, sparse woody and non-woody land cover across a time series from 1988 to 2020. A forest is defined as woody vegetation with a minimum 20 per cent canopy cover, at least 2 metres high and a minimum area of 0.2 hectares. Sparse woody is defined as woody vegetation with a canopy cover between 5-19 per cent.

    The three-class classification (forest, sparse woody and non-woody) supersedes the two-class classification (forest and non-forest) from 2016. The new classification is produced using the same approach in terms of time series processing (conditional probability networks) as the two-class method, to detect woody vegetation cover. The three-class algorithm better encompasses the different types of woody vegetation across the Australian landscape.

    Earlier versions of this dataset were published by the Department of Environment and Energy.
