100+ datasets found
  1. Accompanying Dataset migr_asyappctzm for Efficient Analytical Queries on...

    • zenodo.org
    • data.niaid.nih.gov
    application/gzip, bin
    Updated Aug 3, 2023
    Cite
    Matteo Lissandrini (2023). Accompanying Dataset migr_asyappctzm for Efficient Analytical Queries on Semantic Web Data Cubes [Dataset]. http://doi.org/10.5281/zenodo.8210998
    Available download formats: application/gzip, bin
    Dataset updated
    Aug 3, 2023
    Dataset provided by
    Zenodo (http://zenodo.org/)
    Authors
    Matteo Lissandrini
    License

    Attribution 4.0 (CC BY 4.0), https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    This dataset shows how the Eurostat data cube in the original publication is modelled in QB4OLAP.

    This data is based on statistical data about asylum applications to the European Union, provided by Eurostat on

    http://ec.europa.eu/eurostat/web/products-datasets/-/migr_asyappctzm

    Further data has been integrated from: https://github.com/lorenae/qb4olap/tree/master/examples
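
    Assuming the gzip archive contains an RDF serialization of the QB4OLAP cube, a minimal rdflib sketch for loading it and counting qb:Observation instances (the file name and the Turtle format are assumptions; adjust to the actual archive contents):

    import gzip
    from rdflib import Graph

    # Hypothetical file name; the record provides a gzip-compressed RDF dump.
    with gzip.open("migr_asyappctzm.ttl.gz", "rt", encoding="utf-8") as fh:
        g = Graph()
        g.parse(data=fh.read(), format="turtle")  # assumed Turtle serialization

    # QB4OLAP builds on the W3C RDF Data Cube vocabulary, so observations are qb:Observation.
    query = """
    PREFIX qb: <http://purl.org/linked-data/cube#>
    SELECT (COUNT(?obs) AS ?n) WHERE { ?obs a qb:Observation . }
    """
    for row in g.query(query):
        print("observations:", row.n)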

  2. Data, models and codes of AI Cube server

    • figshare.com
    • datasetcatalog.nlm.nih.gov
    Updated Feb 2, 2025
    Cite
    Kaixuan Wang (2025). Data, models and codes of AI Cube server [Dataset]. http://doi.org/10.6084/m9.figshare.28219331.v2
    Dataset updated
    Feb 2, 2025
    Dataset provided by
    Figshare (http://figshare.com/)
    Authors
    Kaixuan Wang
    License

    Attribution 4.0 (CC BY 4.0), https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Data, models and codes used in the paper "Towards an AI Cube: Enriching Geospatial Data Cube with AI Inference Capabilities"

  3. Workforce Information Cubes for NASA - Dataset - NASA Open Data Portal

    • data.nasa.gov
    Updated Mar 31, 2025
    Cite
    nasa.gov (2025). Workforce Information Cubes for NASA - Dataset - NASA Open Data Portal [Dataset]. https://data.nasa.gov/dataset/workforce-information-cubes-for-nasa
    Dataset updated
    Mar 31, 2025
    Dataset provided by
    NASA (http://nasa.gov/)
    Description

    Workforce Information Cubes for NASA, sourced from NASA's personnel/payroll system, gives data about who is working where and on what. Includes records for every civil service employee in NASA, snapshots of workforce composition as of certain dates, and data on personnel transactions, such as hires, losses and promotions. Updates occur every 2 weeks.

  4. China Earth Observation Data Cube: The 30m Seamless Annual Leaf-On Landsat...

    • zenodo.org
    tiff
    Updated Aug 11, 2025
    Cite
    Yaotong Cai; Peng Zhu; Xiaoping Liu (2025). China Earth Observation Data Cube: The 30m Seamless Annual Leaf-On Landsat Composites from 1985 to 2024 [Dataset]. http://doi.org/10.5281/zenodo.14131869
    Available download formats: tiff
    Dataset updated
    Aug 11, 2025
    Dataset provided by
    Zenodo (http://zenodo.org/)
    Authors
    Yaotong Cai; Peng Zhu; Xiaoping Liu
    License

    Attribution 4.0 (CC BY 4.0), https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Area covered
    Earth
    Description

    The 30m seamless annual leaf-on Landsat composites from 1985 to 2024 were generated using a comprehensive framework designed to ensure high-quality, consistent data across decades. Starting with preprocessed Level-2 surface reflectance images from multiple Landsat sensors, the dataset is restricted to the Leaf-On season, with rigorous cloud and shadow masking applied based on quality assessment bands. To maintain consistency across sensors, spectral harmonization is conducted, followed by annual composite generation using the medoid method to capture peak vegetation conditions. The resulting composites are structured into a spatially consistent data cube, facilitating efficient analysis and monitoring of vegetation dynamics over time.

    The band naming convention follows Landsat TM standards, with bands designated as Blue (B1), Green (B2), Red (B3), NIR (B4), SWIR1 (B5), and SWIR2 (B7). Both qualitative and quantitative evaluations were conducted to validate the data quality. Here, we provide 2023 image data covering southwestern forest regions of China as a sample for testing. For access to the full dataset, please visit Google Earth Engine at this link, and Earth Engine App (Landsat Yearly Composite Viewer) at this link.
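
    As a quick sanity check on the sample tile, NDVI can be computed from the Red (B3) and NIR (B4) bands; a minimal rasterio sketch, assuming the GeoTIFF stores the six bands in the order listed above and using a hypothetical file name:

    import numpy as np
    import rasterio

    # Hypothetical file name for the 2023 sample tile; band 3 = Red (B3), band 4 = NIR (B4)
    # under the assumed band ordering B1, B2, B3, B4, B5, B7.
    with rasterio.open("landsat_leafon_2023_sample.tif") as src:
        red = src.read(3).astype("float32")
        nir = src.read(4).astype("float32")

    ndvi = (nir - red) / np.maximum(nir + red, 1e-6)  # guard against division by zero
    print("NDVI range:", float(ndvi.min()), "to", float(ndvi.max()))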

    The dataset has now been updated to include data up to 2024.

    Data citation: Cai, Y., Li, X., Zhu, P., Nie, S., Wang, C., Liu, X., & Chen, Y. (2025). China Earth Observation Data Cube: The 30m Seamless Annual Leaf-On Landsat Composites from 1985 to 2023. Journal of Remote Sensing. DOI: 10.34133/remotesensing.0698

    For data-related inquiries, please contact Dr. Yaotong Cai at caiyt33@mail2.sysu.edu.cn.

  5. STAC Collection - Landsat Collection 2 - Level-2 - Data Cube - LCF 16 days

    • data.inpe.br
    Updated Jan 22, 2023
    Cite
    INPE/MCTI (2023). STAC Collection - Landsat Collection 2 - Level-2 - Data Cube - LCF 16 days [Dataset]. https://data.inpe.br/geonetwork/srv/api/records/LANDSAT-16D-1
    Dataset updated
    Jan 22, 2023
    Dataset provided by
    National Institute for Space Research (http://www.inpe.br/)
    Authors
    INPE/MCTI
    Time period covered
    Jan 1, 1990 - Aug 12, 2025
    Description

    Earth Observation Data Cube generated from Landsat Level-2 product over Brazil extension. This dataset is provided in Cloud Optimized GeoTIFF (COG) file format. The dataset is processed with 30 meters of spatial resolution, reprojected and cropped to BDC_MD grid Version 2 (BDC_MD V2), considering a temporal compositing function of 16 days using the Least Cloud Cover First (LCF) best pixel approach.
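
    The LCF best-pixel idea can be illustrated with a small numpy sketch (an illustrative reconstruction under simplified assumptions, not INPE's production code): scenes falling in a 16-day window are ordered by overall cloud cover, and each output pixel takes the first clear observation in that order.

    import numpy as np

    def lcf_composite(scenes, clear_masks, cloud_cover):
        """scenes: (t, h, w) reflectance stack for one 16-day window;
        clear_masks: (t, h, w) boolean, True where the pixel is cloud/shadow free;
        cloud_cover: (t,) per-scene cloud-cover fraction."""
        order = np.argsort(cloud_cover)               # least cloudy scene first
        composite = np.full(scenes.shape[1:], np.nan, dtype="float32")
        filled = np.zeros(scenes.shape[1:], dtype=bool)
        for t in order:
            take = clear_masks[t] & ~filled           # still unfilled and clear in this scene
            composite[take] = scenes[t][take]
            filled |= take
        return composite

    # Toy example: three scenes of a 2x2 tile.
    scenes = np.random.rand(3, 2, 2).astype("float32")
    clear = np.random.rand(3, 2, 2) > 0.3
    cc = np.array([0.4, 0.1, 0.7])
    print(lcf_composite(scenes, clear, cc))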

  6. SeasFire Cube: A Global Dataset for Seasonal Fire Modeling in the Earth...

    • zenodo.org
    • data.niaid.nih.gov
    pdf, zip
    Updated Jul 16, 2024
    Cite
    Lazaro Alonso; Fabian Gans; Ilektra Karasante; Akanksha Ahuja; Ioannis Prapas; Spyros Kondylatos; Ioannis Papoutsis; Eleannna Panagiotou; Dimitrios Mihail; Felix Cremer; Ulrich Weber; Nuno Carvalhais (2024). SeasFire Cube: A Global Dataset for Seasonal Fire Modeling in the Earth System [Dataset]. http://doi.org/10.5281/zenodo.7108392
    Available download formats: zip, pdf
    Dataset updated
    Jul 16, 2024
    Dataset provided by
    Zenodo (http://zenodo.org/)
    Authors
    Lazaro Alonso; Fabian Gans; Ilektra Karasante; Akanksha Ahuja; Ioannis Prapas; Spyros Kondylatos; Ioannis Papoutsis; Eleannna Panagiotou; Dimitrios Mihail; Felix Cremer; Ulrich Weber; Nuno Carvalhais
    License

    Attribution 4.0 (CC BY 4.0), https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Area covered
    Earth
    Description

    The SeasFire Cube is a scientific datacube for seasonal fire forecasting around the globe. Apart from seasonal fire forecasting, which is the aim of the SeasFire project, the datacube can be used for several other tasks. For example, it can be used to model teleconnections and memory effects in the earth system. Additionally, it can be used to model emissions from wildfires and the evolution of wildfire regimes.

    It has been created in the context of the SeasFire project, which deals with "Earth System Deep Learning for Seasonal Fire Forecasting" and is funded by the European Space Agency (ESA) in the context of ESA Future EO-1 Science for Society Call.

    It contains 21 years of data (2001-2021) at an 8-day time resolution and a 0.25-degree grid resolution. It covers a diverse range of seasonal fire drivers, from atmospheric and climatological variables to vegetation and socioeconomic variables, as well as the wildfire target variables such as burned areas, fire radiative power, and wildfire-related CO2 emissions.

    Datacube properties

    • Spatial Coverage: Global
    • Temporal Coverage: 2001 to 2021
    • Spatial Resolution: 0.25 deg x 0.25 deg
    • Temporal Resolution: 8 days
    • Number of Variables: 54
    • Tutorial Link: https://github.com/SeasFire/seasfire-datacube

    Datacube variables (Full name | DataArray name | Unit | Contact)

    Dataset: ERA5 Meteo Reanalysis Data (https://cds.climate.copernicus.eu/cdsapp#!/dataset/reanalysis-era5-pressure-levels?tab=overview)
    • Mean sea level pressure | mslp | Pa | NOA
    • Total precipitation | tp | m | MPI
    • Relative humidity | rel_hum | % | MPI
    • Vapor Pressure Deficit | vpd | hPa | MPI
    • Sea Surface Temperature | sst | K | MPI
    • Skin temperature | skt | K | MPI
    • Wind speed at 10 meters | ws10 | m*s-2 | MPI
    • Temperature at 2 meters - Mean | t2m_mean | K | MPI
    • Temperature at 2 meters - Min | t2m_min | K | MPI
    • Temperature at 2 meters - Max | t2m_max | K | MPI
    • Surface net solar radiation | ssr | MJ m-2 | MPI
    • Surface solar radiation downwards | ssrd | MJ m-2 | MPI
    • Volumetric soil water level 1 | swvl1 | m3/m3 | MPI
    • Land-Sea mask | lsm | 0-1 | NOA

    Dataset: Copernicus CEMS (http://cds.climate.copernicus.eu/cdsapp#!/dataset/cems-fire-historical?tab=overview)
    • Drought Code Maximum | drought_code_max | unitless | NOA
    • Drought Code Average | drought_code_mean | unitless | NOA
    • Fire Weather Index Maximum | fwi_max | unitless | NOA
    • Fire Weather Index Average | fwi_mean | unitless | NOA

    Dataset: CAMS Global Fire Assimilation System (GFAS) (http://confluence.ecmwf.int/display/CKB/CAMS%3A+Global+Fire+Assimilation+System+%28GFAS%29+data+documentation)
    • Carbon dioxide emissions from wildfires | cams_co2fire | kg/m² | NOA
    • Fire radiative power | cams_frpfire | W/m² | NOA

    Dataset: FireCCI - European Space Agency's Climate Change Initiative (https://climate.esa.int/en/projects/fire/data/)
    • Burned Areas from Fire Climate Change Initiative (FCCI) | fcci_ba | ha | NOA
    • Valid mask of FCCI burned areas | fcci_ba_valid_mask | 0-1 | NOA
    • Fraction of burnable area | fcci_fraction_of_burnable_area | % | NOA
    • Number of patches | fcci_number_of_patches | N | NOA
    • Fraction of observed area | fcci_fraction_of_observed_area | % | NOA

    Dataset: NASA MODIS MOD11C1 (https://lpdaac.usgs.gov/products/mod11c1v006/), MOD13C1 (https://lpdaac.usgs.gov/products/mod13c1v006/), MCD15A2 (https://lpdaac.usgs.gov/products/mcd15a2hv006/)
    • Land Surface temperature at day | lst_day | K | MPI
    • Leaf Area Index | lai | m²/m² | MPI
    • Normalized Difference Vegetation Index | ndvi | unitless | MPI

    Dataset: NASA SEDAC Gridded Population of the World (GPW), v4 (https://sedac.ciesin.columbia.edu/data/set/gpw-v4-population-density-adjusted-to-2015-unwpp-country-totals-rev11)
    • Population density | pop_dens | persons per square kilometer | NOA

    Dataset: Global Fire Emissions Database (GFED) (http://www.globalfiredata.org/data.html)
    • Burned Areas from GFED (large fires only) | gfed_ba | hectares (ha) | MPI
    • Valid mask of GFED burned areas | gfed_ba_valid_mask | 0-1 | NOA
    • GFED basis regions | gfed_region | N | NOA

    Dataset: Global Wildfire Information System (GWIS) (http://gwis.jrc.ec.europa.eu/apps/country.profile/downloads)
    • Burned Areas from GWIS | gwis_ba | ha | NOA
    • Valid mask of GWIS burned areas | gwis_ba_valid_mask | 0-1 | NOA

    Dataset: NOAA Climate Indices (https://psl.noaa.gov/data/climateindices/list/)
    • Western Pacific Index | oci_wp | unitless | NOA
    • Pacific North American Index | oci_pna | unitless | NOA
    • North Atlantic Oscillation | oci_nao | unitless | NOA
    • Southern Oscillation Index | oci_soi | unitless | NOA
    • Global Mean Land/Ocean Temperature | oci_gmsst | unitless | NOA
    • Pacific Decadal Oscillation | oci_pdo | unitless | NOA
    • Eastern Asia/Western Russia | oci_ea | unitless | NOA
    • East Pacific/North Pacific Oscillation | oci_epo | unitless | NOA
    • Nino 3.4 Anomaly | oci_nino_34_anom | unitless | NOA
    • Bivariate ENSO Timeseries | oci_censo | unitless | NOA

    Dataset: ESA CCI
    • Land Cover | …
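    A minimal xarray sketch for working with the cube once downloaded and unpacked; the storage layout (assumed here to be a Zarr store), the path, and the dimension names are assumptions, and the tutorial repository linked above documents the authoritative way to load the data.

    import xarray as xr

    # Assumption: the downloaded archive unpacks to a Zarr store; adjust path/format as needed.
    ds = xr.open_zarr("seasfire.zarr")

    # Variable names follow the table above, e.g. mean 2 m temperature and FCCI burned area.
    t2m = ds["t2m_mean"]
    ba = ds["fcci_ba"]

    # Example: global 8-day time series of total burned area (dimension names assumed).
    ba_series = ba.sum(dim=["latitude", "longitude"])
    print(ba_series.isel(time=slice(0, 5)).values)
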
  7. Efficient Keyword-Based Search for Top-K Cells in Text Cube - Dataset - NASA...

    • data.nasa.gov
    Updated Mar 31, 2025
    Cite
    nasa.gov (2025). Efficient Keyword-Based Search for Top-K Cells in Text Cube - Dataset - NASA Open Data Portal [Dataset]. https://data.nasa.gov/dataset/efficient-keyword-based-search-for-top-k-cells-in-text-cube
    Dataset updated
    Mar 31, 2025
    Dataset provided by
    NASA (http://nasa.gov/)
    Description

    Previous studies on supporting free-form keyword queries over RDBMSs provide users with linked structures (e.g., a set of joined tuples) that are relevant to a given keyword query. Most of them focus on ranking individual tuples from one table or joins of multiple tables containing a set of keywords. In this paper, we study the problem of keyword search in a data cube with text-rich dimension(s) (so-called text cube). The text cube is built on a multidimensional text database, where each row is associated with some text data (a document) and other structural dimensions (attributes). A cell in the text cube aggregates a set of documents with matching attribute values in a subset of dimensions. We define a keyword-based query language and an IR-style relevance model for scoring/ranking cells in the text cube. Given a keyword query, our goal is to find the top-k most relevant cells. We propose four approaches: inverted-index one-scan, document sorted-scan, bottom-up dynamic programming, and search-space ordering. The search-space ordering algorithm explores only a small portion of the text cube for finding the top-k answers, and enables early termination. Extensive experimental studies are conducted to verify the effectiveness and efficiency of the proposed approaches. Citation: B. Ding, B. Zhao, C. X. Lin, J. Han, C. Zhai, A. N. Srivastava, and N. C. Oza, "Efficient Keyword-Based Search for Top-K Cells in Text Cube," IEEE Transactions on Knowledge and Data Engineering, 2011.
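
    To make the problem setup concrete, here is a naive Python sketch (not one of the paper's optimized algorithms) that materializes the cells for one subset of dimensions, forms each cell document by concatenation, scores it with a simple term-frequency relevance, and returns the top-k cells:

    from collections import defaultdict

    rows = [  # (dimension values, document) -- toy multidimensional text database
        ({"airline": "A", "phase": "landing"}, "runway incursion during landing"),
        ({"airline": "A", "phase": "takeoff"}, "engine warning on takeoff roll"),
        ({"airline": "B", "phase": "landing"}, "hard landing on wet runway"),
    ]

    def topk_cells(rows, dims, query_terms, k=2):
        cells = defaultdict(list)
        for attrs, doc in rows:
            key = tuple(attrs[d] for d in dims)   # a cell fixes values on `dims`
            cells[key].append(doc)
        scored = []
        for key, docs in cells.items():
            cell_doc = " ".join(docs)             # cell document = concatenated documents
            score = sum(cell_doc.split().count(t) for t in query_terms)  # simple TF relevance
            scored.append((score, key))
        return sorted(scored, reverse=True)[:k]

    print(topk_cells(rows, dims=["phase"], query_terms=["runway", "landing"]))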

  8. CBERS/WFI - Level-4-SR - Data Cube - LCF 8 days

    • fedeo.ceos.org
    • cmr.earthdata.nasa.gov
    png
    Updated Apr 4, 2024
    Cite
    BR/INPE (2024). CBERS/WFI - Level-4-SR - Data Cube - LCF 8 days [Dataset]. https://fedeo.ceos.org/collections/series/items/CBERS-WFI-8D-1?httpAccept=text/html
    Available download formats: png
    Dataset updated
    Apr 4, 2024
    Dataset provided by
    National Institute for Space Research (http://www.inpe.br/)
    Time period covered
    Jan 1, 2020 - Nov 8, 2025
    Description

    Earth Observation Data Cube generated from CBERS-4/WFI and CBERS-4A/WFI Level-4 SR products over Brazil extension. This dataset is provided in Cloud Optimized GeoTIFF (COG) file format. The dataset is processed with 64 meters of spatial resolution, reprojected and cropped to BDC_LG grid Version 2 (BDC_LG V2), considering a temporal compositing function of 8 days using the Least Cloud Cover First (LCF) best pixel approach.

  9. Crosswalks among metadata schemas for data cube descriptions in RELIANCE

    • data.niaid.nih.gov
    • data.europa.eu
    Updated May 10, 2021
    Cite
    Corcho, Oscar; González-Guardia, Esteban; Garijo, Daniel (2021). Crosswalks among metadata schemas for data cube descriptions in RELIANCE [Dataset]. https://data.niaid.nih.gov/resources?id=zenodo_4744767
    Dataset updated
    May 10, 2021
    Dataset provided by
    Universidad Politécnica de Madrid
    Authors
    Corcho, Oscar; González-Guardia, Esteban; Garijo, Daniel
    License

    Attribution 4.0 (CC BY 4.0), https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    This Excel file contains crosswalks among different metadata schemas that can be used for the description of data cubes in the areas of Marine Science, Earth Sciences and Climate Research. These data cubes commonly contain observations of some variables on some feature of interest, taken by Earth Observation systems (e.g., satellites) or as in-situ observations.

  10. STAC Collection - CBERS-4/MUX - Level-4-SR - Data Cube - LCF 2 months

    • data.inpe.br
    Cite
    INPE/MCTI, STAC Collection - CBERS-4/MUX - Level-4-SR - Data Cube - LCF 2 months [Dataset]. https://data.inpe.br/geonetwork/srv/api/records/CBERS4-MUX-2M-1
    Dataset provided by
    National Institute for Space Research (http://www.inpe.br/)
    Authors
    INPE/MCTI
    License

    Attribution 4.0 (CC BY 4.0), https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Time period covered
    Jan 1, 2016 - Jun 30, 2025
    Description

    Earth Observation Data Cube generated from CBERS-4/MUX Level-4 SR product over Brazil extension. This dataset is provided in Cloud Optimized GeoTIFF (COG) file format. The dataset is processed with 20 meters of spatial resolution, reprojected and cropped to BDC_MD grid Version 2 (BDC_MD V2), considering a temporal compositing function of 2 months using the Least Cloud Cover First (LCF) best pixel approach.

  11. Segmented 3D Lung Cube Dataset

    • kaggle.com
    zip
    Updated Jun 3, 2025
    Cite
    Engr Mohsin Ali Khan (2025). Segmented 3D Lung Cube Dataset [Dataset]. https://www.kaggle.com/datasets/engrmohsinalikhan/segmented-3d-lung-cube-dataset
    Available download formats: zip (1794145881 bytes)
    Dataset updated
    Jun 3, 2025
    Authors
    Engr Mohsin Ali Khan
    License

    CC0 1.0 Universal (Public Domain Dedication), https://creativecommons.org/publicdomain/zero/1.0/

    Description

    Title: Segmented 3D Lung Cube Dataset

    Overview

    This dataset accompanies the IEEE Access publication "Segmented 3D Lung Cube Dataset and Dual-Model Framework for COVID-19 Severity Prediction" (DOI: 10.1109/ACCESS.2024.3501234).

    The dataset comprises pre-processed and segmented 3D CT lung volumes derived from the publicly available STOIC Database, containing CT scans from 2,000 patients. These volumetric images were generated using a detailed 10-step pipeline involving intensity windowing, image resampling, lung extraction, and cube alignment. The dataset is curated to support deep learning-based severity prediction of COVID-19, particularly for critical outcomes such as intubation or death within one month.

    Files Included

    allcts.npy

    This NumPy file contains the segmented 3D chest CT lung cubes from 2,000 patients.

    • Shape: [2000, 128, 64, 128]
    • Data type: uint8 (converted from float32 for compression)
    • Each volume represents a cubically segmented region of the lungs, downsampled and normalized from the original 3D CTs.
    • The coronal axis has been reduced to 64 slices to reduce dimensionality.
    • The file follows the same patient order as listed in reference.csv.

    allmasks.npz

    This compressed NumPy file contains binary segmentation masks (lungs only) for each of the 2,000 CT volumes.

    • Shape: [2000, 128, 64, 128]
    • Each binary mask can be applied directly to the corresponding CT cube to extract the lung regions.
    • These masks were generated using U-Net (R231) followed by hole filling.
    • Patient order matches reference.csv.

    reference.csv

    This CSV file mirrors the original metadata from the STOIC challenge dataset.

    • Contains patient CT identifiers and severity labels (including probability of intubation or death within 1 month).
    • Useful for supervised learning tasks.
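
    A minimal sketch of loading the files described above and applying a lung mask to one volume (the key name inside allmasks.npz and the reference.csv column names are assumptions; inspect the files for the actual names):

    import numpy as np
    import pandas as pd

    cts = np.load("allcts.npy")             # shape (2000, 128, 64, 128), dtype uint8
    masks_npz = np.load("allmasks.npz")
    masks = masks_npz[masks_npz.files[0]]   # first (and assumed only) array in the archive
    meta = pd.read_csv("reference.csv")     # patient IDs and severity labels, same order

    i = 0                                   # first patient in the shared ordering
    lung_only = cts[i] * masks[i]           # zero out voxels outside the lung mask
    print(cts.shape, masks.shape, lung_only.shape, len(meta))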

    Use Case

    This dataset is suitable for training and evaluating 3D deep learning models for:

    • Lung segmentation
    • COVID-19 severity prediction
    • Hybrid 2D-3D learning approaches

    Citation

    If you use this dataset in your research, please cite:

    M. A. Khan, A. Shaukat, Z. Mustansar, and M. U. Akram, "Segmented 3D Lung Cube Dataset and Dual-Model Framework for COVID-19 Severity Prediction," IEEE Access, pp. 1–1, Jan. 2024. DOI: 10.1109/ACCESS.2024.3501234

  12. KEYWORD SEARCH IN TEXT CUBE: FINDING TOP-K RELEVANT CELLS - Dataset - NASA...

    • data.nasa.gov
    Updated Mar 31, 2025
    Cite
    nasa.gov (2025). KEYWORD SEARCH IN TEXT CUBE: FINDING TOP-K RELEVANT CELLS - Dataset - NASA Open Data Portal [Dataset]. https://data.nasa.gov/dataset/keyword-search-in-text-cube-finding-top-k-relevant-cells
    Dataset updated
    Mar 31, 2025
    Dataset provided by
    NASA (http://nasa.gov/)
    Description

    KEYWORD SEARCH IN TEXT CUBE: FINDING TOP-K RELEVANT CELLS. Bolin Ding, Yintao Yu, Bo Zhao, Cindy Xide Lin, Jiawei Han, and Chengxiang Zhai.

    Abstract. We study the problem of keyword search in a data cube with text-rich dimension(s) (so-called text cube). The text cube is built on a multidimensional text database, where each row is associated with some text data (e.g., a document) and other structural dimensions (attributes). A cell in the text cube aggregates a set of documents with matching attribute values in a subset of dimensions. A cell document is the concatenation of all documents in a cell. Given a keyword query, our goal is to find the top-k most relevant cells (ranked according to the relevance scores of cell documents w.r.t. the given query) in the text cube. We define a keyword-based query language and apply an IR-style relevance model for scoring and ranking cell documents in the text cube. We propose two efficient approaches to find the top-k answers. The proposed approaches support a general class of IR-style relevance scoring formulas that satisfy certain basic and common properties. One of them uses more time for pre-processing and less time for answering online queries, while the other is more efficient in pre-processing but consumes more time for online queries. Experimental studies on the ASRS dataset are conducted to verify the efficiency and effectiveness of the proposed approaches.

  13. FedScope Diversity Cubes

    • catalog.data.gov
    • s.cnmilf.com
    Updated Jan 26, 2024
    Cite
    U.S. Office of Personnel Management (2024). FedScope Diversity Cubes [Dataset]. https://catalog.data.gov/dataset/fedscope-diversity-cubes-714ca
    Dataset updated
    Jan 26, 2024
    Dataset provided by
    United States Office of Personnel Management (https://opm.gov/)
    Description

    This set of quarterly cubes provides employee population data for the new Ethnicity and Race Indicator (ERI). The numbers reflect the actual number of employees as of a specific point in time. The following workforce characteristics are available for analysis: Agency, State/Country, Age (5 year interval), Education Level, Ethnicity and Race Indicator (ERI), Length of Service (5 year interval), GS & Equivalent Grade, Occupation, Occupation Category, Pay Plan & Grade, Salary Level ($10,000 interval), STEM Occupations, Supervisory Status, Type of Appointment, Work Schedule, Work Status, Employment, Average Salary, Average Length of Service. Diversity cubes will be available for the most recent 8 quarters and the 5 previous end of fiscal year (September) files.

  14. Data from: Living Earth: Implementing national standardised land cover...

    • tandf.figshare.com
    txt
    Updated May 31, 2023
    Cite
    Christopher J. Owers; Richard M. Lucas; Daniel Clewley; Carole Planque; Suvarna Punalekar; Belle Tissott; Sean M. T. Chua; Pete Bunting; Norman Mueller; Graciela Metternicht (2023). Living Earth: Implementing national standardised land cover classification systems for Earth Observation in support of sustainable development [Dataset]. http://doi.org/10.6084/m9.figshare.15067604.v1
    Available download formats: txt
    Dataset updated
    May 31, 2023
    Dataset provided by
    Taylor & Francis
    Authors
    Christopher J. Owers; Richard M. Lucas; Daniel Clewley; Carole Planque; Suvarna Punalekar; Belle Tissott; Sean M. T. Chua; Pete Bunting; Norman Mueller; Graciela Metternicht
    License

    Attribution 4.0 (CC BY 4.0), https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Area covered
    Earth
    Description

    Earth Observation (EO) has been recognised as a key data source for supporting the United Nations Sustainable Development Goals (SDGs). Advances in data availability and analytical capabilities have provided a wide range of users access to global coverage analysis-ready data (ARD). However, ARD does not provide the information required by national agencies tasked with coordinating the implementation of SDGs. Reliable, standardised, scalable mapping of land cover and its change over time and space facilitates informed decision making, providing cohesive methods for target setting and reporting of SDGs. The aim of this study was to implement a global framework for classifying land cover. The Food and Agriculture Organisation’s Land Cover Classification System (FAO LCCS) provides a global land cover taxonomy suitable to comprehensively support SDG target setting and reporting. We present a fully implemented FAO LCCS optimised for EO data; Living Earth, an open-source software package that can be readily applied using existing national EO infrastructure and satellite data. We resolve several semantic challenges of LCCS for consistent EO implementation, including modifications to environmental descriptors, inter-dependency within the modular-hierarchical framework, and increased flexibility associated with limited data availability. To ensure easy adoption of Living Earth for SDG reporting, we identified key environmental descriptors to provide resource allocation recommendations for generating routinely retrieved input parameters. Living Earth provides an optimal platform for global adoption of EO4SDGs ensuring a transparent methodology that allows monitoring to be standardised for all countries.

  15. A global land-use data cube 1992-2020 based on the Human Appropriation of...

    • zenodo.org
    zip
    Updated Apr 25, 2025
    Cite
    Sarah Matej; Florian Weidinger; Lisa Kaufmann; Nicolas Roux; Simone Gingrich; Helmut Haberl; Fridolin Krausmann; Karl-Heinz Erb (2025). A global land-use data cube 1992-2020 based on the Human Appropriation of Net Primary Production: Dataset 1 [Dataset]. http://doi.org/10.5281/zenodo.13990766
    Available download formats: zip
    Dataset updated
    Apr 25, 2025
    Dataset provided by
    Zenodo (http://zenodo.org/)
    Authors
    Sarah Matej; Florian Weidinger; Lisa Kaufmann; Nicolas Roux; Simone Gingrich; Helmut Haberl; Fridolin Krausmann; Karl-Heinz Erb
    License

    Attribution 4.0 (CC BY 4.0), https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    This dataset is part of the LUIcube, a global dataset on land-use at 30 arcsecond spatial resolution. The LUIcube includes information on area, the change in NPP due to land conversions (HANPPluc), the harvested NPP (including losses, HANPPharv), and the NPP remaining in ecosystems after harvest (NPPeco) for 32 land-use classes in annual time-steps from 1992 to 2020. A detailed description of the LUIcube is available in the accompanying publication.

    The layers of land-use areas are provided in square kilometers (km²) per grid cell. All NPP flows are provided in tC/yr per grid cell. Adding HANPPharv to NPPeco results in the actual NPP available before harvest (NPPact=NPPeco+HANPPharv), and adding HANPPluc to NPPact results in the potential NPP available in the hypothetical absence of land use (NPPpot=NPPact+HANPPluc) for the given land-use class. Area-intensive values (in gC/m²/yr) can be calculated by dividing the NPP flows by the area of the respective land-use class per grid cell. HANPP in % of NPPpot can be calculated by summing up HANPPharv and HANPPluc and dividing it by NPPpot. Areas and NPP flows of land-use classes can be aggregated to calculate their overall HANPP.
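
    The derived quantities described above follow directly from the per-grid-cell layers; a small numpy sketch with illustrative numbers (the actual layers must be read from the raster files in this record):

    import numpy as np

    # Illustrative per-grid-cell values for one land-use class (NPP flows in tC/yr, area in km²).
    hanpp_harv = np.array([12.0, 3.5])   # harvested NPP incl. losses
    hanpp_luc  = np.array([5.0, 1.0])    # NPP change due to land conversion
    npp_eco    = np.array([20.0, 8.0])   # NPP remaining in ecosystems after harvest
    area_km2   = np.array([0.5, 0.25])   # land-use class area per grid cell

    npp_act = npp_eco + hanpp_harv                        # actual NPP before harvest
    npp_pot = npp_act + hanpp_luc                         # potential NPP without land use
    hanpp_pct = 100 * (hanpp_harv + hanpp_luc) / npp_pot  # HANPP in % of NPPpot

    # Area-intensive flux in gC/m²/yr: 1 tC = 1e6 gC, 1 km² = 1e6 m².
    npp_act_gc_m2 = npp_act * 1e6 / (area_km2 * 1e6)
    print(npp_pot, hanpp_pct, npp_act_gc_m2)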

    This Zenodo repository provides data on the following land-use classes: unused productive wilderness areas (WILD-core); productive wilderness areas that are sporadically used at very low intensity (WILD-periphery); unused unproductive wilderness areas (WILD-nps); forestry areas, mainly coniferous (FO-con); forestry areas, mainly non-coniferous (FO-ncon); and settlements, urban areas and infrastructure (BU-builtup).

  16. Data from: cube-sat

    • kaggle.com
    zip
    Updated Aug 4, 2021
    Cite
    Sabil Shrestha (2021). cube-sat [Dataset]. https://www.kaggle.com/datasets/sabilshrestha/cubesat
    Available download formats: zip (670809904 bytes)
    Dataset updated
    Aug 4, 2021
    Authors
    Sabil Shrestha
    Description

    This dataset was created by Sabil Shrestha.

  17. Hyperspectral data-cubes and reference pollutants of 302 urban wastewater...

    • opendata.eawag.ch
    • opendata-stage.eawag.ch
    Cite
    Hyperspectral data-cubes and reference pollutants of 302 urban wastewater samples - Package - ERIC [Dataset]. https://opendata.eawag.ch/dataset/hyperspectral-data-cubes-and-reference-pollutants-of-302-urban-wastewater-samples
    Description

    Overview of the experiment: We conducted this experiment to collect a dataset of hyperspectral data-cubes of wastewater samples, along with reference laboratory analyses of various wastewater pollutants. The goal was to train data-driven models to predict pollution levels in a sample using hyperspectral data-cubes. For ten days, between 04/08/2024 and 15/08/2024, we collected samples from four treatment facilities around Melbourne, Australia: three urban wastewater treatment facilities and one stormwater treatment facility. Once sampled, we analysed the wastewater in the laboratory for reference physical and chemical pollutants and acquired hyperspectral images. To extend the dataset, we also created combinations of stormwater and wastewater samples for which we measured a hyperspectral data-cube and some reference pollutants. This repository also includes background information about data pre-processing and validation.

    Repository organization (how to use the data): The repository is organized into numbered folders. Most folders contain a readme.md file in Markdown format explaining their contents. All data are stored in non-proprietary formats: CSV for most files, except for the hyperspectral acquisitions, which are in ENVI format (compatible with Python). Raw data are kept in their original format, sometimes lacking metadata such as units or column descriptions; this information is provided in the corresponding readme.md files. Pre-processed data, however, contain consistent column names, including units. Jupyter notebooks are included to pre-process and validate the data.
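
    A minimal sketch of opening one hyperspectral acquisition from Python with the spectral package (the package choice and file names are assumptions; the repository's readme.md files and notebooks document the actual paths and loading steps):

    import numpy as np
    import spectral.io.envi as envi

    # Placeholder file names: ENVI acquisitions come as a header (.hdr) plus a binary image file.
    img = envi.open("sample_cube.hdr", "sample_cube.img")
    cube = img.load()                                   # (rows, cols, bands) array

    mean_spectrum = np.asarray(cube).reshape(-1, cube.shape[-1]).mean(axis=0)
    print(cube.shape, mean_spectrum[:5])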

  18. NSW eastern forest soil condition: Spatio-temporal data cube maps

    • data.nsw.gov.au
    html, pdf, zip
    Updated Oct 23, 2025
    Cite
    NSW Natural Resources Commission (2025). NSW eastern forest soil condition: Spatio-temporal data cube maps [Dataset]. https://data.nsw.gov.au/data/dataset/nsw-eastern-forest-soil-condition-data-cube-maps
    Available download formats: pdf, html, zip
    Dataset updated
    Oct 23, 2025
    Dataset authored and provided by
    NSW Natural Resources Commission
    License

    Attribution 4.0 (CC BY 4.0), https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Area covered
    New South Wales
    Description

    This dataset, created by the University of Sydney, includes time-series digital soil map products of soil organic carbon (SOC) between January 1990 and December 2020 for the Regional Forest Agreement regions of eastern NSW. Modelling was completed using a data cube platform incorporating a machine-learning space-time framework and geospatial technologies. Products provide estimates of SOC concentrations and associated trends through time. The important covariates required to drive this spatio-temporal modelling were identified using the Recursive Feature Elimination (RFE) algorithm from a pool of predictors that vary in space, in time, or in both space and time.
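
    The covariate selection step can be illustrated with scikit-learn's Recursive Feature Elimination; this is a generic sketch on synthetic data, not the project's actual covariate stack or model configuration:

    import numpy as np
    from sklearn.datasets import make_regression
    from sklearn.ensemble import RandomForestRegressor
    from sklearn.feature_selection import RFE

    # Synthetic stand-in for SOC observations (y) and candidate space-time covariates (X).
    X, y = make_regression(n_samples=200, n_features=15, n_informative=5, random_state=0)

    rfe = RFE(RandomForestRegressor(n_estimators=50, random_state=0), n_features_to_select=5)
    rfe.fit(X, y)
    print("selected covariate indices:", np.where(rfe.support_)[0])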

    Full description of the digital soil maps and methods are presented in: Moyce MC, Gray JM, Wilson BR, Jenkins BR, Young MA, Ugbaje SU, Bishop TFA, Yang X, Henderson LE, Milford HB, Tulau MJ, 2021. Determining baselines, drivers and trends of soil health and stability in New South Wales forests: NSW Forest Monitoring & Improvement Program, Final report v1.1 for NSW Natural Resources Commission by NSW Department of Planning, Industry and Environment and University of Sydney.

    The metadata's data packages section includes project scripts and code, the final project report, and an external Cloudstor link to download the predicted SOC map products.

  19. Synthetic temporal dataset for temporal trend analysis and retrieval

    • search.dataone.org
    • data.niaid.nih.gov
    Updated Jul 31, 2025
    Cite
    Jing Ao; Kara Schatz; Rada Chirkova (2025). Synthetic temporal dataset for temporal trend analysis and retrieval [Dataset]. http://doi.org/10.5061/dryad.q573n5trf
    Dataset updated
    Jul 31, 2025
    Dataset provided by
    Dryad Digital Repository
    Authors
    Jing Ao; Kara Schatz; Rada Chirkova
    Time period covered
    May 7, 2024
    Description

    This repository contains a synthetic temporal data set that was generated by the authors by sampling values from the Gaussian distribution. The dataset contains eight nontemporal dimensions, a temporal dimension, and a numerical measure attribute. The data set was generated according to the scheme and procedure detailed in this source paper: Kaufmann, M., Fischer, P.M., May, N., Tonder, A., Kossmann, D. (2014). TPC-BiH: A Benchmark for Bitemporal Databases. In: Performance Characterization and Benchmarking. TPCTC 2013. Lecture Notes in Computer Science, vol 8391. Springer, Cham. The data set can be used for analyzing and locating temporal trends of interest, where a temporal trend is generated by selecting the desired values of the nontemporal dimensions, and then selecting the corresponding values of the temporal dimension and the numerical measure attribute. Locating temporal trends of interest, e.g., unusual trends, is a common task in many applications and domains. It can also be of interest to understand which nontemporal dimensions are associated with the temporal trends of interest.

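    A small sketch of generating a data set with the same shape of schema (eight nontemporal dimensions, one temporal dimension, and a Gaussian-sampled measure) and extracting one temporal trend from it; this illustrates the general idea only, not the exact TPC-BiH-based procedure of the source paper:

    import numpy as np
    import pandas as pd

    rng = np.random.default_rng(0)
    n_rows, n_dims, n_steps = 10_000, 8, 36

    df = pd.DataFrame({f"dim_{i}": rng.integers(0, 5, n_rows) for i in range(n_dims)})
    df["time"] = rng.integers(0, n_steps, n_rows)
    df["measure"] = rng.normal(loc=100.0, scale=15.0, size=n_rows)  # Gaussian-sampled measure

    # A temporal trend: fix values of the nontemporal dimensions, then follow the measure over time.
    trend = (df[(df["dim_0"] == 1) & (df["dim_1"] == 3)]
             .groupby("time")["measure"].mean().sort_index())
    print(trend.head())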

  20. Cubes on conveyor belt

    • kaggle.com
    zip
    Updated Jun 4, 2024
    Cite
    Edge Impulse (2024). Cubes on conveyor belt [Dataset]. https://www.kaggle.com/datasets/edgeimpulse/cubes-on-conveyor-belt
    Available download formats: zip (34531379 bytes)
    Dataset updated
    Jun 4, 2024
    Dataset provided by
    Edgeimpulse, Inc.
    Authors
    Edge Impulse
    License

    Apache License, v2.0, https://www.apache.org/licenses/LICENSE-2.0
    License information was derived automatically

    Description

    This dataset has been collected by Edge Impulse and used extensively to design the FOMO (Faster Objects, More Objects) object detection architecture. See FOMO documentation or the announcement blog post.


    The dataset is composed of 70 images including:
    • 32 blue cubes
    • 32 green cubes
    • 30 red cubes
    • 28 yellow cubes

    Download link: cubes on a conveyor belt dataset in Edge Impulse Object Detection format.

    You can also retrieve this dataset from this Edge Impulse public project.

    Data exported from an object detection project in the Edge Impulse Studio uses this format; see below to understand the format.

    How to use this dataset

    To import this data into a new Edge Impulse project, either use:

    edge-impulse-uploader --clean --info-file info.labels
    

    Understand Edge Impulse object detection format

    The Edge Impulse object detection acquisition format provides a simple and intuitive way to store images and associated bounding box labels. Folders containing data in this format will take the following structure:

    .
    ├── testing
    │  ├── bounding_boxes.labels
    │  ├── cubes.23im33f2.jpg
    │  ├── cubes.23j3rclu.jpg
    │  ├── cubes.23j4jeee.jpg
    │  ...
    │  └── cubes.23j4k0rk.jpg
    └── training
      ├── bounding_boxes.labels
      ├── blue.23ijdngd.jpg
      ├── combo.23ijkgsd.jpg
      ├── cubes.23il4pon.jpg
      ├── cubes.23im28tb..jpg
      ...
      └── yellow.23ijdp4o.jpg
    
    2 directories, 73 files
    

    The subdirectories contain image files in JPEG or PNG format. Each image file represents a sample and is associated with its respective bounding box labels in the bounding_boxes.labels file.

    The bounding_boxes.labels file in each subdirectory provides detailed information about the labeled objects and their corresponding bounding boxes. The file follows a JSON format, with the following structure:

    • version: Indicates the version of the label format.
    • files: A list of objects, where each object represents an image and its associated labels.
      • path: The path or file name of the image.
      • category: Indicates whether the image belongs to the training or testing set.
      • (optional) label: Provides information about the labeled objects.
      • type: Specifies the type of label (e.g., a single label).
      • label: The actual label or class name of the object.
      • (Optional) metadata: Additional metadata associated with the image, such as the site where it was collected, the timestamp or any useful information.
      • boundingBoxes: A list of objects, where each object represents a bounding box for an object within the image.
      • label: The label or class name of the object within the bounding box.
      • x, y: The coordinates of the top-left corner of the bounding box.
      • width, height: The width and height of the bounding box.

    bounding_boxes.labels example:

    {
      "version": 1,
      "files": [
        {
          "path": "cubes.23im33f2.jpg",
          "category": "testing",
          "label": {
            "type": "label",
            "label": "cubes"
          },
          "metadata": {
            "version": "2023-1234-LAB"
          },
          "boundingBoxes": [
            {
              "label": "green",
              "x": 105,
              "y": 201,
              "width": 91,
              "height": 90
            },
            {
              "label": "blue",
              "x": 283,
              "y": 233,
              "width": 86,
              "height": 87
            }
          ]
        },
        {
          "path": "cubes.23j3rclu.jpg",
          "category": "testing",
          "label": {
            "type": "label",
            "label": "cubes"
          },
          "metadata": {
            "version": "2023-4567-PROD"
          },
          "boundingBoxes": [
            {
              "label": "red",
            ...
    
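    A minimal Python sketch for reading a bounding_boxes.labels file in this format and iterating over its boxes (the directory path is a placeholder):

    import json
    from pathlib import Path

    labels_path = Path("training") / "bounding_boxes.labels"   # placeholder location
    data = json.loads(labels_path.read_text())

    print("label format version:", data["version"])
    for entry in data["files"]:
        boxes = entry.get("boundingBoxes", [])
        print(entry["path"], entry["category"], f"{len(boxes)} boxes")
        for box in boxes:
            # x, y give the top-left corner; width and height give the box extent.
            print("  ", box["label"], box["x"], box["y"], box["width"], box["height"])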