100+ datasets found
  1. LAS&T: Large Shape And Texture Dataset

    • kaggle.com
    zip
    Updated Aug 2, 2025
    + more versions
    Cite
    Sagi Eppel (2025). LAS&T: Large Shape And Texture Dataset [Dataset]. https://www.kaggle.com/datasets/sagieppel/las-and-t-large-shape-and-texture-dataset
    Explore at:
    zip (50958840696 bytes)
    Available download formats
    Dataset updated
    Aug 2, 2025
    Authors
    Sagi Eppel
    License

    https://creativecommons.org/publicdomain/zero/1.0/

    Description

    The Large Shape And Texture Dataset (LAS&T)

    LAS&T is the largest and most diverse dataset for shape, texture, and material recognition and retrieval in 2D and 3D scenes, with 650,000 images based on real-world shapes and textures.

    Overview

    The LAS&T Dataset aims to test the most basic aspects of vision in the most general way: the ability to identify any shape, texture, and material in any setting and environment, without being limited to specific types or classes of objects, materials, and environments. For shapes, this means identifying and retrieving any shape in 2D or 3D with every other element of the scene changed between images, including the shape's material and texture, orientation, size, and environment. For textures and materials, the goal is to recognize the same texture or material when it appears on different objects, in different environments, and under different light conditions. The dataset relies on shapes, textures, and materials extracted from real-world images, giving an almost unlimited quantity and diversity of real-world natural patterns. Each section of the dataset (shapes and textures) contains 3D parts that rely on physics-based scenes with realistic light, materials, and object simulation, as well as abstract 2D parts. In addition, there is a real-world benchmark for 3D shapes. Main Project Page

    The dataset is composed of 4 parts:

    1. 3D shape recognition and retrieval.
    2. 2D shape recognition and retrieval.
    3. 3D material recognition and retrieval.
    4. 2D texture recognition and retrieval.

    An additional asset is a set of 350,000 natural 2D shapes extracted from real-world images (SHAPES_COLLECTION_350k.zip).

    Each part can be trained and tested independently.

    Shapes Recognition and Retrieval:

    For shape recognition, the goal is to identify the same shape in different images, where the material/texture/color of the shape is changed, the shape is rotated, and the background is replaced. Hence, only the shape remains the same in both images. Note that this means the model can't use any contextual cues and must rely on the shape information alone.

    File structure:

    All jpg images that are in the exact same subfolder contain the exact same shape (but with different texture/color/background/orientation).

    Textures and Materials Recognition and Retrieval

    For textures and materials, the goal is to identify and match images containing the same material or texture, even though the shape/object on which the material or texture is applied is different, and so are the background and light. This removes contextual clues and forces the model to use only the texture/material for recognition.

    File structure:

    All jpg images that are in the exact same subfolder contain the exact same texture/material (but overlaid on different objects, with different background, illumination, and orientation).
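
    The subfolder is thus the identity label for both the shape and the texture/material parts. As a minimal sketch in Python (assuming the archives are extracted locally; the path at the bottom is hypothetical), matching image groups can be collected directly from this layout:

        import os
        from collections import defaultdict

        def group_by_identity(root_dir):
            """Group image paths by their immediate subfolder; per the
            LAS&T file structure, all jpg images in one subfolder show
            the same shape (or texture/material)."""
            groups = defaultdict(list)
            for dirpath, _dirnames, filenames in os.walk(root_dir):
                for name in filenames:
                    if name.lower().endswith(".jpg"):
                        groups[dirpath].append(os.path.join(dirpath, name))
            # Keep identities with at least two views: recognition and
            # retrieval need at least one matching pair.
            return {k: v for k, v in groups.items() if len(v) >= 2}

        # groups = group_by_identity("LAST_2D_shapes_synthetic/")  # hypothetical path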

    Data Generation:

    The images in the synthetic part of the dataset were created by automatically extracting shapes and textures from natural images and combining them into synthetic images. The resulting synthetic images rely entirely on real-world patterns, yielding extremely diverse and complex shapes and textures. As far as we know, this is the largest and most diverse shape and texture recognition/retrieval dataset. The 3D data was generated using physics-based materials and rendering (Blender), making the images physically grounded and allowing the data to be used for training toward real-world examples.

    Real-world images data:

    For 3D shape recognition and retrieval, we also supply a real-world natural-image benchmark, with a variety of natural images containing the exact same 3D shape but made of or coated with different materials, in different environments and orientations. The goal is again to identify the same shape in different images.

    File structure:

    Files containing the word 'synthetic' contain synthetic images that can be used for training or testing; the type of data (2D shapes, 3D shapes, 2D textures, 3D materials) appears in the file name, as does the number of images. Files containing "MULTI TESTS" in their name contain various small tests (500 images each) that can be used to test how a single variation (orientation/background) affects recognition, and are less suitable for general training or testing.

    Supporting Scripts

    Files starting with "Scripts" contain the scripts used to generate the dataset and the scripts used to evaluate various LVLMs on this dataset.

    Shapes Collections

    The file SHAPES_COLLECTION_350k.zip contains 350,000 2D shapes extracted from natural images and used for the dataset generation.

    Evaluating and Testing

    For evaluating and testing, see SCRIPTS_Testing_LVLM_ON_LAST_VQA.zip. This can be used to test leading LVLMs via API, create human tests, and in general turn the dataset into multiple-choice question images similar to those in the paper.
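
    The bundled scripts are the authoritative tooling; purely as an illustration of the multiple-choice format (names here are hypothetical and build on the group_by_identity sketch above), one retrieval question can be assembled like this:

        import random

        def make_multichoice_item(groups, num_choices=4, rng=random):
            """One multiple-choice retrieval question: given the anchor
            image, which choice shows the same shape/texture?"""
            identities = list(groups)
            target = rng.choice(identities)
            anchor, positive = rng.sample(groups[target], 2)
            others = rng.sample([i for i in identities if i != target],
                                num_choices - 1)
            choices = [positive] + [rng.choice(groups[i]) for i in others]
            rng.shuffle(choices)
            return anchor, choices, choices.index(positive)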

  2. Multimodal Tactile Texture Dataset

    • data.mendeley.com
    Updated Aug 15, 2023
    Cite
    Bruno Monteiro Rocha Lima (2023). Multimodal Tactile Texture Dataset [Dataset]. http://doi.org/10.17632/n666tk4mw9.1
    Explore at:
    Dataset updated
    Aug 15, 2023
    Authors
    Bruno Monteiro Rocha Lima
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    The objective of this dataset is to provide a comprehensive collection of data that explores the recognition of tactile textures in dynamic exploration scenarios. The dataset was generated using a tactile-enabled finger with a multi-modal tactile sensing module. By incorporating data from pressure, gravity, angular rate, and magnetic field sensors, the dataset aims to facilitate research on machine learning methods for texture classification.

    The data is stored in pickle files, which can be read using the pandas library in Python. The data files are organized in a specific folder structure and contain multiple readings for each texture and exploratory velocity. The dataset contains raw data of the recorded tactile measurements for 12 different textures and 3 different exploratory velocities, stored in pickle files.

    Pickles_30 - Folder containing pickle files with tactile data at an exploratory velocity of 30 mm/s.
    Pickles_40 - Folder containing pickle files with tactile data at an exploratory velocity of 40 mm/s.
    Pickles_45 - Folder containing pickle files with tactile data at an exploratory velocity of 45 mm/s.
    Texture_01 to Texture_12 - Folders containing pickle files for each texture, labelled as texture_01, texture_02, and so on.
    Full_baro - Folder containing pickle files with barometer data for each texture.
    Full_imu - Folder containing pickle files with IMU (Inertial Measurement Unit) data for each texture.

    The "reading-pickle-file.ipynb" file is a script for reading and plotting the dataset.

  3. SMAPVEX12 Soil Texture Map V001

    • catalog.data.gov
    • nsidc.org
    • +4 more
    Updated Oct 30, 2025
    + more versions
    Cite
    NASA NSIDC DAAC (2025). SMAPVEX12 Soil Texture Map V001 [Dataset]. https://catalog.data.gov/dataset/smapvex12-soil-texture-map-v001
    Explore at:
    Dataset updated
    Oct 30, 2025
    Dataset provided by
    NASA: http://nasa.gov/
    Description

    This data set consists of soil texture classification data derived from field surveys as part of the Soil Moisture Active Passive Validation Experiment 2012 (SMAPVEX12). The soil texture classification map provides information about the soil texture present in the study area.

  4. Data from: HyTexiLa: High Resolution Visible and Near Infrared Hyperspectral...

    • data.niaid.nih.gov
    • zenodo.org
    Updated Jan 24, 2020
    Cite
    Khan, Haris Ahmad; Mihoubi, Sofiane; Mathon, Benjamin; Thomas, Jean-Baptiste; Hardeberg, Jon Yngve (2020). HyTexiLa: High Resolution Visible and Near Infrared Hyperspectral Texture Images [Dataset]. https://data.niaid.nih.gov/resources?id=zenodo_2356539
    Explore at:
    Dataset updated
    Jan 24, 2020
    Dataset provided by
    Univ. Lille, CNRS, Centrale Lille, UMR 9189—CRIStAL, Centre de Recherche en Informatique Signal et Automatique de Lille, F-59000 Lille, France
    The Norwegian Colour and Visual Computing Laboratory, NTNU–Norwegian University of Science and Technology, 2815 Gjøvik, Norway
    Authors
    Khan, Haris Ahmad; Mihoubi, Sofiane; Mathon, Benjamin; Thomas, Jean-Baptiste; Hardeberg, Jon Yngve
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    We present a dataset of close range hyperspectral images of materials that span the visible and near infrared spectra: HyTexiLa (Hyperspectral Texture images acquired in Laboratory). The data is intended to provide high spectral and spatial resolution reflectance images of 112 materials to study spatial and spectral textures. In this paper we discuss the calibration of the data and the method for addressing the distortions during image acquisition. We provide a spectral analysis based on non-negative matrix factorization to quantify the spectral complexity of the samples and extend local binary pattern operators to the hyperspectral texture analysis. The results demonstrate that although the spectral complexity of each of the textures is generally low, increasing the number of bands permits better texture classification, with the opponent band local binary pattern feature giving the best performance.

  5. CLASIC07 Soil Texture Data V001

    • data.nasa.gov
    • search.dataone.org
    • +4 more
    Updated Mar 31, 2025
    + more versions
    Cite
    nasa.gov (2025). CLASIC07 Soil Texture Data V001 [Dataset]. https://data.nasa.gov/dataset/clasic07-soil-texture-data-v001
    Explore at:
    Dataset updated
    Mar 31, 2025
    Dataset provided by
    NASA: http://nasa.gov/
    Description

    This data set contains soil texture data obtained for the Cloud and Land Surface Interaction Campaign 2007 (CLASIC07). The original data were extracted from a multi-layer soil characteristics database for the conterminous United States called CONUS-Soil and generated for the regional study area. Data are representative of the conditions present in the regional study area during the general timeline of the CLASIC07 campaign.

  6. Data from: Central Valley Hydrologic Model version 2 (CVHM2): Well Log...

    • catalog.data.gov
    • data.usgs.gov
    • +1 more
    Updated Nov 13, 2025
    Cite
    U.S. Geological Survey (2025). Central Valley Hydrologic Model version 2 (CVHM2): Well Log Lithology Database and Texture Model [Dataset]. https://catalog.data.gov/dataset/central-valley-hydrologic-model-version-2-cvhm2-well-log-lithology-database-and-texture-mo
    Explore at:
    Dataset updated
    Nov 13, 2025
    Dataset provided by
    United States Geological Survey: http://www.usgs.gov/
    Description

    These data encompass the geologic framework model for the Central Valley Hydrologic Model Version 2 (CVHM2) study. This includes (1) the Well Log Database which contains borehole information and lithology used in creating the geologic framework, (2) Well Logs with Classification Information which explains how percent coarse values were determined for each borehole, and (3) the Three-Dimensional Framework Model.

  7. Data from: Texture Image

    • kaggle.com
    zip
    Updated Jun 27, 2025
    Cite
    Iqbal Maulana (2025). Texture Image [Dataset]. https://www.kaggle.com/datasets/cardinalacre/texture-image
    Explore at:
    zip (1781439 bytes)
    Available download formats
    Dataset updated
    Jun 27, 2025
    Authors
    Iqbal Maulana
    Description

    Dataset

    This dataset was created by Iqbal Maulana

    Contents

  8. SoilGrids250m 2017-03 - Texture class (USDA system)

    • data.moa.gov.et
    • data.isric.org
    • +1 more
    tif
    Updated Oct 25, 2023
    Cite
    FDRE - Ministry of Agriculture (MoA) (2023). SoilGrids250m 2017-03 - Texture class (USDA system) [Dataset]. https://data.moa.gov.et/dataset/soilgrids250m-2017-03-texture-class-usda-system
    Explore at:
    tif
    Available download formats
    Dataset updated
    Oct 25, 2023
    Dataset provided by
    FDRE - Ministry of Agriculture (MoA)
    Description

    Texture class (USDA system) at 7 standard depths, predicted using the global compilation of soil ground observations. Accuracy assessment of the maps is available in Hengl et al. (2017), DOI: 10.1371/journal.pone.0169748. Data provided as GeoTIFFs with internal compression (co='COMPRESS=DEFLATE').
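
    Because the layers are DEFLATE-compressed GeoTIFFs, any GDAL-based reader decompresses them transparently. A minimal sketch with the rasterio package (the local file name is hypothetical):

        import rasterio
        from rasterio.windows import Window

        with rasterio.open("soilgrids250m_texture_class.tif") as src:
            print(src.crs, src.res)                 # grid metadata
            print(src.profile.get("compress"))      # expect 'deflate'
            block = src.read(1, window=Window(0, 0, 512, 512))  # USDA class codes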

  9. Describable-Textures-Dataset-DTD

    • huggingface.co
    • kaggle.com
    Updated Aug 4, 2023
    + more versions
    Cite
    cansakirt (2023). Describable-Textures-Dataset-DTD [Dataset]. https://huggingface.co/datasets/cansa/Describable-Textures-Dataset-DTD
    Explore at:
    Croissant (a format for machine-learning datasets; see mlcommons.org/croissant)
    Dataset updated
    Aug 4, 2023
    Authors
    cansakirt
    Description

    Not sure about the license.

    Source: https://www.robots.ox.ac.uk/~vgg/data/dtd/

      Describable Textures Dataset (DTD)
    

    The Describable Textures Dataset (DTD) is an evolving collection of textural images in the wild, annotated with a series of human-centric attributes, inspired by the perceptual properties of textures. This data is made available to the computer vision community for research purposes. Download… See the full description on the dataset page: https://huggingface.co/datasets/cansa/Describable-Textures-Dataset-DTD.
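
    A minimal loading sketch with the Hugging Face datasets library (the repo id is taken from the page above; split and column names should be checked after loading):

        from datasets import load_dataset

        ds = load_dataset("cansa/Describable-Textures-Dataset-DTD")
        print(ds)                    # available splits
        split = next(iter(ds))
        print(ds[split][0].keys())   # typically an image plus attribute label(s)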

  10. Texture Dashboard

    • data.csiro.au
    Updated Jul 23, 2024
    + more versions
    Cite
    Simon Loveday; Lauren Stevens (2024). Texture Dashboard [Dataset]. https://data.csiro.au/collection/csiro:63011
    Explore at:
    Dataset updated
    Jul 23, 2024
    Dataset provided by
    CSIRO: http://www.csiro.au/
    Authors
    Simon Loveday; Lauren Stevens
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Dataset funded by
    CSIRO: http://www.csiro.au/
    AgResearch, New Zealand
    Description

    An application to analyse texture data (distance-time-force) from compression testing instruments such as the Stable Microsystems Texture Analyser. Lineage: Application was authored by Lauren Stevens and Simon Loveday in 2024, to accompany a publication authored by Simon (Hutchings et al. 2024 Journal of Texture Studies).

  11. Surface Soil Texture - Dataset - data.sa.gov.au

    • data.sa.gov.au
    Updated Jun 9, 2016
    + more versions
    Cite
    (2016). Surface Soil Texture - Dataset - data.sa.gov.au [Dataset]. https://data.sa.gov.au/data/dataset/surface-texture
    Explore at:
    Dataset updated
    Jun 9, 2016
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Area covered
    South Australia
    Description

    Surface texture (which refers to approximate clay content) influences many important soil qualities such as water-holding capacity, fertility and erodibility. Mapping shows the most common surface texture within each map unit, while more detailed proportion data are supplied for calculating respective areas of each surface soil texture class (spatial data statistics).

  12. Sediment Texture

    • fisheries.noaa.gov
    • catalog.data.gov
    zip
    Updated Aug 17, 2018
    + more versions
    Cite
    Office for Coastal Management (2018). Sediment Texture [Dataset]. https://www.fisheries.noaa.gov/inport/item/66197
    Explore at:
    zip
    Available download formats
    Dataset updated
    Aug 17, 2018
    Dataset provided by
    Office for Coastal Management
    Time period covered
    Aug 17, 2018
    Area covered
    United States, Territorial Sea, Exclusive Economic Zone, Outer Continental Shelf,
    Description

    These data show point sample sediment location and texture within the United States Exclusive Economic Zone. This is an aggregate data product compiled from the USGS usSEABED data, the East Coast Sediment Texture Database, and NOAA Electronic Navigational Charts. A new generalized texture value was compiled by normalizing across the three input data sets. Additional attributes such as Munsell col...

  13. Data from: Processing of haptic texture information over sequential...

    • data.niaid.nih.gov
    Updated Jan 21, 2020
    Cite
    Lezkan, Alexandra; Drewing, Knut (2020). Processing of haptic texture information over sequential exploration movements [Dataset]. https://data.niaid.nih.gov/resources?id=zenodo_826652
    Explore at:
    Dataset updated
    Jan 21, 2020
    Dataset provided by
    Justus-Liebig University Giessen
    Authors
    Lezkan, Alexandra; Drewing, Knut
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Where textures are defined by repetitive small spatial structures, exploration covering a greater extent will lead to signal repetition. We investigated how sensory estimates derived from these signals are integrated. In Experiment 1 participants stroked with the index finger one to eight times across two virtual gratings. Half of the participants discriminated according to ridge amplitude, the other half according to ridge spatial period. In both tasks just noticeable differences (JNDs) decreased with an increasing number of strokes. Those gains from additional exploration were over 3 times smaller than predicted for optimal observers who have access to equally reliable, and therefore equally weighted estimates for the entire exploration. We assume that the sequential nature of the exploration leads to memory decay of sensory estimates. Thus, participants compare an overall estimate of the first stimulus, which is affected by memory decay, to stroke-specific estimates during the exploration of the second stimulus. This was tested in Experiments 2 & 3. The spatial period of one stroke across either the first or second of two sequentially presented gratings was slightly discrepant from periods in all other strokes. This allowed calculating weights of stroke-specific estimates in the overall percept. As predicted, weights were approximately equal for all strokes in the first stimulus, while weights decreased during the exploration of the second stimulus. A quantitative Kalman filter model of our assumptions was consistent with the data. Hence, our results support an optimal integration model for sequential information given that memory decay affects comparison processes.
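
    As a toy illustration of this kind of model (not the authors' fitted model; the noise and decay parameters are invented), a Kalman filter that inflates uncertainty between strokes automatically down-weights older estimates:

        def integrate_strokes(estimates, noise_var=1.0, decay_var=0.5):
            """Sequentially fuse per-stroke estimates of one texture
            property; between strokes the stored estimate gains
            decay_var of extra variance (memory decay), so recent
            strokes receive larger effective weights."""
            mean, var = estimates[0], noise_var
            for z in estimates[1:]:
                var += decay_var                 # prediction step: memory decay
                gain = var / (var + noise_var)   # Kalman gain
                mean += gain * (z - mean)        # update toward the new stroke
                var *= 1 - gain                  # posterior variance
            return mean, var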

  14. Global Soil Texture and Derived Water-Holding Capacities (Webb et al.)

    • earthdata.nasa.gov
    • search.dataone.org
    • +3 more
    Updated Sep 5, 2000
    + more versions
    Cite
    ORNL_CLOUD (2000). Global Soil Texture and Derived Water-Holding Capacities (Webb et al.) [Dataset]. http://doi.org/10.3334/ORNLDAAC/548
    Explore at:
    Dataset updated
    Sep 5, 2000
    Dataset authored and provided by
    ORNL_CLOUD
    Description

    A standardized global data set of soil horizon thicknesses and textures (particle size distributions) was compiled by Webb et al. This data set will be used for the improved ground hydrology parameterization design for the Goddard Institute for Space Studies General Circulation Model (GISS GCM) Model III. The data set specifies the top and bottom depths and the percent abundance of sand, silt, and clay of individual soil horizons in each of the 106 soil types cataloged for nine continental divisions. When combined with the World Soil Data File (Zobler, 1986), the result is a global data set of variations in physical properties throughout the soil profile. These properties are important in the determination of water storage in individual soil horizons and exchange of water with the lower atmosphere. The incorporation of this data set into the GISS GCM should improve model performance by including more realistic variability in land-surface properties.

    All data are global at a 1 degree resolution and are provided in ASCII format. The primary data consist of 2 files. One file contains the depth and particle size (percent sand, silt, and clay) information for each major continent, soil type, and soil horizon. The other file contains ocean/continental coding (corresponding to the FAO/UNESCO Soil Map of the World) (FAO/UNESCO, 1971-1981) and Zobler soil type classifications (Zobler, 1986). A Fortran code for reading these data files is provided.

    In addition to the primary data files, there are also 5 derived data sets available for download: 1) soil profile thickness, 2) potential storage of water in the soil profile, 3) potential storage of water in the root zone, 4) potential storage of water derived from soil texture, 5) the data set used to prescribe water-holding capacity in the GISS GCM (Model II).

    Data Citation: The data set should be cited as follows: Webb, Robert W., Cynthia E. Rosenzweig, and Elissa R. Levine. 2000. Global Soil Texture and Derived Water-Holding Capacities (Webb et al.). Available on-line from Oak Ridge National Laboratory Distributed Active Archive Center, Oak Ridge, Tennessee, U.S.A.
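
    The bundled Fortran reader defines the authoritative record layout; as a loosely sketched Python alternative (the file name and column order are assumptions to be checked against the Fortran code):

        import numpy as np

        records = np.loadtxt("soil_horizon_textures.txt")    # hypothetical name
        top, bottom = records[:, 0], records[:, 1]           # assumed horizon depths
        sand, silt, clay = records[:, 2:5].T                 # assumed percent columns
        print(np.allclose(sand + silt + clay, 100, atol=1))  # sanity check on the layout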

  15. Data from: Unequal - but fair? Weights in the serial integration of haptic...

    • data.niaid.nih.gov
    Updated Jan 24, 2020
    Share
    FacebookFacebook
    TwitterTwitter
    Email
    Click to copy link
    Link copied
    Close
    Cite
    Lezkan, Alexandra; Drewing, Knut (2020). Unequal - but fair? Weights in the serial integration of haptic texture information [Dataset]. https://data.niaid.nih.gov/resources?id=zenodo_602286
    Explore at:
    Dataset updated
    Jan 24, 2020
    Dataset provided by
    Justus-Liebig University Giessen
    Authors
    Lezkan, Alexandra; Drewing, Knut
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    The sense of touch is characterized by its sequential nature. In texture perception, enhanced spatio-temporal extension of exploration leads to better discrimination performance due to combination of repetitive information. We have previously shown that the gains from additional exploration are smaller than the Maximum Likelihood Estimation (MLE) model of an ideal observer would assume. Here we test if this suboptimal integration can be explained by unequal weighting of information. Participants stroked 2 to 5 times across a virtual grating and judged the ridge period in a 2IFC task. We presented slightly discrepant period information in one of the strokes in the standard grating. Results show linearly decreasing weights of this information with spatio-temporal distance (number of intervening strokes) to the comparison grating. For each exploration extension (number of strokes), the stroke with the highest number of intervening strokes to the comparison was completely disregarded. The results are consistent with the notion that memory limitations are responsible for the unequal weights. This study raises the question of whether models of optimal integration should include memory decay as an additional source of variance and thus not expect equal weights.

    Lezkan, A. & Drewing, K. (2014). Unequal - but fair? Weights in the serial integration of haptic texture information. Haptics: Neuroscience, Devices, Modeling, and Applications (pp. 386-392). Springer: Heidelberg.

    The Zip file contains all data relative to the publication. The data of each participant is contained in a separate file.

    A description of the variables is contained in the file VARIABLE_CODES.txt
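
    For reference, the MLE ideal-observer benchmark follows from standard cue combination: fusing n equally reliable, independent estimates divides the variance by n, so the predicted JND falls with the square root of n. A quick illustration (the single-stroke JND is an arbitrary unit):

        import math

        jnd_single = 1.0                         # arbitrary single-stroke JND
        for n in range(2, 6):                    # 2 to 5 strokes, as in the task
            print(n, jnd_single / math.sqrt(n))  # optimal-observer prediction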

  16. Texture

    • opencontext.org
    Updated Dec 18, 2022
    + more versions
    Cite
    Douglas R Clark; Larry G Herr (2022). Texture [Dataset]. https://opencontext.org/predicates/cc6475cf-d9b7-46dc-91d8-04b3998bda93
    Explore at:
    Dataset updated
    Dec 18, 2022
    Dataset provided by
    Open Context
    Authors
    Douglas R Clark; Larry G Herr
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    An Open Context "predicates" dataset item. Open Context publishes structured data as granular, URL identified Web resources. This "Variables" record is part of the "Madaba Plains Project-`Umayri" data publication.

  17. SMAPVEX08 Soil Texture Data, Version 1

    • nsidc.org
    • search.dataone.org
    • +3 more
    + more versions
    Cite
    National Snow and Ice Data Center, SMAPVEX08 Soil Texture Data, Version 1 [Dataset]. http://doi.org/10.5067/4R2T2E348XG4
    Explore at:
    Dataset authored and provided by
    National Snow and Ice Data Center
    Description

    This data set contains soil texture data that were extracted from a multi-layer soil characteristics database for the conterminous United States and generated for each regional study area. Data are representative of the conditions present in the regional study areas during the general timeline of the Soil Moisture Active Passive Validation Experiment 2008 (SMAPVEX08) campaign.

  18. Data from: Going against the grain – Texture orientation affects direction...

    • data.niaid.nih.gov
    Updated Jan 24, 2020
    Cite
    Lezkan, Alexandra; Drewing, Knut (2020). Going against the grain – Texture orientation affects direction of exploratory movement [Dataset]. https://data.niaid.nih.gov/resources?id=zenodo_600700
    Explore at:
    Dataset updated
    Jan 24, 2020
    Dataset provided by
    Justus-Liebig University Giessen
    Authors
    Lezkan, Alexandra; Drewing, Knut
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    In haptic perception, sensory signals depend on how we actively move our hands. For textures with periodically repeating grooves, movement direction can determine temporal cues to spatial frequency. Moving in line with texture orientation does not generate temporal cues. In contrast, moving orthogonally to texture orientation maximizes the temporal frequency of stimulation, and thus optimizes temporal cues. Participants performed a spatial frequency discrimination task between stimuli of two types. The first type showed the described relationship between movement direction and temporal cues; the second stimulus type did not. We expected that when temporal cues can be optimized by moving in a certain direction, movements will be adjusted to this direction. However, movement adjustments were assumed to be based on sensory information, which accumulates over the exploration process. We analyzed 3 individual segments of the exploration process. As expected, participants only adjusted movement directions in the final exploration segment and only for the stimulus type in which movement direction influenced temporal cues. We conclude that sensory signals on the texture orientation are used online during exploration in order to adjust subsequent movements. Once sufficient sensory evidence on the texture orientation was accumulated, movements were directed to optimize temporal cues.

    Lezkan, A. & Drewing, K. (2016). Going against the grain – Texture orientation affects direction of exploratory movement, part I. Haptics: Perception, Devices, Control, and Applications (pp. 430-440).

    The Zip file contains all data relative to the publication.

    A description of the variables is contained in the file VARIABLE_CODES.txt

  19. A dataset of classification and lexicon about food textures

    • entrepot.recherche.data.gouv.fr
    tsv
    Updated Sep 21, 2022
    Cite
    Caroline Bondu; Michel Visalli; Christian Salles; Elisabeth Guichard; Magalie Weber (2022). A dataset of classification and lexicon about food textures [Dataset]. http://doi.org/10.57745/DFKFYL
    Explore at:
    tsv (15270)
    Available download formats
    Dataset updated
    Sep 21, 2022
    Dataset provided by
    Recherche Data Gouv
    Authors
    Caroline Bondu; Michel Visalli; Christian Salles; Elisabeth Guichard; Magalie Weber
    License

    https://spdx.org/licenses/etalab-2.0.html

    Description

    This dataset includes a classification and a lexicon of food textures that can be used for generic purposes in food science.

  20. Data from: TeXture Under eXplainable Insights

    • portalinvestigacio.uib.cat
    • zenodo.org
    Updated 2025
    Cite
    Miró Nicolau, Miquel; Jaume-i-Capó, Antoni; Moya-Alcover, Gabriel (2025). TeXture Under eXplainable Insights [Dataset]. https://portalinvestigacio.uib.cat/documentos/67a9c7bb19544708f8c70c36
    Explore at:
    Dataset updated
    2025
    Authors
    Miró Nicolau, Miquel; Jaume-i-Capó, Antoni; Moya-Alcover, Gabriel
    Description

    The TeXture Under eXplainable Insights (TXUXI) dataset family provides synthetic datasets designed to evaluate eXplainable Artificial Intelligence (XAI) methods using ground truth explanations. It includes three versions: TXUXIv1, TXUXIv2, and TXUXIv3, each progressively increasing in complexity to test the robustness of XAI approaches.

    The datasets consist of images featuring geometric shapes such as crosses, squares, and circles, with controlled variations in position, size, and intensity. The backgrounds vary in complexity: TXUXIv1 includes uniform line patterns, TXUXIv2 uses a consistent natural texture (wood) sourced from the Describable Textures Dataset (DTD), and TXUXIv3 features highly diverse natural textures from the DTD, encompassing 5,640 unique backgrounds.

    Each dataset comprises 52,000 samples, with 50,000 allocated for training and 2,000 for validation. Ground truth explanations are provided, enabling a controlled and objective evaluation of XAI methods under different scenarios. The datasets were designed to analyze the fidelity of XAI methods while addressing common challenges such as noise generation and sensitivity to out-of-distribution (OOD) samples.
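
    For intuition about the construction (an illustrative toy, not the authors' generator; sizes and intensities are invented), a TXUXI-style sample pairs an image with the ground-truth mask a faithful explanation should highlight:

        import numpy as np

        def make_sample(size=64, rng=np.random.default_rng(0)):
            """A square of controlled intensity at a random position on a
            noisy stand-in background, plus its ground-truth mask."""
            image = rng.normal(0.5, 0.1, (size, size))   # stand-in texture
            side = int(rng.integers(8, 20))
            y, x = (int(v) for v in rng.integers(0, size - side, 2))
            image[y:y + side, x:x + side] = rng.uniform(0.8, 1.0)
            mask = np.zeros((size, size), dtype=bool)
            mask[y:y + side, x:x + side] = True          # ground-truth explanation
            return image, mask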
