100+ datasets found
  1. USDA ARS Image Gallery

    • catalog.data.gov
    • agdatacommons.nal.usda.gov
    • +2 more
    Updated Apr 21, 2025
    + more versions
    Cite
    Agricultural Research Service (2025). USDA ARS Image Gallery [Dataset]. https://catalog.data.gov/dataset/usda-ars-image-gallery-7f166
    Explore at:
    17 scholarly articles cite this dataset
    Dataset updated
    Apr 21, 2025
    Dataset provided by
    Agricultural Research Service (https://www.ars.usda.gov/)
    Description

    This Image Gallery is provided as a complimentary source of high-quality digital photographs available from the Agricultural Research Service information staff. The photos (over 2,000 JPEGs) in the Image Gallery are copyright-free, public domain images unless otherwise indicated. Resources in this dataset: Resource Title: USDA ARS Image Gallery (web page). URL: https://www.ars.usda.gov/oc/images/image-gallery/ (over 2,000 copyright-free images from ARS staff).

  2. Pexels 110k 768p JPEG

    • kaggle.com
    zip
    Updated Dec 29, 2022
    + more versions
    Cite
    Innominate817 (2022). Pexels 110k 768p JPEG [Dataset]. https://www.kaggle.com/datasets/innominate817/pexels-110k-768p-min-jpg
    Explore at:
    zip (61,914,161,639 bytes)
    Dataset updated
    Dec 29, 2022
    Authors
    Innominate817
    License

    Attribution-ShareAlike 4.0 (CC BY-SA 4.0): https://creativecommons.org/licenses/by-sa/4.0/
    License information was derived automatically

    Description

    This dataset contains resized and cropped free-use stock photos and corresponding image attributes.

    All the images have minimum dimensions of 768p and maximum dimensions that are multiples of 32. Each one has a set of image attributes associated with it. Many entries are missing some values, but all should at least have a title.

    Depth maps are available as a separate dataset: Pexels 110k 768p JPEG Depth Maps.

    Sample Image with Depth Map

    https://github.com/cj-mills/pexels-dataset/raw/main/images/3185509-img-depth-pair-768p.png

    Sample Entry

    img_id: 3186010
    title: Pink and White Ice Cream Neon Signage
    aspect_ratio: 0.749809
    main_color: [128, 38, 77]
    colors: [#000000, #a52a2a, #bc8f8f, #c71585, #d02090, #d8bfd8]
    tags: [bright, chocolate, close-up, cold, cream, creamy, cup, dairy product, delicious, design, dessert, electricity, epicure, flavors, fluorescent, food, food photography, goody, hand, ice cream, icecream, illuminated, indulgence, light pink background, neon, neon lights, neon sign, pastry, pink background, pink wallpaper, scoop, sweet, sweets, tasty]
    adult: very_unlikely
    aperture: 1.8
    camera: iPhone X
    focal_length: 4.0
    google_place_id: ChIJkUjxJ7it1y0R4qOVTbWHlR4 ...
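
    The attribute fields above lend themselves to simple tabular filtering. As a minimal illustrative sketch only (the on-disk layout of the attribute table is an assumption; the file name attributes.csv and the list-as-string encoding of tags are hypothetical), one might do:

      # Hypothetical example: filter the Pexels attribute table with pandas.
      import ast
      import pandas as pd

      df = pd.read_csv("attributes.csv")  # assumed export of the image attributes

      # Assume tags are stored as Python-style list strings, e.g. "['food', 'neon']".
      df["tags"] = df["tags"].fillna("[]").apply(ast.literal_eval)

      mask = (
          (df["aspect_ratio"] < 1.0)                                   # portrait-ish photos
          & df["camera"].fillna("").str.contains("iPhone", case=False)
          & df["tags"].apply(lambda tags: "food" in tags)
      )
      print(df.loc[mask, ["img_id", "title", "aspect_ratio", "camera"]].head())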
  3. German Image Captioning Dataset

    • futurebeeai.com
    wav
    Updated Aug 1, 2022
    + more versions
    Cite
    FutureBee AI (2022). German Image Captioning Dataset [Dataset]. https://www.futurebeeai.com/dataset/multi-modal-dataset/german-image-caption-dataset
    Explore at:
    wav
    Dataset updated
    Aug 1, 2022
    Dataset provided by
    FutureBeeAI
    Authors
    FutureBee AI
    License

    https://www.futurebeeai.com/policies/ai-data-license-agreement

    Dataset funded by
    FutureBeeAI
    Description

    Introduction

    Welcome to the German Language Image Captioning Dataset, a collection of images with associated text captions intended to facilitate the development of AI models capable of generating high-quality captions for images. This dataset is meticulously crafted to support research and innovation in computer vision and natural language processing.

    Image Data

    This dataset features over 5,000 high-resolution images sourced from diverse categories and scenes. Each image is meticulously selected to encompass a wide array of contexts, objects, and environments, ensuring comprehensive coverage for training robust image captioning models.

    Sources: Images are sourced from public databases and proprietary collections.
    Clarity and Relevance: Each image is vetted for visual clarity and relevance, ensuring it accurately represents real-world scenarios.
    Copyright: All selected images are free from copyright restrictions, allowing for unrestricted use in research and development.
    Format: Images in the dataset are available in various formats like JPEG, PNG, and HEIC.
    Image Categories: The dataset spans a wide range of image categories to ensure thorough training, fine-tuning, and testing of image captioning models. Categories include:
    Daily Life: Images about household objects, activities, and daily routines.
    Nature and Environment: Images related to natural scenes, plants, animals, and weather.
    Technology and Gadgets: Images about electronic devices, tools, and machinery.
    Human Activities: Images about people, their actions, professions, and interactions.
    Geography and Landmarks: Images related to specific locations, landmarks, and geographic features.
    Food and Dining: Images about different foods, meals, and dining settings.
    Education: Images related to educational settings, materials, and activities.
    Sports and Recreation: Images about various sports, games, and recreational activities.
    Transportation: Images about vehicles, travel methods, and transportation infrastructure.
    Cultural and Historical: Images about cultural artifacts, historical events, and traditions.

    Caption Data

    Each image in the dataset is paired with a high-quality descriptive caption. These captions are carefully crafted to provide detailed and contextually rich descriptions of the images, enhancing the dataset's utility for training sophisticated image captioning algorithms.

    Caption Details:
    Human Generated: Each caption is written by native German speakers.
    Quality Assurance: Captions are meticulously reviewed for linguistic accuracy, coherence, and relevance to the corresponding images.
    Contextual Relevance: Captions are written with attention to the visual content of each image, including the objects, scenes, actions, and settings depicted.

    Metadata

    Each image-caption pair is accompanied by comprehensive metadata to facilitate informed decision-making in model development:

    Image File Name
    Category
    Caption
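
    As a minimal sketch only (not part of the dataset documentation; the CSV file name and the column names image_file_name, category, and caption are assumptions based on the metadata fields listed above), the image-caption pairs could be loaded like this:

      # Hypothetical loader for image-caption pairs from a metadata CSV.
      import csv
      from pathlib import Path
      from PIL import Image

      def load_pairs(metadata_csv, image_dir):
          pairs = []
          with open(metadata_csv, newline="", encoding="utf-8") as f:
              for row in csv.DictReader(f):
                  img_path = Path(image_dir) / row["image_file_name"]  # assumed column name
                  pairs.append((img_path, row["category"], row["caption"]))
          return pairs

      for img_path, category, caption in load_pairs("metadata.csv", "images")[:3]:
          with Image.open(img_path) as im:
              print(img_path.name, im.size, category, caption[:60])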

    Usage and Applications

    The Image Captioning Dataset serves various applications across different domains:

    Training Image Captioning Models: Provides high-quality data for training and fine-tuning Generative AI models to generate accurate and

  4. Myocardial perfusion scintigraphy image database

    • physionet.org
    Updated Sep 9, 2025
    Cite
    Wesley Calixto; Solange Nogueira; Fernanda Luz; Thiago Fellipe Ortiz de Camargo (2025). Myocardial perfusion scintigraphy image database [Dataset]. http://doi.org/10.13026/ce2z-dw74
    Explore at:
    Dataset updated
    Sep 9, 2025
    Authors
    Wesley Calixto; Solange Nogueira; Fernanda Luz; Thiago Fellipe Ortiz de Camargo
    License

    Open Database License (ODbL) v1.0: https://www.opendatacommons.org/licenses/odbl/1.0/
    License information was derived automatically

    Description

    This database provides a collection of myocardial perfusion scintigraphy images in DICOM format with all metadata and segmentations (masks) in NIfTI format. The images were obtained from patients undergoing scintigraphy examinations to investigate cardiac conditions such as ischemia and myocardial infarction. The dataset encompasses a diversity of clinical cases, including various perfusion patterns and underlying cardiac conditions. All images have been properly anonymized, and the age range of the patients is from 20 to 90 years. This database represents a valuable source of information for researchers and healthcare professionals interested in the analysis and diagnosis of cardiac diseases. Moreover, it serves as a foundation for the development and validation of image processing algorithms and artificial intelligence techniques applied to cardiovascular medicine. Available for free on the PhysioNet platform, its aim is to promote collaboration and advance research in nuclear cardiology and cardiovascular medicine, while ensuring the replicability of studies.
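
    Since the images are DICOM and the segmentations are NIfTI, a typical first step is to read both and check that they line up. A minimal sketch, assuming pydicom and nibabel and hypothetical file names (consult the PhysioNet project page for the actual layout):

      # Hypothetical example: load one scintigraphy DICOM frame and its NIfTI mask.
      import numpy as np
      import pydicom
      import nibabel as nib

      ds = pydicom.dcmread("images/patient001/frame_0001.dcm")   # assumed path
      pixels = ds.pixel_array                                     # NumPy array of the frame

      mask_img = nib.load("masks/patient001/frame_0001.nii.gz")   # assumed path
      mask = np.asarray(mask_img.dataobj) > 0                     # binary segmentation

      print("image shape:", pixels.shape, "mask shape:", mask.shape)
      print("segmented fraction of the frame:", float(mask.mean()))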

  5. House Rooms & Streets Image Dataset

    • kaggle.com
    Updated Oct 14, 2022
    Cite
    Mike Mazurov (2022). House Rooms & Streets Image Dataset [Dataset]. https://www.kaggle.com/datasets/mikhailma/house-rooms-streets-image-dataset
    Explore at:
    Croissant (a format for machine-learning datasets; learn more at mlcommons.org/croissant)
    Dataset updated
    Oct 14, 2022
    Dataset provided by
    Kaggle (http://kaggle.com/)
    Authors
    Mike Mazurov
    License

    https://creativecommons.org/publicdomain/zero/1.0/

    Description

    This dataset contains two folders with images of different rooms of houses and of street views. The house data consists of a few categories, such as bath, bed, din, kitchen, and living. The street data consists of a few categories, such as apartment, church, garage, house, industrial, office building, retail, and roofs.

    I took pictures of rooms here and pictures of houses here, resized them to 224x224, removed Google watermarks, and merged the two datasets together.

    In general, I used this data for my own tasks, but decided that this dataset might be useful to someone else; if so, feel free to upvote me 🤗
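
    A minimal sketch (folder name is an assumption; the description only says there are two folders of 224x224 images) of loading one of the folders with torchvision's ImageFolder, assuming a class-per-subfolder layout:

      # Hypothetical example: load the house-rooms folder for classification.
      import torch
      from torchvision import datasets, transforms

      tfm = transforms.Compose([
          transforms.Resize((224, 224)),   # images are already 224x224; kept as a safeguard
          transforms.ToTensor(),
      ])

      house_ds = datasets.ImageFolder("house_data", transform=tfm)   # assumed folder name
      loader = torch.utils.data.DataLoader(house_ds, batch_size=32, shuffle=True)

      images, labels = next(iter(loader))
      print(images.shape, house_ds.classes)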

  6. Optical Coherence Tomography Image Retinal Database

    • openicpsr.org
    • search.gesis.org
    Updated Feb 15, 2019
    + more versions
    Cite
    Peyman Gholami; Vasudevan Lakshminarayanan (2019). Optical Coherence Tomography Image Retinal Database [Dataset]. http://doi.org/10.3886/E108503V1
    Explore at:
    Dataset updated
    Feb 15, 2019
    Dataset provided by
    University of Waterloo
    Authors
    Peyman Gholami; Vasudevan Lakshminarayanan
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    An open-source Optical Coherence Tomography image database containing retinal OCT images with various pathological conditions. Please use the following citation if you use the database: Peyman Gholami, Priyanka Roy, Mohana Kuppuswamy Parthasarathy, Vasudevan Lakshminarayanan, "OCTID: Optical Coherence Tomography Image Database", arXiv preprint arXiv:1812.07056 (2018). For more information and details about the database, see https://arxiv.org/abs/1812.07056

  7. Data from: MULTI-TEMPORAL REMOTE SENSING IMAGE CLASSIFICATION - A MULTI-VIEW...

    • catalog.data.gov
    • datasets.ai
    • +2 more
    Updated Apr 11, 2025
    + more versions
    Cite
    Dashlink (2025). MULTI-TEMPORAL REMOTE SENSING IMAGE CLASSIFICATION - A MULTI-VIEW APPROACH [Dataset]. https://catalog.data.gov/dataset/multi-temporal-remote-sensing-image-classification-a-multi-view-approach
    Explore at:
    Dataset updated
    Apr 11, 2025
    Dataset provided by
    Dashlink
    Description

    MULTI-TEMPORAL REMOTE SENSING IMAGE CLASSIFICATION - A MULTI-VIEW APPROACH
    Varun Chandola and Ranga Raju Vatsavai
    Abstract. Multispectral remote sensing images have been widely used for automated land use and land cover classification tasks. Often thematic classification is done using a single-date image; however, in many instances a single-date image is not informative enough to distinguish between different land cover types. In this paper we show how one can use multiple images, collected at different times of year (for example, during the crop growing season), to learn a better classifier. We propose two approaches, an ensemble-of-classifiers approach and a co-training based approach, and show how both of these methods outperform a straightforward stacked-vector approach often used in multi-temporal image classification. Additionally, the co-training based method addresses the challenge of limited labeled training data in supervised classification, as this classification scheme utilizes a large number of unlabeled samples (which come for free) in conjunction with a small set of labeled training data.
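
    To make the contrast between the stacked-vector baseline and an ensemble over dates concrete, here is an illustrative sketch (not the authors' code, and using synthetic random features, so both accuracies will sit near chance; it only demonstrates the two pipelines):

      # Synthetic multi-temporal pixels: X has shape (n_pixels, n_dates, n_bands).
      import numpy as np
      from sklearn.ensemble import RandomForestClassifier
      from sklearn.model_selection import train_test_split

      rng = np.random.default_rng(0)
      n_pixels, n_dates, n_bands = 2000, 3, 6
      X = rng.normal(size=(n_pixels, n_dates, n_bands))
      y = rng.integers(0, 4, size=n_pixels)          # four synthetic land-cover classes

      X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

      # 1) Stacked-vector approach: concatenate all dates into one feature vector.
      stacked = RandomForestClassifier(random_state=0).fit(X_tr.reshape(len(X_tr), -1), y_tr)
      print("stacked-vector accuracy:", stacked.score(X_te.reshape(len(X_te), -1), y_te))

      # 2) Ensemble of classifiers: one classifier per date, average the class probabilities.
      per_date = [RandomForestClassifier(random_state=d).fit(X_tr[:, d, :], y_tr)
                  for d in range(n_dates)]
      proba = np.mean([clf.predict_proba(X_te[:, d, :]) for d, clf in enumerate(per_date)], axis=0)
      print("per-date ensemble accuracy:", float((proba.argmax(axis=1) == y_te).mean()))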

  8. A virtual laboratory on cell division using a publicly-available image...

    • qubeshub.org
    Updated Aug 28, 2021
    Cite
    Eric Shelden*; Erika Offerdahl; Graham Johnson (2021). A virtual laboratory on cell division using a publicly-available image database [Dataset]. http://doi.org/10.24918/cs.2019.15
    Explore at:
    Dataset updated
    Aug 28, 2021
    Dataset provided by
    QUBES
    Authors
    Eric Shelden*; Erika Offerdahl; Graham Johnson
    Description

    Cell division is a key concept in cell biology. While there are many popular activities to teach students the stages of mitosis, most make use of simple schematics, cartoons, or textbook diagrams. Others engage students in acting out the stages, or modeling them with physical objects (e.g., noodles, pipe cleaners). These approaches are useful for developing student knowledge and comprehension of the stages of cell division, but do not readily convey the real-life processes of mitosis. Moreover, they do not teach students how cell biologists study these processes, nor the difficulties with imaging real cells. Here, we provide an activity to reinforce student knowledge of mitosis, demonstrate how data on mitosis and other dynamic cellular processes can be collected, and introduce methods of data analysis for real cellular images using research-quality digital images from a free public database. This activity guides students through a virtual experiment that can be easily scaled for large introductory classes or low-resource settings. The activity focuses on experimentally determining the timing of the stages of cell division, directing the attention of students to the tasks that are completed at each stage and promoting understanding of the underlying mechanisms. Before the experiment, the students generate testable predictions for the relative amount of time each step of mitosis takes, provide a mechanistic reason for their prediction, and explain how they will test their predictions using imaging data. Students then identify the stages of cell division in a curated set of digital images and determine how to convert their data into the relative amount of time for each phase of mitosis. Finally, students are asked to relate their findings to their original predictions, reinforcing their increasing understanding of the cell cycle. Students praised the practical application of their knowledge and development of image interpretation skills that would be used in a cell biology research setting.
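
    The conversion the students perform rests on a simple proportionality: if cells are photographed at random points in the cycle, the fraction of cells caught in a phase approximates the fraction of time spent in that phase. A toy illustration with made-up counts (not data from the activity):

      # Made-up classroom tallies of cells observed in each mitotic phase.
      phase_counts = {"prophase": 42, "metaphase": 18, "anaphase": 7, "telophase": 13}
      total = sum(phase_counts.values())
      mitosis_minutes = 60  # assumed total duration of mitosis, for illustration only

      for phase, n in phase_counts.items():
          fraction = n / total
          print(f"{phase:10s} {fraction:6.1%}  ~{fraction * mitosis_minutes:4.1f} min")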

  9. Data from: The HAM10000 dataset, a large collection of multi-source...

    • dataverse.harvard.edu
    • opendatalab.com
    • +1 more
    Updated Feb 7, 2023
    + more versions
    Cite
    Philipp Tschandl (2023). The HAM10000 dataset, a large collection of multi-source dermatoscopic images of common pigmented skin lesions [Dataset]. http://doi.org/10.7910/DVN/DBW86T
    Explore at:
    Croissant (a format for machine-learning datasets; learn more at mlcommons.org/croissant)
    Dataset updated
    Feb 7, 2023
    Dataset provided by
    Harvard Dataverse
    Authors
    Philipp Tschandl
    License

    https://dataverse.harvard.edu/api/datasets/:persistentId/versions/4.0/customlicense?persistentId=doi:10.7910/DVN/DBW86T

    Description

    Training of neural networks for automated diagnosis of pigmented skin lesions is hampered by the small size and lack of diversity of available datasets of dermatoscopic images. We tackle this problem by releasing the HAM10000 ("Human Against Machine with 10000 training images") dataset. We collected dermatoscopic images from different populations, acquired and stored by different modalities. The final dataset consists of 10015 dermatoscopic images which can serve as a training set for academic machine learning purposes. Cases include a representative collection of all important diagnostic categories in the realm of pigmented lesions: Actinic keratoses and intraepithelial carcinoma / Bowen's disease (akiec), basal cell carcinoma (bcc), benign keratosis-like lesions (solar lentigines / seborrheic keratoses and lichen-planus like keratoses, bkl), dermatofibroma (df), melanoma (mel), melanocytic nevi (nv), and vascular lesions (angiomas, angiokeratomas, pyogenic granulomas and hemorrhage, vasc). More than 50% of lesions are confirmed through histopathology (histo); the ground truth for the rest of the cases is either follow-up examination (follow_up), expert consensus (consensus), or confirmation by in-vivo confocal microscopy (confocal). The dataset includes lesions with multiple images, which can be tracked by the lesion_id column within the HAM10000_metadata file.

    Due to upload size limitations, images are stored in two files: HAM10000_images_part1.zip (5000 JPEG files) and HAM10000_images_part2.zip (5015 JPEG files).

    Additional data for evaluation purposes: The HAM10000 dataset served as the training set for the ISIC 2018 challenge (Task 3), with the same sources contributing the majority of the validation and test set as well. The test-set images are available herein as ISIC2018_Task3_Test_Images.zip (1511 images); the ground truth, in the same format as the HAM10000 data (public since 2023), is available as ISIC2018_Task3_Test_GroundTruth.csv. The ISIC-Archive also provides the challenge images and metadata (training, validation, test) at their "ISIC Challenge Datasets" page.

    Comparison to physicians: Test-set evaluations of the ISIC 2018 challenge were compared to physicians on an international scale, where the majority of challenge participants outperformed expert readers: Tschandl P. et al., Lancet Oncol 2019.

    Human-computer collaboration: The test-set images were also used in a study comparing different methods and scenarios of human-computer collaboration: Tschandl P. et al., Nature Medicine 2020. The following corresponding metadata is available herein:

    ISIC2018_Task3_Test_NatureMedicine_AI_Interaction_Benefit.csv: Human ratings for test images with and without interaction with a ResNet34 CNN (malignancy probability, multi-class probability, CBIR) or human-crowd multi-class probabilities. These data were collected for and analyzed in Tschandl P. et al., Nature Medicine 2020; please refer to this publication when using the data. Some details on the abbreviated column headings:
    image_id: the ISIC image_id of an image at the time of the study. There should be no duplications in the combination image_id & interaction_modality. As not every image was shown with every interaction modality, not every combination is present.
    prob_m_dx_akiec, ...: "m" stands for machine probabilities. Values are post-softmax values, and "_mal" is all malignant classes summed.
    prob_h_dx_akiec, ...: "h" stands for human probabilities. Values are aggregated percentages of human ratings from past studies distinguishing between seven classes. Note there is no "prob_h_mal", as this was none of the tested interaction modalities.
    user_dx_without_interaction_akiec, ...: number of participants choosing this diagnosis without interaction.
    user_dx_with_interaction_akiec, ...: number of participants choosing this diagnosis with interaction.

    HAM10000_segmentations_lesion_tschandl.zip: To evaluate regions of CNN activations in Tschandl P. et al., Nature Medicine 2020 (please refer to this publication when using the data), a single dermatologist (Tschandl P) created binary segmentation masks for all 10015 images from the HAM10000 dataset. Masks were initialized with the segmentation network described by Tschandl et al., Computers in Biology and Medicine 2019, and subsequently verified, corrected, or replaced via the free-hand selection tool in FIJI.
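
    Because the same lesion can appear in several images (tracked by lesion_id), splits for model training are usually made at the lesion level rather than the image level. A minimal sketch, assuming the HAM10000_metadata file is a CSV with lesion_id, image_id, and dx columns (file and column names taken from the description above where possible, otherwise assumed):

      # Hypothetical example: lesion-level train/validation split of HAM10000 metadata.
      import pandas as pd
      from sklearn.model_selection import GroupShuffleSplit

      meta = pd.read_csv("HAM10000_metadata.csv")        # assumed file name
      print(meta["dx"].value_counts())                   # akiec, bcc, bkl, df, mel, nv, vasc

      splitter = GroupShuffleSplit(n_splits=1, test_size=0.2, random_state=0)
      train_idx, val_idx = next(splitter.split(meta, groups=meta["lesion_id"]))
      train, val = meta.iloc[train_idx], meta.iloc[val_idx]

      # No lesion appears in both splits, so near-duplicate images cannot leak.
      assert set(train["lesion_id"]).isdisjoint(set(val["lesion_id"]))
      print(len(train), "training images,", len(val), "validation images")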

  10. Getty Images Holdings Inc. - Investments

    • macro-rankings.com
    csv, excel
    Updated Aug 23, 2025
    + more versions
    Cite
    macro-rankings (2025). Getty Images Holdings Inc. - Investments [Dataset]. https://www.macro-rankings.com/markets/stocks/gety-nyse/cashflow-statement/investments
    Explore at:
    csv, excel
    Dataset updated
    Aug 23, 2025
    Dataset authored and provided by
    macro-rankings
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Area covered
    United States
    Description

    Investments Time Series for Getty Images Holdings Inc. Getty Images Holdings, Inc. provides creative and editorial visual content solutions in the Americas, Europe, the Middle East, Africa, and Asia-Pacific. It offers creative content, which includes royalty-free photos, illustrations, vectors, videos, and generative AI services; editorial content, which consists of photos and videos covering entertainment, sports, and news; and other products and services, such as music licensing, digital asset management, distribution services, print sales, and data access and/or licensing. The company also provides its stills, images, and videos through its website Gettyimages.com, which serves enterprise agency, media, and corporate customers; iStock.com, an e-commerce platform that primarily serves small and medium-sized businesses, including the freelance market; Unsplash.com, a platform that offers free stock photo downloads and paid subscriptions to high-growth prosumer and semi-professional creator segments; and Unsplash+, an unlimited paid subscription that provides access to model-released content with expanded legal protections. In addition, it maintains privately owned photographic archives covering news, sport, and entertainment, as well as a variety of subjects, including lifestyle, business, science, health, wellness, beauty, sports, transportation, and travel. The company was founded in 1995 and is headquartered in Seattle, Washington.

  11. English Product Image OCR Dataset

    • futurebeeai.com
    wav
    Updated Aug 1, 2022
    Cite
    FutureBee AI (2022). English Product Image OCR Dataset [Dataset]. https://www.futurebeeai.com/dataset/ocr-dataset/english-product-image-ocr-dataset
    Explore at:
    wav
    Dataset updated
    Aug 1, 2022
    Dataset provided by
    FutureBeeAI
    Authors
    FutureBee AI
    License

    https://www.futurebeeai.com/policies/ai-data-license-agreement

    Dataset funded by
    FutureBeeAI
    Description

    Introducing the English Product Image Dataset - a diverse and comprehensive collection of images meticulously curated to propel the advancement of text recognition and optical character recognition (OCR) models designed specifically for the English language.

    Dataset Content & Diversity

    Containing a total of 2000 images, this English OCR dataset offers diverse distribution across different types of front images of Products. In this dataset, you'll find a variety of text that includes product names, taglines, logos, company names, addresses, product content, etc. Images in this dataset showcase distinct fonts, writing formats, colors, designs, and layouts.

    To ensure the diversity of the dataset and to build a robust text recognition model we allow limited (less than five) unique images from a single resource. Stringent measures have been taken to exclude any personally identifiable information (PII) and to ensure that in each image a minimum of 80% of space contains visible English text.

    Images have been captured under varying lighting conditions – both day and night – along with different capture angles and backgrounds, to build a balanced OCR dataset. The collection features images in portrait and landscape modes.

    All these images were captured by native English people to ensure the text quality, avoid toxic content and PII text. We used the latest iOS and Android mobile devices above 5MP cameras to click all these images to maintain the image quality. In this training dataset images are available in both JPEG and HEIC formats.

    Metadata

    Along with the image data, you will also receive detailed structured metadata in CSV format. For each image, it includes metadata like image orientation, country, language, and device information. Each image is properly renamed to correspond to the metadata.

    The metadata serves as a valuable tool for understanding and characterizing the data, facilitating informed decision-making in the development of English text recognition models.
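
    A minimal sketch (the CSV file name and exact column names are assumptions based on the fields mentioned above) of summarizing the metadata before training:

      # Hypothetical example: summarize the per-image metadata CSV.
      import pandas as pd

      meta = pd.read_csv("metadata.csv")                            # assumed file name
      print(meta.groupby(["device", "image_orientation"]).size())   # assumed column names
      print("countries:", meta["country"].unique(), "languages:", meta["language"].unique())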

    Update & Custom Collection

    We're committed to expanding this dataset by continuously adding more images with the assistance of our native English crowd community.

    If you require a custom product image OCR dataset tailored to your guidelines or specific device distribution, feel free to contact us. We're equipped to curate specialized data to meet your unique needs.

    Furthermore, we can annotate or label the images with bounding box or transcribe the text in the image to align with your specific project requirements using our crowd community.

    License

    This Image dataset, created by FutureBeeAI, is now available for commercial use.

    Conclusion:

    Leverage the power of this product image OCR dataset to elevate the training and performance of text recognition, text detection, and optical character recognition models within the realm of the English language. Your journey to enhanced language understanding and processing starts here.

  12. Spanish Product Image OCR Dataset

    • futurebeeai.com
    wav
    Updated Aug 1, 2022
    + more versions
    Cite
    FutureBee AI (2022). Spanish Product Image OCR Dataset [Dataset]. https://www.futurebeeai.com/dataset/ocr-dataset/spanish-product-image-ocr-dataset
    Explore at:
    wav
    Dataset updated
    Aug 1, 2022
    Dataset provided by
    FutureBeeAI
    Authors
    FutureBee AI
    License

    https://www.futurebeeai.com/policies/ai-data-license-agreement

    Dataset funded by
    FutureBeeAI
    Description

    Introducing the Spanish Product Image Dataset - a diverse and comprehensive collection of images meticulously curated to propel the advancement of text recognition and optical character recognition (OCR) models designed specifically for the Spanish language.

    Dataset Content & Diversity

    Containing a total of 2000 images, this Spanish OCR dataset offers diverse distribution across different types of front images of Products. In this dataset, you'll find a variety of text that includes product names, taglines, logos, company names, addresses, product content, etc. Images in this dataset showcase distinct fonts, writing formats, colors, designs, and layouts.

    To ensure the diversity of the dataset and to build a robust text recognition model we allow limited (less than five) unique images from a single resource. Stringent measures have been taken to exclude any personally identifiable information (PII) and to ensure that in each image a minimum of 80% of space contains visible Spanish text.

    Images have been captured under varying lighting conditions – both day and night – along with different capture angles and backgrounds, to build a balanced OCR dataset. The collection features images in portrait and landscape modes.

    All these images were captured by native Spanish people to ensure the text quality, avoid toxic content and PII text. We used the latest iOS and Android mobile devices above 5MP cameras to click all these images to maintain the image quality. In this training dataset images are available in both JPEG and HEIC formats.

    Metadata

    Along with the image data, you will also receive detailed structured metadata in CSV format. For each image, it includes metadata like image orientation, country, language, and device information. Each image is properly renamed to correspond to the metadata.

    The metadata serves as a valuable tool for understanding and characterizing the data, facilitating informed decision-making in the development of Spanish text recognition models.

    Update & Custom Collection

    We're committed to expanding this dataset by continuously adding more images with the assistance of our native Spanish crowd community.

    If you require a custom product image OCR dataset tailored to your guidelines or specific device distribution, feel free to contact us. We're equipped to curate specialized data to meet your unique needs.

    Furthermore, we can annotate or label the images with bounding box or transcribe the text in the image to align with your specific project requirements using our crowd community.

    License

    This Image dataset, created by FutureBeeAI, is now available for commercial use.

    Conclusion:

    Leverage the power of this product image OCR dataset to elevate the training and performance of text recognition, text detection, and optical character recognition models within the realm of the Spanish language. Your journey to enhanced language understanding and processing starts here.

  13. German Product Image OCR Dataset

    • futurebeeai.com
    wav
    Updated Aug 1, 2022
    + more versions
    Cite
    FutureBee AI (2022). German Product Image OCR Dataset [Dataset]. https://www.futurebeeai.com/dataset/ocr-dataset/german-product-image-ocr-dataset
    Explore at:
    wav
    Dataset updated
    Aug 1, 2022
    Dataset provided by
    FutureBeeAI
    Authors
    FutureBee AI
    License

    https://www.futurebeeai.com/policies/ai-data-license-agreement

    Dataset funded by
    FutureBeeAI
    Description

    Introducing the German Product Image Dataset - a diverse and comprehensive collection of images meticulously curated to propel the advancement of text recognition and optical character recognition (OCR) models designed specifically for the German language.

    Dataset Content & Diversity

    Containing a total of 2000 images, this German OCR dataset offers diverse distribution across different types of front images of Products. In this dataset, you'll find a variety of text that includes product names, taglines, logos, company names, addresses, product content, etc. Images in this dataset showcase distinct fonts, writing formats, colors, designs, and layouts.

    To ensure the diversity of the dataset and to build a robust text recognition model we allow limited (less than five) unique images from a single resource. Stringent measures have been taken to exclude any personally identifiable information (PII) and to ensure that in each image a minimum of 80% of space contains visible German text.

    Images have been captured under varying lighting conditions – both day and night – along with different capture angles and backgrounds, to build a balanced OCR dataset. The collection features images in portrait and landscape modes.

    All these images were captured by native German people to ensure the text quality, avoid toxic content and PII text. We used the latest iOS and Android mobile devices above 5MP cameras to click all these images to maintain the image quality. In this training dataset images are available in both JPEG and HEIC formats.

    Metadata

    Along with the image data, you will also receive detailed structured metadata in CSV format. For each image, it includes metadata like image orientation, country, language, and device information. Each image is properly renamed to correspond to the metadata.

    The metadata serves as a valuable tool for understanding and characterizing the data, facilitating informed decision-making in the development of German text recognition models.

    Update & Custom Collection

    We're committed to expanding this dataset by continuously adding more images with the assistance of our native German crowd community.

    If you require a custom product image OCR dataset tailored to your guidelines or specific device distribution, feel free to contact us. We're equipped to curate specialized data to meet your unique needs.

    Furthermore, we can annotate or label the images with bounding box or transcribe the text in the image to align with your specific project requirements using our crowd community.

    License

    This Image dataset, created by FutureBeeAI, is now available for commercial use.

    Conclusion:

    Leverage the power of this product image OCR dataset to elevate the training and performance of text recognition, text detection, and optical character recognition models within the realm of the German language. Your journey to enhanced language understanding and processing starts here.

  14. Happy Image Classification Dataset

    • images.cv
    zip
    Updated Nov 29, 2025
    Cite
    (2025). Happy Image Classification Dataset [Dataset]. https://images.cv/dataset/happy-image-classification-dataset
    Explore at:
    zip
    Dataset updated
    Nov 29, 2025
    License

    https://images.cv/license

    Description

    Labeled images of the "happy" class, suitable for training and evaluating computer vision and deep learning models.

  15. Punjabi Product Image OCR Dataset

    • futurebeeai.com
    wav
    Updated Aug 1, 2022
    + more versions
    Cite
    FutureBee AI (2022). Punjabi Product Image OCR Dataset [Dataset]. https://www.futurebeeai.com/dataset/ocr-dataset/punjabi-product-image-ocr-dataset
    Explore at:
    wav
    Dataset updated
    Aug 1, 2022
    Dataset provided by
    FutureBeeAI
    Authors
    FutureBee AI
    License

    https://www.futurebeeai.com/policies/ai-data-license-agreement

    Dataset funded by
    FutureBeeAI
    Description

    Introducing the Punjabi Product Image Dataset - a diverse and comprehensive collection of images meticulously curated to propel the advancement of text recognition and optical character recognition (OCR) models designed specifically for the Punjabi language.

    Dataset Content & Diversity

    Containing a total of 2000 images, this Punjabi OCR dataset offers diverse distribution across different types of front images of Products. In this dataset, you'll find a variety of text that includes product names, taglines, logos, company names, addresses, product content, etc. Images in this dataset showcase distinct fonts, writing formats, colors, designs, and layouts.

    To ensure the diversity of the dataset and to build a robust text recognition model we allow limited (less than five) unique images from a single resource. Stringent measures have been taken to exclude any personally identifiable information (PII) and to ensure that in each image a minimum of 80% of space contains visible Punjabi text.

    Images have been captured under varying lighting conditions – both day and night – along with different capture angles and backgrounds, to build a balanced OCR dataset. The collection features images in portrait and landscape modes.

    All these images were captured by native Punjabi people to ensure the text quality, avoid toxic content and PII text. We used the latest iOS and Android mobile devices above 5MP cameras to click all these images to maintain the image quality. In this training dataset images are available in both JPEG and HEIC formats.

    Metadata

    Along with the image data, you will also receive detailed structured metadata in CSV format. For each image, it includes metadata like image orientation, country, language, and device information. Each image is properly renamed to correspond to the metadata.

    The metadata serves as a valuable tool for understanding and characterizing the data, facilitating informed decision-making in the development of Punjabi text recognition models.

    Update & Custom Collection

    We're committed to expanding this dataset by continuously adding more images with the assistance of our native Punjabi crowd community.

    If you require a custom product image OCR dataset tailored to your guidelines or specific device distribution, feel free to contact us. We're equipped to curate specialized data to meet your unique needs.

    Furthermore, we can annotate or label the images with bounding box or transcribe the text in the image to align with your specific project requirements using our crowd community.

    License

    This Image dataset, created by FutureBeeAI, is now available for commercial use.

    Conclusion:

    Leverage the power of this product image OCR dataset to elevate the training and performance of text recognition, text detection, and optical character recognition models within the realm of the Punjabi language. Your journey to enhanced language understanding and processing starts here.

  16. Thai Product Image OCR Dataset

    • futurebeeai.com
    wav
    Updated Aug 1, 2022
    + more versions
    Cite
    FutureBee AI (2022). Thai Product Image OCR Dataset [Dataset]. https://www.futurebeeai.com/dataset/ocr-dataset/thai-product-image-ocr-dataset
    Explore at:
    wav
    Dataset updated
    Aug 1, 2022
    Dataset provided by
    FutureBeeAI
    Authors
    FutureBee AI
    License

    https://www.futurebeeai.com/policies/ai-data-license-agreement

    Dataset funded by
    FutureBeeAI
    Description

    Introducing the Thai Product Image Dataset - a diverse and comprehensive collection of images meticulously curated to propel the advancement of text recognition and optical character recognition (OCR) models designed specifically for the Thai language.

    Dataset Content & Diversity

    Containing a total of 2000 images, this Thai OCR dataset offers diverse distribution across different types of front images of Products. In this dataset, you'll find a variety of text that includes product names, taglines, logos, company names, addresses, product content, etc. Images in this dataset showcase distinct fonts, writing formats, colors, designs, and layouts.

    To ensure the diversity of the dataset and to build a robust text recognition model we allow limited (less than five) unique images from a single resource. Stringent measures have been taken to exclude any personally identifiable information (PII) and to ensure that in each image a minimum of 80% of space contains visible Thai text.

    Images have been captured under varying lighting conditions – both day and night – along with different capture angles and backgrounds, to build a balanced OCR dataset. The collection features images in portrait and landscape modes.

    All these images were captured by native Thai people to ensure the text quality, avoid toxic content and PII text. We used the latest iOS and Android mobile devices above 5MP cameras to click all these images to maintain the image quality. In this training dataset images are available in both JPEG and HEIC formats.

    Metadata

    Along with the image data, you will also receive detailed structured metadata in CSV format. For each image, it includes metadata like image orientation, country, language, and device information. Each image is properly renamed to correspond to the metadata.

    The metadata serves as a valuable tool for understanding and characterizing the data, facilitating informed decision-making in the development of Thai text recognition models.

    Update & Custom Collection

    We're committed to expanding this dataset by continuously adding more images with the assistance of our native Thai crowd community.

    If you require a custom product image OCR dataset tailored to your guidelines or specific device distribution, feel free to contact us. We're equipped to curate specialized data to meet your unique needs.

    Furthermore, we can annotate or label the images with bounding box or transcribe the text in the image to align with your specific project requirements using our crowd community.

    License

    This Image dataset, created by FutureBeeAI, is now available for commercial use.

    Conclusion:

    Leverage the power of this product image OCR dataset to elevate the training and performance of text recognition, text detection, and optical character recognition models within the realm of the Thai language. Your journey to enhanced language understanding and processing starts here.

  17. Anime Images Dataset

    • kaggle.com
    zip
    Updated Jun 1, 2023
    Cite
    Banana_Leopard (2023). Anime Images Dataset [Dataset]. https://www.kaggle.com/datasets/diraizel/anime-images-dataset
    Explore at:
    zip (910,502,838 bytes)
    Dataset updated
    Jun 1, 2023
    Authors
    Banana_Leopard
    License

    https://creativecommons.org/publicdomain/zero/1.0/

    Description

    Context

    This dataset contains anime images for 231 different anime, with approximately 380 images for each of those anime. Please note that you might need to clean the image directories a bit, since the images might contain merchandise and live-action photos in addition to the actual anime itself.

    Scripts

    If you'd like to take a look at the scripts used to make this dataset, you can find them on this GitHub repo.
    Feel free to extend it, scrape your own images, etc. etc.

    Inspiration

    As a big anime fan, I found a lot of anime-related datasets on Kaggle. I was, however, disappointed to find no dataset containing anime-specific images for popular anime. Some other great datasets that I've been inspired by include: Top 250 Anime 2023, Anime Recommendations Database, Anime Recommendation Database 2020, Anime Face Dataset, and Safebooru - Anime Image Metadata.

    Process

    1. You need a list of anime to scrape it. You can either:
      • Make your own list. This is what I do in the directory called "scraped_anime_list".
      • Use someone else's list. This is what I do in the directory called "kaggle_anime_list" and "top_anime_list".
    2. To be honest, I wanted to make my own list. To build it, I used JikanPy, the Python wrapper for the unofficial MAL (MyAnimeList) API; JikanPy scrapes MAL (a sketch using the underlying Jikan API appears after this list).
    3. Anime on MAL have a unique identifier called an anime ID; think of this as a unique number for each anime. The IDs are supposed to be sequential, but there are a lot of gaps from one valid anime ID to the next, which I discovered based on this post.
    4. These IDs can go from 1 to 100,000 and maybe beyond. However, I decided to go through the anime IDs one by one from 1-50,000 and retrieve the id, rank, and anime_name. This is what you will find in the folder called "scraped_anime_list". Note that I prefer using the English name of the anime if it exists, and if it doesn't I get the Japanese name. Please use this list to obtain the anime IDs if you intend to scrape MAL yourself; it will save you a LOT of time.
    5. I thought that someone else might've gone through the same process and, voilà, I found the MyAnimeList Dataset on Kaggle. I didn't want to wait for my scraper to finish scraping, so I decided to use the "anime_cleaned.csv" version of this list. The lists from this dataset are what you find in the "kaggle_anime_list" folder.
    6. Cleaning anime names is a task in and of itself. Within the GitHub repo, refer to the file called "notes_and_todo.md" to look at all the cleaning troubles. I tried my best to remove all:
      • Anime Movies: Since you have for instance One Piece (the anime) and One Piece Movie 1, One Piece Movie 2, and so on.
      • Seasons: MAL is an anime ranker. Different anime seasons can show up on the list with different ranks. I retain the original anime name (the most basic one; for instance, just "Gintama" instead of "Gintama Season 4").
    7. Ultimately, I manually curated around 300 anime names, which reduced to 231 after removing duplicates, since after the curation, "Gintama" and "Gintama: Enchousen" would both be named "Gintama". This list with the duplicates is what you find in the file called "UsableAnimeList.xlsx" within the "top_anime_list" folder.
    8. This list is then rid of the duplicates and used to scrape the image URLs for each anime found in the folder called "anime_img_urls".
    9. These URLs are then used to scrape the anime images themselves, found in the folder called "anime_images".
    10. Also, the tags are only a guide; feel free to use this dataset for any deep learning task.
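
    The author's actual scripts live in the linked GitHub repo; purely as an illustrative sketch of step 4 (not the author's code, and using the public Jikan v4 REST endpoint directly rather than the JikanPy wrapper), walking a range of MAL IDs and skipping the gaps might look like this:

      # Illustrative sketch: collect id / rank / anime_name over a small ID range.
      import csv
      import time
      import requests

      rows = []
      for anime_id in range(1, 51):        # the author went from 1 to 50,000
          resp = requests.get(f"https://api.jikan.moe/v4/anime/{anime_id}", timeout=10)
          if resp.status_code == 404:      # gap in the ID sequence, skip it
              continue
          resp.raise_for_status()
          data = resp.json()["data"]
          name = data.get("title_english") or data.get("title")   # prefer the English name
          rows.append({"id": anime_id, "rank": data.get("rank"), "anime_name": name})
          time.sleep(1)                    # stay well under Jikan's rate limits

      with open("scraped_anime_list.csv", "w", newline="", encoding="utf-8") as f:
          writer = csv.DictWriter(f, fieldnames=["id", "rank", "anime_name"])
          writer.writeheader()
          writer.writerows(rows)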

    Sources

  18. Russian Product Image OCR Dataset

    • futurebeeai.com
    wav
    Updated Aug 1, 2022
    + more versions
    Cite
    FutureBee AI (2022). Russian Product Image OCR Dataset [Dataset]. https://www.futurebeeai.com/dataset/ocr-dataset/russian-product-image-ocr-dataset
    Explore at:
    wav
    Dataset updated
    Aug 1, 2022
    Dataset provided by
    FutureBeeAI
    Authors
    FutureBee AI
    License

    https://www.futurebeeai.com/policies/ai-data-license-agreement

    Dataset funded by
    FutureBeeAI
    Description

    Introducing the Russian Product Image Dataset - a diverse and comprehensive collection of images meticulously curated to propel the advancement of text recognition and optical character recognition (OCR) models designed specifically for the Russian language.

    Dataset Content & Diversity

    Containing a total of 2000 images, this Russian OCR dataset offers diverse distribution across different types of front images of Products. In this dataset, you'll find a variety of text that includes product names, taglines, logos, company names, addresses, product content, etc. Images in this dataset showcase distinct fonts, writing formats, colors, designs, and layouts.

    To ensure the diversity of the dataset and to build a robust text recognition model we allow limited (less than five) unique images from a single resource. Stringent measures have been taken to exclude any personally identifiable information (PII) and to ensure that in each image a minimum of 80% of space contains visible Russian text.

    Images have been captured under varying lighting conditions – both day and night – along with different capture angles and backgrounds, to build a balanced OCR dataset. The collection features images in portrait and landscape modes.

    All these images were captured by native Russian people to ensure the text quality, avoid toxic content and PII text. We used the latest iOS and Android mobile devices above 5MP cameras to click all these images to maintain the image quality. In this training dataset images are available in both JPEG and HEIC formats.

    Metadata

    Along with the image data, you will also receive detailed structured metadata in CSV format. For each image, it includes metadata like image orientation, country, language, and device information. Each image is properly renamed to correspond to the metadata.

    The metadata serves as a valuable tool for understanding and characterizing the data, facilitating informed decision-making in the development of Russian text recognition models.

    Update & Custom Collection

    We're committed to expanding this dataset by continuously adding more images with the assistance of our native Russian crowd community.

    If you require a custom product image OCR dataset tailored to your guidelines or specific device distribution, feel free to contact us. We're equipped to curate specialized data to meet your unique needs.

    Furthermore, we can annotate or label the images with bounding box or transcribe the text in the image to align with your specific project requirements using our crowd community.

    License

    This Image dataset, created by FutureBeeAI, is now available for commercial use.

    Conclusion:

    Leverage the power of this product image OCR dataset to elevate the training and performance of text recognition, text detection, and optical character recognition models within the realm of the Russian language. Your journey to enhanced language understanding and processing starts here.

  19. A normative database of free-breathing pediatric thoracic 4D dynamic MRI...

    • data.niaid.nih.gov
    • search.dataone.org
    • +1 more
    zip
    Updated May 9, 2024
    Cite
    Yubing Tong; Jayaram Udupa; Drew Torigian; Patrick Cahill (2024). A normative database of free-breathing pediatric thoracic 4D dynamic MRI images [Dataset]. http://doi.org/10.5061/dryad.vmcvdnczf
    Explore at:
    zip
    Dataset updated
    May 9, 2024
    Dataset provided by
    University of Pennsylvania
    Children's Hospital of Philadelphia
    Authors
    Yubing Tong; Jayaram Udupa; Drew Torigian; Patrick Cahill
    License

    https://spdx.org/licenses/CC0-1.0.html

    Description

    In pediatric patients with respiratory abnormalities, it is important to understand the alterations in regional dynamics of the lungs and other thoracoabdominal components, which in turn requires a quantitative understanding of what is considered as normal in healthy children. Currently, such a normative database of regional respiratory structure and function in healthy children does not exist. The shared open-source normative database is from our ongoing virtual growing child (VGC) project, which includes 4D dynamic magnetic resonance imaging (dMRI) images during one breathing cycle for each normal child and also 10 object segmentations at end expiration (EE) and end inspiration (EI) phases of the respiratory cycle in the 4D image. The lung volumes at EE and EI as well as the excursion volumes of chest wall and diaphragm from EE to EI, left and right separately, are also reported. The database has 2,820 3D segmentations from 141 healthy children, which to our knowledge is the largest dMRI dataset of healthy children to date. The database is unique and provides dMRI images, object segmentations, and quantitative regional respiratory measurement parameters of volumes for healthy children. The database can serve as a reference standard to quantify regional respiratory abnormalities in young patients with various respiratory conditions and facilitate treatment planning and response assessment. The database can be useful to advance future AI-based research on image-based object segmentation and analysis.

    Methods

    The normative database is from our ongoing NIH-funded virtual growing child (VGC) project. All dMRI scans are acquired from healthy children during free breathing. The dMRI protocol was as follows: 3T MRI scanner (Verio, Siemens, Erlangen, Germany), true-FISP bright-blood sequence, TR = 3.82 ms, TE = 1.91 ms, voxel size ~1×1×6 mm³, 320×320 matrix, bandwidth 258 Hz, and flip angle 76°. With recent advances, for each sagittal location across the thorax and abdomen, we acquire 40 2D slices over several tidal breathing cycles at ~480 ms/slice. On average, 35 sagittal locations are imaged, yielding a total of ~1400 2D MRI slices, with a resulting total scan time of 11-13 minutes for any particular subject. The collected dMRI goes through the procedure of 4D image construction, image processing, object segmentation, and then volumetric measurements from segmentations.

    (1) 4D image construction: For the acquired dMRI scans, we utilized an automated 4D image construction approach [1] to form one 4D image over one breathing cycle (consisting of typically 5-8 respiratory phases) from each acquired dMRI scan to represent the whole dynamic thoraco-abdominal body region. The algorithm selects roughly 175-280 slices (35 sagittal locations × 5-8 respiratory phases) from the 1400 acquired slices in an optimal manner using an optical flux method.

    (2) Image processing: Intensity standardization [2] is performed on every time point/3D volume of the 4D image.

    (3) Object segmentation: For each subject, there are 10 objects segmented at both EE and EI time points in this database. They include the thoracoabdominal skin outer boundary, left and right lungs, liver, spleen, left and right kidneys, diaphragm, and left and right hemi-diaphragms. All of the healthy children in this study have larger field of view (LFOV) images, which include the full thorax and abdomen in sagittal dMRI images. We used a pretrained U-Net based deep learning network to first segment all objects, and then all auto-segmentation results were visually checked and manually refined as needed, under supervision of a radiologist (DAT) with over 25 years of expertise in MRI and thoracoabdominal radiology. Manual segmentations have been performed for all objects in all data sets.

    (4) Volumetric measurements based on object segmentations for lung (left and right separately) volumes at end expiration/end inspiration, as well as for chest wall and diaphragm excursion volumes (left and right separately), are reported.

    [1] Hao Y, Udupa JK, Tong Y, Wu C, Li H, McDonough JM, Lott C, Qiu C, Galagedera N, Anari JB, Torigian DA, Cahill PJ. OFx: A method of 4D image construction from free-breathing non-gated MRI slice acquisitions of the thorax via optical flux. Med Image Anal. 2021;72:102088. doi: 10.1016/j.media.2021.102088. PubMed PMID: 34052519; PMCID: PMC8316349.
    [2] Nyul LG, Udupa JK. On standardizing the MR image intensity scale. Magn Reson Med. 1999;42(6):1072-81. doi: 10.1002/(sici)1522-2594(199912)42:6<1072::aid-mrm11>3.0.co;2-m. PubMed PMID: 10571928.
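
    A minimal sketch of the volumetric measurement described in step (4), computing a lung volume from a binary NIfTI mask as voxel count times voxel volume (the file name is hypothetical):

      # Volume = number of foreground voxels x voxel volume from the NIfTI header.
      import numpy as np
      import nibabel as nib

      mask_img = nib.load("subject001_left_lung_EE.nii.gz")   # assumed file name
      mask = np.asarray(mask_img.dataobj) > 0                  # binarize the segmentation

      dx, dy, dz = mask_img.header.get_zooms()[:3]             # voxel spacing in mm
      volume_ml = mask.sum() * (dx * dy * dz) / 1000.0         # 1 mL = 1000 mm^3
      print(f"left lung volume at EE: {volume_ml:.1f} mL")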

  20. Annotated Imagery Data | Object Detection Data | AI Training Data | Car images...

    • datarade.ai
    .json, .xml, .csv
    Updated Nov 12, 2022
    Cite
    Pixta AI (2022). Annotated Imagery Data |Object Detection Data| AI Training Data| Car images | 100,000 Stock Images [Dataset]. https://datarade.ai/data-products/car-datasets-in-multiple-scenes-pixta-ai
    Explore at:
    .json, .xml, .csv
    Dataset updated
    Nov 12, 2022
    Dataset authored and provided by
    Pixta AI
    Area covered
    Taiwan, Korea (Republic of), China, Japan, Hong Kong
    Description
    1. Overview
    This dataset is a collection of 100,000+ images of cars in multiple scenes that are ready to use for optimizing the accuracy of computer vision models. All of the content is sourced from PIXTA's stock library of 100M+ Asian-featured images and videos. PIXTA is the largest platform of visual materials in the Asia-Pacific region, offering fully managed services, high-quality content and data, and powerful tools for businesses & organisations to enable their creative and machine learning projects.

    2. Annotated Imagery Data of Car Images
    This dataset contains 4,000+ images of cars. Each data set is supported by both an AI and a human review process to ensure labelling consistency and accuracy. Contact us for more custom datasets.

    3. About PIXTA
    PIXTASTOCK is the largest Asian-featured stock platform, providing data, content, tools, and services since 2005. PIXTA has 15 years of experience integrating advanced AI technology in managing, curating, and processing over 100M visual materials and serving global leading brands for their creative and data demands.
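
    The listing does not describe the annotation schema behind the .json/.xml/.csv downloads. Purely as a rough sketch under an explicit assumption (a COCO-style JSON file, a common but unconfirmed convention for object-detection data), a first look could be:

      # Assumed COCO-style annotations.json; the provider's real schema may differ.
      import json
      from collections import Counter

      with open("annotations.json", encoding="utf-8") as f:
          coco = json.load(f)

      categories = {c["id"]: c["name"] for c in coco["categories"]}
      box_counts = Counter(categories[a["category_id"]] for a in coco["annotations"])
      print("images:", len(coco["images"]))
      print("bounding boxes per category:", dict(box_counts))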
