Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
## Overview
Vision Test is a dataset for object detection tasks - it contains Fish annotations for 637 images.
## Getting Started
You can download this dataset for use within your own projects, or fork it into a workspace on Roboflow to create your own model.
## License
This dataset is available under the [CC BY 4.0 license](https://creativecommons.org/licenses/by/4.0/).
CC0 1.0 Universal Public Domain Dedication: https://creativecommons.org/publicdomain/zero/1.0/
License information was derived automatically
The Street View House Numbers (SVHN) dataset contains over 600,000 digit images cropped from house numbers in Google Street View imagery. In its cropped-digits format it is split into a training set of 73,257 images, a test set of 26,032 images, and an additional "extra" set of 531,131 somewhat easier images. The images are all 32 x 32 pixels in size and are in colour (RGB). The dataset is used to train and evaluate machine learning models for the task of digit recognition.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
## Overview
Raw Test Computer Vision Project is a dataset for object detection tasks - it contains Human annotations for 623 images.
## Getting Started
You can download this dataset for use within your own projects, or fork it into a workspace on Roboflow to create your own model.
## License
This dataset is available under the [CC BY 4.0 license](https://creativecommons.org/licenses/by/4.0/).
This dataset tracks the number of days since the row count on a dataset asset has changed. Its purpose is to ensure datasets are updating as expected. This dataset is identical to the Socrata Asset Inventory with added Checkpoint Date and Days Since Row Count Change attributes.
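A minimal sketch of how such a staleness check could be computed from a history of (date, row count) observations. The function name and sample data are illustrative only, not the actual Socrata schema:

```python
from datetime import date

def days_since_row_count_change(history, today):
    """Given (date, row_count) pairs sorted by date, return the number
    of days from the most recent row-count change to `today`."""
    history = sorted(history)
    current = history[-1][1]
    # Walk backwards to find the last observation whose count differs
    # from the current one; the change happened at the next observation.
    change_date = history[0][0]
    for i in range(len(history) - 1, 0, -1):
        if history[i - 1][1] != current:
            change_date = history[i][0]
            break
    return (today - change_date).days

counts = [
    (date(2024, 1, 1), 100),
    (date(2024, 1, 5), 100),
    (date(2024, 1, 9), 120),   # row count changed here
    (date(2024, 1, 12), 120),
]
print(days_since_row_count_change(counts, date(2024, 1, 15)))  # → 6
```

If the row count has never changed, the function falls back to the earliest observation, so a dataset that has been static since ingestion reports a large staleness value.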
test-viewer-false
A UV script for hfjobs.
Usage
```shell
hfjobs run ghcr.io/astral-sh/uv:python3.12 \
  uv run https://huggingface.co/datasets/davanstrien/test-viewer-false/resolve/main/script.py
```
Learn More
Learn more about UV scripts in the UV documentation.
Script Details
Script: script.py
Description: Template UV script for hfjobs.
Created with hfjobs
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Open Chrono-Morph Viewer (OCMV) is an open-source image visualization and animation platform handling volumetric time series, especially those containing unequally sampled volumes, such as those produced by 4D (3D+time) cardiac alignment. OCMV is available at https://github.com/ShangWangLab/OpenChronoMorphViewer. This dataset is for demonstrating OCMV’s unique timeline features. It shows the dynamics of the embryonic mouse heart around embryonic day 8.5, consisting of 150 NRRD volumes spanning two timescales. The fast timescale represents the heartbeat phase over the complete cardiac cycle, while the slow timescale represents the development at three timepoints spanning two hours. Across the three developmental timepoints, the number of volumes sampling the heartbeat changes, as does the volume size (number of voxels). The data have two channels: the first channel represents the tissue structure, and the second channel represents the blood flow velocity component along the vertical axis spanning ±9.8 mm/s. The acquisition and processing of this dataset followed the methods described in https://doi.org/10.1364/BOE.475027. All animal manipulations were approved by the Institutional Animal Care and Use Committee at Baylor College of Medicine, and the experiments followed the approved procedures and guidelines.
Acknowledgement: We thank Dr. Irina V. Larina (Baylor College of Medicine) for her support with the experiment.
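The two-timescale layout described above can be sketched as a mapping from a flat volume index to a (developmental timepoint, heartbeat phase) pair. The per-timepoint phase counts below are hypothetical placeholders that merely sum to the stated 150 volumes; the real dataset's counts differ:

```python
# Hypothetical per-timepoint volume counts summing to 150. The actual
# dataset varies the number of heartbeat phases per developmental
# timepoint, but the exact counts are not stated in this description.
PHASES_PER_TIMEPOINT = [40, 50, 60]

def flat_to_timescales(index, counts=PHASES_PER_TIMEPOINT):
    """Map a flat volume index to (developmental timepoint, heartbeat phase)."""
    for timepoint, n in enumerate(counts):
        if index < n:
            return timepoint, index
        index -= n
    raise IndexError("volume index out of range")

print(flat_to_timescales(0))    # → (0, 0)
print(flat_to_timescales(95))   # → (2, 5)
```

The unequal phase counts are exactly what makes such data awkward for viewers that assume a regular 4D grid, which is the gap OCMV's timeline features address.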
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Test data for the software demo of the manuscript 'Calibration-free estimation of field dependent aberrations for single molecule localization microscopy across large fields of view'. This dataset contains the first 800 frames of the 3D microtubule dataset from Figure 2 of the manuscript.
The data are raw Single Molecule Localization Microscopy recordings of microtubules in HeLa cells over a large 97 x 97 μm field of view. The images were acquired with two cylindrical lenses at the Fourier plane of the microscope to introduce astigmatism, in order to demonstrate the estimation of field-dependent aberrations from single-molecule localizations.
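For context, astigmatic 3D localization recovers the axial position of an emitter from the difference between the PSF widths along x and y. The sketch below uses a deliberately simplified linear calibration model with made-up constants; it is not the manuscript's calibration-free, field-dependent method:

```python
def estimate_z(wx, wy, slope=0.05):
    """Estimate axial position z (in calibration units) from measured
    PSF widths along x and y.

    Assumes a toy linear astigmatic model around focus:
        wx(z) = w0 + slope * z,   wy(z) = w0 - slope * z,
    so the width difference is proportional to z. `slope` is a
    hypothetical calibration constant, not from the manuscript.
    """
    return (wx - wy) / (2 * slope)

# An in-focus emitter has equal widths; a defocused one does not.
print(estimate_z(1.3, 1.3))   # → 0.0
print(estimate_z(1.4, 1.2))   # → 2.0 (approx)
```

Real calibrations are nonlinear and, as the manuscript's title emphasizes, vary across a large field of view, which is why field-dependent aberration estimation is needed.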
Subscribers can look up export and import data for 23 countries by HS code or product name. This demo is helpful for market analysis.
A 2k-sample dataset for testing the multimodal (text + vision + audio) format. It is compatible with HF's processor `apply_chat_template`. Load it in Axolotl via:

```yaml
datasets:
  - path: Nanobit/text-vision-audio-2k-test
    type: chat_template
```

Make sure to download the image and audio first:

```shell
wget https://huggingface.co/datasets/Nanobit/text-vision-audio-2k-test/resolve/main/African_elephant.jpg
wget https://huggingface.co/datasets/Nanobit/text-vision-audio-2k-test/resolve/main/En-us-African_elephant.oga
```

See the full description on the dataset page: https://huggingface.co/datasets/Nanobit/text-vision-audio-2k-test.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Noisy-speech set used to test Deep Xi (https://github.com/anicolson/DeepXi). The clean speech and noise used to create the noisy-speech set are also available. The clean-speech recordings are from the LibriSpeech test-clean set (http://www.openslr.org/12/).
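A noisy-speech set like this is typically built by scaling each noise recording so that the mixture hits a target signal-to-noise ratio. A minimal sketch of that standard recipe (not necessarily the exact procedure used for the Deep Xi test set):

```python
import math

def mix_at_snr(speech, noise, snr_db):
    """Scale `noise` so the speech-to-noise power ratio equals `snr_db`
    (in dB), then add it to `speech`. Both inputs are equal-length
    sequences of samples."""
    p_speech = sum(s * s for s in speech) / len(speech)
    p_noise = sum(n * n for n in noise) / len(noise)
    # Solve p_speech / (scale^2 * p_noise) = 10^(snr_db / 10) for scale.
    scale = math.sqrt(p_speech / (p_noise * 10 ** (snr_db / 10)))
    return [s + scale * n for s, n in zip(speech, noise)]
```

By construction, recomputing the power of the scaled noise after mixing recovers exactly the requested SNR.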
Mistaken eyewitness identifications continue to be a major contributor to miscarriages of justice. Previous experiments suggested that implicit identification procedures such as the Concealed Information Test (CIT) might be a promising alternative to classic lineups when encoding conditions during the crime were favorable. We tested this idea by manipulating view congruency (frontal vs. profile view) between encoding and test. Participants witnessed a videotaped mock theft that showed the thief and victim almost exclusively from frontal or profile view. At test, viewing angle was either congruent or incongruent with the view during encoding. We tested eyewitness identification with the RT-CIT (N = 74), and with a traditional simultaneous photo lineup (N = 97). The CIT showed strong capacity to diagnose face recognition (d = 0.91 [0.64; 1.18]) but unexpectedly, view congruency did not moderate this effect. View congruency moderated lineup performance for one of the two lineups. Following these unexpected findings, we conducted a replication with a stronger congruency manipulation and larger sample size. CIT (N = 156) showed moderate capacity to diagnose face recognition (d = 0.63 [0.46; 0.80]) and now view congruency did moderate the CIT effect. For lineups (N = 156), view congruency again moderated performance for one of the two lineups. Capacity for diagnosing face recognition was similar for lineups and RT-CIT in our first comparison but much stronger for lineups in our second comparison. Future experiments might investigate more conditions that affect performance in lineups vs. the RT-CIT differentially.
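For readers unfamiliar with the reported effect sizes: Cohen's d expresses a mean difference in pooled standard-deviation units. A generic two-group sketch (the exact variant used for RT-CIT scores, typically a one-sample d of difference scores, may differ):

```python
import math
import statistics

def cohens_d(group1, group2):
    """Two-sample Cohen's d with pooled standard deviation."""
    n1, n2 = len(group1), len(group2)
    s1, s2 = statistics.variance(group1), statistics.variance(group2)  # sample variances
    pooled = math.sqrt(((n1 - 1) * s1 + (n2 - 1) * s2) / (n1 + n2 - 2))
    return (statistics.mean(group1) - statistics.mean(group2)) / pooled

print(round(cohens_d([5, 6, 7, 8], [3, 4, 5, 6]), 2))  # → 1.55
```

By the usual rough benchmarks, the reported d = 0.91 is a large effect and d = 0.63 a medium one.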
CC0 1.0 Universal Public Domain Dedication: https://creativecommons.org/publicdomain/zero/1.0/
License information was derived automatically
Bank Check Dataset (Document AI data): synthetic bank checks comprising generated images designed to reproduce the appearance and content of real checks. It includes elements such as payee names, amounts, dates, signatures, and routing numbers. The data is intended for training and evaluating Document AI models on tasks such as optical character recognition (OCR), verification, and information extraction, providing a controlled environment for model development without the privacy concerns of real documents.
All 311 Service Requests from 2010 to present. This information is automatically updated daily.
THIS IS NOT REAL DATA.
THE FOLLOWING ELEMENTS HAVE NOT BEEN TESTED
Vertical Datum - we are waiting on encoding advice from MEDIN
THE FOLLOWING ELEMENT IS STORED IN THE METADATA AND XML BUT IS NOT VISIBLE IN THE MAIN VIEW/PDF/PERMALINK. However, this will be addressed at a later point together with the Datum issue. Vertical CRS type, e.g. EPSG 5701
• This is NRW's test dataset, maintained so that staff can refer back to a fully valid record when resolving validation errors
• It also ensures that all elements display and export correctly to multiple XML schemas. The metadata is written to be compliant with the NRW, MEDIN and GEMINI standards.
• It may temporarily be in public view when checking view formats for the public but will be removed later.
This data set comes from data held by the Driver and Vehicle Standards Agency (DVSA).
It isn’t classed as an ‘official statistic’. This means it’s not subject to scrutiny and assessment by the UK Statistics Authority.
The government is trialling driving test changes in 2015 and 2016 to make it a better test of the driver’s ability to drive safely on their own.
This data shows the numbers of approved driving instructors and learner drivers taking part in the trial, and the number of tests booked.
You can send an FOI request if you still cannot find the information you need.
By law, DVSA cannot send you information that’s part of an official statistic that hasn’t yet been published.
The MS COCO (Microsoft Common Objects in Context) dataset is a large-scale object detection, segmentation, key-point detection, and captioning dataset. The dataset consists of 328K images.
Splits: The first version of the MS COCO dataset was released in 2014. It contains 164K images split into training (83K), validation (41K) and test (41K) sets. In 2015, an additional test set of 81K images was released, including all the previous test images and 40K new images.
Based on community feedback, in 2017 the training/validation split was changed from 83K/41K to 118K/5K. The new split uses the same images and annotations. The 2017 test set is a subset of 41K images of the 2015 test set. Additionally, the 2017 release contains a new unannotated dataset of 123K images.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
The so-called LMUAV.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Two wells out of a 384-well plate, in OME-Zarr format.
3 channels, and 2 segmentations sharing the same table source.
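As an aside, OME-Zarr high-content-screening plates typically address wells by row letter and column number. A small helper for converting a well name to zero-based indices on a standard 16 × 24 = 384-well plate; the specific wells included in this dataset are not stated, so "B03" below is just an example:

```python
import string

def well_to_indices(well):
    """Convert a well name like 'B03' to zero-based (row, column)
    indices on a standard 384-well plate (16 rows A-P, 24 columns)."""
    row = string.ascii_uppercase.index(well[0])
    col = int(well[1:]) - 1
    if not (0 <= row < 16 and 0 <= col < 24):
        raise ValueError(f"not a 384-well position: {well}")
    return row, col

print(well_to_indices("B03"))  # → (1, 2)
print(well_to_indices("P24"))  # → (15, 23)
```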
Attribution 3.0 (CC BY 3.0): https://creativecommons.org/licenses/by/3.0/
License information was derived automatically
The International Development Association (IDA) credits are public and publicly guaranteed debt extended by the World Bank Group. IDA provides development credits, grants and guarantees to its recipient member countries to help meet their development needs. Credits from IDA are at concessional rates. Data are in U.S. dollars calculated using historical rates. This dataset contains the latest available snapshot of the IDA Statement of Credits and Grants.
Dataset Card for test-vision-generation-Qwen2VL-7B-Vision-Instruct
This dataset has been created with distilabel.
Dataset Summary
This dataset contains a pipeline.yaml which can be used to reproduce the pipeline that generated it in distilabel, using the distilabel CLI:

```shell
distilabel pipeline run --config "https://huggingface.co/datasets/TharunSivamani/test-vision-generation-Qwen2VL-7B-Vision-Instruct/raw/main/pipeline.yaml"
```
or explore the configuration:… See the full description on the dataset page: https://huggingface.co/datasets/TharunSivamani/test-vision-generation-Qwen2VL-7B-Vision-Instruct.