License: CC BY 4.0 (https://creativecommons.org/licenses/by/4.0/). License information was derived automatically.
## Overview
Histogram is a dataset for object detection tasks - it contains CCA annotations for 971 images.
## Getting Started
You can download this dataset for use within your own projects, or fork it into a workspace on Roboflow to create your own model.
## License
This dataset is available under the [CC BY 4.0 license](https://creativecommons.org/licenses/by/4.0/).
License: CC BY 4.0 (https://creativecommons.org/licenses/by/4.0/). License information was derived automatically.
Data visualization is important for statistical analysis, as it helps convey information efficiently and shed light on the hidden patterns behind data in a visual context. It is particularly helpful to display circular data in a two-dimensional space to accommodate its nonlinear support space and reveal the underlying circular structure, which is otherwise not obvious in one dimension. In this article, we first formally categorize circular plots into two types, either height- or area-proportional, and then describe a new general methodology that can be used to produce circular plots, particularly in the area-proportional manner, which in our opinion is the more appropriate choice. Formulas are given that are fairly simple yet effective for producing various circular plots, such as smooth density curves, histograms, rose diagrams, dot plots, and plots for multiclass data. Supplemental materials for this article are available online.
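The height- versus area-proportional distinction can be illustrated with a short sketch (not the authors' code; the bin width and data are arbitrary). For a rose diagram, each sector's area is 0.5 * r**2 * width, so radii proportional to the square root of the counts make sector areas, rather than heights, proportional to frequency:

```python
import numpy as np

def rose_radii(angles_deg, bin_width_deg=30):
    """Bin circular data into sectors and return radii such that each
    sector's AREA (not height) is proportional to its frequency."""
    edges = np.arange(0.0, 360.0 + bin_width_deg, bin_width_deg)
    counts, _ = np.histogram(np.asarray(angles_deg) % 360.0, bins=edges)
    # Sector area = 0.5 * r**2 * width, so r proportional to sqrt(count)
    # gives an area-proportional plot; r proportional to count would
    # give a height-proportional plot instead.
    radii = np.sqrt(counts)
    return edges[:-1], radii

left_edges, radii = rose_radii([10, 15, 20, 200, 210, 350])
```

The radii can then be passed to any polar plotting routine; only the radius mapping changes between the two plot types.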
License: CC BY 4.0 (https://creativecommons.org/licenses/by/4.0/). License information was derived automatically.
## Overview
Gaussian Histogram is a dataset for object detection tasks - it contains GLIOMA annotations for 5,873 images.
## Getting Started
You can download this dataset for use within your own projects, or fork it into a workspace on Roboflow to create your own model.
## License
This dataset is available under the [CC BY 4.0 license](https://creativecommons.org/licenses/by/4.0/).
License: Apache License, v2.0 (https://www.apache.org/licenses/LICENSE-2.0). License information was derived automatically.
What Lies Beneath: A Call for Distribution-based Visual Question & Answer Datasets
Publication: TBD (linked on publication)
GitHub Repo: ReadingTimeMachine/LLM_VQA_JCDL2025
This is a histogram-based dataset for visual question and answer (VQA) with humans and large language/multimodal models (LMMs). Data contains synthetically generated single-panel histogram images, the data used to create the histograms, bounding box data for titles, axis and tick labels, and data… See the full description on the dataset page: https://huggingface.co/datasets/ReadingTimeMachine/visual_qa_histograms.
License: CC BY 4.0 (https://creativecommons.org/licenses/by/4.0/). License information was derived automatically.
## Overview
Median Histogram is a dataset for object detection tasks - it contains GLIOMA annotations for 5,873 images.
## Getting Started
You can download this dataset for use within your own projects, or fork it into a workspace on Roboflow to create your own model.
## License
This dataset is available under the [CC BY 4.0 license](https://creativecommons.org/licenses/by/4.0/).
License: CC BY 4.0 (https://creativecommons.org/licenses/by/4.0/). License information was derived automatically.
Charts, Histograms, and Time Series
- Create a histogram graph from band values of an image collection
- Create a time series graph from band values of an image collection
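As a rough stand-in for the Earth Engine charting workflow (which operates on image collections server-side), both charts can be computed from band values already extracted to a NumPy array; the array shape and values below are invented for illustration:

```python
import numpy as np

# Hypothetical stack of band values: 5 images of a 4x4 region
# (e.g. one band of an image collection, already exported locally).
rng = np.random.default_rng(0)
stack = rng.uniform(0.0, 1.0, size=(5, 4, 4))

# Histogram graph: distribution of all band values in the collection.
counts, edges = np.histogram(stack.ravel(), bins=10, range=(0.0, 1.0))

# Time series graph: one aggregate (here, mean) band value per image.
series = stack.mean(axis=(1, 2))
```

`counts`/`edges` and `series` are what a bar chart and a line chart would plot, respectively; the aggregation per image (mean, median, etc.) is a free choice.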
License: CC BY 4.0 (https://creativecommons.org/licenses/by/4.0/). License information was derived automatically.
## Overview
Gaussian Histogram 2 Class is a dataset for object detection tasks - it contains GLIOMA annotations for 3,062 images.
## Getting Started
You can download this dataset for use within your own projects, or fork it into a workspace on Roboflow to create your own model.
## License
This dataset is available under the [CC BY 4.0 license](https://creativecommons.org/licenses/by/4.0/).
Histogram plot of the average alignment accuracy, averaged over 10 runs, for each viral genome shown in Table 1 and each aligner (reads crossing splice junction regions are shown in pink; reads not crossing splice junction regions are shown in blue).
License: CC BY 4.0 (https://creativecommons.org/licenses/by/4.0/). License information was derived automatically.
This dataset contains daily histograms of wind speed at 100 m ("WS100"), wind direction at 100 m ("WD100") and an atmospheric stability proxy ("STAB") derived from the ERA5 hourly data on single levels [1], accessed via the Copernicus Climate Change Service Climate Data Store [2]. The dataset covers six geographical regions (illustrated in regions.png) on a reduced 0.5 x 0.5 degree regular grid and covers the period 1994 to 2023 (both years included). The dataset is packaged as one zip folder per region, each containing a range of monthly zip folders following the convention of zarr ZipStores (more details here: https://zarr.readthedocs.io/en/stable/api/storage.html). The monthly zip folders are thus intended to be used with the xarray Python package (no unzipping of the monthly files needed).
Wind speed and wind direction are derived from the U- and V-components. The stability metric makes use of a 5-class classification scheme [3] based on the Obukhov length, whereby the required Obukhov length was computed using [4]. The following bins (left edges) have been used to create the histograms:
- Wind speed: [0, 40) m/s (bin width 1 m/s)
- Wind direction: [0, 360) deg (bin width 15 deg)
- Stability: 5 discrete stability classes (1: very unstable, 2: unstable, 3: neutral, 4: stable, 5: very stable)
Main purpose: the dataset serves as minimum input data for the CLIMatological REPresentative PERiods (climrepper) Python package (https://gitlab.windenergy.dtu.dk/climrepper/climrepper), in preparation for public release.
References:
[1] Hersbach, H., Bell, B., Berrisford, P., Biavati, G., Horányi, A., Muñoz Sabater, J., Nicolas, J., Peubey, C., Radu, R., Rozum, I., Schepers, D., Simmons, A., Soci, C., Dee, D., Thépaut, J-N. (2023): ERA5 hourly data on single levels from 1940 to present. Copernicus Climate Change Service (C3S) Climate Data Store (CDS). DOI: 10.24381/cds.adbb2d47 (accessed Nov. 2024)
[2] Copernicus Climate Change Service, Climate Data Store (2023): ERA5 hourly data on single levels from 1940 to present. Copernicus Climate Change Service (C3S) Climate Data Store (CDS). DOI: 10.24381/cds.adbb2d47 (accessed Nov. 2024)
[3] Holtslag, M. C., Bierbooms, W. A. A. M., & van Bussel, G. J. W. (2014). Estimating atmospheric stability from observations and correcting wind shear models accordingly. Journal of Physics: Conference Series, 555, 012052. IOP Publishing. https://doi.org/10.1088/1742-6596/555/1/012052
[4] Copernicus Knowledge Base, ERA5: How to calculate Obukhov Length. URL: https://confluence.ecmwf.int/display/CKB/ERA5:+How+to+calculate+Obukhov+Length (last accessed Nov. 2024)
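Given hourly values for one day, daily histograms with the stated bin edges could be reproduced with NumPy as follows (the hourly values here are synthetic; the real archives are monthly zarr ZipStores intended to be opened with xarray):

```python
import numpy as np

# Bin edges matching the dataset description (left-closed bins).
ws_edges = np.arange(0, 41, 1)    # wind speed: [0, 40) m/s, 1 m/s bins
wd_edges = np.arange(0, 375, 15)  # wind direction: [0, 360) deg, 15 deg bins

# Synthetic stand-in for one day of hourly ERA5 values at one grid point.
rng = np.random.default_rng(1)
hourly_ws = rng.uniform(0, 25, size=24)   # wind speeds in m/s
hourly_wd = rng.uniform(0, 360, size=24)  # wind directions in degrees

# One daily histogram per variable.
ws_hist, _ = np.histogram(hourly_ws, bins=ws_edges)
wd_hist, _ = np.histogram(hourly_wd, bins=wd_edges)
```

Each daily histogram sums to 24 (one count per hour), which is a useful sanity check when reading the real files.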
License: CC BY 4.0 (https://creativecommons.org/licenses/by/4.0/). License information was derived automatically.
## Overview
Histogram 2 Class is a dataset for object detection tasks - it contains GLIOMA annotations for 3,062 images.
## Getting Started
You can download this dataset for use within your own projects, or fork it into a workspace on Roboflow to create your own model.
## License
This dataset is available under the [CC BY 4.0 license](https://creativecommons.org/licenses/by/4.0/).
License: CC BY-NC 4.0 (https://creativecommons.org/licenses/by-nc/4.0/). License information was derived automatically.
The tasks (called items in the study) are the first 6 histogram tasks and all 6 case-value plot tasks (hence, the first 12 tasks from the data in dataset 1_Raw_Data_Students). This dataset contains all data needed to reproduce the results described in the qualitative article belonging to it, including, for example, the codebook, the coding of transcripts, and the RStudio file for calculating accuracy and precision, as well as detailed coding results, including second-coder results. Note that the raw data of this project, as well as the design of the project, materials, and so on, are in the dataset 1_Raw_Data_Students; that dataset is needed for replicating the whole eye-tracking study.
Figures containing a histogram of the frequency of effect sizes on AG and BG herbivores, and a funnel plot of effect size against sample size indicating the absence of publication bias.
License: CC BY 4.0 (https://creativecommons.org/licenses/by/4.0/). License information was derived automatically.
This paper presents methods for the determination of players' positions and contact time points by tracking the players and the ball in beach volleyball videos. Two player tracking methods are compared: a classical particle filter and a rigid grid integral histogram tracker. Due to mutual occlusion of the players and the camera perspective, results are best for the front players, with 74.6% and 82.6% of correctly tracked frames for the particle method and the integral histogram method, respectively. Results suggest an improved robustness against player confusion between different particle sets when tracking with a rigid grid approach. Faster processing and fewer player confusions make this method superior to the classical particle filter. Two different ball tracking methods are used that detect ball candidates from movement difference images using a background subtraction algorithm. Ball trajectories are estimated and interpolated from parabolic flight equations. The tracking accuracy of the ball is 54.2% for the trajectory growth method and 42.1% for the Hough line detection method. Tracking results of over 90% from the literature could not be confirmed. Ball contact frames were estimated from parabolic trajectory intersection, resulting in 48.9% of correctly estimated ball contact points.
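The parabolic-trajectory estimation step can be sketched as an ordinary least-squares fit of y = a*t**2 + b*t + c to ball-candidate positions (synthetic data, not the authors' implementation; the fitted parabola then lets unobserved frames be interpolated):

```python
import numpy as np

# Synthetic ball-candidate detections (time in s, vertical position in m),
# generated from y = -4.905 t^2 + 6 t + 1 with small measurement noise.
t = np.linspace(0.0, 1.0, 20)
y_true = -4.905 * t**2 + 6.0 * t + 1.0
rng = np.random.default_rng(2)
y_obs = y_true + rng.normal(0.0, 0.01, size=t.size)

# Least-squares fit of the parabolic flight equation y = a t^2 + b t + c.
a, b, c = np.polyfit(t, y_obs, deg=2)

# Interpolate the trajectory at an unobserved time (t = 0.5 s).
y_half = a * 0.25 + b * 0.5 + c
```

In practice the fit would run over each candidate trajectory segment, and candidates inconsistent with any parabola would be rejected as noise.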
License: CC BY 4.0 (https://creativecommons.org/licenses/by/4.0/). License information was derived automatically.
Abstract
Introduction: Breast cancer is the leading cause of death for women in Brazil, as in most countries in the world. Because of the relation between breast density and the risk of breast cancer, and because in medical practice the breast density classification is purely visual and dependent on professional experience, this task is very subjective. The purpose of this paper is to investigate image features based on histograms and Haralick texture descriptors so as to separate mammographic images into categories of breast density using an Artificial Neural Network.
Methods: We used 307 mammographic images from the INbreast digital database, extracting histogram features and texture descriptors of all mammograms and selecting them with the K-means technique. These groups of selected features were then used as inputs to an Artificial Neural Network to classify the images automatically into the four categories reported by radiologists.
Results: An average accuracy of 92.9% was obtained in a few tests using only some of the Haralick texture descriptors. The accuracy rate increased to 98.95% when texture descriptors were mixed with some histogram-based features.
Conclusion: Texture descriptors have proven to be better than gray-level features at differentiating breast densities in mammographic images. This paper shows that feature selection and classification can be automated with acceptable error rates, since the feature extraction is suited to the characteristics of the images involved in the problem.
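A minimal sketch of the histogram-based (first-order) feature extraction only; the Haralick texture descriptors, K-means selection, and ANN classifier are not reproduced here, and the input is a random array rather than a mammogram:

```python
import numpy as np

def histogram_features(img, levels=256):
    """First-order features of a grayscale image computed from its
    gray-level histogram: mean, standard deviation, and entropy."""
    hist, _ = np.histogram(img, bins=levels, range=(0, levels))
    p = hist / hist.sum()          # normalized histogram = gray-level pmf
    grays = np.arange(levels)
    mean = (p * grays).sum()
    std = np.sqrt((p * (grays - mean) ** 2).sum())
    nz = p[p > 0]                  # skip empty bins for the log
    entropy = -(nz * np.log2(nz)).sum()
    return mean, std, entropy

rng = np.random.default_rng(3)
img = rng.integers(0, 256, size=(64, 64))  # stand-in for a mammogram ROI
mean, std, entropy = histogram_features(img)
```

Feature vectors like `(mean, std, entropy)`, concatenated with texture descriptors, would then form the classifier inputs described above.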
License: Apache License, v2.0 (https://www.apache.org/licenses/LICENSE-2.0). License information was derived automatically.
Do you "see" what I "see"? A Multi-panel Visual Question and Answer Dataset for Large Language Model Chart Analysis
Publication: TBD (linked on publication)
GitHub Repo: TBD (linked on publication)
This is a multi-panel figure dataset for visual question and answer (VQA) to test large language/multimodal models (LMMs). Data contains synthetically generated multi-panel figures with histogram, scatter, and line plots. Included are full data used to create plots… See the full description on the dataset page: https://huggingface.co/datasets/ReadingTimeMachine/visual_qa_multipanel.
License: MIT (https://opensource.org/licenses/MIT). License information was derived automatically.
Distance between Barro Colorado Island and the Mainland
Figure 4 from Bradfer-Lawrence et al. (2023). Distance from each metre of BCI's shoreline to the closest point on the mainland. The inset histogram shows the frequency distribution of the total 60,614 m of BCI's shoreline. Minimum distance is 249 m.
Workflow used to generate this dataset
The distance-to-mainland layer is based on the STRI spatial dataset Barro Colorado Nature Monument Boundaries created by Milton Solano (https://stridata-si.opendata.arcgis.com/datasets/SI::barro-colorado-nature-monument-boundaries/). In QGIS (v3.16), we:
1. Split the BCNM boundaries into separate vector files for Barro Colorado Island (BCI) and the mainland peninsulas of the BCNM.
2. Converted those polygon vectors to lines.
3. Used the QChainage plugin to create additional vector layers with a single point for every 1 metre of the BCI and mainland BCNM shorelines.
4. Used the "Distance to nearest hub (line to hub)" tool to connect each point on the BCI shoreline with the closest point on the BCNM mainland peninsulas.
According to this analysis, the shortest distance from BCI to the mainland is 248.8 metres.
Histogram
The histogram in the above plot was created in R, using the following script.
R Script
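The "distance to nearest hub" step can be sketched as a brute-force nearest-point search between two point sets (the coordinates below are invented placeholders for the 1 m shoreline points, and this is NumPy rather than the QGIS tool or the R script used for the figure):

```python
import numpy as np

rng = np.random.default_rng(4)
# Hypothetical stand-ins for the two shoreline point sets (x, y in metres).
island = rng.uniform(0, 1000, size=(200, 2))       # BCI shoreline points
mainland = rng.uniform(1500, 2500, size=(300, 2))  # mainland shoreline points

# For each island point, the distance to its nearest mainland point
# (same idea as the "Distance to nearest hub" tool).
d = np.linalg.norm(island[:, None, :] - mainland[None, :, :], axis=2)
nearest = d.min(axis=1)

# Frequency distribution of shoreline-to-mainland distances,
# as shown in the inset histogram.
counts, edges = np.histogram(nearest, bins=10)
```

For the real 60,614-point shoreline a spatial index (k-d tree) would replace the O(n*m) distance matrix, but the result is the same per-point minimum.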
License: CC BY 4.0 (https://creativecommons.org/licenses/by/4.0/). License information was derived automatically.
This repository contains data and software related to an experiment in which we determine the lifetime of the cesium 5²D₅/₂ state using atoms in a vapor cell. More information is available in the following paper: arXiv:1912.10089. We provide the data and Python scripts for data evaluation in six folders. We zipped these folders with Windows 10 Enterprise, Version 1903. In the following, we describe how to use the data and scripts to get the lifetime results published in our paper.
Raw Time-Tags
Here, we provide the raw measurement data. We perform several experiment cycles. An excitation laser is switched on at the beginning of each cycle and switched off in the middle of the cycle. We use two single-photon counting modules (SPCMs): one detects fluorescence photons emitted by the atoms, the other detects reference light from the excitation laser beam. We record the arrival times of those photons with respect to the beginning of the cycle. These time delays can be used to create a histogram and to determine the lifetime of the cesium 5²D₅/₂ state. For each measurement, we provide two data files, encoded in UTF-8:
- "figx_xxx_reference_time_tags.dat"
- "figx_xxx_fluorescence_time_tags.dat"
where "figx_xxx" is a unique tag indicating the figure and point to which this data corresponds in our paper. The fluorescence and reference files contain the raw time delays, in picoseconds, of the fluorescence and reference photons, respectively. We provide raw time delays in the following folders:
- "fig3_time_tags": the data used in figure 3.
- "fig4_time_tags": the data used in figure 4. This folder has six subfolders named "point_x", where x indicates to which point of figure 4 the data belongs. The data of the subfolders "point_x_y" was used for points x and y of figure 4 (the time tags of the fluorescence photons were split into two sub-datasets of equal size).
- "fig5_time_tags": the data underlying figure 5. This folder has subfolders from "23C" to "116C", where the name indicates the temperature of the vapor cell, in °C, during the measurement. Note that the various measurements have different cycle lengths, because reabsorption makes the decay of the fluorescence signal longer. For the lifetime value at a temperature of 23 °C, we used the lifetime found in figure 4. For some measurements, the time tags of the reference SPCM are missing because only one SPCM was available for those measurements.
Histograms
Since the files of the raw measurement data are large, we also provide histograms of the time tags. For all datasets discussed above, we generated a histogram with a bin length of 5 ns. These histograms use the same file names as the raw time-tag files, but with the ending "_histo" instead of "_time_tags", e.g., "fig3_fluorescence_histo.dat" and "fig3_reference_histo.dat". We always provide two file formats:
- A data file (.dat), encoded in UTF-8, containing rows with the start time of a bin in microseconds and the number of SPCM counts due to the fluorescence signal until the start of the next bin, separated by a comma.
- A NumPy compressed array file (.npz) with two arrays: "time", containing the starting times of the bins, and "counts", containing the corresponding measured number of fluorescence photons per bin. The arrays can be loaded into a Python script with numpy.load (tested with NumPy version 1.18.1).
Additional Information on the Measurements
We provide a JavaScript Object Notation file (.json) for each measurement, named "figx_xxx_info.json", where "figx_xxx" is the same indicator as discussed in section "Raw Time-Tags". These files list the temperature of the cell, the number of detected photons, the photons per cycle, and the total measurement duration.
Scripts
This folder contains two sample scripts, written with Python 3.6.5, to illustrate how our data can be processed with Python. To avoid errors, one should download all zipped folders and extract them to the same folder. The first script, "generate_histograms.py", processes the fluorescence photon detection events stored in the folder "fig3_time_tags": the file "fig3_fluorescence_time_tags.dat" is read into the script, and a histogram is generated. To run the script, the NumPy (version 1.18.1), os, and json libraries are required. The second script, "fit_data.py", loads the file "fig3_fluorescence_histogram.npz" from the folder "histograms\fig3_time_tags" into NumPy arrays and performs a least-squares fit on the histogram of the fluorescence decay. From the fit, we get the lifetime of the cesium 5²D₅/₂ state.
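The 5 ns histogram and exponential-decay fit described above can be sketched with synthetic time tags (the lifetime, sample size, and fit range here are invented; the repository's own scripts are authoritative):

```python
import numpy as np

# Synthetic photon arrival time tags (in ns) drawn from an exponential
# decay with a hypothetical lifetime of 1300 ns.
rng = np.random.default_rng(5)
tau_true = 1300.0
tags = rng.exponential(tau_true, size=200_000)

# Histogram with a 5 ns bin length, as in the dataset.
edges = np.arange(0.0, 4000.0, 5.0)
counts, _ = np.histogram(tags, bins=edges)
time = edges[:-1]

# Simple least-squares fit of log(counts) = log(A) - t / tau
# on non-empty bins; the slope gives the lifetime.
mask = counts > 0
slope, intercept = np.polyfit(time[mask], np.log(counts[mask]), deg=1)
tau_fit = -1.0 / slope
```

A log-linear fit is the crudest option; a weighted or direct exponential fit handles the Poisson noise in low-count bins more carefully.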
License: Apache License, v2.0 (https://www.apache.org/licenses/LICENSE-2.0). License information was derived automatically.
A carefully extracted subset of the ImageNet dataset for the night vision colorization task.
The night vision images are created by preprocessing the original images. The preprocessing steps were:
1. Converting to grayscale
2. Equalizing the histogram
3. Adding noise
4. Making the image darker
5. Adding a vignette effect
6. Resizing the image to 224x224
The color images are provided as ground truth, and the night vision images are also provided.
The only preprocessing needed while training the model is loading the ground truths as RGB; normalizing the night vision images may also be useful.
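A rough NumPy sketch of steps 2 through 5 above (the exact noise level, darkening factor, and vignette strength used for the dataset are not documented, so the values here are guesses; grayscale conversion and resizing are omitted):

```python
import numpy as np

def night_vision(gray):
    """Approximate the described preprocessing on an 8-bit grayscale
    array: equalize histogram, add noise, darken, apply a vignette."""
    # 2. Histogram equalization via the cumulative distribution function.
    hist, _ = np.histogram(gray, bins=256, range=(0, 256))
    cdf = hist.cumsum() / hist.sum()
    eq = cdf[gray] * 255.0
    # 3. Additive Gaussian noise (sigma is a guess).
    rng = np.random.default_rng(6)
    noisy = eq + rng.normal(0.0, 8.0, size=gray.shape)
    # 4. Global darkening (factor is a guess).
    dark = noisy * 0.5
    # 5. Radial vignette: darker toward the corners.
    h, w = gray.shape
    yy, xx = np.mgrid[0:h, 0:w]
    r = np.hypot(yy - h / 2, xx - w / 2)
    vignette = 1.0 - 0.5 * (r / r.max())
    return np.clip(dark * vignette, 0, 255).astype(np.uint8)

out = night_vision(np.random.default_rng(7).integers(0, 256, size=(32, 32)))
```

The result is a darkened, noisy, vignetted image, which is the style of input the colorization model is trained to invert.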
License: CC BY 4.0 (https://creativecommons.org/licenses/by/4.0/). License information was derived automatically.
This dataset is used to create a histogram displaying the aging of defendants in jail, for today's defendants in jail on superior court criminal cases. The starting dataset is https://sharefulton.fultoncountyga.gov/Government/Superior-Court-Defendants-in-Jail/raqb-js7j , and is aggregated, filtered, and manipulated into this form using the R programming language.
Splitgraph serves as an HTTP API that lets you run SQL queries directly on this data to power web applications. See the Splitgraph documentation for more information.
License: CC BY 4.0 (https://creativecommons.org/licenses/by/4.0/). License information was derived automatically.
## Overview
Histogram Toshiba is a dataset for object detection tasks - it contains CCA annotations for 433 images.
## Getting Started
You can download this dataset for use within your own projects, or fork it into a workspace on Roboflow to create your own model.
## License
This dataset is available under the [CC BY 4.0 license](https://creativecommons.org/licenses/by/4.0/).