This table represents vehicle class counts and can be joined to our Traffic Count Locations data using RECORDNUM as the common identifier.
RECORDNUM - record identifier
AADTT - Average Annual Daily Truck Traffic
BUS_PERCENT - Percent buses (Class 4)
SU_TRUCK_PERCENT - Percent single-unit trucks (Classes 5-7)
MU_TRUCK_PERCENT - Percent multi-unit trucks (Classes 8+)
TRUCK_PERCENT - Total truck percent (Classes 5+)
For more information, contact our Office of Travel Monitoring: Joshua Rocks, Manager | Phone: 215.238.2854 | Email: jrocks@dvrpc.org
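A minimal sketch of the RECORDNUM join, assuming both tables are loaded as pandas DataFrames; the file names and the ROAD_NAME column are hypothetical, only the documented columns (RECORDNUM, AADTT, TRUCK_PERCENT) come from the description above.

```python
import pandas as pd

# In practice the two tables would be read from the DVRPC downloads, e.g.:
# locations = pd.read_csv("traffic_count_locations.csv")  # hypothetical name
# classes = pd.read_csv("vehicle_class_counts.csv")       # hypothetical name

# Small illustrative frames using the documented columns:
locations = pd.DataFrame({
    "RECORDNUM": [1001, 1002],
    "ROAD_NAME": ["US 1", "PA 611"],  # illustrative column
})
classes = pd.DataFrame({
    "RECORDNUM": [1001, 1002],
    "AADTT": [1250, 430],
    "TRUCK_PERCENT": [8.2, 4.5],
})

# Join class counts to locations on the shared RECORDNUM key.
joined = locations.merge(classes, on="RECORDNUM", how="left")
```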
Attribution-NonCommercial-NoDerivs 4.0 (CC BY-NC-ND 4.0) https://creativecommons.org/licenses/by-nc-nd/4.0/
License information was derived automatically
Crowd Counting Dataset
The dataset includes images of crowds ranging from 0 to 5000 people, captured across a diverse range of scenes and settings. Each image is accompanied by a corresponding JSON file containing per-person labeling information used for crowd counting and classification.
Types of crowds in the dataset: 0-1000, 1000-2000, 2000-3000, 3000-4000, and 4000-5000. See the full description on the dataset page: https://huggingface.co/datasets/TrainingDataPro/crowd-counting-dataset.
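A sketch of reading one per-image label file and deriving its crowd bucket. The JSON schema here (a `persons` list with one entry per labeled person) is a hypothetical stand-in; the dataset's actual label layout is documented on its page.

```python
import json

# Hypothetical label schema -- the real JSON layout may differ; this only
# illustrates counting labeled persons and assigning a 1000-wide bucket.
label_text = json.dumps({
    "image": "crowd_0001.jpg",
    "persons": [
        {"x": 120, "y": 80},
        {"x": 340, "y": 95},
        {"x": 510, "y": 210},
    ],
})

labels = json.loads(label_text)
count = len(labels["persons"])  # per-image crowd count
low = count // 1000 * 1000
bucket = f"{low}-{low + 1000}"  # e.g. "0-1000"
```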
To effectively evaluate OmniCount across open-vocabulary, supervised, and few-shot counting tasks, a dataset covering a broad spectrum of visual categories, with multiple instances and classes per image, is essential. Current datasets, primarily designed for counting singular object categories such as humans or vehicles, fall short for multi-label object counting. Despite the presence of multi-class datasets like MS COCO, their utility for counting is limited by the sparse appearance of objects. To address this gap, we created a new dataset of 30,230 images spanning 191 diverse categories, including kitchen utensils, office supplies, vehicles, and animals. With per-image instance counts ranging from 1 to 160 and an average count of 10, this dataset bridges the existing void and establishes a benchmark for assessing counting models in varied scenarios.
Attribution 4.0 (CC BY 4.0) https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
## Overview
Vehicle Count New Class is a dataset for object detection tasks - it contains Auto Tempotraveller annotations for 1,918 images.
## Getting Started
You can download this dataset for use within your own projects, or fork it into a workspace on Roboflow to create your own model.
## License
This dataset is available under the [CC BY 4.0 license](https://creativecommons.org/licenses/by/4.0/).
Attribution 4.0 (CC BY 4.0) https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Here are a few use cases for this project:
Traffic Management and Congestion Analysis: The "Cars Counting" model can be used by city planners and transportation authorities to analyze traffic patterns, identify congested areas, and optimize traffic flow by adjusting traffic signals, building new roads, or implementing traffic restrictions.
Parking Lot Management: Facility operators and parking lot managers can use the model to count and monitor vehicle types in parking lots, allocate appropriate space for different vehicle classes, and optimize parking space utilization.
Road Safety and Accident Reduction: Authorities can use the model to identify roads with high frequencies of trucks, buses, and motorcycles, and then focus on improving road infrastructure and implementing safety measures in these areas to reduce accident rates.
Environmental Impact Assessment: Environmental agencies can use the "Cars Counting" model to gather data on vehicle types in specific areas, estimate emissions from different vehicle classes, and implement policies to reduce pollution levels and improve air quality.
Marketing and Advertising: Companies in the automotive industry can use the model to study the prevalence of different vehicle types in different areas or target markets, enabling them to create targeted marketing campaigns and plan product launches based on the popularity of vehicle types within a specific demographic.
GNU General Public License v2.0 http://www.gnu.org/licenses/old-licenses/gpl-2.0.en.html
There's a story behind every dataset, and here's mine. One of my professors yawns a lot in class; every time it happens, we get distracted from the topic and feel sleepy. To make the class more interesting, I started keeping a yawn count, but as time went on I got bored and stopped taking observations.
One fine day, regretting the decision to stop counting, I looked at the previous data and found that the range of yawns was 3-11. So, I generated random data in the range of 3-11 for this dataset.
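Reproducing that generation step is a one-liner; a minimal sketch, assuming a uniform distribution over 3-11 (the original distribution is not stated) and 30 lectures as an illustrative count:

```python
import random

random.seed(0)  # reproducible illustration

# One simulated yawn count per lecture, uniform over the observed 3-11 range.
yawn_counts = [random.randint(3, 11) for _ in range(30)]
```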
I want to thank my prof and my boredom for this dataset.
We introduce a dataset of 147 object categories containing over 6000 images that are suitable for the few-shot counting task. We collected and annotated the images ourselves. Our dataset consists of 6135 images across a diverse set of 147 object categories, from kitchen utensils and office stationery to vehicles and animals. The object count in our dataset varies widely, from 7 to 3731 objects, with an average count of 56 objects per image. In each image, each object instance is annotated with a dot at its approximate center. In addition, three object instances are selected randomly as exemplar instances; these exemplars are also annotated with axis-aligned bounding boxes.
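A hypothetical in-memory form of one such annotation record (dots at approximate centers plus three exemplar boxes); the field names are illustrative, not the dataset's actual file format.

```python
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class FewShotCountAnnotation:
    """One annotated image: per-instance center dots and three exemplar
    bounding boxes (x_min, y_min, x_max, y_max). Names are illustrative."""
    image_id: str
    category: str
    points: List[Tuple[float, float]] = field(default_factory=list)
    exemplar_boxes: List[Tuple[float, float, float, float]] = field(default_factory=list)

    @property
    def count(self) -> int:
        # The ground-truth count is the number of center dots.
        return len(self.points)

ann = FewShotCountAnnotation(
    image_id="img_0001",
    category="stapler",  # illustrative category
    points=[(10.0, 12.0), (40.5, 33.0), (70.2, 18.4)],
    exemplar_boxes=[(5, 8, 15, 18), (35, 28, 46, 38), (65, 13, 76, 24)],
)
```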
Help us provide the most useful data by completing our ODP User Feedback Survey for School Nutrition Data.

About the Dataset: This dataset serves as source data for the Texas Department of Agriculture Food and Nutrition Meal Served Dashboard. Data is based on the School Nutrition Program (SNP) Meal Reimbursement and All Summer Sites (SFSP and SSO) Meal Count datasets currently published on the Texas Open Data Portal. For the purposes of dashboard reporting, summer meal program meals served during the school year include SFSP and SSO meals served September 2021 through May 2022. School Nutrition Program meals are reported by program year, which runs July 1 through June 30. In March 2020, USDA began allowing flexibility in nutrition assistance program policies in order to support continued meal access should the coronavirus pandemic (COVID-19) impact meal service operations. Flexibilities were extended into the 2021-2022 program year and allowed School Nutrition Programs to operate the Seamless Summer Option through the 2021-2022 school year. For more information on the policies implemented for this purpose, please visit our website at SquareMeals.org. An overview of all SNP data available on the Texas Open Data Portal can be found on our TDA Data Overview - School Nutrition Programs page. An overview of all TDA Food and Nutrition data available on the Texas Open Data Portal can be found on our TDA Data Overview - Food and Nutrition Open Data page. More information about accessing and working with TDA data on the Texas Open Data Portal can be found on the SquareMeals.org website on the TDA Food and Nutrition Open Data page.

About Dataset Updates: TDA aims to update this dataset by the 15th of the month until 60 days after the close of the program year.

About the Agency: The Texas Department of Agriculture administers 12 U.S. Department of Agriculture nutrition programs in Texas, including the National School Lunch and School Breakfast Programs, the Child and Adult Care Food Program (CACFP), and summer meal programs. TDA’s Food and Nutrition division provides technical assistance and training resources to partners operating the programs and oversees the USDA reimbursements they receive to cover part of the cost associated with serving food in their facilities. By working to ensure these partners serve nutritious meals and snacks, the division adheres to its mission: Feeding the Hungry and Promoting Healthy Lifestyles. For more information on these programs, please visit us at SquareMeals.org.
MIT License https://opensource.org/licenses/MIT
License information was derived automatically
This is a dataset of blood cells photos, originally open sourced by cosmicad and akshaylambda.
There are 364 images across three classes: WBC (white blood cells), RBC (red blood cells), and Platelets. There are 4888 labels across the 3 classes (and 0 null examples).
Here's a class count from Roboflow's Dataset Health Check:
![BCCD health](https://i.imgur.com/BVopW9p.png)
And here's an example image:
![Blood Cell Example](https://i.imgur.com/QwyX2aD.png)
Fork this dataset (upper right-hand corner) to receive the raw images, or (to save space) grab the 500x500 export.
This is a small-scale object detection dataset, commonly used to assess model performance. It's a first example of medical imaging capabilities.
We're releasing the data as public domain. Feel free to use it for any purpose.
It's not required to provide attribution, but it'd be nice! :)
Roboflow makes managing, preprocessing, augmenting, and versioning datasets for computer vision seamless.
Developers cut their boilerplate code by 50% when using Roboflow's workflow, automate annotation quality assurance, save training time, and increase model reproducibility.
MIT License https://opensource.org/licenses/MIT
License information was derived automatically
Here are a few use cases for this project:
Aquaculture Monitoring: In shrimp farms, this model can be used to classify and count shrimps automatically, allowing farm management to track the growth and health of the shrimps, manage feed levels, and estimate harvest times.
Fisheries Sciences Research: Researchers studying shrimps can use this model to automatically identify and quantify different classes of shrimps in captured footage or images, speeding up data collection.
Commercial Fishing: The model could provide real-time quantification and categorization of the shrimp catch on fishing vessels, enabling an accurate measure of the haul and helping to ensure compliance with fishing quota legislation.
Quality Assurance in Food Processing: Food processing plants dealing with shrimp can leverage the model to automate quality inspection, which aids in sorting the shrimps based on their sizes or types, improving efficiency and standardization.
Environmental Monitoring and Conservation: The model can play a crucial role in monitoring biodiversity in a certain water area by identifying and counting the shrimp population. This information can be used to determine the health of the ecosystem and inform conservation strategies.
This data publication contains overstory tree measurements collected between 1931 and 2003 at the Bartlett Experimental Forest in Bartlett, New Hampshire. These cruise plots measure all trees greater than 1.5 inches in diameter at breast height, across a 0.25 acre plot. Plots were installed and first inventoried in 1931-1932, with follow-up in 1939-1940. Since then, there were partial remeasurements in the 1950s and 1960s of Compartments scheduled for harvesting treatment followed by complete remeasurements in 1991-1992 and 2001-2003. Data are available in two data sets. 1) Overstory measurements from 1931-1992 including species codes, diameter class (1 inch classes) and count. 2) Overstory measurements from 2001-2003 including species codes, diameter class (1 inch classes) and count. Additionally, approximately 3% of the over 56,000 tree records from 2001-2003 include height, crown and diameter at breast height (to the nearest 0.1 inch) measurements.
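The 1-inch diameter-class binning used in both data sets can be sketched as follows. The exact convention is an assumption: classes are taken here as centered on whole inches (class n spans n − 0.5 to n + 0.5), which is consistent with the 1.5-inch minimum, but the Bartlett protocol may define the bins differently.

```python
import math

def diameter_class(dbh_inches: float) -> int:
    """Assign a 1-inch diameter class, assuming classes centered on whole
    inches (class n spans n - 0.5 to n + 0.5). This convention is an
    assumption, not taken from the source."""
    return math.floor(dbh_inches + 0.5)

# Trees must exceed the 1.5-inch DBH minimum to be tallied.
measurements = [1.6, 2.4, 2.9, 5.1, 5.9]
classes = [diameter_class(d) for d in measurements]
```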
Local authority and Local Enterprise Partnership data sets for key economic data by rural and urban breakdown.
MS Excel Spreadsheet, 211 KB
Attribution-NonCommercial 4.0 (CC BY-NC 4.0) https://creativecommons.org/licenses/by-nc/4.0/
License information was derived automatically
Knowledge of critical properties, such as critical temperature, pressure, density, as well as acentric factor, is essential to calculate thermo-physical properties of chemical compounds. Experiments to determine critical properties and acentric factors are expensive and time intensive; therefore, we developed a machine learning (ML) model that can predict these molecular properties given the SMILES representation of a chemical species. We explored directed message passing neural network (D-MPNN) and graph attention network as ML architecture choices. Additionally, we investigated featurization with additional atomic and molecular features, multitask training, and pretraining using estimated data to optimize model performance. Our final model utilizes a D-MPNN layer to learn the molecular representation and is supplemented by Abraham parameters. A multitask training scheme was used to train a single model to predict all the critical properties and acentric factors along with boiling point, melting point, enthalpy of vaporization, and enthalpy of fusion. The model was evaluated on both random and scaffold splits where it shows state-of-the-art accuracies. The extensive data set of critical properties and acentric factors contains 1144 chemical compounds and is made available in the public domain together with the source code that can be used for further exploration.
CC0 1.0 Universal Public Domain Dedication https://creativecommons.org/publicdomain/zero/1.0/
License information was derived automatically
This dataset, as reported to the Rural Payments Agency, contains counts of premises and source types. Attribution statement: © Rural Payments Agency
The ImageNet dataset contains 14,197,122 annotated images organized according to the WordNet hierarchy. Since 2010, the dataset has been used in the ImageNet Large Scale Visual Recognition Challenge (ILSVRC), a benchmark in image classification and object detection. The publicly released dataset contains a set of manually annotated training images. A set of test images is also released, with the manual annotations withheld. ILSVRC annotations fall into one of two categories: (1) image-level annotation, a binary label for the presence or absence of an object class in the image, e.g., “there are cars in this image” but “there are no tigers”; and (2) object-level annotation, a tight bounding box and class label around an object instance in the image, e.g., “there is a screwdriver centered at position (20,25) with width of 50 pixels and height of 30 pixels”. The ImageNet project does not own the copyright of the images, so only thumbnails and URLs of images are provided.
Total number of non-empty WordNet synsets: 21,841
Total number of images: 14,197,122
Number of images with bounding box annotations: 1,034,908
Number of synsets with SIFT features: 1,000
Number of images with SIFT features: 1.2 million
Attribution-ShareAlike 4.0 (CC BY-SA 4.0) https://creativecommons.org/licenses/by-sa/4.0/
License information was derived automatically
The database for this study (Briganti et al. 2018; the same as for the Braun study analysis) was composed of 1973 French-speaking students in several universities or schools for higher education in the following fields: engineering (31%), medicine (18%), nursing school (16%), economic sciences (15%), physiotherapy (4%), psychology (11%), law school (4%), and dietetics (1%). The subjects were 17 to 25 years old (M = 19.6 years, SD = 1.6 years); 57% were female and 43% male. Even though the full dataset was composed of 1973 participants, only 1270 answered the full questionnaire: missing data are handled using pairwise complete observations in estimating a Gaussian Graphical Model, meaning that all available information from every subject is used.
The feature set is composed of 28 items meant to assess the four following components: fantasy, perspective taking, empathic concern and personal distress. In the questionnaire, the items are mixed; reversed items (items 3, 4, 7, 12, 13, 14, 15, 18, 19) are present. Items are scored from 0 to 4, where “0” means “Doesn’t describe me very well” and “4” means “Describes me very well”; reverse-scoring is calculated afterwards. The questionnaires were anonymized. The reanalysis of the database in this retrospective study was approved by the ethical committee of the Erasmus Hospital.
Size: 1973 × 28 (participants × items)
Number of features: 28
Ground truth: No
Type of Graph: Mixed graph
The following gives the description of the variables:
Feature | FeatureLabel | Domain | Item meaning from Davis 1980 |
---|---|---|---|
001 | 1FS | Green | I daydream and fantasize, with some regularity, about things that might happen to me. |
002 | 2EC | Purple | I often have tender, concerned feelings for people less fortunate than me. |
003 | 3PT_R | Yellow | I sometimes find it difficult to see things from the “other guy’s” point of view. (Reversed) |
004 | 4EC_R | Purple | Sometimes I don’t feel very sorry for other people when they are having problems. (Reversed) |
005 | 5FS | Green | I really get involved with the feelings of the characters in a novel. |
006 | 6PD | Red | In emergency situations, I feel apprehensive and ill-at-ease. |
007 | 7FS_R | Green | I am usually objective when I watch a movie or play, and I don’t often get completely caught up in it. (Reversed) |
008 | 8PT | Yellow | I try to look at everybody’s side of a disagreement before I make a decision. |
009 | 9EC | Purple | When I see someone being taken advantage of, I feel kind of protective towards them. |
010 | 10PD | Red | I sometimes feel helpless when I am in the middle of a very emotional situation. |
011 | 11PT | Yellow | I sometimes try to understand my friends better by imagining how things look from their perspective. |
012 | 12FS_R | Green | Becoming extremely involved in a good book or movie is somewhat rare for me. (Reversed) |
013 | 13PD_R | Red | When I see someone get hurt, I tend to remain calm. (Reversed) |
014 | 14EC_R | Purple | Other people’s misfortunes do not usually disturb me a great deal. (Reversed) |
015 | 15PT_R | Yellow | If I’m sure I’m right about something, I don’t waste much time listening to other people’s arguments. (Reversed) |
016 | 16FS | Green | After seeing a play or movie, I have felt as though I were one of the characters. |
017 | 17PD | Red | Being in a tense emotional situation scares me. |
018 | 18EC_R | Purple | When I see someone being treated unfairly, I sometimes don’t feel very much pity for them. (Reversed) |
019 | 19PD_R | Red | I am usually pretty effective in dealing with emergencies. (Reversed) |
020 | 20FS | Green | I am often quite touched by things that I see happen. |
021 | 21PT | Yellow | I believe that there are two sides to every question and try to look at them both. |
022 | 22EC | Purple | I would describe myself as a pretty soft-hearted person. |
023 | 23FS | Green | When I watch a good movie, I can very easily put myself in the place of a leading character. |
024 | 24PD | Red | I tend to lose control during emergencies. |
025 | 25PT | Yellow | When I’m upset at someone, I usually try to “put myself in his shoes” for a while. |
026 | 26FS | Green | When I am reading an interesting story or novel, I imagine how I would feel if the events in the story were happening to me. |
027 | 27PD | Red | When I see someone who badly needs help in an emergency, I go to pieces. |
028 | 28PT | Yellow | Before criticizing somebody, I try to imagine how I would feel if I were in their place. |
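The reverse-scoring rule described above (raw scores 0-4; reversed items 3, 4, 7, 12, 13, 14, 15, 18, 19 become 4 minus the raw score) can be sketched as:

```python
# Items flagged as reversed in the questionnaire description above.
REVERSED_ITEMS = {3, 4, 7, 12, 13, 14, 15, 18, 19}

def score_item(item_number: int, raw: int) -> int:
    """Apply reverse scoring on the 0-4 scale: reversed items score 4 - raw."""
    if not 0 <= raw <= 4:
        raise ValueError("raw score must be in 0-4")
    return 4 - raw if item_number in REVERSED_ITEMS else raw

# A respondent answering "4" on items 1, 3, 7, and 8:
scored = {n: score_item(n, 4) for n in (1, 3, 7, 8)}
```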
More information about the dataset is contained in empathy_description.html file.
Vehicle Classification – Weekday & WeekendThe 2018 weekday vehicle classification feature class provides traffic count data by time of day. To create the summary vehicle classification file, the FHWA truck classes are combined into single-unit trucks and multi-unit trucks. Then for each resulting class, the data is summed by time-of-day. After removing the data for the federal holidays, the mean of all weekday counts is reported by class for each directional station and time period. The dataset contains separate related tables for 2014, 2015, 2016, and 2017 vehicle classification by weekday and weekend time of day. Tables included for download are zip files containing 2016 and 2017 hourly vehicle classification (FHWA 13 classes), broken down by state. The vehicle classification data come from MD SHA, VDOT, and DDOTTime of Day is broken down as:AM (6:00 AM - 9:00 AM)Mid Day (9:00 AM - 3:00 PM)PM (3:00 PM - 7:00 PM)Night (After 7:00 PM)Vehicle Types:Motorcycles (Class 1)Passenger cars (Class 2)Pickups & Panel Vans (Class 3)Buses (Class 4)Single Unit Trucks (Class 5-7)Multi-Unit Trucks (8-13)Vehicle Classification based on the FHWA 13 Vehicle Classification.For more information about FHWA vehicle classification, visit the FHWA webpage
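The time-of-day binning above can be expressed as a small lookup. Boundary handling at 9:00 AM, 3:00 PM, and 7:00 PM, and the grouping of pre-6:00 AM hours into Night, are assumptions; the source lists only the ranges.

```python
def time_period(hour: int) -> str:
    """Map an hour (0-23) to the report's time-of-day period.
    Boundary conventions and the handling of hours before 6:00 AM
    are assumptions, not stated in the source."""
    if 6 <= hour < 9:
        return "AM"
    if 9 <= hour < 15:
        return "Mid Day"
    if 15 <= hour < 19:
        return "PM"
    return "Night"
```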
Help us provide the most useful data by completing our ODP User Feedback Survey for School Nutrition Data.

About the Dataset: This dataset serves as source data for the Texas Department of Agriculture Food and Nutrition Meal Served Dashboard. Data is based on the School Nutrition Program (SNP) Meal Reimbursement, Seamless Summer Option (SSO) Meal Count, and Summer Food Service Program (SFSP) Meal Count datasets currently published on the Texas Open Data Portal. For the purposes of dashboard reporting, the school year for summer meal programs is defined as March 2020 through May 2020. School Nutrition Program meals are reported by program year, which runs July 1 through June 30. In March 2020, USDA began allowing flexibility in nutrition assistance program policies in order to support continued meal access should the coronavirus pandemic (COVID-19) impact meal service operations. Flexibilities were extended into the 2021-2022 program year and allowed School Nutrition Programs to operate the Seamless Summer Option through the 2021-2022 school year. For more information on the policies implemented for this purpose, please visit our website at SquareMeals.org. An overview of all SNP data available on the Texas Open Data Portal can be found on our TDA Data Overview - School Nutrition Programs page. An overview of all TDA Food and Nutrition data available on the Texas Open Data Portal can be found on our TDA Data Overview - Food and Nutrition Open Data page. More information about accessing and working with TDA data on the Texas Open Data Portal can be found on the SquareMeals.org website on the TDA Food and Nutrition Open Data page.

About Dataset Updates: TDA aims to update this dataset by the 15th of the month until 60 days after the close of the program year.

About the Agency: The Texas Department of Agriculture administers 12 U.S. Department of Agriculture nutrition programs in Texas, including the National School Lunch and School Breakfast Programs, the Child and Adult Care Food Program (CACFP), and summer meal programs. TDA’s Food and Nutrition division provides technical assistance and training resources to partners operating the programs and oversees the USDA reimbursements they receive to cover part of the cost associated with serving food in their facilities. By working to ensure these partners serve nutritious meals and snacks, the division adheres to its mission: Feeding the Hungry and Promoting Healthy Lifestyles. For more information on these programs, please visit us at SquareMeals.org.
Introduction

The data set is based on 3,004 images collected by the Pancam instruments mounted on the Opportunity and Spirit rovers from NASA's Mars Exploration Rovers (MER) mission. We used rotation, skewing, and shearing augmentation methods to increase the total collection to 70,864 (see the Augmentation section for more information). Based on the MER Data Catalog User Survey [1], we identified 25 classes of both scientific (e.g., soil trench, float rocks) and engineering (e.g., rover deck, Pancam calibration target) interest (see the Classes section for more information). The 3,004 images were labeled on the Zooniverse platform, and each image may be assigned multiple labels. The images are either 512 x 512 or 1024 x 1024 pixels in size (see the Image Sampling section for more information).

Classes

There is a total of 25 classes in this data set. The list below gives class names, counts, and percentages (percentages are computed as count divided by 3,004). Note that the counts do not sum to 3,004 and the percentages do not sum to 1.0 because each image may be assigned more than one class.

Class name, count, percentage of dataset:
Rover Deck, 222, 7.39%
Pancam Calibration Target, 14, 0.47%
Arm Hardware, 4, 0.13%
Other Hardware, 116, 3.86%
Rover Tracks, 301, 10.02%
Soil Trench, 34, 1.13%
RAT Brushed Target, 17, 0.57%
RAT Hole, 30, 1.00%
Rock Outcrop, 1915, 63.75%
Float Rocks, 860, 28.63%
Clasts, 1676, 55.79%
Rocks (misc), 249, 8.29%
Bright Soil, 122, 4.06%
Dunes/Ripples, 1000, 33.29%
Rock (Linear Features), 943, 31.39%
Rock (Round Features), 219, 7.29%
Soil, 2891, 96.24%
Astronomy, 12, 0.40%
Spherules, 868, 28.89%
Distant Vista, 903, 30.23%
Sky, 954, 31.76%
Close-up Rock, 23, 0.77%
Nearby Surface, 2006, 66.78%
Rover Parts, 301, 10.02%
Artifacts, 28, 0.93%

Image Sampling

Images in the MER rover Pancam archive range in size from 64x64 to 1024x1024 pixels.
The largest size, 1024x1024, was by far the most common in the archive. For the deep learning dataset, we elected to sample only 1024x1024 and 512x512 images, as the higher resolution would be beneficial for feature extraction. To ensure that the data set is representative of the total image archive of 4.3 million images, we sampled via "site code". Each Pancam image has a corresponding two-digit alphanumeric "site code" used to track location throughout the mission. Since each site code corresponds to a different general location, sampling a fixed proportion of images from each site ensures that the data set contains some images from each location. In this way, a model performing well on this dataset should generalize well to the unlabeled archive data as a whole. We randomly sampled 20% of the images at each site within the subset of Pancam data fitting all other image criteria, applying a floor function to non-whole-number sample sizes, resulting in a dataset of 3,004 images.

Train/validation/test split

The 3,004 images were split into train, validation, and test data sets so that roughly 60, 15, and 25 percent of the images, respectively, would end up in each set, while ensuring that images from a given site are not split between train/validation/test data sets. This resulted in 1,806 train images, 456 validation images, and 742 test images.

Augmentation

To augment the images in the train and validation data sets (images in the test data set were not augmented), three augmentation methods were chosen that best represent transformations realistically seen in Pancam images: rotation, skew, and shear. The augmentation methods were applied with random magnitude, followed by random horizontal flipping, to create 30 augmented images for each image.
Since each transformation is followed by a square crop to keep the input shape consistent, we constrained the magnitude limits of each augmentation to avoid cropping out important features at the edges of input images. Thus, rotations were limited to 15 degrees in either direction, the 3-dimensional skew was limited to 45 degrees in any direction, and shearing was limited to 10 degrees in either direction. Note that augmentation was done only on training and validation images.

Directory Contents
images: contains all 70,864 images
train-set-v1.1.0.txt: label file for the training data set
val-set-v1.1.0.txt: label file for the validation data set
test-set-v1.1.0.txt: label file for the testing data set

Images with relatively short file names (e.g., 1p128287181mrd0000p2303l2m1.img.jpg) are original images, and images with long file names (e.g., 1p128287181mrd0000p2303l2m1.img.jpg_04140167-5781-49bd-a913-6d4d0a61dab1.jpg) are augmented images. The label files are formatted as "Image name, Class1, Class2, ..., ClassN".

Reference
[1] S.B. Cole, J.C. Aubele, B.A. Cohen, S.M. Milkovich, and S.A...
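The per-site sampling described above (a fixed 20% of the images at each site code, flooring non-whole sample sizes) can be sketched as follows; the image names and site codes here are illustrative.

```python
import math
import random
from collections import defaultdict

def sample_by_site(image_sites, fraction=0.20, seed=42):
    """Sample a fixed fraction of images from each site code, applying a
    floor to non-whole sample sizes. image_sites maps image name ->
    two-character site code (both illustrative here)."""
    by_site = defaultdict(list)
    for image, site in image_sites.items():
        by_site[site].append(image)
    rng = random.Random(seed)
    sampled = []
    for site, images in sorted(by_site.items()):
        k = math.floor(len(images) * fraction)
        sampled.extend(rng.sample(sorted(images), k))
    return sampled

# 10 images at site "A1" and 7 at site "B2":
# floor(10 * 0.2) = 2 and floor(7 * 0.2) = 1 images sampled, respectively.
demo = {f"imgA{i}": "A1" for i in range(10)}
demo.update({f"imgB{i}": "B2" for i in range(7)})
picked = sample_by_site(demo)
```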
Attribution 4.0 (CC BY 4.0) https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
LAR.i Laboratory - Université du Québec à Chicoutimi (UQAC), 2021-08-24

Name: Image dataset of various soil types in an urban city

Published journal paper: Gensytskyy, O., Nandi, P., Otis, M.J.-D., et al. Soil friction coefficient estimation using CNN included in an assistive system for walking in urban areas. J Ambient Intell Human Comput 14, 14291-14307 (2023). https://doi.org/10.1007/s12652-023-04667-w

This dataset contains images of various types of soils and was used for the project "An assistive system for walking in urban areas". The images were taken using a smartphone camera in a vertical orientation and are high quality. The files are named with two characters, the first letter and last letter of the class name, followed by their number.

Capture location: City of Saguenay, Quebec, Canada
Class count: 8
Total number of images: 493

Classes and number of images per class:
Asphalt (89)
Concrete (80)
Epoxy_coated_interior (34)
Grass (90)
Gravel (58)
Scrattered_snow (40)
Snow (68)
Wood (34)
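The file-naming rule (first and last letter of the class name, followed by a number) can be sketched as below; the lowercasing is an assumption, as the description does not state the case used.

```python
def file_prefix(class_name: str) -> str:
    """First and last letter of the class name, per the naming rule above.
    Lowercasing is an assumption not stated in the source."""
    return (class_name[0] + class_name[-1]).lower()

# Prefixes for a few of the listed classes, e.g. "Asphalt" -> files "at1", "at2", ...
prefixes = {c: file_prefix(c) for c in ("Asphalt", "Concrete", "Grass", "Wood")}
```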