A series of convenience functions that make basic image processing operations such as translation, rotation, resizing, skeletonization, displaying Matplotlib images, sorting contours, and edge detection easier with OpenCV, on both Python 2.7 and Python 3.
Add this library as a dataset to your Kaggle notebook and use the code below:
import sys
sys.path.append('../input/imutils-054/imutils-0.5.4')
import imutils
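As a quick sanity check, a minimal sketch using two of the library's helpers (the image path is a placeholder, not part of this dataset):

import cv2
import imutils

# Load an image, resize it to a fixed width (aspect ratio preserved),
# then rotate it; both helpers are part of the imutils API.
image = cv2.imread('example.jpg')  # placeholder path
resized = imutils.resize(image, width=300)
rotated = imutils.rotate(resized, angle=45)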
Adrian Rosebrock
The library was taken from pip (imutils 0.5.4).
The cover image was taken from Unsplash.
Attribution-ShareAlike 4.0 (CC BY-SA 4.0): https://creativecommons.org/licenses/by-sa/4.0/
License information was derived automatically
This dataset was exported via roboflow.com on June 7, 2023 at 11:52 AM GMT
Roboflow is an end-to-end computer vision platform that helps you:
* collaborate with your team on computer vision projects
* collect & organize images
* understand and search unstructured image data
* annotate, and create datasets
* export, train, and deploy computer vision models
* use active learning to improve your dataset over time
For state of the art Computer Vision training notebooks you can use with this dataset, visit https://github.com/roboflow/notebooks
To find over 100k other datasets and pre-trained models, visit https://universe.roboflow.com
The dataset includes 4142 images. Trash-detection annotations are provided in COCO format.
The following pre-processing was applied to each image:
* Auto-orientation of pixel data (with EXIF-orientation stripping)
* Resize to 640x640 (Stretch)
The following augmentation was applied to create 3 versions of each source image:
* 50% probability of horizontal flip
* Random rotation of between -45 and +45 degrees
* Random brightness adjustment of between -20 and 0 percent
* Random Gaussian blur of between 0 and 2 pixels
The following transformations were applied to the bounding boxes of each image:
* Random exposure adjustment of between -32 and +32 percent
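For illustration, a rough OpenCV/NumPy sketch of comparable image augmentations (not Roboflow's actual implementation; the image path is a placeholder):

import random
import cv2
import numpy as np

img = cv2.imread('example.jpg')  # placeholder path

# 50% probability of horizontal flip
if random.random() < 0.5:
    img = cv2.flip(img, 1)

# random rotation between -45 and +45 degrees
h, w = img.shape[:2]
M = cv2.getRotationMatrix2D((w / 2, h / 2), random.uniform(-45, 45), 1.0)
img = cv2.warpAffine(img, M, (w, h))

# random brightness adjustment between -20 and 0 percent
img = np.clip(img.astype(np.float32) * (1.0 + random.uniform(-0.20, 0.0)), 0, 255).astype(np.uint8)

# random Gaussian blur with sigma between 0 and 2 pixels
sigma = random.uniform(0, 2)
if sigma > 0:
    img = cv2.GaussianBlur(img, (0, 0), sigma)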
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Vehicle Detection Dataset
This dataset is designed for vehicle detection tasks, featuring a comprehensive collection of images annotated for object detection. This dataset, originally sourced from Roboflow (https://universe.roboflow.com/object-detection-sn8ac/ai-traffic-system), was exported on May 29, 2025, at 4:59 PM GMT and is now publicly available on Kaggle under the CC BY 4.0 license.
The dataset is organized into ../train/images, ../valid/images, and ../test/images. It was created and exported via Roboflow, an end-to-end computer vision platform that facilitates collaboration, image collection, annotation, dataset creation, model training, and deployment. The dataset is part of the ai-traffic-system project (version 1) under the workspace object-detection-sn8ac. For more details, visit: https://universe.roboflow.com/object-detection-sn8ac/ai-traffic-system/dataset/1.
This dataset is ideal for researchers, data scientists, and developers working on vehicle detection and traffic monitoring systems. It can be used to:
- Train and evaluate deep learning models for object detection, particularly using the YOLOv11 framework (see the sketch below).
- Develop AI-powered traffic management systems, autonomous driving applications, or urban mobility solutions.
- Explore computer vision techniques for real-world traffic scenarios.
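As a hedged starting point, training with the Ultralytics package might look like the sketch below (the checkpoint name and the data.yaml path are assumptions, not part of this dataset):

from ultralytics import YOLO

# Start from a small pretrained YOLO11 checkpoint and fine-tune on this dataset.
model = YOLO('yolo11n.pt')  # assumed checkpoint name
model.train(data='data.yaml', epochs=50, imgsz=640)  # data.yaml must point at the train/valid/test folders
metrics = model.val()  # evaluate on the validation split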
For advanced training notebooks compatible with this dataset, check out: https://github.com/roboflow/notebooks. To explore additional datasets and pre-trained models, visit: https://universe.roboflow.com.
The dataset is licensed under CC BY 4.0, allowing for flexible use, sharing, and adaptation, provided appropriate credit is given to the original source.
This dataset is a valuable resource for building robust vehicle detection models and advancing computer vision applications in traffic systems.
CC0 1.0 Public Domain Dedication: https://creativecommons.org/publicdomain/zero/1.0/
Dataset Title: Exploring Mars: A Comprehensive Dataset of Rover Photos and Metadata
Description
This dataset provides an extensive collection of Mars rover images paired with in-depth metadata. Sourced from various Mars missions, this dataset is a treasure trove for anyone interested in space exploration, planetary science, or computer vision.
Components:
Dataset Origin
The dataset was compiled from various Mars missions conducted over the years. Special care has been taken to include a diverse set of images to enable a wide range of analyses and applications.
Objective
As a learner delving into the field of Computer Vision, my objectives for this project are multi-fold:
Research Questions
Tools and Technologies
I plan to utilize Python for this project, particularly libraries like OpenCV for image processing, Pandas for data manipulation, and Matplotlib/Seaborn for data visualization. For machine learning tasks, I will likely use scikit-learn or TensorFlow.
Learning and Development
This project serves as both a learning exercise and a stepping stone toward more complex computer vision projects. I aim to document my learning journey, challenges, and milestones in a series of Kaggle notebooks.
Collaboration and Feedback
I warmly invite the Kaggle community to offer suggestions, critiques, or even collaborate on this venture. Your insights could be invaluable in enhancing the depth and breadth of this project.
Files containing over 240 images related to civil construction, plus the files and code used to assess risks to workers using ChatGPT-4o. Attention! These images should only be used for academic studies; the sources of the images and their respective authors must be maintained.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
This dataset, the Human Bone Fractures Multi-modal Image Dataset (HBFMID), is a collection of medical images (X-ray and MRI) focused on detecting bone fractures in various parts of the human body. It's designed to support research in computer vision and deep learning for medical applications.
The HBFMID dataset contains a total of 1539 images of human bones, including both X-ray and MRI modalities. The dataset covers a wide range of bone locations, such as:
The initial dataset consisted of 641 raw images (510 X-ray and 131 MRI). This raw data was then divided into three subsets:
The images were carefully annotated to label the presence and location of fractures.
The following pre-processing steps were applied to the images:
To increase the dataset size and improve the robustness of machine learning models, various augmentation techniques were applied to the training set, resulting in approximately 1347 training images (449 x 3). The augmentation techniques included:
This dataset was exported using Roboflow, an end-to-end computer vision platform that facilitates:
For state-of-the-art Computer Vision training notebooks compatible with this dataset, visit https://github.com/roboflow/notebooks.
Explore over 100,000 other datasets and pre-trained models on https://universe.roboflow.com.
Fractured bones in this dataset are annotated in YOLOv8 format, which is widely used for object detection tasks.
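For reference, a YOLO-format label file holds one line per fracture; a minimal parser, following the general YOLO convention (not verified against this specific dataset), could be:

# Each line: <class_id> <x_center> <y_center> <width> <height>, normalized to [0, 1].
def read_yolo_labels(path):
    boxes = []
    with open(path) as f:
        for line in f:
            cls, xc, yc, w, h = line.split()
            boxes.append((int(cls), float(xc), float(yc), float(w), float(h)))
    return boxes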
Computer Vision, Medical Imaging, Deep Learning
This dataset is released under the Creative Commons Attribution 4.0 International License (CC BY 4.0). This means you are free to share and adapt the material for any purpose, even commercially, as long as you give appropriate credit, provide a link to the license, and indicate if changes were made.
Parvin, Shahnaj (2024), "Human Bone Fractures Multi-modal Image Dataset (HBFMID)", Mendeley Data, V1, doi: 10.17632/xwfs6xbk47.1
American International University Bangladesh
This dataset contains cell segmentations for the Human Protein Atlas - Single Cell Classification dataset.
The segmentations were done using code from this notebook by Raman.
The other columns in the dataframe were taken from these two datasets: HPA - Processed Train Dataframe With Cell-Wise RLE and HPA - Test Dataframe With Cell-Wise RLE.
Note that this dataset does not contain the cell masks or RLEs.
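For context, the cell-wise RLEs in those source dataframes can be decoded with a sketch like the one below, assuming the standard Kaggle column-major RLE convention:

import numpy as np

def rle_decode(rle, shape):
    # Decode a run-length-encoded string ('start length start length ...')
    # into a binary mask; starts are 1-indexed per the Kaggle convention.
    s = rle.split()
    starts = np.asarray(s[0::2], dtype=int) - 1
    lengths = np.asarray(s[1::2], dtype=int)
    mask = np.zeros(shape[0] * shape[1], dtype=np.uint8)
    for start, length in zip(starts, lengths):
        mask[start:start + length] = 1
    return mask.reshape(shape, order='F')  # column-major layout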
The PANDA dataset is a collection of microscopy scans from the PANDA challenge that aimed to classify the severity of prostate cancer. The official competition page can be found here. The PANDA dataset has microscopy scans from two different institutions (Radboud and Karolinska). This dataset can be used to explore how data augmentation, using neural style transfer, can help with classification and balance the variance between images processed in different ways.
The all_images directory contains the tiles from the panda_tiles dataset but aggregated together. Instead of having 12 different images for one scan, the tiles are combined into a single image.
The radboud_aug directory contains some of the Radboud images, but augmented using neural style transfer. The notebook used to create the Radboud images can be accessed here. The train.csv is the same csv file provided by the PANDA challenge, but pruned to only include the data that tiles existed for.
You work as a social media moderator for your firm. Your key responsibility is to tag uploaded content (images) during Pride Month based on its sentiment (positive, negative, or random) and categorize them for internal reference and SEO optimization.
Task: Your task is to build an engine that combines the concepts of OCR and NLP. It accepts a .jpg file as input, extracts the text, if any, and classifies the sentiment as positive or negative. If the text sentiment is neutral, or the image file does not contain any text, it is classified as random.
Data: You must use an external dataset to train your model. The attached dataset link contains sample data for each category [Positive | Negative | Random] and test data.
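One possible skeleton for such an engine, assuming pytesseract for OCR and NLTK's VADER for sentiment (the library choices and score thresholds are assumptions, not requirements of the task):

import pytesseract
from PIL import Image
from nltk.sentiment import SentimentIntensityAnalyzer  # requires nltk.download('vader_lexicon')

def classify(path):
    # Extract any text from the image; no text means the image is 'random'.
    text = pytesseract.image_to_string(Image.open(path)).strip()
    if not text:
        return 'random'
    # VADER compound score; cutoffs of +/-0.05 are a common, assumed default.
    score = SentimentIntensityAnalyzer().polarity_scores(text)['compound']
    if score > 0.05:
        return 'positive'
    if score < -0.05:
        return 'negative'
    return 'random'  # neutral text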
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Efficient detection of craters can be of vital significance in various space exploration missions. Previous research has already made significant progress on this task; however, the versatility and robustness of existing methods are still limited. While modern deep-learning object detection methods are gaining popularity and are probably a solution to the aforementioned problems, publicly accessible training data is hard to find. This is the primary reason we propose this dataset.
The dataset mainly contains:
* Image Data: Images of the Mars and Moon surface which MAY contain craters. The data sources are mixed: Mars images are mainly from ASU and USGS; currently all Moon images are from the NASA Lunar Reconnaissance Orbiter mission. All images are preprocessed with Roboflow to remove EXIF rotation and resized to 640x640.
* Labels: Each image has an associated label file in YOLOv5 text format. The annotation work was performed by ourselves and mainly serves the purpose of object detection.
* Trained YOLOv5 model file: For each new version, we will upload our pretrained YOLOv5 model file using the latest version of the data. The network structure currently in use is YOLOv5m6.
This dataset is somewhat challenging compared to a trivial object detection task:
* Craters can vary greatly in size.
* The dataset combines Mars and Moon surface images, where craters can differ in shape, color, etc.
* Currently only around 100 images are available for training (if a train-test-valid split is performed). However, please note that more images will be added in the future.
In our own training with the YOLOv5 framework using the YOLOv5m6 pretrained model, we achieved a mAP_0.5 score of 0.6919. A sample notebook explaining the procedure is available in the Code section. Below are two sample detection results using our trained model (neither was used in the training process).
[Image: Mars surface detection sample]
[Image: Moon surface detection sample]
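To reproduce detections like the samples above, inference with the released weights might look like this sketch (the weights and image paths are placeholders):

import torch

# Load custom YOLOv5 weights through the official hub entry point.
model = torch.hub.load('ultralytics/yolov5', 'custom', path='best.pt')  # placeholder weights path
results = model('surface_image.jpg')  # placeholder image path
results.print()  # summary of detected craters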
This dataset is also available on the RoboFlow platform.
This dataset is a mixture of various data sources, and we would like to thank each individual who participated. A detailed list of data sources will be available soon.
CC0 1.0 Public Domain Dedication: https://creativecommons.org/publicdomain/zero/1.0/
Data collection is perhaps the most crucial part of any machine learning model: without it being done properly, not enough information is present for the model to learn from the patterns leading to one output or another. Data collection is however a very complex endeavor, time-consuming due to the volume of data that needs to be acquired and annotated. Annotation is an especially problematic step, due to its difficulty, length, and vulnerability to human error and inaccuracies when annotating complex data.
With high processing power becoming ever more accessible, synthetic dataset generation is becoming a viable option when looking to generate large volumes of accurately annotated data. With the help of photorealistic renderers, it is for example possible now to generate immense amounts of data, annotated with pixel-perfect precision and whose content is virtually indistinguishable from real-world pictures.
As an exercise in synthetic dataset generation, the data offered here was generated using the Python API of Blender, with the images rendered through the Cycles ray tracer. It consists of plausible pictures of a chessboard and pieces. The goal is, from those pictures and their annotations, to build a model capable of recognizing the pieces, as well as their positions on the board.
The dataset contains a large number of synthetic, randomly generated images of a chessboard and its pieces, taken at an angle overlooking the board. Each image is associated with a .json file containing its annotations. The naming convention is that each render is associated with a number X, and the image and annotations for that render are respectively named X.jpg and X.json.
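Given that convention, pairing each render with its annotations is straightforward; a small sketch (the directory name is a placeholder):

import json
from pathlib import Path
from PIL import Image

root = Path('renders')  # placeholder directory
for img_path in sorted(root.glob('*.jpg')):
    image = Image.open(img_path)
    with open(img_path.with_suffix('.json')) as f:
        annotation = json.load(f)  # annotations for render X live in X.json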
The data has been generated using the Python scripts and .blend file present in this repository. The chess board and pieces models that have been used for those renders are not provided with the code.
Data characteristics:
No distinction has been hard-built between training, validation, and testing data; the split is left completely up to the users. A pipeline for the extraction, recognition, and placement of chess pieces is proposed in a notebook added with this dataset.
I would like to express my gratitude for the efforts of the Blender Foundation and all its participants, for their incredible open-source tool which once again has allowed me to conduct interesting projects with great ease.
Two interesting papers on the generation and use of synthetic data, which have inspired me to conduct this project:
Erroll Wood, Tadas Baltrušaitis, Charlie Hewitt (2021). Fake It Till You Make It: Face Analysis in the Wild Using Synthetic Data Alone. https://arxiv.org/abs/2109.15102
Salehe Erfanian Ebadi, You-Cyuan Jhang, Alex Zook (2021). PeopleSansPeople: A Synthetic Data Generator for Human-Centric Computer Vision. https://arxiv.org/abs/2112.09290
WildlifeReID-10k is a wildlife re-identification dataset with more than 140k images of 10k individual animals. It is a collection of 37 existing wildlife re-identification datasets with additional processing steps. WildlifeReID-10k contains diverse animals such as marine turtles, primates, birds, African herbivores, marine mammals, and domestic animals. We provide a Jupyter notebook with an introduction to the dataset, a way to evaluate developed algorithms, and a baseline performance. WildlifeReID-10k has two primary uses:
- Design an algorithm to classify individual animals in images. This is a much more complicated task (with 10k fine-grained classes) and the intended use of the dataset.
- Design an algorithm to classify species of animals. This is a simpler task (with 20 coarse-grained classes) requiring fewer resources. It is intended for researchers or the interested public who want to develop their first methods on an interesting dataset.
WildlifeReID-10k was created by the Python library wildlife-datasets by combining 37 existing wildlife re-identification datasets. We claim no rights to these datasets besides SeaTurtleID2022. When publishing results on WildlifeReID-10k, all individual datasets should be attributed (see below). An accompanying paper containing a better dataset description is currently under review.
The user of this dataset must follow the provided license file. In particular, it prohibits commercial applications and re-uploading. Moreover, this work and all the constituent datasets must be properly attributed. We provide the PDF file and the LaTeX files for attribution in a separate citation folder. The license files of the individual datasets must be followed. For simplicity, we provide the license files of the individual datasets:
- CC BY 4.0: AAUZebraFish, CatIndividualImages, Chicks4FreeID, CowDataset, DogFaceNet, MPDD, PolarBearVidID, SealID;
- CC BY-NC-SA 4.0: ATRW, NDD20;
- CC BY-SA 3.0: StripeSpotter;
- CC BY-SA 4.0: ZindiTurtleRecall;
- CDLA-Permissive-1.0: BelugaID, GiraffeZebraID, HyenaID2022, LeopardID2022, SeaStarReID2023, WhaleSharkID;
- MIT: SMALST;
- NC-Government: AerialCattle2017, Cows2021, FriesianCattle2015, FriesianCattle2017, MultiCamCows2024, OpenCows2020;
- None: BirdIndividualID, Giraffes, IPanda50, NyalaData;
- Other: AmvrakikosTurtles, CTai, CZoo, PrimFace, [ReunionTurtles](https://www.kaggle.com/datasets/wildlifedatasets/reu...
GNU General Public License 2.0: http://www.gnu.org/licenses/old-licenses/gpl-2.0.en.html
You are working for a company ABC which has decided to automate its attendance capturing system using only a picture of the employee (a selfie). However, company ABC has only one or two passport-size photographs of each employee in the system for reference. Your task is to design this image-based attendance capturing system using any machine learning or computer vision approach. The system has to match an employee's selfie with the passport-size photograph and output either a match or no match. Make sure to include your own selfies and passport-size picture in the training data. Note that the images with 'script' in their name are the passport-size (reference) images present in the company's system.
Deliverables:
- IPython notebook with all the analysis and training code
- The trained model file
- Python script that takes 2 images as input arguments (one selfie, one passport-size picture) and gives the output as match or no match along with a confidence score
- All other files necessary for executing the above script
Evaluation metric: Your solution will be evaluated on the hidden test data in terms of:
- Accuracy for positive matches, negative matches, and overall performance (weightage 70%)
- Code quality (10%)
- Processing time (10%)
- Any innovative approach (10%)
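One possible approach, sketched with the face_recognition library (one option among many; the tolerance value and the distance-based confidence are assumptions):

import face_recognition

def match(selfie_path, passport_path, tolerance=0.6):
    # Encode the first face found in each image.
    selfie = face_recognition.face_encodings(face_recognition.load_image_file(selfie_path))
    ref = face_recognition.face_encodings(face_recognition.load_image_file(passport_path))
    if not selfie or not ref:
        return 'no match', 0.0  # no face detected in one of the images
    distance = face_recognition.face_distance(ref, selfie[0])[0]
    confidence = max(0.0, 1.0 - distance)  # crude confidence from embedding distance
    return ('match' if distance <= tolerance else 'no match'), confidence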
MIT License: https://opensource.org/licenses/MIT
License information was derived automatically
This dataset is a large-scale, analysis-ready collection of 66,701 plant disease images, ready for immediate use in training a high-accuracy classification model.
It was created by aggregating, cleaning, and processing 7 different public datasets (including sources from India like the Multicrop and Sugarcane datasets) into one unified collection. All the hard work of data cleaning, deduplication, and class mapping has already been done.
This dataset is ideal for anyone who wants to train a robust plant disease classifier without spending weeks on data preparation.
This dataset contains three zip files, providing everything you need to start training:
Contains the complete set of 66,701 clean, deduplicated, and renamed images in a single master_images/images folder.
Contains the pre-made, stratified data splits.
train_split.csv: A manifest of 53,360 training images and their final class IDs.
val_split.csv: A manifest of 6,670 validation images and their class IDs.
test_split.csv: A manifest of 6,671 test images and their class IDs.
Contains the "master key" for the dataset.
class_id_map.csv: The most important file. This map links the final class_id (e.g., 42) used in the split files to its human-readable "master name" (e.g., "Potato - Early Blight").
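Loading the splits and resolving class names is then a few lines of pandas (file names as described above; the exact paths inside your notebook are assumptions to adjust):

import pandas as pd

# Load the three manifests and the master class map.
splits = {name: pd.read_csv(f'{name}_split.csv') for name in ('train', 'val', 'test')}
id_map = pd.read_csv('class_id_map.csv')
print(splits['train'].head())
print(id_map.head())  # maps class_id to its human-readable master name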
This master dataset was created by merging the following 7 public sources. All credit goes to the original creators.
Source: https://www.kaggle.com/competitions/paddy-disease-classification
Content: Images of common paddy (rice) diseases.
Source: https://data.mendeley.com/datasets/6243z8r6t6
Content: Collected in Tamil Nadu, India. Includes Banana, Chilli, Radish, Groundnut, and Cauliflower.
Source: https://data.mendeley.com/datasets/9424skmnrk/1
Content: Collected in Maharashtra, India. Includes 5 categories of sugarcane disease.
Source: https://www.kaggle.com/datasets/andytingzhiwei/black-gram-plant-leaf-disease
Content: Images of blackgram (Vigna mungo) leaf diseases.
Source: https://github.com/pratikkayal/PlantDoc-Dataset
Content: A large dataset of various crop diseases with 29 classes.
Source: https://www.kaggle.com/datasets/emmarex/plantdisease
Content: A popular benchmark dataset containing numerous crops and diseases.
Source: https://zenodo.org/records/13762907
Content: A large, in-the-wild dataset with 115 different plant diseases and segmentation masks.
This dataset was built by performing a rigorous 8-step pipeline on these 7 sources:
Scanning & Normalizing: Read 3 different data formats (folder-based, CSV-based, and YOLO-based) from the 7 sources.
Aggregating: Merged all 70,457 images into a single master list.
Deduplicating: Used perceptual hashing (imagehash.phash) to find and remove 3,504 duplicate images (see the sketch after this list).
Canonicalizing: Merged messy, duplicate labels (e.g., "Potato_Late_blight") into 332 unique "canonical classes".
Filtering & Balancing: Removed 51 rare classes with fewer than 10 images to ensure stable 80/10/10 splitting.
Finalizing: The final set of 66,701 unique images across 281 classes was saved.
Splitting: A stratified 80/10/10 split was performed and saved as train_split.csv, val_split.csv, and test_split.csv.
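A minimal sketch of the perceptual-hash deduplication from step 3 (the directory path and file extension are placeholders):

from pathlib import Path
import imagehash
from PIL import Image

seen, duplicates = {}, []
for path in sorted(Path('master_images/images').glob('*.jpg')):  # extension assumed
    h = imagehash.phash(Image.open(path))  # perceptual hash is robust to small pixel changes
    if h in seen:
        duplicates.append(path)  # visually identical to seen[h]
    else:
        seen[h] = path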
You can start training in 3 minutes:
Create a new Kaggle Notebook (T4 GPU recommended).
Add this dataset as an input.
!unzip -q /kaggle/input/your-dataset-name/master_images.zip -d /kaggle/working/
!unzip -q /kaggle/input/your-dataset-name/metadata.zip -d /kaggle/working/
!unzip -q /kaggle/input/your-dataset-name/outputs.zip -d /kaggle/working/
CC0 1.0 Public Domain Dedication: https://creativecommons.org/publicdomain/zero/1.0/
By Huggingface Hub [source]
AI4Math is an invaluable resource for those seeking to advance research in tools for mathematical question answering. With a total of 9,800 questions from IQ tests, FunctionQA tasks, and PaperQA presentations, the dataset provides a comprehensive collection of questions with valuable annotations. These include the text of the question, a related image (plus a decoded version of the image), answer choices where relevant to aid accuracy measurement, and predetermined answer types along with metadata that can provide additional insight into certain cases. Using this dataset, researchers can target different areas within mathematical question answering relative to their respective goals, be it IQ tests or natural-language-based function computation, while assessing progress through answer accuracy. AI4Math is transforming how mathematics can be applied to machine learning applications, one step at a time.
- Before you get started with this dataset, it is important to familiarize yourself with the columns: question, image, decoded_image, choices, unit, precision, question_type, answer_type, metadata, and query.
- It is advisable to read through the data dictionary provided in order to understand which columns are given in the dataset and what type of data each column contains (for example, 'question' holds the text of the question).
- Once you understand what information is contained within each column, it's time to start exploring! Use a visual exploration tool such as Tableau or Dataiku DSS before doing any in-depth analysis or machine learning processing. Visual exploration can reveal trends across different fields, which can be interesting even if they don't feed directly into later analysis or prediction tasks.
- You may also want to use a text analyzer such as the Google Natural Language API or Word2Vec to look for relationships between the words used in questions and answers; this can give more insight into the data and suggest ideas for future research.
- Always keep track of versioning when working on a large dataset; keeping multiple versions makes it easy to revert mistakes without losing completed analyses or models.
- After exploring the data, move on to the actual machine learning processing. Depending on the task, supervised or unsupervised algorithms, neural networks, and other approaches may apply; trying multiple solutions is a good idea, since some techniques work better than others for a given problem.
- Track the results of every experiment, recording the metrics obtained during both training and prediction.
- Evaluate models after every cycle to make sure their performance is stable; steady improvement is often a more reliable indicator of a valid model than a single accuracy figure.
- Once satisfied with the results, monitor performance continuously to check that everything still works correctly.
- To stay up to date with the technologies being used, consider subscribing to the mailing lists of the relevant software vendors.
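For a first look in pandas, a hedged sketch (the file name and format are assumptions; adjust them to the actual export of the dataset):

import pandas as pd

df = pd.read_csv('ai4math.csv')  # hypothetical file name and format
print(df[['question', 'choices', 'unit', 'precision', 'question_type', 'answer_type']].head())
print(df['question_type'].value_counts())  # distribution over IQ test / FunctionQA / PaperQA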
- Using the metadata and question columns to develop algorithms that automatically generate questions for certain topics as defined by the user.
- Utilizing the image column to create a computer vision model for predicting and classifying similar images.
- Analyzing the content in both the choices and answer_type columns to extract underlying patterns in IQ tests, FunctionQA tasks, and PaperQA presentations.
If you use this dataset in your research, please credit the original authors.
Data Source
License: CC0 1.0 Universal (CC0 1.0) - Public Domain Dedication No Copyright - You can copy,...
MIT License: https://opensource.org/licenses/MIT
License information was derived automatically
This repository contains a comprehensive synthetic point cloud dataset featuring ground-truth skeletons for multiple plant species. The dataset is designed to support research and development in 3D computer vision, particularly in areas such as plant modeling, skeleton extraction, and species classification.
For optimal viewing and loading of point clouds and skeletons, we strongly recommend using our custom visualization library. This tool is designed specifically for this dataset and ensures the best user experience.
Library Location: https://github.com/uc-vision/synthetic-trees/tree/main
To help you get started with the dataset, we provide an example Jupyter notebook that demonstrates how to load, process, and visualize the data.
Notebook Link: View Synthetic Tree Data Example
Important Note: To fully utilize the 3D visualization capabilities, please download the notebook and run it locally on your machine.
This dataset is particularly useful for:
We welcome your questions, suggestions, and feedback to improve this dataset. Please feel free to reach out through Kaggle or the GitHub repository for any inquiries or contributions.
If you use this dataset in your research, please cite:
@inproceedings{dobbs2023smart,
title={Smart-Tree: Neural Medial Axis Approximation of Point Clouds for 3D Tree Skeletonization},
author={Dobbs, Harry and Batchelor, Oliver and Green, Richard and Atlas, James},
booktitle={Iberian Conference on Pattern Recognition and Image Analysis},
pages={351--362},
year={2023},
organization={Springer}
}
Thank you for your interest in our Synthetic Multi-Species Point Cloud Dataset. We look forward to seeing the innovative ways in which you utilize this data in your projects and research!