Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
medical imaging
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
These figures are the graphical results of my Master 2 internship on automatic segmentation using SAM2 (Segment Anything Model 2), an artificial intelligence model. The red line represents the best cell line from which anatomical measurements were made.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Dataset Title: Bugzz lightyears: To Semantic Segmentation and Bug-yond!
This dataset comprises a collection of real and robotic toy bugs designed for a small-scale semantic segmentation project. Each bug has been captured six times from various angles, ensuring comprehensive coverage of their features and details. The dataset serves as a valuable resource for exploring semantic segmentation techniques and evaluating machine learning models.
Abstract
The remarkable capabilities of the Segment Anything Model (SAM) for tackling image segmentation tasks in an intuitive and interactive manner have sparked interest in the design of effective visual prompts. Such interest has led to the creation of automated point prompt selection strategies, typically motivated from a feature extraction perspective. However, there is still very little understanding of how appropriate these automated visual prompting strategies are… See the full description on the dataset page: https://huggingface.co/datasets/gOLIVES/SAM_PointPrompt_Dataset.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
## Overview
Segment Person is a dataset for instance segmentation tasks - it contains Segment Person In Neutral Image annotations for 2,188 images.
## Getting Started
You can download this dataset for use within your own projects, or fork it into a workspace on Roboflow to create your own model.
## License
This dataset is available under the [CC BY 4.0 license](https://creativecommons.org/licenses/by/4.0/).
open-source-metrics/image-segmentation-checkpoint-downloads dataset hosted on Hugging Face and contributed by the HF Datasets community
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
## Overview
Segment T Joint is a dataset for instance segmentation tasks - it contains SegmentedTJoint annotations for 928 images.
## Getting Started
You can download this dataset for use within your own projects, or fork it into a workspace on Roboflow to create your own model.
## License
This dataset is available under the [CC BY 4.0 license](https://creativecommons.org/licenses/by/4.0/).
The paper introduces an online method, named SAM-PD, that applies SAM to track and segment objects throughout the video.
CC0 1.0 Universal Public Domain Dedication: https://creativecommons.org/publicdomain/zero/1.0/
License information was derived automatically
Discover the Remote Sensing Object Segmentation Dataset, perfect for GIS, AI-driven environmental studies, and satellite image analysis.
CC0 1.0 Universal Public Domain Dedication: https://creativecommons.org/publicdomain/zero/1.0/
License information was derived automatically
The Obvious Objects Segmentation Dataset is a distinctive collection drawn from media and visual entertainment, comprising images gathered from across the internet at a uniform resolution of 1536 x 2048 pixels. This dataset is dedicated to the segmentation of salient objects that are immediately visible and draw the eye in the images, using both semantic and contour segmentation approaches to delineate these objects at the pixel level.
This statistic gives a breakdown of the global Internet of Things enabled sensors market in 2022, by segment. In 2022, motion sensors are expected to account for **** percent of the global IoT enabled sensors market. Total revenue generated by the enabled sensors market is estimated to reach ** billion U.S. dollars in 2022.
The Internet of Things enabled sensors market
Advances in the field of sensor technology continue to trigger the evolution of innovative consumer and industrial products. Without sensors, most things connected to the Internet of Things (IoT) today would lose much of their functionality. A thing can range from a heart-monitoring implant to a DNA analysis device for food monitoring to a built-in sensor in an automobile. The IoT approach of sending data to the cloud for analysis, where it is distilled and interpreted before the high-value information is delivered back to the device, has allowed society to make more efficient and accurate decisions, not only in people's daily lives but also in business environments. The global IoT market is expected to grow almost three-fold between 2014 and 2019 and to exceed one trillion U.S. dollars in 2017. By 2019, the market is forecast to have an estimated size of more than *** trillion U.S. dollars. With a vast array of applications, the Internet of Things has seen a consistently growing number of connected devices worldwide. By 2022, it is predicted that ** billion devices will be connected to the IoT around the globe. This technology is slated to have numerous applications, predominantly in the fields of consumer electronics, industrial manufacturing, automotive, and life sciences. By 2022, temperature sensors are expected to account for **** percent of the global IoT enabled sensors market.
https://choosealicense.com/licenses/other/
Dataset Description
This dataset contains the 444 images that we used for training our model: https://huggingface.co/SPRIGHT-T2I/spright-t2i-sd2. It holds the samples of this subset related to the Segment Anything images. We will release the LAION images when the parent images are made public again. Our training and validation sets are subsets of the SPRIGHT dataset and consist of 444 and 50 images respectively, randomly sampled in a 50:50 split between LAION-Aesthetics and… See the full description on the dataset page: https://huggingface.co/datasets/SPRIGHT-T2I/18_obj_444.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
## Overview
Lane Segment V3 is a dataset for semantic segmentation tasks - it contains Lane annotations for 1,057 images.
## Getting Started
You can download this dataset for use within your own projects, or fork it into a workspace on Roboflow to create your own model.
## License
This dataset is available under the [CC BY 4.0 license](https://creativecommons.org/licenses/by/4.0/).
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Doodleverse/Segmentation Zoo Res-UNet models for identifying water in Sentinel-2 RGB images of coasts.
Based on SWED*** data
https://openmldata.ukho.gov.uk/
These Residual-UNet model data are based on images of coasts and associated labels. Models have been fitted to the following types of data:
1. RGB (3 band): red, green, blue
Classes are: {0: null, 1: water}.
These files are used in conjunction with Segmentation Zoo*
For each model, there are 3 files with the same root name:
1. '.json' config file: this is the file that was used by Segmentation Gym** to create the weights file. It contains instructions for how to make the model and the data it used, as well as instructions for how to use the model for prediction. It is a handy wee thing and mastering it means mastering the entire Doodleverse.
2. '.h5' weights file: this is the file that was created by the Segmentation Gym** function `train_model.py`. It contains the trained model's parameter weights. It can be called by the Segmentation Gym** function `seg_images_in_folder.py` or the Segmentation Zoo* function `select_model_and_batch_process_folder.py` to segment a folder of images.
3. '_modelcard.json' model card file: this is a json file containing fields that collectively describe the model origins, training choices, and dataset that the model is based upon. There is some redundancy between this file and the `config` file (described above) that contains the instructions for the model training and implementation. The model card file is not used by the program but is important metadata, so it should be kept with the other files that collectively make up the model, and as such is considered part of the model.
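Once a trained model has scored an image, its output is a per-pixel score for each of the two classes above. As a minimal sketch (assuming a standard softmax-style output of shape (H, W, 2), not the exact Gym output format), the integer label map {0: null, 1: water} is recovered by an argmax over the class axis:

```python
import numpy as np

def scores_to_labels(scores):
    """Collapse per-pixel class scores (H, W, n_classes) to a label map."""
    return np.argmax(scores, axis=-1).astype(np.uint8)

# Synthetic 2x2 output: class 0 (null) wins in the top row,
# class 1 (water) wins in the bottom row.
scores = np.array([[[0.9, 0.1], [0.8, 0.2]],
                   [[0.3, 0.7], [0.4, 0.6]]])
print(scores_to_labels(scores))
# [[0 0]
#  [1 1]]
```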
References
* https://github.com/Doodleverse/segmentation_zoo
** https://github.com/Doodleverse/segmentation_gym
*** https://www.sciencedirect.com/science/article/abs/pii/S0034425722001584
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Doodleverse/Segmentation Zoo/Seg2Map Res-UNet models for CoastTrain 5-class segmentation of RGB 768x768 NAIP images
These Residual-UNet model data are based on Coast Train images and associated labels. https://coasttrain.github.io/CoastTrain/docs/Version%201:%20March%202022/data
Models have been created using Segmentation Gym* using the following dataset**: https://doi.org/10.1038/s41597-023-01929-2
Image size used by model: 768 x 768 x 3 pixels
classes:
water
whitewater
sediment
other_bare_natural_terrain
other_terrain
File descriptions
For each model, there are 5 files with the same root name:
'.json' config file: this is the file that was used by Segmentation Gym* to create the weights file. It contains instructions for how to make the model and the data it used, as well as instructions for how to use the model for prediction. It is a handy wee thing and mastering it means mastering the entire Doodleverse.
'.h5' weights file: this is the file that was created by the Segmentation Gym* function train_model.py. It contains the trained model's parameter weights. It can be called by the Segmentation Gym* function seg_images_in_folder.py. Models may be ensembled.
'_modelcard.json' model card file: this is a json file containing fields that collectively describe the model origins, training choices, and dataset that the model is based upon. There is some redundancy between this file and the config file (described above) that contains the instructions for the model training and implementation. The model card file is not used by the program but is important metadata, so it should be kept with the other files that collectively make up the model, and as such is considered part of the model.
'_model_history.npz' model training history file: this numpy archive file contains numpy arrays describing the training and validation losses and metrics. It is created by the Segmentation Gym function train_model.py
'.png' model training loss and mean IoU plot: this png file contains plots of training and validation losses and mean IoU scores during model training. A subset of data inside the .npz file. It is created by the Segmentation Gym function train_model.py
Additionally, BEST_MODEL.txt contains the name of the model with the best validation loss and mean IoU.
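The '_model_history.npz' archive can be inspected with numpy alone. A small sketch, assuming array names like 'loss' and 'val_loss' (check `hist.files` on a real archive, since the exact keys come from the Gym training run):

```python
import io
import numpy as np

def best_epoch(hist):
    """Return the 0-based epoch index with the lowest validation loss."""
    return int(np.argmin(hist["val_loss"]))

# Synthetic in-memory stand-in for a real '_model_history.npz' file:
buf = io.BytesIO()
np.savez(buf, loss=np.array([1.0, 0.6, 0.4]),
         val_loss=np.array([1.1, 0.7, 0.8]))
buf.seek(0)
hist = np.load(buf)
print(best_epoch(hist))  # 1
```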
References *Segmentation Gym: Buscombe, D., & Goldstein, E. B. (2022). A reproducible and reusable pipeline for segmentation of geoscientific imagery. Earth and Space Science, 9, e2022EA002332. https://doi.org/10.1029/2022EA002332 See: https://github.com/Doodleverse/segmentation_gym
**Buscombe, D., Wernette, P., Fitzpatrick, S. et al. A 1.2 Billion Pixel Human-Labeled Dataset for Data-Driven Classification of Coastal Environments. Sci Data 10, 46 (2023). https://doi.org/10.1038/s41597-023-01929-2
Cityscapes data (dataset home page) contains labeled videos taken from vehicles driven in Germany. This version is a processed subsample created as part of the Pix2Pix paper. The dataset has still images from the original videos, and the semantic segmentation labels are shown in images alongside the original image. This is one of the best datasets around for semantic segmentation tasks.
This dataset has 2,975 training image files and 500 validation image files. Each image file is 256x512 pixels, and each file is a composite with the original photo on the left half of the image, alongside the labeled image (output of semantic segmentation) on the right half.
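Because each file is a side-by-side composite, a loader only needs to slice the width in half to recover the (photo, label) pair. A sketch with numpy (a real pipeline would first read the file into an array, e.g. with PIL):

```python
import numpy as np

def split_composite(composite):
    """Split a (256, 512, 3) composite into (photo, label) halves."""
    half = composite.shape[1] // 2
    return composite[:, :half], composite[:, half:]

# Synthetic composite: zeros on the photo side, ones on the label side.
demo = np.concatenate([np.zeros((256, 256, 3)),
                       np.ones((256, 256, 3))], axis=1)
photo, label = split_composite(demo)
print(photo.shape, label.shape)  # (256, 256, 3) (256, 256, 3)
```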
This dataset is the same as what is available here from the Berkeley AI Research group.
The Cityscapes data available from cityscapes-dataset.com has the following license:
This dataset is made freely available to academic and non-academic entities for non-commercial purposes such as academic research, teaching, scientific publications, or personal experimentation. Permission is granted to use the data given that you agree:
Can you identify what objects are where in these images taken from a vehicle?
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
The “FruitSeg30_Segmentation Dataset & Mask Annotations” is a comprehensive collection of high-resolution images of various fruits, accompanied by precise segmentation masks. We structured this dataset into 30 distinct classes containing 1,969 images and their corresponding masks, each measuring 512×512 pixels. Each class folder contains two subfolders: “Images” with high-quality JPG images captured under diverse conditions and “Mask” with PNG files representing the segmentation masks. We meticulously collected the dataset from various locations in Malaysia, Bangladesh, and Australia, ensuring a robust and diverse collection suitable for training and evaluating image segmentation models like U-Net. This resource is ideal for automated fruit recognition and classification applications, agricultural quality control, and computer vision and image processing research. By providing precise annotations and a wide range of fruit types, this dataset serves as a valuable asset for advancing research and development in these fields.
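Given the folder layout above (per-class "Images" and "Mask" subfolders with matching file stems but different extensions), pairing an image with its mask is a pure path transformation. A sketch; the root and class folder names below are made up for illustration:

```python
from pathlib import Path

def mask_path_for(image_path):
    """Map <class>/Images/<stem>.jpg to the matching <class>/Mask/<stem>.png."""
    p = Path(image_path)
    return p.parent.parent / "Mask" / (p.stem + ".png")

print(mask_path_for("FruitSeg30/Mango/Images/mango_001.jpg").as_posix())
# FruitSeg30/Mango/Mask/mango_001.png
```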
The number of global cellular Internet of Things (IoT) connections is expected to grow the most in the broadband and critical IoT segments in the period from 2023 to 2030, reaching around *** billion connections in 2030.
hf-internal-testing/mask-for-image-segmentation-tests dataset hosted on Hugging Face and contributed by the HF Datasets community
Attribution-ShareAlike 4.0 (CC BY-SA 4.0): https://creativecommons.org/licenses/by-sa/4.0/
License information was derived automatically
With recent advances in machine learning, semantic segmentation algorithms are becoming increasingly general purpose and translatable to unseen tasks. Many key algorithmic advances in the field of medical imaging are commonly validated on a small number of tasks, limiting our understanding of the generalisability of the proposed contributions. A model which works out-of-the-box on many tasks, in the spirit of AutoML, would have a tremendous impact on healthcare. The field of medical imaging is also missing a fully open source and comprehensive benchmark for general purpose algorithmic validation and testing covering a large span of challenges, such as small data, unbalanced labels, large-ranging object scales, multi-class labels, and multimodal imaging. This challenge and dataset aim to provide such a resource through the open sourcing of large medical imaging datasets on several highly different tasks, and by standardising the analysis and validation process.
Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
medical imaging