10 datasets found
  1. Dataset

    • figshare.com
    application/x-gzip
    Updated May 31, 2023
    Cite
    Moynuddin Ahmed Shibly (2023). Dataset [Dataset]. http://doi.org/10.6084/m9.figshare.13577873.v1
    Available download formats: application/x-gzip
    Dataset updated
    May 31, 2023
    Dataset provided by
    figshare
    Authors
    Moynuddin Ahmed Shibly
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    This is an open-source, publicly available dataset, which can be found at https://shahariarrabby.github.io/ekush/ . We split the dataset into three sets: train, validation, and test. For our experiments, we created two other versions of the dataset. We applied 10-fold cross-validation on the train set and created ten folds. We also created ten bags of data using the bootstrap aggregating (bagging) method on the train and validation sets. Lastly, we created another dataset using a pre-trained ResNet50 model as a feature extractor: on the features extracted by ResNet50 we applied PCA and created a tabular dataset containing 80 features. pca_features.csv is the train set and pca_test_features.csv is the test set. Fold.tar.gz contains the ten folds of images described above; the individual folds are also compressed. Similarly, Bagging.tar.gz contains the ten compressed bags of images. The original train, validation, and test sets are in Train.tar.gz, Validation.tar.gz, and Test.tar.gz, respectively. The archives were compressed to speed up uploads and downloads and for convenience. If you have any questions about how the datasets are organized, please email me at shiblygnr@gmail.com and I will get back to you as soon as possible.
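The ResNet50-features-to-PCA step described above can be sketched with plain NumPy. This is a minimal illustration of the technique only; the feature values are random stand-ins, and the real dimensionality of the extracted features is an assumption (2048 is ResNet50's usual pooled output size), so treat it as a pattern rather than a reproduction of pca_features.csv.

```python
import numpy as np

# Hypothetical sketch: reduce ResNet50-style feature vectors to 80 principal
# components, mirroring how the tabular PCA dataset was reportedly built.
# The data below is random; swap in real extracted features.
rng = np.random.default_rng(0)
features = rng.normal(size=(200, 2048))  # 200 samples, 2048-D pooled features

# Centre the data, then project onto the top 80 right singular vectors (PCA).
centered = features - features.mean(axis=0)
_, _, vt = np.linalg.svd(centered, full_matrices=False)
reduced = centered @ vt[:80].T

print(reduced.shape)  # (200, 80)
```

The reduced array could then be written out with pandas as a CSV of 80 feature columns, matching the layout the description gives for pca_features.csv.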

  2. ORBIT: A real-world few-shot dataset for teachable object recognition collected from people who are blind or low vision

    • city.figshare.com
    bin
    Updated May 31, 2023
    Cite
    Daniela Massiceti; Lida Theodorou; Luisa Zintgraf; Matthew Tobias Harris; Simone Stumpf; Cecily Morrison; Edward Cutrell; Katja Hofmann (2023). ORBIT: A real-world few-shot dataset for teachable object recognition collected from people who are blind or low vision [Dataset]. http://doi.org/10.25383/city.14294597.v3
    Available download formats: bin
    Dataset updated
    May 31, 2023
    Dataset provided by
    City, University of London
    Authors
    Daniela Massiceti; Lida Theodorou; Luisa Zintgraf; Matthew Tobias Harris; Simone Stumpf; Cecily Morrison; Edward Cutrell; Katja Hofmann
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Object recognition predominantly still relies on many high-quality training examples per object category. In contrast, learning new objects from only a few examples could enable many impactful applications, from robotics to user personalization. Most few-shot learning research, however, has been driven by benchmark datasets that lack the high variation these applications will face when deployed in the real world. To close this gap, we present the ORBIT dataset, grounded in a real-world application of teachable object recognizers for people who are blind/low vision. We provide a full, unfiltered dataset of 4,733 videos of 588 objects recorded by 97 people who are blind/low-vision on their mobile phones, and a benchmark dataset of 3,822 videos of 486 objects collected by 77 collectors. The code for loading the dataset, computing all benchmark metrics, and running the baseline models is available at https://github.com/microsoft/ORBIT-Dataset

    This version comprises several zip files:
    - train, validation, test: benchmark dataset, organised by collector, with raw videos split into static individual frames in jpg format at 30FPS
    - other: data not in the benchmark set, organised by collector, with raw videos split into static individual frames in jpg format at 30FPS (note that the train, validation, test, and other files together make up the unfiltered dataset)
    - *_224: as for the benchmark, but static individual frames are scaled down to 224 pixels
    - *_unfiltered_videos: full unfiltered dataset, organised by collector, in mp4 format

  3. Aruzz22.5K: An Image Dataset of Rice Varieties

    • data.mendeley.com
    Updated Mar 12, 2024
    + more versions
    Cite
    Md Masudul Islam (2024). Aruzz22.5K: An Image Dataset of Rice Varieties [Dataset]. http://doi.org/10.17632/3mn9843tz2.4
    Dataset updated
    Mar 12, 2024
    Authors
    Md Masudul Islam
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    This extensive dataset presents a meticulously curated collection of low-resolution images showcasing 20 well-established rice varieties native to diverse regions of Bangladesh. The rice samples were carefully gathered from both rural areas and local marketplaces, ensuring a comprehensive and varied representation. Serving as a visual compendium, the dataset provides a thorough exploration of the distinct characteristics of these rice varieties, facilitating precise classification.

    Dataset Composition

    The dataset encompasses 20 distinct classes, encompassing Subol Lota, Bashmoti (Deshi), Ganjiya, Shampakatari, Sugandhi Katarivog, BR-28, BR-29, Paijam, Bashful, Lal Aush, BR-Jirashail, Gutisharna, Birui, Najirshail, Pahari Birui, Polao (Katari), Polao (Chinigura), Amon, Shorna-5, and Lal Binni. In total, the dataset comprises 4,730 original JPG images and 23,650 augmented images.

    Image Capture and Dataset Organization

    These images were captured using an iPhone 11 camera with a 5x zoom feature. Each image capturing these rice varieties was diligently taken between October 18 and November 29, 2023. To facilitate efficient data management and organization, the dataset is structured into two variants: Original images and Augmented images. Each variant is systematically categorized into 20 distinct sub-directories, each corresponding to a specific rice variety.

    Original Image Dataset

    The primary image set comprises 4,730 JPG images, uniformly sized at 853 × 853 pixels. Despite the low resolution, the uncompressed set totalled 268 MB; zip compression reduced it to a final size of 254 MB.

    Augmented Image Dataset

    To meet the large image-volume requirements of deep learning models for machine vision, data augmentation techniques were applied, yielding a total of 23,650 images. These augmented images, also in JPG format and uniformly sized at 512 × 512 pixels, initially amounted to 781 MB; after compression, the set was reduced to 699 MB.

    Dataset Storage and Access

    The raw and augmented datasets are stored in two distinct zip files, 'Original.zip' and 'Augmented.zip'. Both zip files contain 20 sub-folders, each representing a unique rice variety: 1_Subol_Lota, 2_Bashmoti, 3_Ganjiya, 4_Shampakatari, 5_Katarivog, 6_BR28, 7_BR29, 8_Paijam, 9_Bashful, 10_Lal_Aush, 11_Jirashail, 12_Gutisharna, 13_Red_Cargo, 14_Najirshail, 15_Katari_Polao, 16_Lal_Biroi, 17_Chinigura_Polao, 18_Amon, 19_Shorna5, 20_Lal_Binni.

    Train and Test Data Organization

    To ease experimentation for researchers, we balanced the data and split it in an 80:20 train-test ratio. The 'Train_n_Test.zip' archive contains two sub-directories: '1_TEST', with 1,125 images per class, and '2_VALID', with 225 images per class.
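Once an archive like 'Original.zip' is extracted, its one-folder-per-variety layout can be inspected with a short directory walk. The tree built below is a synthetic stand-in (three varieties, five placeholder files each) so the sketch is self-contained; point `root` at the real extracted folder instead.

```python
import os
import tempfile

# Hypothetical sketch: count JPG images per rice-variety sub-folder.
# The directory tree here is synthetic; swap in the real extracted path.
root = tempfile.mkdtemp()
for cls in ["1_Subol_Lota", "2_Bashmoti", "3_Ganjiya"]:
    os.makedirs(os.path.join(root, cls))
    for i in range(5):  # placeholder files standing in for real images
        open(os.path.join(root, cls, f"img_{i}.jpg"), "w").close()

counts = {
    cls: len([f for f in os.listdir(os.path.join(root, cls)) if f.endswith(".jpg")])
    for cls in sorted(os.listdir(root))
}
print(counts)
```

On the real dataset, the same loop would report the per-class image counts, which is a quick way to confirm the balanced split described above.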

  4. BTSbot v10 training set

    • zenodo.org
    zip
    Updated Jun 4, 2024
    Cite
    Nabeel Rehemtulla (2024). BTSbot v10 training set [Dataset]. http://doi.org/10.5281/zenodo.10839691
    Available download formats: zip
    Dataset updated
    Jun 4, 2024
    Dataset provided by
    Zenodo (http://zenodo.org/)
    Authors
    Nabeel Rehemtulla
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Time period covered
    Aug 19, 2023
    Description

    This is the production version of the BTSbot training set, limited to public (programid=1) ZTF alerts. BTSbot is a multi-modal convolutional neural network designed for real-time identification of bright extragalactic transients in Zwicky Transient Facility (ZTF) data. BTSbot assigns a bright-transient score to individual ZTF detections using their image data and 25 extracted features. BTSbot can eliminate the need for daily visual inspection of new transients by automatically identifying new bright transient candidates and requesting spectroscopic follow-up observations.

    The training data is split into two zipped files. metadata_v10.zip contains alert-packet features for the alerts in the train, validation, and test splits, stored as separate .csv files. images_v10.zip contains the corresponding image cutouts stored as three .npy files. The BTSbot source code contains routines for reading these files and training a model on them. They can also easily be loaded with pandas.read_csv() and numpy.load().
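The pandas.read_csv()/numpy.load() pattern the description points at looks roughly like this. The file names, metadata columns, and cutout shape below are synthetic stand-ins written to a temp directory so the sketch is runnable, not the real contents of metadata_v10.zip or images_v10.zip.

```python
import os
import tempfile

import numpy as np
import pandas as pd

# Synthetic stand-ins for one extracted metadata CSV and one image .npy file.
# Replace these paths with the real files extracted from the v10 zips.
tmp = tempfile.mkdtemp()
meta_path = os.path.join(tmp, "train_metadata.csv")   # hypothetical name
img_path = os.path.join(tmp, "train_images.npy")      # hypothetical name
pd.DataFrame({"candid": [1001], "magpsf": [18.2]}).to_csv(meta_path, index=False)
np.save(img_path, np.zeros((1, 63, 63, 3), dtype=np.float32))

metadata = pd.read_csv(meta_path)  # one row of alert features per detection
images = np.load(img_path)         # corresponding image cutouts

print(metadata.shape, images.shape)
```

Rows in the metadata frame and slices of the image array would then be paired up by index when feeding a model, as the BTSbot repository's own loaders do.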

    This training set data is necessary for reproducing the results of the BTSbot study, although this dataset only contains ZTF public data while the production BTSbot model also trained on ZTF partnership data.

    If you use or reference this data or BTSbot, please cite the BTSbot paper.

  5. Gender Detection & Classification - Face Dataset

    • kaggle.com
    Updated Oct 31, 2023
    Cite
    Training Data (2023). Gender Detection & Classification - Face Dataset [Dataset]. https://www.kaggle.com/datasets/trainingdatapro/gender-detection-and-classification-image-dataset
    Available download formats: Croissant (a format for machine-learning datasets; learn more at mlcommons.org/croissant)
    Dataset updated
    Oct 31, 2023
    Dataset provided by
    Kaggle (http://kaggle.com/)
    Authors
    Training Data
    License

    Attribution-NonCommercial-NoDerivs 4.0 (CC BY-NC-ND 4.0): https://creativecommons.org/licenses/by-nc-nd/4.0/
    License information was derived automatically

    Description

    Gender Detection & Classification - face recognition dataset

    The dataset is created on the basis of Face Mask Detection dataset

    Dataset Description:

    The dataset comprises a collection of photos of people, organized into folders labeled "women" and "men." Each folder contains a significant number of images to facilitate training and testing of gender detection algorithms or models.

    The dataset contains a variety of images capturing female and male individuals from diverse backgrounds, age groups, and ethnicities.


    This labeled dataset can be utilized as training data for machine learning models, computer vision applications, and gender detection algorithms.

    💴 For commercial usage: the full version of the dataset includes 376,000+ photos of people; leave a request on TrainingData to buy the dataset.

    Metadata for the full dataset:

    • assignment_id - unique identifier of the media file
    • worker_id - unique identifier of the person
    • age - age of the person
    • true_gender - gender of the person
    • country - country of the person
    • ethnicity - ethnicity of the person
    • photo_1_extension, photo_2_extension, photo_3_extension, photo_4_extension - photo extensions in the dataset
    • photo_1_resolution, photo_2_resolution, photo_3_resolution, photo_4_resolution - photo resolutions in the dataset


    💴 Buy the Dataset: This is just an example of the data. Leave a request on https://trainingdata.pro/datasets to learn about the price and buy the dataset

    Content

    The dataset is split into train and test folders. Each folder includes:

    • women and men - folders with images of people of the corresponding gender
    • a .csv file - contains information about the images and people in the dataset

    File with the extension .csv

    • file: link to access the file,
    • gender: gender of a person in the photo (woman/man),
    • split: whether the image belongs to the train or test split

    TrainingData provides high-quality data annotation tailored to your needs

    keywords: biometric system, biometric system attacks, biometric dataset, face recognition database, face recognition dataset, face detection dataset, facial analysis, gender detection, supervised learning dataset, gender classification dataset, gender recognition dataset

  6. Data from: S1 Dataset -

    • plos.figshare.com
    txt
    Updated Jul 23, 2024
    Cite
    Xiujuan Wang; Hui Shao; Xueli Liu; Lili Feng (2024). S1 Dataset - [Dataset]. http://doi.org/10.1371/journal.pone.0307542.s001
    Available download formats: txt
    Dataset updated
    Jul 23, 2024
    Dataset provided by
    PLOS ONE
    Authors
    Xiujuan Wang; Hui Shao; Xueli Liu; Lili Feng
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Objective: The aim was to develop a predictive tool for anticipating postpartum endometritis occurrences and to devise strategies for prevention and control.

    Methods: Employing a retrospective approach, the baseline data of 200 women diagnosed with postpartum endometritis in a tertiary maternity hospital in Zhejiang Province, spanning from February 2020 to September 2022, were examined. Simultaneously, the baseline data of 1,000 women without endometritis during the same period were explored at a 1:5 ratio. Subsequently, the 1,200 women were randomly allocated into a training group dataset and a test group dataset, adhering to a 7:3 split. Risk factors for postpartum endometritis were selected by applying random forests, lasso regression, and traditional univariate and multifactor logistic regression to the training group dataset. A nomogram was then constructed based on these factors. The model's performance was assessed using the area under the curve (AUC), calculated by plotting the receiver operating characteristic (ROC) curve. Additionally, the Brier score was employed to evaluate the model with a calibration curve, and a clinical impact curve (CIC) analysis was conducted to gauge the utility of the nomogram. This comprehensive approach not only identified risk factors but also produced a visual representation (the nomogram) and thorough evaluation metrics, ensuring a robust tool for predicting, preventing, and controlling postpartum endometritis.

    Results: In the multivariate analysis, six factors were identified as being associated with the occurrence of maternal endometritis in the postpartum period: the number of negative finger tests (OR: 1.159; 95% CI: 1.091–1.233; P < 0.05), postpartum hemorrhage (1.003; 1.002–1.005; P < 0.05), pre-eclampsia (9.769; 4.64–21.155; P < 0.05), maternity methods (2.083; 1.187–3.7; P < 0.001), prenatal reproductive tract culture (2.219; 1.411–3.47; P < 0.05), and uterine exploration (0.441; 0.233–0.803; P < 0.001). A nomogram was constructed based on these factors, and its predictive performance was assessed using the AUC. The results in both the training group data (AUC: 0.803) and the test group data (AUC: 0.788) demonstrated good predictive value. The clinical impact curve (CIC) further highlighted the clinical utility of the nomogram.

    Conclusion: The development of an individualized nomogram for postpartum endometritis infection holds promise for helping doctors screen high-risk women, enabling early intervention and ultimately reducing the rate of postpartum endometritis infection. This comprehensive approach, integrating key risk factors and predictive tools, enhances the potential for timely and targeted medical intervention.
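The AUC figures quoted above (0.803 train, 0.788 test) can be computed from predicted risks and observed outcomes via the Mann-Whitney rank formulation: AUC is the probability that a randomly chosen positive case receives a higher predicted risk than a randomly chosen negative case. A minimal sketch on made-up predictions, not the study's data:

```python
import numpy as np

# Illustrative labels (1 = endometritis) and predicted risks from some model.
y_true = np.array([0, 0, 0, 1, 1, 0, 1, 0])
y_score = np.array([0.1, 0.3, 0.2, 0.8, 0.7, 0.4, 0.35, 0.15])

pos = y_score[y_true == 1]
neg = y_score[y_true == 0]
# Count positive/negative pairs where the positive scores higher; ties count 0.5.
wins = (pos[:, None] > neg[None, :]).sum() + 0.5 * (pos[:, None] == neg[None, :]).sum()
auc = wins / (len(pos) * len(neg))
print(auc)
```

This is numerically equivalent to integrating the ROC curve, which is how the study reports computing its AUC.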

  7. License Plates Dataset

    • universe.roboflow.com
    zip
    Updated Oct 15, 2022
    + more versions
    Cite
    Samrat Sahoo (2022). License Plates Dataset [Dataset]. https://universe.roboflow.com/samrat-sahoo/license-plates-f8vsn/model/3
    Available download formats: zip
    Dataset updated
    Oct 15, 2022
    Dataset authored and provided by
    Samrat Sahoo
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Variables measured
    Plates Bounding Boxes
    Description

    Overview

    The License Plates dataset is an object detection dataset of different vehicles (i.e. cars, vans, etc.) and their respective license plates. Annotations include examples of both the "vehicle" and "license-plate" classes. This dataset has a train/validation/test split of 245/70/35 images, respectively.

    Use Cases

    This dataset could be used to create a vehicle and license plate detection object detection model. Roboflow provides a great guide on creating a license plate and vehicle object detection model.

    Using this Dataset

    This dataset is a subset of the Open Images Dataset. The annotations are licensed by Google LLC under CC BY 4.0 license. Some annotations have been combined or removed using Roboflow's annotation management tools to better align the annotations with the purpose of the dataset. The images have a CC BY 2.0 license.

    About Roboflow

    Roboflow creates tools that make computer vision easy to use for any developer, even if you're not a machine learning expert. You can use it to organize, label, inspect, convert, and export your image datasets, and even to train and deploy computer vision models with no code required.

  8. Simtrafficview5k Dataset

    • universe.roboflow.com
    zip
    Updated Dec 31, 2024
    Cite
    SG (2024). Simtrafficview5k Dataset [Dataset]. https://universe.roboflow.com/sg-icrum/simtrafficview5k
    Available download formats: zip
    Dataset updated
    Dec 31, 2024
    Dataset authored and provided by
    SG
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Variables measured
    Vehicles Bounding Boxes
    Description

    This dataset is based on a traffic intersection simulation created by Mihir Gandhi using Python and Pygame (link to repo). The simulation models the movement of vehicles across a traffic intersection with traffic lights and timers, and is designed for various AI and computer vision applications, including vehicle detection and traffic flow analysis. The dataset includes labeled images and annotations of vehicles at different positions within the simulated intersection, captured for object detection tasks. The vehicles were annotated using OpenCV template matching. The dataset is already split into train (70%), validation (20%), and test (10%) sets.

    Simulation Details: The simulation provides a visual representation of traffic signals with timers, allowing vehicles to move across a set intersection in four directions. The traffic light cycles are dynamically controlled with the following features:
    - Vehicle Movement: Vehicles move across lanes in the intersection based on the traffic signal.
    - Traffic Signal Timers: Each traffic signal has a timer showing the remaining time before it changes, which is useful for simulating real-world traffic flow dynamics.
    - Simulation Duration: The simulation duration is customizable, and the simulation stops when the set time elapses.

    Dataset Composition:
    - Images: Screenshots taken at various time steps in the simulation, showing different vehicles (bike, bus, car, or truck) within the intersection.
    - Annotations: Vehicle positions in YOLOv5 format, with bounding boxes around vehicles for object detection tasks. The labels include the vehicle type and its location relative to the intersection.

    Applications: This dataset is ideal for the following use cases:
    - Vehicle Detection: Train computer vision models to detect and classify vehicles at traffic intersections.
    - Traffic Flow Analysis: Develop AI models that can analyze traffic flow patterns and predict vehicle behavior at intersections.
    - Smart Traffic Systems: Leverage the dataset to develop models for smart traffic lights, aimed at optimizing traffic flow and reducing congestion.

    Technical Details:
    - Format: YOLO-compatible annotation files (bounding box coordinates, class labels)
    - Images: PNG images representing each snapshot of the simulation
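A YOLO-format label line stores "class x_center y_center width height" with all coordinates normalised to [0, 1], so consumers usually convert them back to pixel corners. A small sketch of that decoding; the example line and the 640x480 image size are made up for illustration:

```python
def yolo_to_pixels(line: str, img_w: int, img_h: int):
    """Decode one YOLO label line into (class_id, (x1, y1, x2, y2)) in pixels."""
    cls, xc, yc, w, h = line.split()
    # Scale normalised centre/size coordinates up to pixel units.
    xc, yc = float(xc) * img_w, float(yc) * img_h
    w, h = float(w) * img_w, float(h) * img_h
    # Convert centre/size to top-left and bottom-right corners.
    box = (round(xc - w / 2), round(yc - h / 2), round(xc + w / 2), round(yc + h / 2))
    return int(cls), box

label, box = yolo_to_pixels("2 0.5 0.5 0.25 0.5", img_w=640, img_h=480)
print(label, box)  # a box centred in a 640x480 image
```

The same conversion, run in reverse, is how annotation tools write these files, which is why the format survives image resizing without re-labeling.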

  9. MountainScape Segmentation Dataset

    • search.dataone.org
    Updated Dec 11, 2024
    Cite
    Mountain Legacy Project (2024). MountainScape Segmentation Dataset [Dataset]. http://doi.org/10.5683/SP3/CEYU10
    Dataset updated
    Dec 11, 2024
    Dataset provided by
    Borealis
    Authors
    Mountain Legacy Project
    Time period covered
    Jan 1, 1870 - Aug 30, 2023
    Description

    This dataset contains the MountainScape Segmentation Dataset (MS2D), a collection of oblique mountain images from the Mountain Legacy Project and corresponding manually annotated land cover masks. The dataset is split into 144 historic grayscale images collected by early phototopographic surveyors and 140 modern repeat images captured by the Mountain Legacy Project. The image resolutions range from 16 to 80 megapixels and the corresponding masks are RGB images with 8 landcover classes. The image dataset was used to train and test the Python Landscape Classifier (PyLC), a trainable segmentation network and land cover classification tool for oblique landscape photography. The dataset also contains PyTorch models trained with PyLC using the collection of images and masks.

  10. CODEBRIM: COncrete DEfect BRidge IMage Dataset

    • zenodo.org
    • explore.openaire.eu
    • +1more
    bin, zip
    Updated Jan 24, 2020
    Cite
    Martin Mundt; Sagnik Majumder; Sreenivas Murali; Panagiotis Panetsos; Visvanathan Ramesh (2020). CODEBRIM: COncrete DEfect BRidge IMage Dataset [Dataset]. http://doi.org/10.5281/zenodo.2620293
    Available download formats: zip, bin
    Dataset updated
    Jan 24, 2020
    Dataset provided by
    Zenodo (http://zenodo.org/)
    Authors
    Martin Mundt; Sagnik Majumder; Sreenivas Murali; Panagiotis Panetsos; Visvanathan Ramesh
    Description

    CODEBRIM: COncrete DEfect BRidge IMage Dataset for multi-target multi-class concrete defect classification in computer vision and machine learning.

    Dataset as presented and detailed in our CVPR 2019 publication: http://openaccess.thecvf.com/content_CVPR_2019/html/Mundt_Meta-Learning_Convolutional_Neural_Architectures_for_Multi-Target_Concrete_Defect_Classification_With_CVPR_2019_paper.html or https://arxiv.org/abs/1904.08486 . If you make use of the dataset, please cite it as follows:

    "Martin Mundt, Sagnik Majumder, Sreenivas Murali, Panagiotis Panetsos, Visvanathan Ramesh. Meta-learning Convolutional Neural Architectures for Multi-target Concrete Defect Classification with the COncrete DEfect BRidge IMage Dataset. IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2019"

    We offer a supplementary GitHub repository with code to reproduce the paper and data loaders: https://github.com/ccc-frankfurt/meta-learning-CODEBRIM

    For ease of use we provide the dataset in multiple different versions.

    Files contained:
    * CODEBRIM_original_images: contains the original full-resolution images and bounding box annotations
    * CODEBRIM_cropped_dataset: contains the extracted crops/patches with corresponding class labels from the bounding boxes
    * CODEBRIM_classification_dataset: contains the cropped patches with corresponding class labels split into training, validation and test sets for machine learning
    * CODEBRIM_classification_balanced_dataset: similar to "CODEBRIM_classification_dataset" but with the exact replication of training images to balance the dataset in order to reproduce results obtained in the paper.

