License: Attribution-ShareAlike 4.0 (CC BY-SA 4.0), https://creativecommons.org/licenses/by-sa/4.0/
License information was derived automatically
The graph shows the changes in the impact factor of Image and Vision Computing and its corresponding percentile, for comparison with the entire literature. The impact factor is the most common scientometric index; it is defined as the number of citations received in a given year by papers published in the two preceding years, divided by the number of papers published in those two years.
License: Attribution-ShareAlike 4.0 (CC BY-SA 4.0), https://creativecommons.org/licenses/by-sa/4.0/
License information was derived automatically
List of Top Authors of Image and Vision Computing sorted by article citations.
License: Attribution-ShareAlike 4.0 (CC BY-SA 4.0), https://creativecommons.org/licenses/by-sa/4.0/
License information was derived automatically
This graph shows how the impact factor of Image and Vision Computing is computed. The left axis depicts the number of papers published in years X-1 and X-2, and the right axis displays the citations those papers received in year X.
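Expressed as a formula, the impact factor for year X is IF(X) = C(X) / (P(X-1) + P(X-2)), where C(X) is the citations received in year X by papers from the two preceding years and P(Y) is the number of papers published in year Y. A minimal sketch in Python, with made-up numbers since the graph's underlying data is not reproduced here:

    def impact_factor(citations_in_x: float, papers_x1: int, papers_x2: int) -> float:
        # Two-year impact factor for year X: citations received in year X
        # by papers published in years X-1 and X-2, divided by the number
        # of papers published in those two years.
        return citations_in_x / (papers_x1 + papers_x2)

    # Hypothetical example: 600 citations to 150 + 130 papers -> IF of about 2.14
    print(round(impact_factor(600, 150, 130), 2))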
License: Attribution-ShareAlike 4.0 (CC BY-SA 4.0), https://creativecommons.org/licenses/by-sa/4.0/
License information was derived automatically
List of Top Institutions of Image and Vision Computing sorted by citations.
License: Attribution 4.0 (CC BY 4.0), https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
CQ100 is a diverse and high-quality dataset of color images that can be used to develop, test, and compare color quantization algorithms. The dataset can also be used in other color image processing tasks, including filtering and segmentation.
If you find CQ100 useful, please cite the following publication: M. E. Celebi and M. L. Perez-Delgado, “CQ100: A High-Quality Image Dataset for Color Quantization Research,” Journal of Electronic Imaging, vol. 32, no. 3, 033019, 2023.
You may download the above publication free of charge from: https://www.spiedigitallibrary.org/journals/journal-of-electronic-imaging/volume-32/issue-3/033019/cq100--a-high-quality-image-dataset-for-color-quantization/10.1117/1.JEI.32.3.033019.full?SSO=1
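As an illustration of the kind of experiment CQ100 supports, here is a minimal color quantization sketch using Pillow's median-cut quantizer (Pillow 9.1+ for the Quantize enum). The file name cq100_sample.png is a placeholder, not an actual file from the dataset, and mean squared error is just one of several quality criteria used in this literature:

    import numpy as np
    from PIL import Image

    # Reduce a (hypothetical) CQ100 image to a 16-color palette.
    img = Image.open("cq100_sample.png").convert("RGB")
    quantized = img.quantize(colors=16, method=Image.Quantize.MEDIANCUT)

    # Measure the mean squared error introduced by quantization.
    a = np.asarray(img, dtype=np.float64)
    b = np.asarray(quantized.convert("RGB"), dtype=np.float64)
    print("MSE after 16-color quantization:", ((a - b) ** 2).mean())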
License: Attribution-NonCommercial-ShareAlike 4.0 (CC BY-NC-SA 4.0), https://creativecommons.org/licenses/by-nc-sa/4.0/
License information was derived automatically
Description:
This dataset is a derived version of the EyePACS-AIROGS-Light-V2 dataset by Riley Kiefer. I merged the original training, validation, and test folders into a unified structure for simplicity, and relabeled the images for consistent naming.
Using my custom segmentation model, I generated optic disc (OD) and optic cup (OC) predictions for each image to assist in glaucoma-related computer vision research.
The dataset contains:
This dataset aims to advance open glaucoma research and provide a ready-to-use segmentation benchmark.
Fundus images sourced from:
- Riley Kiefer. "EyePACS-AIROGS-light-V2". Kaggle, 2024, doi: 10.34740/KAGGLE/DSV/7802508.
- Riley Kiefer. "EyePACS-AIROGS-light-V1". Kaggle, 2023, doi: 10.34740/kaggle/ds/3222646.
- Riley Kiefer. "Standardized Multi-Channel Dataset for Glaucoma, v19 (SMDG-19)". Kaggle, 2023, doi: 10.34740/kaggle/ds/2329670.
- Steen, J., Kiefer, R., Ardali, M., Abid, M. & Amjadian, E. Standardized and Open-Access Glaucoma Dataset for Artificial Intelligence Applications. Invest. Ophthalmol. Vis. Sci. 64, 384–384 (2023).
- Amjadian, E., Ardali, M. R., Kiefer, R., Abid, M. & Steen, J. Ground truth validation of publicly available datasets utilized in artificial intelligence models for glaucoma detection. Invest. Ophthalmol. Vis. Sci. 64, 392–392 (2023).
- R. Kiefer, M. Abid, M. R. Ardali, J. Steen and E. Amjadian, "Automated Fundus Image Standardization Using a Dynamic Global Foreground Threshold Algorithm," 2023 8th International Conference on Image, Vision and Computing (ICIVC), Dalian, China, 2023, pp. 460-465, doi: 10.1109/ICIVC58118.2023.10270429.
- Kiefer, Riley, et al. "A Catalog of Public Glaucoma Datasets for Machine Learning Applications: A detailed description and analysis of public glaucoma datasets available to machine learning engineers tackling glaucoma-related problems using retinal fundus images and OCT images." Proceedings of the 2023 7th International Conference on Information System and Data Mining. 2023.
- R. Kiefer, J. Steen, M. Abid, M. R. Ardali and E. Amjadian, "A Survey of Glaucoma Detection Algorithms using Fundus and OCT Images," 2022 IEEE 13th Annual Information Technology, Electronics and Mobile Communication Conference (IEMCON), Vancouver, BC, Canada, 2022, pp. 0191-0196, doi: 10.1109/IEMCON56893.2022.9946629.
- E. Amjadian, R. Kiefer, J. Steen, M. Abid, M. Ardali, "A Comprehensive Survey of Publicly Available Glaucoma Datasets for Automated Glaucoma Detection". American Academy of Optometry, 2022.
- Rotterdam EyePACS AIROGS Dataset (https://airogs.grand-challenge.org/data-and-challenge/)
Derived work by: Meesam Abbas, 2025 — Predicted OD/OC masks using a custom AI segmentation pipeline.
License: Attribution 4.0 (CC BY 4.0), https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
The CAPA Apple Quality Grading Multi-Spectral Image Database consists of multispectral (450 nm, 500 nm, 750 nm, and 800 nm) images of healthy and defective bi-color apples, manual segmentations of defective regions, and expert assignments of the apples to 4 quality categories. Defect types include bruise, rot, flesh damage, frost damage, and russet, among others. The database can be used for academic or research purposes aimed at computer-vision-based apple quality inspection.
The CAPA Apple Quality Grading Multi-Spectral Image Database is the property of ULG (Gembloux Agro-Bio Tech), Belgium, and cannot be used without the consent of ULG (Gembloux Agro-Bio Tech), Belgium.
For consent, contact
Devrim Unay, İzmir University of Economics, Turkey: unaydevrim@gmail.com
OR
Marie-France Destain, Gembloux Agro-Bio Tech, Belgium: mfdestain@ulg.ac.be
In disseminating results using this database, the author should:
1. indicate in the manuscript that the database was acquired by ULG (Gembloux Agro-Bio Tech), Belgium; and
2. cite the following article: Kleynen, O., Leemans, V., & Destain, M.-F. (2005). Development of a multi-spectral vision system for the detection of defects on apples. Journal of Food Engineering, 69(1), 41-49.
Relevant publications:
Kleynen, O., Leemans, V., & Destain, M.-F. (2003). Selection of the most efficient wavelength bands for 'Jonagold' apple sorting. Postharvest Biology and Technology, 30, 221–232.
Leemans, V., & Destain, M.-F. (2004). A real-time grading method of apples based on features extracted from defects. Journal of Food Engineering, 61, 83–89.
Leemans, V., Magein, H., & Destain, M.-F. (2002). On-line fruit grading according to their external quality using machine vision. Biosystems Engineering, 83, 397–404.
Unay, D., & Gosselin, B. (2006). Automatic defect detection of 'Jonagold' apples on multi-spectral images: A comparative study. Postharvest Biology and Technology, 42, 271–279.
Unay, D., & Gosselin, B. (2007). Stem and calyx recognition on 'Jonagold' apples by pattern recognition. Journal of Food Engineering, 78, 597–605.
Unay, D., Gosselin, B., Kleynen, O., Leemans, V., Destain, M.-F., & Debeir, O. (2011). Automatic grading of bi-colored apples by multispectral machine vision. Computers and Electronics in Agriculture, 75(1), 204–212.
License: Attribution 4.0 (CC BY 4.0), https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
This dataset, the Fundus Image Registration Dataset (FIRE), consists of 129 retinal images forming 134 image pairs. These image pairs are split into 3 categories depending on their characteristics. The images were acquired with a Nidek AFC-210 fundus camera, which produces images with a resolution of 2912x2912 pixels and a field of view of 45° in both the x and y dimensions. Images were acquired from 39 patients at the Papageorgiou Hospital, Aristotle University of Thessaloniki, Thessaloniki.
The images follow the naming convention [Image pair name]_X.jpg, where X is 1 for the reference image and 2 for the test image.
The ground truth files follow the naming convention control_points_[Image pair name]_1_2.txt.
The ground truth file for each image pair contains one point correspondence per row:
[reference_point_1_x] [reference_point_1_y] [test_point_1_x] [test_point_1_y]
[reference_point_2_x] [reference_point_2_y] [test_point_2_x] [test_point_2_y]
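A minimal sketch of loading one of these ground truth files with NumPy; the pair name A01 below is a hypothetical example following the stated convention:

    import numpy as np

    # Each row: ref_x ref_y test_x test_y (whitespace-separated).
    pts = np.loadtxt("control_points_A01_1_2.txt")
    ref_pts, test_pts = pts[:, :2], pts[:, 2:]

    # Crude pre-registration check: mean distance between corresponding
    # control points before any alignment is applied.
    print(np.linalg.norm(ref_pts - test_pts, axis=1).mean())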
If you use this data in your research, please credit the authors:
FIRE: Fundus Image Registration Dataset C. Hernandez-Matas, X. Zabulis, A. Triantafyllou, P. Anyfanti, S. Douma, A.A. Argyros Journal for Modeling in Ophthalmology, vol. 1, no. 4, pp. 16-28, Jul. 2017.
@article{hernandez2017fire, author = {Hernandez-Matas, Carlos and Zabulis, Xenophon and Triantafyllou, Areti and Anyfanti, Panagiota and Douma, Stella and Argyros, Antonis}, title = {FIRE: Fundus Image Registration Dataset}, journal = {Journal for Modeling in Ophthalmology}, volume = {1}, number = {4}, pages = {16--28}, year = {2017} }
Image by Paul Diaconu on Pixabay
License: https://images.cv/license
Labeled images of the lowercase letter J, suitable for training and evaluating computer vision and deep learning models.
In humans, the phenomenon of pareidolia is well known: the proclivity to see faces in seemingly insignificant objects.
This dataset contains visually similar images paired by humans: a collection of 6,016 picture pairings covering a broad and diverse set of similarity criteria used by humans.
Amir Rosenfeld, Markus Solbach, John Tsotsos; Totally-Looks-Like: A Dataset and Benchmark of Semantic Image Similarity. Journal of Vision 2018;18(10):136. doi: https://doi.org/10.1167/18.10.136.
Attempt to improve the perceptual judgment of AI!
License: Attribution 4.0 (CC BY 4.0), https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
K denotes the size of the visual vocabulary.
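For context, a visual vocabulary in the bag-of-visual-words pipeline is typically built by clustering local feature descriptors, with K the number of cluster centers (visual words). A minimal sketch with scikit-learn, using random vectors as stand-ins for real descriptors such as SIFT:

    import numpy as np
    from sklearn.cluster import KMeans

    descriptors = np.random.default_rng(0).normal(size=(5000, 128))  # stand-in for SIFT

    K = 256  # vocabulary size
    vocab = KMeans(n_clusters=K, n_init=4, random_state=0).fit(descriptors)

    # Represent one image as a normalized K-bin histogram of its
    # descriptors' nearest visual words.
    words = vocab.predict(descriptors[:300])
    hist = np.bincount(words, minlength=K).astype(float)
    hist /= hist.sum()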
IETE Journal of Research Impact Factor 2024-2025 - ResearchHelpDesk - IETE Journal of Research is a bimonthly journal published by the Institution of Electronics and Telecommunication Engineers (IETE), India. It publishes scientific and technical papers describing original research work or novel product/process development. Occasionally, special issues are brought out on new and emerging research areas. The journal is useful to researchers, engineers, scientists, teachers, managers, and students interested in keeping track of original research and development work in the broad areas of electronics, telecommunications, computer science and engineering, and information technology.
Subjects covered by this journal:
Communications: digital and analog communication, digital signal processing, image processing, satellite communication, secure communication, speech and audio processing, space communication, vehicular communications, wireless communication.
Computers and Computing: algorithms, artificial intelligence, computer graphics, compiler programming and languages, computer vision, data mining, high-performance computing, information technology, internet computing, multimedia, networks, network security, operating systems, quantum learning systems, pattern recognition, sensor networks, soft computing.
Control Engineering: control theory and practice (conventional control, non-linear control, adaptive control, robust control, reinforcement learning control); soft computing tools in control applications (fuzzy logic systems, neural networks, support vector machines, intelligent control).
Electromagnetics: antennas and arrays, bio-electromagnetics, computational electromagnetics, electromagnetic interference, electromagnetic compatibility, metamaterials, millimeter-wave and terahertz circuits and systems, microwave measurements, microwave photonics, passive, active and tunable microwave circuits, propagation studies, radar and remote sensing, radio wave propagation and scattering, RFID, RF MEMS, solid-state microwave devices and tubes, UWB circuits and systems.
Electronic Circuits, Devices, and Components: analog and digital circuits, display technology, embedded systems, VLSI design, microelectronics technology and device characterization, MEMS, nano-electronics, nanotechnology, physics and technology of CMOS devices, sensors, semiconductor device modeling, space electronics, solid-state devices and modeling.
Instrumentation and Measurements: automated instruments and measurement techniques, industrial electronics, non-destructive characterization and testing, sensors.
Medical Electronics: bio-informatics, biomedical electronics, bio-MEMS, medical instrumentation.
Opto-Electronics: fibre optics, holography and optical data storage, optical sensors, quantum electronics, quantum optics.
Power Electronics: AC-DC/DC-DC/DC-AC/AC-AC converters, battery chargers, custom power devices, distributed power generation, electric vehicles, electrochemical processes, electronic ballast, flexible AC transmission systems, heating/welding, hybrid vehicles, HVDC transmission, power quality, renewable energy generation, switched-mode power supplies, solid-state control of motor drives.
The IETE Journal of Research is indexed in: British Library; CLOCKSS; CrossRef; EBSCO (Applied Science & Technology Source, Academic Search Complete, STM Source); EI Compendex / Engineering Village (Elsevier); Google Scholar; Microsoft Academic; Portico; ProQuest (ProQuest Central, Research Library, SciTech Premium Collection, Technology Collection); Science Citation Index Expanded (Thomson Reuters); SCImago (Elsevier); Scopus (Elsevier); Ulrich's Periodicals Directory; Web of Science (Thomson Reuters); WorldCat Local (OCLC); Zetoc.
RG Journal Impact: 0.59* (*This value is calculated using ResearchGate data and is based on average citation counts from work published in this journal. The data used in the calculation may not be exhaustive.)
RG Journal Impact history:
2020: available summer 2021
2018/2019: 0.59
2017: 0.39
2016: 0.33
2015: 0.49
2014: 0.49
2013: 0.41
2012: 0.61
2011: 0.90
2010: 0.43
2009: 0.22
2008: 0.19
2007: 0.23
2006: 0.09
2005: 0.11
2004: 0.23
2003: 0.38
IETE Journal of Research, more details: H Index: 20. Subject area and category: Computer Science (Computer Science Applications); Engineering (Electrical and Electronic Engineering); Mathematics (Theoretical Computer Science). Publisher: Taylor & Francis. Publication type: Journals. Coverage: 1979-1989, 1993-ongoing.
License: Attribution 4.0 (CC BY 4.0), https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
LAR.i Laboratory - Université du Québec à Chicoutimi (UQAC), 2021-08-24
Name: Image dataset of various soil types in an urban city
Published journal paper: Gensytskyy, O., Nandi, P., Otis, M. J.-D., et al. Soil friction coefficient estimation using CNN included in an assistive system for walking in urban areas. J Ambient Intell Human Comput 14, 14291–14307 (2023). https://doi.org/10.1007/s12652-023-04667-w
This dataset contains images of various types of soils and was used for the project "An assistive system for walking in urban areas". The images were taken with a smartphone camera in vertical orientation and are high quality. Each file is named with two characters, the first and last letters of its class name, followed by the image number.
Capture location: City of Saguenay, Quebec, Canada
Class count: 8
Total number of images: 493
Classes and number of images per class: Asphalt (89), Concrete (80), Epoxy_coated_interior (34), Grass (90), Gravel (58), Scrattered_snow (40), Snow (68), Wood (34)
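A minimal sketch of the stated file-naming rule (purely illustrative; the exact two-character codes used in the released files may differ):

    def class_code(class_name: str) -> str:
        # First and last letters of the class name, e.g. "Asphalt" -> "at".
        return (class_name[0] + class_name[-1]).lower()

    for name in ["Asphalt", "Concrete", "Grass", "Gravel", "Wood"]:
        print(name, "->", class_code(name))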
License: Attribution 4.0 (CC BY 4.0), https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
We introduce a new benchmark dataset named EatSense that targets both the computer vision and healthcare communities. EatSense is recorded while a person eats in an uncontrolled dining-room setting. Key features are: first, it introduces challenging atomic actions for recognition. Second, the hugely varying lengths of actions in EatSense make it nearly impossible for current temporal action localization frameworks to localize them. Third, it provides the capability to model complete eating behaviour (as a chain of actions). Lastly, it simulates minor changes in motion/performance. Moreover, we conduct extensive experiments on EatSense with baseline deep-learning approaches for benchmarking and hand-crafted feature-based approaches for explainable applications. We believe this dataset will benefit future researchers in building robust temporal action localization networks and behaviour recognition and performance assessment models for eating. The dataset is related to the publication: Muhammad Ahmed Raza, Longfei Chen, Nanbo Li and Robert B. Fisher (2023). 'EatSense: Human centric, action recognition and localization dataset for understanding eating behaviors and quality of motion assessment', Image and Vision Computing, 137, 104762, ISSN 0262-8856, https://doi.org/10.1016/j.imavis.2023.104762
License: Attribution 4.0 (CC BY 4.0), https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
This repository contains a mapping of the classes of the COCO, LVIS, and Open Images V4 datasets into a unique set of 1460 classes. COCO [Lin et al. 2014] contains 80 classes, LVIS [Gupta et al. 2019] contains 1460 classes, and Open Images V4 [Kuznetsova et al. 2020] contains 601 classes. We built a mapping of these classes using a semi-automatic procedure in order to obtain a unique final list of 1460 classes. We also generated a hierarchy for each class using WordNet.
This repository contains the following files:
- coco_classes_map.txt: the mapping for the 80 COCO classes
- lvis_classes_map.txt: the mapping for the 1460 LVIS classes
- openimages_classes_map.txt: the mapping for the 601 Open Images classes
- classname_hyperset_definition.csv: the final set of 1460 classes, with their definitions and hierarchy
- all-classnames.xlsx: a side-by-side view of all classes considered
This mapping was used in VISIONE [Amato et al. 2021; Amato et al. 2022], a content-based retrieval system that supports various search functionalities (text search, object/color-based search, semantic and visual similarity search, temporal search). For object detection, VISIONE uses three pre-trained models: VfNet [Zhang et al. 2021], Mask R-CNN [He et al. 2017], and a Faster R-CNN+Inception ResNet (trained on Open Images V4).
This repository is released under a Creative Commons Attribution license; please cite the following paper if you use it in your work in any form:
@article{amato2021visione, title={The VISIONE video search system: exploiting off-the-shelf text search engines for large-scale video retrieval}, author={Amato, Giuseppe and Bolettieri, Paolo and Carrara, Fabio and Debole, Franca and Falchi, Fabrizio and Gennaro, Claudio and Vadicamo, Lucia and Vairo, Claudio}, journal={Journal of Imaging}, volume={7}, number={5}, pages={76}, year={2021}, publisher={Multidisciplinary Digital Publishing Institute} }
References:
[Amato et al. 2022] Amato, G. et al. (2022). VISIONE at Video Browser Showdown 2022. In: MultiMedia Modeling (MMM 2022), Lecture Notes in Computer Science, vol 13142. Springer, Cham. https://doi.org/10.1007/978-3-030-98355-0_52
[Amato et al. 2021] Amato, G., Bolettieri, P., Carrara, F., Debole, F., Falchi, F., Gennaro, C., Vadicamo, L. and Vairo, C., 2021. The VISIONE video search system: exploiting off-the-shelf text search engines for large-scale video retrieval. Journal of Imaging, 7(5), p.76.
[Gupta et al. 2019] Gupta, A., Dollar, P. and Girshick, R., 2019. LVIS: A dataset for large vocabulary instance segmentation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (pp. 5356-5364).
[He et al. 2017] He, K., Gkioxari, G., Dollár, P. and Girshick, R., 2017. Mask R-CNN. In Proceedings of the IEEE International Conference on Computer Vision (pp. 2961-2969).
[Kuznetsova et al. 2020] Kuznetsova, A., Rom, H., Alldrin, N., Uijlings, J., Krasin, I., Pont-Tuset, J., Kamali, S., Popov, S., Malloci, M., Kolesnikov, A. and Duerig, T., 2020. The Open Images Dataset V4. International Journal of Computer Vision, 128(7), pp.1956-1981.
[Lin et al. 2014] Lin, T.Y., Maire, M., Belongie, S., Hays, J., Perona, P., Ramanan, D., Dollár, P. and Zitnick, C.L., 2014. Microsoft COCO: Common Objects in Context. In European Conference on Computer Vision (pp. 740-755). Springer, Cham.
[Zhang et al. 2021] Zhang, H., Wang, Y., Dayoub, F. and Sunderhauf, N., 2021. VarifocalNet: An IoU-aware dense object detector. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (pp. 8514-8523).
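The exact line format of the *_classes_map.txt files is not documented above; as a purely illustrative sketch, assuming each line pairs a source class name with its unified class name:

    # Assumed format: "<source_class> <unified_class>" per line (whitespace-separated).
    mapping = {}
    with open("coco_classes_map.txt", encoding="utf-8") as f:
        for line in f:
            line = line.strip()
            if line:
                source_class, unified_class = line.split(maxsplit=1)
                mapping[source_class] = unified_class

    print(len(mapping), "COCO classes mapped into the unified 1460-class list")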
License: Attribution 4.0 (CC BY 4.0), https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
KOKLU Murat (a), UNLERSEN M. Fahri (b), OZKAN Ilker Ali (a), ASLAN M. Fatih (c), SABANCI Kadir (c)
(a) Department of Computer Engineering, Selcuk University, Konya, Turkey
(b) Department of Electrical and Electronics Engineering, Necmettin Erbakan University, Konya, Turkey
(c) Department of Electrical-Electronic Engineering, Karamanoglu Mehmetbey University, Karaman, Turkey
DATASET: https://www.muratkoklu.com/datasets/
Citation request: Koklu, M., Unlersen, M. F., Ozkan, I. A., Aslan, M. F., & Sabanci, K. (2022). A CNN-SVM study based on selected deep features for grapevine leaves classification. Measurement, 188, 110425. https://doi.org/10.1016/j.measurement.2021.110425
Highlights:
• Classification of five classes of grapevine leaves by the MobileNetV2 CNN model.
• Classification of features using SVMs with different kernel functions.
• Implementation of a feature selection algorithm for a high classification percentage.
• Classification with the highest accuracy using the CNN-SVM Cubic model.
Abstract: The main product of grapevines is grapes, which are consumed fresh or processed. In addition, grapevine leaves are harvested once a year as a by-product. The species of grapevine leaves are important in terms of price and taste. In this study, deep learning-based classification is conducted using images of grapevine leaves. For this purpose, images of 500 vine leaves belonging to 5 species were taken with a special self-illuminating system. Later, this number was increased to 2500 with data augmentation methods. The classification was conducted with a state-of-the-art CNN model, a fine-tuned MobileNetV2. As the second approach, features were extracted from the pre-trained MobileNetV2's Logits layer and classification was performed using various SVM kernels. As the third approach, 1000 features extracted from MobileNetV2's Logits layer were selected by the chi-square method and reduced to 250. Then, classification was performed with various SVM kernels using the selected features. The most successful method extracted features from the Logits layer and reduced them with the chi-square method. The most successful SVM kernel was Cubic. The classification success of the system was determined as 97.60%. It was observed that feature selection increased classification success even though the number of features used in classification decreased. Keywords: Deep learning, Transfer learning, SVM, Grapevine leaves, Leaf identification
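A minimal sketch of the pipeline the abstract describes (logits-layer features from a pre-trained MobileNetV2, chi-square feature selection, cubic-kernel SVM), using torchvision and scikit-learn. This is an illustration under stated assumptions rather than the authors' code; note that chi-square selection requires non-negative inputs, hence the min-max scaling:

    import torch
    from torchvision import models, transforms as T
    from sklearn.feature_selection import SelectKBest, chi2
    from sklearn.preprocessing import MinMaxScaler
    from sklearn.svm import SVC

    # Pre-trained MobileNetV2; its final (Logits) layer yields a
    # 1000-dimensional feature vector per image.
    net = models.mobilenet_v2(weights=models.MobileNet_V2_Weights.IMAGENET1K_V1).eval()
    preprocess = T.Compose([T.Resize(256), T.CenterCrop(224), T.ToTensor(),
                            T.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225])])

    @torch.no_grad()
    def logits_features(pil_images):
        batch = torch.stack([preprocess(im) for im in pil_images])
        return net(batch).numpy()  # (N, 1000) logits-layer features

    def train_cnn_svm(X, y):
        # Chi-square selection: 1000 -> 250 features, then a cubic SVM.
        X = MinMaxScaler().fit_transform(X)
        X = SelectKBest(chi2, k=250).fit_transform(X, y)
        return SVC(kernel="poly", degree=3).fit(X, y)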
The quality of AI-generated images has rapidly increased, leading to concerns about authenticity and trustworthiness.
CIFAKE is a dataset that contains 60,000 synthetically-generated images and 60,000 real images (collected from CIFAR-10). Can computer vision techniques be used to detect when an image is real or has been generated by AI?
Further information on this dataset can be found here: Bird, J.J. and Lotfi, A., 2024. CIFAKE: Image Classification and Explainable Identification of AI-Generated Synthetic Images. IEEE Access.
The dataset contains two classes - REAL and FAKE.
For REAL, we collected the images from Krizhevsky & Hinton's CIFAR-10 dataset
For the FAKE images, we generated the equivalent of CIFAR-10 with Stable Diffusion version 1.4
There are 100,000 images for training (50k per class) and 20,000 for testing (10k per class).
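A minimal sketch of loading the dataset for this binary task with torchvision's ImageFolder; the train/ and test/ directory names and the REAL/FAKE subfolders are assumptions about the on-disk layout:

    import torch
    from torchvision import datasets, transforms

    tfm = transforms.ToTensor()
    train_set = datasets.ImageFolder("train", transform=tfm)  # 100,000 images
    test_set = datasets.ImageFolder("test", transform=tfm)    # 20,000 images

    loader = torch.utils.data.DataLoader(train_set, batch_size=128, shuffle=True)
    print(train_set.classes)  # e.g. ['FAKE', 'REAL'], indexed alphabetically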
The dataset and all studies using it are linked using Papers with Code https://paperswithcode.com/dataset/cifake-real-and-ai-generated-synthetic-images
If you use this dataset, you must cite the following sources:
Krizhevsky, A., & Hinton, G. (2009). Learning multiple layers of features from tiny images.
Bird, J.J. and Lotfi, A., 2024. CIFAKE: Image Classification and Explainable Identification of AI-Generated Synthetic Images. IEEE Access.
Real images are from Krizhevsky & Hinton (2009); fake images are from Bird & Lotfi (2024), published in IEEE Access.
The updates to the dataset on the 28th of March 2023 did not change the image content; the ".jpeg" file extensions were renamed to ".jpg" and the root folder was re-uploaded to meet Kaggle's usability requirements.
This dataset is published under the same MIT license as CIFAR-10:
Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
GitHub page: https://github.com/mjkwon2021/CAT-Net
Paper: Myung-Joon Kwon, Seung-Hun Nam, In-Jae Yu, Heung-Kyu Lee, and Changick Kim, "Learning JPEG Compression Artifacts for Image Manipulation Detection and Localization", International Journal of Computer Vision, vol. 130, no. 8, pp. 1875–1895, Aug. 2022.
IETE Journal of Research Acceptance Rate - ResearchHelpDesk - IETE Journal of Research is a bimonthly journal published by the Institution of Electronics and Telecommunication Engineers (IETE), India; its full scope, subject coverage, indexing, and metrics are listed in the Impact Factor entry above.
License: Attribution 4.0 (CC BY 4.0), https://creativecommons.org/licenses/by/4.0/
License information was derived automatically
Dataset Development Environment for Guidance Glasses Project
Credit to:
* Dwyer, B., Nelson, J. (2022). Roboflow (Version 1.0) [Software]. Available from https://roboflow.com. Computer vision.
* Pedestrian Signal Images Dataset. UCI Senior Project (2023). Roboflow Universe (Version 1.0) [Software]. Available from https://universe.roboflow.com/uci-senior-project/pedestrian-signals/dataset/9. Computer vision.
* HAWC_AI Computer Vision Project. Kolattukudy, J. (2023). Roboflow Universe (Version 1.0) [Software]. Available from https://universe.roboflow.com/joseph-kolattukudy/hawc_ai/dataset/1/images/?split=train&numImages=100. Computer vision.
* July 29th Annotations. Kolattukudy, J. (2023). Roboflow Universe (Version 1.0) [Software]. Available from https://universe.roboflow.com/joseph-kolattukudy/july-29th-annotations. Computer vision.
* VehicleCount Computer Vision Project. Fyp (2023). Roboflow Universe (Version 1.0) [Software]. Available from https://universe.roboflow.com/fyp-5ctjf/vehiclecount. Computer vision.
* PedestriansDetection Computer Vision Project. Reg, V. (2023). Roboflow Universe (Version 1.0) [Software]. Available from https://universe.roboflow.com/victor-reg/pedestriansdetection. Computer vision.