29 datasets found
  1. Retinal fundus images for glaucoma analysis: the RIGA dataset

    • commons.datacite.org
    • deepblue.lib.umich.edu
    Updated Apr 20, 2020
    Cite
    Ahmed Almazroa (2020). Retinal fundus images for glaucoma analysis: the RIGA dataset [Dataset]. http://doi.org/10.7302/z23r0r29
    Dataset updated
    Apr 20, 2020
    Dataset provided by
    DataCite (https://www.datacite.org/)
    University of Michigan
    Authors
    Ahmed Almazroa
    License

    Attribution-NonCommercial 4.0 (CC BY-NC 4.0): https://creativecommons.org/licenses/by-nc/4.0/
    License information was derived automatically

    Description

    The dataset comprises three files: 1) the MESSIDOR file, containing 460 original images plus 460 images for each ophthalmologist's manual marking, for a total of 3,220 images; 2) the Bin Rushed Ophthalmic Center file, containing 195 original images plus 195 images for each ophthalmologist's manual marking, for a total of 1,365 images; 3) the Magrabi Eye Center file, containing 95 original images plus 95 images for each ophthalmologist's manual marking, for a total of 665 images. Across the whole dataset there are 750 original images and 4,500 manually marked images, saved in JPG and TIFF format.

    NOTE ON THE DATA: The depositor accidentally left out 50 images from the BinRushed folder in the original deposit. A corrected BinRushed folder that includes these 50 images was added to this data set on May 21, 2018.

    NOTE ON DOWNLOADING: The file "MESSIDOR.zip" is too large to be downloaded through Deep Blue Data. Please use Globus to download this file.
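    The stated counts can be cross-checked with a short sketch. The assumption that six ophthalmologists marked each image is inferred from the per-file totals (7× the number of originals: one original plus six markings) and is not stated explicitly in this listing:

```python
# Per-file counts of original images, as stated in the description.
files = {"MESSIDOR": 460, "BinRushed": 195, "Magrabi": 95}
GRADERS = 6  # assumption: six graders, implied by per-file totals of 7x originals

originals = sum(files.values())
marked = originals * GRADERS
per_file_total = {name: n * (1 + GRADERS) for name, n in files.items()}

print(originals)       # 750
print(marked)          # 4500
print(per_file_total)  # {'MESSIDOR': 3220, 'BinRushed': 1365, 'Magrabi': 665}
```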

  2. RIM-ONE: An open retinal image database for optic nerve evaluation - Dataset...

    • service.tib.eu
    Updated Dec 2, 2024
    + more versions
    Cite
    (2024). RIM-ONE: An open retinal image database for optic nerve evaluation - Dataset - LDM [Dataset]. https://service.tib.eu/ldmservice/dataset/rim-one--an-open-retinal-image-database-for-optic-nerve-evaluation
    Dataset updated
    Dec 2, 2024
    Description

    Open retinal image database for optic nerve evaluation

  3. UHB Eye Image Dataset Release 001

    • healthdatagateway.org
    unknown
    Cite
    https://www.gov.uk/government/publications/diabetic-eye-screening-retinal-image-grading-criteria, UHB Eye Image Dataset Release 001 [Dataset]. https://healthdatagateway.org/en/dataset/96
    Available download formats: unknown
    Dataset provided by
    https://www.gov.uk/government/publications/diabetic-eye-screening-retinal-image-grading-criteria
    License

    https://www.insight.hdrhub.org/

    Description

    There are two data sets of eye scans available. The first is a set of fundus images, of which there are c. 7.0 million. The other is a set of OCT scans, of which there are c. 440,000.

    This dataset contains routine clinical ophthalmology data for every patient who has been seen at Queen Elizabeth Hospital and the Birmingham, Solihull and Black Country Diabetic Retinopathy screening program at University Hospitals Birmingham NHS Foundation Trust, with longitudinal follow-up for 15 years. Key data included are:
    • Total number of patients
    • Demographic information (including age, sex and ethnicity)
    • Past ocular history
    • Intravitreal injections
    • Length of time since eye diagnosis
    • Visual acuity
    • The national screening diabetic grade category (seven categories from R0M0 to R3M1)
    • Reason for sight and severe sight impairment

    Geography: University Hospitals Birmingham is set within the West Midlands and has a catchment population of circa 5.9 million. The region includes a diverse ethnic and socio-economic mix, with a higher-than-UK-average proportion of minority ethnic groups. It has a large number of elderly residents but is the youngest population in the UK. There are particularly high rates of diabetes, physical inactivity, obesity, and smoking.

    Data source: Ophthalmology department at Queen Elizabeth Hospital, University Hospitals Birmingham NHS Foundation Trust, Birmingham, United Kingdom. The Birmingham, Solihull and Black Country Data Set, University Hospitals Birmingham NHS Foundation Trust, Birmingham, United Kingdom. They manage over 200,000 patients, with longitudinal follow-up up to 15 years, making this the largest urban diabetic screening scheme in Europe.

    Pathway: The routine secondary care follow-up in the hospital eye services for all ophthalmic diseases at Queen Elizabeth Hospital. The Birmingham, Solihull and Black Country dataset is representative of the patient pathway for community screening and grading of diabetic eye disease.

  4. Dataset for - Development of a Deep Learning-based system for Optic Nerve...

    • data.mendeley.com
    Updated Jun 26, 2023
    Cite
    Francesco Marzola (2023). Dataset for - Development of a Deep Learning-based system for Optic Nerve characterization in Transorbital Ultrasound Images on a multicenter dataset [Dataset]. http://doi.org/10.17632/kw8gvp8m8x.2
    Dataset updated
    Jun 26, 2023
    Authors
    Francesco Marzola
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    This dataset provides all the data and codes to reproduce the results in the related paper. For more information or requests please contact francesco.marzola@polito.it. Maintained version of the code is available at: https://github.com/frmrz/BioLab-PoliTO---BioMedical-Image-and-Signal-Processing

    If you use the dataset, please cite: Marzola Francesco, Lochner Piergiorgio, Naldi Andrea, Lemor Robert, Stögbauer Jakob, and Kristen M. Meiburger. "Development of a Deep Learning–Based System for Optic Nerve Characterization in Transorbital Ultrasound Images on a Multicenter Data Set." Ultrasound in Medicine & Biology, (2023). https://doi.org/10.1016/j.ultrasmedbio.2023.05.011.

  5. Labelled Dataset of Retinal Images for Glaucoma detection

    • dataverse.nl
    • test.dataverse.nl
    txt, zip
    Updated Sep 1, 2021
    Cite
    Jiapan Guo; George Azzopardi; Chenyu Shi; Nomdo Jansonius; Nicolai Petkov (2021). Labelled Dataset of Retinal Images for Glaucoma detection [Dataset]. http://doi.org/10.34894/H2SZSO
    Available download formats: zip(1176911), zip(103724), zip(41781), txt(1497), zip(38552), zip(97943), zip(28703), zip(87083)
    Dataset updated
    Sep 1, 2021
    Dataset provided by
    DataverseNL
    Authors
    Jiapan Guo; George Azzopardi; Chenyu Shi; Nomdo Jansonius; Nicolai Petkov
    License

    CC0 1.0 Universal Public Domain Dedication: https://creativecommons.org/publicdomain/zero/1.0/
    License information was derived automatically

    Description

    Fundus photography is a viable option for glaucoma population screening. In order to facilitate the development of computer-aided glaucoma detection systems, we publish this annotation dataset, which contains manual annotations of glaucoma features for seven public fundus image data sets. All manual annotations were made by a specialised ophthalmologist. For each fundus image in the seven data sets, the upper, bottom, left, and right boundary coordinates of the optic disc and the cup are stored in a .mat file with the corresponding fundus image name. The seven public fundus image data sets are: CHASEDB (https://blogs.kingston.ac.uk/retinal/chasedb1/), Diaretdb1_v_1_1 (https://www.it.lut.fi/project/imageret/diaretdb1/), DRISHTI (http://cvit.iiit.ac.in/projects/mip/drishti-gs/mip-dataset2/Home.php), DRIONS-DB (http://www.ia.uned.es/~ejcarmona/DRIONS-DB.html), DRIVE (https://www.isi.uu.nl/Research/Databases/DRIVE/), HRF (https://www5.cs.fau.de/research/data/fundus-images/), and Messidor (http://www.adcis.net/en/Download-Third-Party/Messidor.html). Researchers are encouraged to use this set to train or validate their systems for automatic glaucoma detection. When you use this set, please cite our published paper: J. Guo, G. Azzopardi, C. Shi, N. M. Jansonius and N. Petkov, "Automatic Determination of Vertical Cup-to-Disc Ratio in Retinal Fundus Images for Glaucoma Screening," in IEEE Access, vol. 7, pp. 8527-8541, 2019, doi: 10.1109/ACCESS.2018.2890544.
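    Since each .mat file stores the upper/lower boundary coordinates of the optic disc and cup, the vertical cup-to-disc ratio targeted by the cited paper can be computed directly from them. A minimal sketch; the coordinate values are illustrative, and the field layout inside the .mat files is not documented in this listing, so inspect one file before relying on any names:

```python
def vertical_cdr(cup_top_y, cup_bottom_y, disc_top_y, disc_bottom_y):
    """Vertical cup-to-disc ratio from boundary y-coordinates (pixels)."""
    return abs(cup_bottom_y - cup_top_y) / abs(disc_bottom_y - disc_top_y)

# The annotations live in per-image .mat files and can be read with
# scipy.io.loadmat; the internal field names are undocumented here:
#   from scipy.io import loadmat
#   ann = loadmat("some_image.mat")  # hypothetical file name

print(vertical_cdr(210, 310, 180, 380))  # -> 0.5  (cup 100 px, disc 200 px)
```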

  6. Correlation between optic nerve head parameters and glaucoma patient data.

    • figshare.com
    • plos.figshare.com
    xls
    Updated Jun 1, 2023
    Cite
    Yu Yokoyama; Masaki Tanito; Koji Nitta; Maki Katai; Yasushi Kitaoka; Kazuko Omodaka; Satoru Tsuda; Toshiaki Nakagawa; Toru Nakazawa (2023). Correlation between optic nerve head parameters and glaucoma patient data. [Dataset]. http://doi.org/10.1371/journal.pone.0099138.t004
    Available download formats: xls
    Dataset updated
    Jun 1, 2023
    Dataset provided by
    PLOS ONE
    Authors
    Yu Yokoyama; Masaki Tanito; Koji Nitta; Maki Katai; Yasushi Kitaoka; Kazuko Omodaka; Satoru Tsuda; Toshiaki Nakagawa; Toru Nakazawa
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Correlation between optic nerve head parameters and glaucoma patient data.

  7. Quantifying the influence of optical coherence tomography beam tilt in the...

    • search.dataone.org
    • data.niaid.nih.gov
    • +1more
    Updated Aug 18, 2024
    Cite
    David Bissig; Shasha Gao; Haohua Qian (2024). Quantifying the influence of optical coherence tomography beam tilt in the normal adult mouse retina [Dataset]. http://doi.org/10.5061/dryad.95x69p8th
    Dataset updated
    Aug 18, 2024
    Dataset provided by
    Dryad Digital Repository
    Authors
    David Bissig; Shasha Gao; Haohua Qian
    Description

    Parallel microstructures within the retina – like retinal nerve fiber layer (RNFL) axons – differentially reflect light depending on its angle. This effect has previously been observed with optical coherence tomography (OCT), but it is under-studied. Quantification of this effect might provide useful information about retinal microstructures, and therefore the broader health of the retina. Our goal was to quantify the influence of OCT beam tilt on the reflectivity of each layer of the normal adult mouse retina.

    Using a Bioptigen Envisu UHR2200 system, we collected OCT images of twenty-seven healthy control mouse retinas (from fifteen mice). Images included the optic nerve head, the temporal retina, and the nasal retina. We converted signal intensities to estimated attenuation coefficients (eAC) for further processing. Each retina was imaged multiple times at beam tilts in the range of ±30° (where 0° is the most standard acquisition, with the beam perpendicular to the surface of the retina), and eAC was measured at each tilt. Images had a nominal resolution of 1.4 µm × 1.4 µm. Retinas were digitally linearized and spatially normalized to facilitate averaging and data aggregation across mice. This repository includes all retinal images processed for the associated publication. For three of the retinas, we also provide R code (r-project.org) and images from intermediate processing steps.
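    The listing does not spell out how signal intensities were converted to eAC (the deposited R code defines the exact conversion used for publication). For orientation only, a sketch of one widely used depth-resolved estimator (after Vermeer et al., 2014), which may differ from this dataset's method:

```python
def estimated_attenuation(intensities, pixel_size_mm):
    """Depth-resolved attenuation estimate per A-scan pixel.

    mu[i] ~ I[i] / (2 * delta * sum(I[i+1:])): each pixel's intensity
    divided by twice the pixel size times the summed intensity below it.
    The last pixel has no tail and is omitted.
    """
    eac = []
    for i in range(len(intensities) - 1):
        tail = sum(intensities[i + 1:])
        eac.append(intensities[i] / (2.0 * pixel_size_mm * tail))
    return eac

# A uniformly attenuating medium yields a constant coefficient:
print(estimated_attenuation([4.0, 2.0, 1.0, 1.0], 1.0))  # -> [0.5, 0.5, 0.5]
```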

    https://doi.org/10.5061/dryad.95x69p8th

    Description of the data and file structure

    In optical coherence tomography (OCT), the angle of the beam influences the apparent reflectivity of the retina. The relationship between beam tilt and reflectivity has been characterized in some layers of the mouse retina. We sought to characterize that relationship for all layers of the retina. To this end, we collected multiple OCT images at varying beam tilts from fifteen (15) adult wild-type C57 mice (Jackson Labs, Bar Harbor, ME, age ~3 mo).

    References:

    Meleppat RK, Zhang P, Ju MJ, Manna SK, Jian Y, Pugh EN, Zawadzki RJ, 2019. Directional optical coherence tomography reveals melanin concentration-dependent scattering properties of retinal pigment epithelium. J Biomed Opt 24(6):1-10.

    Files and variables

    File: beam_tilt_data_for_Dryad...

  8. Data from: Neural specialization for ‘visual’ concepts emerges in the...

    • openicpsr.org
    Updated Feb 6, 2024
    Cite
    Miriam Hauptman (2024). Neural specialization for ‘visual’ concepts emerges in the absence of vision [Dataset]. http://doi.org/10.3886/E198163V1
    Dataset updated
    Feb 6, 2024
    Dataset provided by
    Johns Hopkins University
    Authors
    Miriam Hauptman
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    The dataset contains anonymized structural and functional fMRI data from 21 congenitally blind adults and 22 sighted adults. All blind participants had, at most, minimal light perception from birth. These participants are blind due to conditions affecting the eye or optic nerve rather than brain damage. Cause-of-blindness information is provided in the aggregate to protect participant privacy. Both blind and sighted participants had no known cognitive or neurological disabilities, as determined through self-report, and were matched on age and years of education. Participants underwent 8 functional scans. During the scans, participants heard pairs of words and judged how similar the two words were in meaning on a scale from 1 (not at all similar) to 4 (very similar), indicating their responses via button press. Word stimuli consisted of 18 words in each of 8 semantic categories: 4 categories of entities/nouns (birds, mammals, manmade places, natural places) and 4 categories of events/verbs (light emission, sound emission, hand-related actions, mouth-related actions). Sighted participants wore light-excluding blindfolds to ensure uniform light conditions across groups during the scans. T1-weighted anatomical images were also collected. Only the structural and functional images in standard space were shared in this project, in accordance with IRB requirements. For details on image acquisition parameters and data preprocessing methods, please refer to the data description file.

  9. Data from: Bio-Inspired Vision and Neuromorphic Image Processing Using...

    • acs.figshare.com
    xlsx
    Updated May 30, 2023
    Cite
    Lin Sun; Shangda Qu; Yi Du; Lu Yang; Yue Li; Zixian Wang; Wentao Xu (2023). Bio-Inspired Vision and Neuromorphic Image Processing Using Printable Metal Oxide Photonic Synapses [Dataset]. http://doi.org/10.1021/acsphotonics.2c01583.s002
    Available download formats: xlsx
    Dataset updated
    May 30, 2023
    Dataset provided by
    ACS Publications
    Authors
    Lin Sun; Shangda Qu; Yi Du; Lu Yang; Yue Li; Zixian Wang; Wentao Xu
    License

    Attribution-NonCommercial 4.0 (CC BY-NC 4.0): https://creativecommons.org/licenses/by-nc/4.0/
    License information was derived automatically

    Description

    We propose a neuromorphic vision system that uses digitally printed metal oxide photonic synapses with optical tunability and temporally correlated plasticity. The neuromorphic vision system provides the first demonstration of encoding ambient light intensity in the time domain and captures optical images by encoding their pixel intensity into pulsatile signals in real time, analogous to a biological visual retina and optic nerve. The system can then process the information and form memory, emulating the image data processing in the human brain. The system can realize image-preprocessing functions to increase image contrast and reduce image background noise, thereby effectively improving the classification and recognition accuracy. Dynamic stimulation of visual cortical prosthetics shows that the system is capable of dynamic perception and memory to detect motion and mimic coherent visual perception. Furthermore, the photonic synapses with a two-terminal structure using printed semiconducting fiber arrays could also facilitate large-scale integration. The proposed system offers the potential to simplify neuromorphic visual circuits and may have applications in autonomous smart devices, such as driverless cars, smart surveillance systems, and intelligent healthcare.

  10. Link for analyzed digital image correlation data for the eyes in the...

    • figshare.com
    bin
    Updated Jul 8, 2025
    Cite
    Thao Nguyen; Michael Saheb Kashaf; Harry A. Quigley; Cameron Czerpak (2025). Link for analyzed digital image correlation data for the eyes in the goggle-wearing study [Dataset]. http://doi.org/10.6084/m9.figshare.29504315.v4
    Available download formats: bin
    Dataset updated
    Jul 8, 2025
    Dataset provided by
    Figshare (http://figshare.com/)
    Authors
    Thao Nguyen; Michael Saheb Kashaf; Harry A. Quigley; Cameron Czerpak
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Link to complete analyzed DVC results for all subjects in the goggle-wearing experiments published in: Czerpak CA, Kashaf MS, Zimmerman BK, Mirville R, Gasquet NC, Quigley HA, Nguyen TD. The Strain Response to Intraocular Pressure Increase in the Lamina Cribrosa of Control Subjects and Glaucoma Patients. Translational Vision Science & Technology. 2024 December 2;13(12):7. PubMed PMID: 39630437; PubMed Central PMCID: PMC11627119; DOI: 10.1167/tvst.13.12.7.

    Data for each eye is stored in a folder named after the eye labels in the file AverageData.xlsx. Each folder contains Matlab workspaces (.mat) for the following results:
    1) The initial DVC displacement (resultsFIDVC1HOCT_LC[label]_[LE/RE].disp.mat), baseline error (resultsFIDVC1HOCT_LC[label]_[LE/RE]_baseline error.mat), and correlation error (resultsFIDVC1HOCT_LC[label]_[LE/RE]_CorrError.mat). The LE label stands for left eye and RE for right eye.
    2) The processed DVC outputs for the displacements and strains (HOCT_LC[label]-DispStrain.mat), baseline error (HOCT_LC[label]-BaselineError.mat), and correlation error (HOCT_LC[label]-CorrError.mat).
    3) The strains averaged for each tissue segmented from the OCT images (HOCT_LC[eyelabel]-SegmentedAndSummarized.mat).
    4) The coordinates of points defining the tissue boundaries of the ONH used to segment the OCT images ([label]-dividers.mat).
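    The naming convention for the initial-displacement workspaces is regular enough to parse programmatically when walking the folders. A hypothetical sketch; the label "07" below is illustrative only, since real labels come from AverageData.xlsx:

```python
import re

# Matches the stated naming convention for initial DVC displacements:
#   resultsFIDVC1HOCT_LC[label]_[LE/RE].disp.mat
PATTERN = re.compile(
    r"resultsFIDVC1HOCT_LC(?P<label>.+)_(?P<eye>LE|RE)\.disp\.mat$"
)

def parse_disp_name(fname):
    """Return (label, eye) for a displacement workspace name, else None."""
    m = PATTERN.match(fname)
    return (m.group("label"), m.group("eye")) if m else None

print(parse_disp_name("resultsFIDVC1HOCT_LC07_RE.disp.mat"))  # ('07', 'RE')
print(parse_disp_name("HOCT_LC07-DispStrain.mat"))            # None
```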

  11. Optical Imaging Technologies Market Analysis, Size, and Forecast 2025-2029:...

    • technavio.com
    Updated Apr 21, 2025
    Cite
    Technavio (2025). Optical Imaging Technologies Market Analysis, Size, and Forecast 2025-2029: North America (US and Canada), Europe (France, Germany, UK), Middle East and Africa , APAC , South America , and Rest of World (ROW) [Dataset]. https://www.technavio.com/report/optical-imaging-technologies-market-industry-analysis
    Dataset updated
    Apr 21, 2025
    Dataset provided by
    TechNavio
    Authors
    Technavio
    Time period covered
    2021 - 2025
    Area covered
    Germany, Canada, United States, Global
    Description


    Optical Imaging Technologies Market Size 2025-2029

    The optical imaging technologies market size is forecast to increase by USD 1.81 billion, at a CAGR of 11.1% between 2024 and 2029.

    The market is experiencing significant growth, driven primarily by the increasing demand for non-invasive diagnostic solutions. This trend is fueled by the growing preference for less invasive and more accurate diagnostic methods, particularly in healthcare applications. Moreover, the continuous innovation in optical imaging technologies, leading to the introduction of new products, further propels market expansion. However, the market faces challenges in the form of stringent regulatory requirements. Navigating these hurdles, such as obtaining necessary approvals and adhering to quality standards, can be a complex and time-consuming process.
    Companies must invest in robust regulatory compliance strategies to effectively address these challenges and capitalize on the market's potential for growth. By focusing on innovation, regulatory compliance, and meeting the increasing demand for non-invasive diagnostics, market participants can position themselves for success in the dynamic and evolving optical imaging technologies landscape.
    

    What will be the Size of the Optical Imaging Technologies Market during the forecast period?

    Explore in-depth regional segment analysis with market size data - historical 2019-2023 and forecasts 2025-2029 - in the full report.

    The market continues to evolve, driven by advancements in various subfields including microscopy, imaging sensors, and automation. CMOS cameras are increasingly utilized in microscopy applications due to their high sensitivity and versatility. Electron microscopy, with its ability to provide high-resolution images, remains a critical tool in material science and biology research. Optical microscopes, meanwhile, are finding new applications in industries such as manufacturing and quality control. Microscopy automation and image processing software are essential components of modern imaging systems, enabling efficient data acquisition and analysis. Endoscopic imaging and cryo-electron microscopy (cryo-EM) are revolutionizing medical diagnostics and structural biology, respectively.

    Light sources, from lasers to light emitting diodes (LEDs), play a crucial role in illuminating samples for imaging. Image segmentation, 3D image reconstruction, spectral imaging, and super-resolution microscopy are just a few of the techniques pushing the boundaries of what is possible in imaging. Innovations in microscope objectives, image analysis algorithms, and data acquisition systems continue to drive progress in this dynamic field. Market activities in this sector are characterized by ongoing research and development, collaborations, and partnerships. The integration of various technologies, such as hyperspectral imaging and x-ray microscopy, is expanding the scope of applications and driving growth.

    The future of optical imaging technologies promises continued innovation and discovery.

    How is this Optical Imaging Technologies Industry segmented?

    The optical imaging technologies industry research report provides comprehensive data (region-wise segment analysis), with forecasts and estimates in 'USD million' for the period 2025-2029, as well as historical data from 2019-2023 for the following segments.

    Technology
    
      OCT
      Photoacoustic tomography
      Hyperspectral imaging
      Near-infrared spectroscopy
    
    
    End-user
    
      Hospitals and clinics
      Research laboratories
      Pharmaceutical companies
    
    
    Geography
    
      North America
    
        US
        Canada
    
    
      Europe
    
        France
        Germany
        UK
    
    
      Rest of World (ROW)
    

    By Technology Insights

    The OCT segment is estimated to witness significant growth during the forecast period.

    Optical imaging technologies, including optical coherence tomography (OCT), multiphoton microscopy, microscopy accessories, image sensors, image segmentation, 3D image reconstruction, light sources, CCD cameras, in situ hybridization, confocal microscopy, photomultiplier tubes, data acquisition systems, spectral imaging, super-resolution microscopy, microscope objectives, CMOS cameras, electron microscopy, optical microscopes, microscopy automation, image processing software, endoscopic imaging, cryo-electron microscopy, laser systems, hyperspectral imaging, image analysis algorithms, light sheet microscopy, image registration, sample preparation techniques, x-ray microscopy, and fluorescence microscopy, continue to advance our understanding of various biological and material structures. OCT, a non-invasive imaging technology, uses light waves to provide high-resolution cross-sectional images of biological tissues, playing a significant role in diagnosing and monitoring retinal and optic nerve diseases.

    Other imaging techniques, such as confocal microscopy, electron microsc

  12. Link for analyzed digital image correlation data for eyes in the suturelysis...

    • figshare.com
    bin
    Updated Jul 8, 2025
    Cite
    Thao Nguyen; Cameron Czerpak; Zhuochen Yuan; Harry A. Quigley (2025). Link for analyzed digital image correlation data for eyes in the suturelysis study [Dataset]. http://doi.org/10.6084/m9.figshare.29504717.v2
    Available download formats: bin
    Dataset updated
    Jul 8, 2025
    Dataset provided by
    Figshare (http://figshare.com/)
    Authors
    Thao Nguyen; Cameron Czerpak; Zhuochen Yuan; Harry A. Quigley
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Link to complete analyzed DVC results for all subjects in the goggle-wearing experiments published in: Czerpak CA, Kashaf MS, Zimmerman BK, Mirville R, Gasquet NC, Quigley HA, Nguyen TD. The Strain Response to Intraocular Pressure Increase in the Lamina Cribrosa of Control Subjects and Glaucoma Patients. Translational Vision Science & Technology. 2024 December 2;13(12):7. PubMed PMID: 39630437; PubMed Central PMCID: PMC11627119; DOI: 10.1167/tvst.13.12.7.

    Data for each eye is stored in a folder named after the data labels in the AverageData dataset. Each folder contains Matlab workspaces (.mat) for the following results:
    1) The initial DVC displacement (resultsFIDVC1HOCT_LC[label]_[LE/RE].disp.mat), baseline error (resultsFIDVC1HOCT_LC[label]_[LE/RE]_baseline error.mat), and correlation error (resultsFIDVC1HOCT_LC[label]_[LE/RE]_CorrError.mat).
    2) The processed DVC outputs for the displacements and strains (HOCT_LC[label]-DispStrain.mat), baseline error (HOCT_LC[eyelabel]-BaselineError.mat), and correlation error (HOCT_LC[label]-CorrError.mat).
    3) The strains averaged for each tissue segmented from the OCT images (HOCT_LC[label]-SegmentedAndSummarized.mat).
    4) The coordinates of points defining the tissue boundaries of the ONH used to segment the OCT images ([label]-dividers.mat).

  13. Eye images and retinopathy grades in diabetic eye screening

    • healthdatagateway.org
    unknown
    Cite
    University Hospitals Birmingham NHS Foundation Trust, Eye images and retinopathy grades in diabetic eye screening [Dataset]. https://healthdatagateway.org/dataset/92
    Available download formats: unknown
    Dataset provided by
    National Health Service (https://www.nhs.uk/)
    University Hospitals Birmingham NHS Foundation Trust (http://www.uhb.nhs.uk/)
    Authors
    University Hospitals Birmingham NHS Foundation Trust
    License

    https://www.insight.hdrhub.org/

    Description

    Background Diabetes mellitus affects over 3.9 million people in the United Kingdom (UK), with over 2.6 million people in England alone. Diabetic retinopathy (DR) is a common microvascular complication of type 1 and type 2 diabetes and remains a major cause of vision loss and blindness in those of working age. The National Institute for Health and Care Excellence recommendations are for annual screening using digital retinal photography for all patients with diabetes aged 12 years and over until such time as specialist surveillance or referral to Hospital Eye Services (HES) is required.

    The Birmingham, Solihull and Black Country DR screening program is a member of the National Health Service (NHS) Diabetic Eye Screening Programme. This dataset contains routine community annual longitudinal screening results for over 200,000 patients, with screening results per patient ranging from 1 year to 15 years. Key data included in this imaging dataset are:
    • Fundal photographs
    • The national screening diabetic grade category (seven categories from R0M0 to R3M1)
    • Screening outcome (digital surveillance and time; referral to HES)

    Geography: Birmingham, Solihull and Black Country is set within the West Midlands and has a population of circa 5.9 million. The region includes a diverse ethnic and socio-economic mix, with a higher-than-UK-average proportion of minority ethnic groups. It has a large number of elderly residents but is the youngest population in the UK. There are particularly high rates of diabetes, physical inactivity, obesity, and smoking.

    Data source: The Birmingham, Solihull and Black Country Data Set, University Hospitals Birmingham NHS Foundation Trust, Birmingham, United Kingdom. They manage over 200,000 patients, with longitudinal follow-up up to 15 years, making this the largest urban diabetic screening scheme in Europe.

  14. Blindness Resting State

    • openicpsr.org
    Updated Oct 20, 2023
    Cite
    Marina Bedny; Mengyu Tian (2023). Blindness Resting State [Dataset]. http://doi.org/10.3886/E198832V1
    Dataset updated
    Oct 20, 2023
    Dataset provided by
    Beijing Normal University
    Johns Hopkins University
    Authors
    Marina Bedny; Mengyu Tian
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    This project includes resting state data for 30 congenitally blind adults and 50 blindfolded sighted controls. All blind participants had, at most, minimal light perception from birth. These participants are blind due to conditions affecting the eye or optic nerve, rather than brain damage. Cause-of-blindness information is provided in the aggregate to protect participant privacy. Both blind and sighted participants had no known cognitive or neurological disabilities, as determined through self-report. A subset of the sighted controls can be selected to create a group similar in age and education to the blind group. Participants underwent 1 to 4 resting state scans, each comprising 240 volumes (average scan time = 710.4 seconds per person). During the scans, participants were instructed to relax while staying awake. Sighted participants wore light-excluding blindfolds to ensure uniform light conditions across groups during the scans. T1-weighted anatomical images were also collected. Only the structural and functional images in standard space were shared in this project, in accordance with IRB requirements. For details on image acquisition parameters and data preprocessing methods, please refer to the data description file.

  15. Moorfields DR Dataset 005

    • healthdatagateway.org
    Updated Sep 15, 2021
    INSIGHT Health Data Hub (2021). Moorfields DR Dataset 005 [Dataset]. https://healthdatagateway.org/en/dataset/95
    Dataset authored and provided by
    INSIGHT Health Data Hub
    License

    https://www.insight.hdrhub.org/researcher-area

    Description

    The Moorfields DR Dataset encompasses all patients who have been referred via the NHS diabetic eye screening programme (DESP) to Moorfields Eye Hospital, a leading provider of eye health services in the UK and a world-class centre of excellence for ophthalmic research and education.

    The DESP invites all diabetic patients aged 12 years or over to annual primary-care-based screening. Here, two-field fundus photography (one image centred on the macula and a second image centred on the optic disc) is acquired and graded according to the English Screening Programme for Diabetic Retinopathy standards. If referral criteria are met (R2, R3, M1, or ungradable), patients are referred to hospital eye services and suspended from screening while under secondary care. Urgently referred patients (retinopathy grade R3) are to be seen within 2 weeks, and routinely referred patients within 10 weeks.
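    The referral rules above can be sketched as a small lookup (a hypothetical helper, not part of any published screening software; grade codes follow the English screening standards quoted in the text, with "U" standing in for "ungradable"):

```python
# Hypothetical helper reflecting the referral targets described above.
URGENT_GRADES = {"R3"}              # to be seen within 2 weeks
ROUTINE_GRADES = {"R2", "M1", "U"}  # to be seen within 10 weeks

def referral_target_weeks(grade):
    """Target time-to-appointment in weeks, or None when no referral is made."""
    if grade in URGENT_GRADES:
        return 2
    if grade in ROUTINE_GRADES:
        return 10
    return None  # e.g. R0/R1: patient stays in annual screening

print(referral_target_weeks("R3"), referral_target_weeks("M1"), referral_target_weeks("R1"))
```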

    The earliest available screening records are from 2013; however, the dataset includes any imaging or clinical metadata that is available for these patients prior to that time (for example, in patients who were initially monitored for early manifestations of the disease). Also of note, this dataset includes data from both eyes in each case. For these reasons, the dataset includes longitudinal data spanning a wide range of diabetic eye disease.

    Clinical metadata includes information regarding:
    - patient demographics
    - visual acuities (predominantly measured with Early Treatment Diabetic Retinopathy Study (ETDRS) charts)
    - diabetic retinopathy grading
    - intravitreal therapies and ocular surgeries

    Additional information is provided in the ‘technical details’ tab.

    The DR dataset includes eye imaging modalities such as:
    - Optical coherence tomography (CSO, Heidelberg, Optos, Topcon, Zeiss)
    - Colour fundus photographs (Topcon, Zeiss)
    - Ultra-wide field photographs (Optos, Zeiss)
    - Iris photographs (CSO, Zeiss)
    - Keratoscope topography (CSO)
    - Infrared photographs (Heidelberg, Topcon, Zeiss)
    - Fluorescein angiography (Heidelberg, Optos, Topcon, Zeiss)
    - Indocyanine green angiography (Heidelberg, Optos, Topcon)
    - Fundus autofluorescence (Heidelberg, Optos, Zeiss)

    Imaging data from CSO is subject to additional approvals.

    As of July 2024, the dataset consisted of 91,009 eyes with 445,792 screening readings and over 5,537,798 ophthalmic images. This is one of the largest single-centre databases of patients with DR and covers more than a decade of follow-up for these patients.

  16. GAMMA Challenge Dataset

    • paperswithcode.com
    Updated Sep 1, 2022
    Junde Wu; Huihui Fang; Fei Li; Huazhu Fu; Fengbin Lin; Jiongcheng Li; Lexing Huang; Qinji Yu; Sifan Song; Xinxing Xu; Yanyu Xu; Wensai Wang; Lingxiao Wang; Shuai Lu; Huiqi Li; Shihua Huang; Zhichao Lu; Chubin Ou; Xifei Wei; Bingyuan Liu; Riadh Kobbi; Xiaoying Tang; Li Lin; Qiang Zhou; Qiang Hu; Hrvoje Bogunovic; José Ignacio Orlando; Xiulan Zhang; Yanwu Xu (2022). GAMMA Challenge Dataset [Dataset]. https://paperswithcode.com/dataset/gamma-challenge
    Authors
    Junde Wu; Huihui Fang; Fei Li; Huazhu Fu; Fengbin Lin; Jiongcheng Li; Lexing Huang; Qinji Yu; Sifan Song; Xinxing Xu; Yanyu Xu; Wensai Wang; Lingxiao Wang; Shuai Lu; Huiqi Li; Shihua Huang; Zhichao Lu; Chubin Ou; Xifei Wei; Bingyuan Liu; Riadh Kobbi; Xiaoying Tang; Li Lin; Qiang Zhou; Qiang Hu; Hrvoje Bogunovic; José Ignacio Orlando; Xiulan Zhang; Yanwu Xu
    Description

    GAMMA released the world's first multi-modal dataset for glaucoma grading, provided by the Sun Yat-sen Ophthalmic Center of Sun Yat-sen University in Guangzhou, China. The dataset consists of 2D fundus images and 3D optical coherence tomography (OCT) volumes from 300 patients. Every sample is annotated with a glaucoma grade, and each fundus image is additionally annotated with macular fovea coordinates and an optic disc/cup segmentation mask.

    We invite the medical image analysis community to participate by developing and testing existing and novel automated classification and segmentation methods.

    The GAMMA challenge consists of three tasks:
    - Grading glaucoma using multi-modality data
    - Segmentation of the optic disc and cup in fundus images
    - Localization of the macular fovea in fundus images
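    A minimal sketch of how one GAMMA sample could be represented (field names are hypothetical; only the modalities and annotations come from the description above):

```python
from dataclasses import dataclass

@dataclass
class GammaSample:
    fundus_image_path: str   # 2D colour fundus photograph
    oct_volume_path: str     # 3D OCT volume
    glaucoma_grade: int      # per-sample glaucoma grade
    fovea_xy: tuple          # macular fovea coordinates in the fundus image
    disc_cup_mask_path: str  # optic disc/cup segmentation mask

# Hypothetical file names, for illustration only.
sample = GammaSample("0001.jpg", "0001_oct/", 0, (1024.5, 980.0), "0001_mask.png")
print(sample.glaucoma_grade)
```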

  17. G1020 Dataset

    • paperswithcode.com
    Updated Aug 10, 2023
    Muhammad Naseer Bajwa; Gur Amrit Pal Singh; Wolfgang Neumeier; Muhammad Imran Malik; Andreas Dengel; Sheraz Ahmed (2023). G1020 Dataset [Dataset]. https://paperswithcode.com/dataset/g1020
    Authors
    Muhammad Naseer Bajwa; Gur Amrit Pal Singh; Wolfgang Neumeier; Muhammad Imran Malik; Andreas Dengel; Sheraz Ahmed
    Description

    G1020 is a large publicly available retinal fundus image dataset for glaucoma classification. It was curated in conformance with standard practices in routine ophthalmology and is expected to serve as a standard benchmark for glaucoma detection. The database consists of 1020 high-resolution colour fundus images and provides ground-truth annotations for glaucoma diagnosis, optic disc and optic cup segmentation, vertical cup-to-disc ratio, neuroretinal rim size in the inferior, superior, nasal, and temporal quadrants, and the bounding-box location of the optic disc.
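    One of the listed annotations, the vertical cup-to-disc ratio, can be illustrated from binary disc and cup masks (a sketch, not G1020's released tooling):

```python
def vertical_cdr(disc_mask, cup_mask):
    """Vertical extent of the cup divided by vertical extent of the disc."""
    disc_rows = [i for i, row in enumerate(disc_mask) if any(row)]
    cup_rows = [i for i, row in enumerate(cup_mask) if any(row)]
    disc_height = disc_rows[-1] - disc_rows[0] + 1
    cup_height = cup_rows[-1] - cup_rows[0] + 1 if cup_rows else 0
    return cup_height / disc_height

# Toy masks: a 10-row disc containing a 4-row cup -> vertical CDR 0.4
disc = [[5 <= y < 15 and 5 <= x < 15 for x in range(20)] for y in range(20)]
cup = [[8 <= y < 12 and 8 <= x < 12 for x in range(20)] for y in range(20)]
print(vertical_cdr(disc, cup))  # 0.4
```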

  18. Data from: Gender Prediction for a Multiethnic Population via Deep Learning...

    • omicsdi.org
    Updated Apr 5, 2023
    (2023). Gender Prediction for a Multiethnic Population via Deep Learning Across Different Retinal Fundus Photograph Fields: Retrospective Cross-sectional Study. [Dataset]. https://www.omicsdi.org/dataset/biostudies/S-EPMC8408758
    Description

    Background: Deep learning algorithms have been built for the detection of systemic and eye diseases from fundus photographs. The retina possesses features that can be affected by gender differences, and the extent to which these features are captured via photography depends on the retinal image field.

    Objective: We aimed to compare the performance of deep learning algorithms in predicting gender from different fields of fundus photographs (optic disc-centered, macula-centered, and peripheral fields).

    Methods: This retrospective cross-sectional study included 172,170 fundus photographs of 9956 adults aged ≥40 years from the Singapore Epidemiology of Eye Diseases Study. Optic disc-centered, macula-centered, and peripheral-field fundus images were used as input to a deep learning model for gender prediction. Performance was estimated at the individual level and at the image level, and receiver operating characteristic curves for binary classification were calculated.

    Results: The deep learning algorithms predicted gender with an area under the receiver operating characteristic curve (AUC) of 0.94 at the individual level and 0.87 at the image level. Across the three image field types, the best performance was achieved with optic disc-centered images (younger subgroups: AUC=0.91; older subgroups: AUC=0.86), and algorithms using peripheral-field images performed worst (younger subgroups: AUC=0.85; older subgroups: AUC=0.76). Across the three ethnic subgroups, performance on optic disc-centered images was lowest in the Indian subgroup (AUC=0.88) compared with the Malay (AUC=0.91) and Chinese (AUC=0.91) subgroups. Image-level gender prediction was better in younger subgroups (aged <65 years; AUC=0.89) than in older subgroups (aged ≥65 years; AUC=0.82).

    Conclusions: We confirmed that gender in an Asian population can be predicted from fundus photographs using deep learning, and that performance differs by fundus photograph field, age subgroup, and ethnic group. This work furthers our understanding of deep learning models for the prediction of gender-related diseases. Further validation of our findings is still needed.
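    The two evaluation levels can be sketched on toy data (a hypothetical illustration, not the study's pipeline): image-level AUC scores every photograph separately, while individual-level AUC first averages each person's image scores.

```python
def auc(labels, scores):
    """AUC via pairwise comparison of positive vs negative scores (ties = 0.5)."""
    pos = [s for l, s in zip(labels, scores) if l == 1]
    neg = [s for l, s in zip(labels, scores) if l == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Toy data: three people, two images each; the label is the per-person gender.
person = [0, 0, 1, 1, 2, 2]
label = [1, 1, 0, 0, 1, 1]
score = [0.4, 0.6, 0.3, 0.5, 0.7, 0.8]  # hypothetical model outputs

# Image level: every photograph is scored independently.
image_level = auc(label, score)

# Individual level: average each person's image scores first.
ids = sorted(set(person))
mean_score = [sum(s for p, s in zip(person, score) if p == i) / person.count(i) for i in ids]
person_label = [next(l for p, l in zip(person, label) if p == i) for i in ids]
individual_level = auc(person_label, mean_score)

print(image_level, individual_level)  # 0.875 1.0
```

    Note that the individual-level AUC can exceed the image-level AUC, as averaging suppresses per-image noise, matching the pattern reported above (0.94 vs 0.87).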

  19. Accurate, fast, data efficient and interpretable glaucoma diagnosis with...

    • plos.figshare.com
    Updated May 30, 2023
    Ian J. C. MacCormick; Bryan M. Williams; Yalin Zheng; Kun Li; Baidaa Al-Bander; Silvester Czanner; Rob Cheeseman; Colin E. Willoughby; Emery N. Brown; George L. Spaeth; Gabriela Czanner (2023). Accurate, fast, data efficient and interpretable glaucoma diagnosis with automated spatial analysis of the whole cup to disc profile [Dataset]. http://doi.org/10.1371/journal.pone.0209409
    Dataset provided by
    PLOS ONE
    Authors
    Ian J. C. MacCormick; Bryan M. Williams; Yalin Zheng; Kun Li; Baidaa Al-Bander; Silvester Czanner; Rob Cheeseman; Colin E. Willoughby; Emery N. Brown; George L. Spaeth; Gabriela Czanner
    License

    Attribution 4.0 (CC BY 4.0): https://creativecommons.org/licenses/by/4.0/
    License information was derived automatically

    Description

    Background: Glaucoma is the leading cause of irreversible blindness worldwide. It is a heterogeneous group of conditions with a common optic neuropathy and associated loss of peripheral vision. Both over- and under-diagnosis carry high costs in terms of healthcare spending and preventable blindness. The characteristic clinical feature of glaucoma is asymmetrical optic nerve rim narrowing, which is difficult for humans to quantify reliably. Strategies to improve and automate optic disc assessment are therefore needed to prevent sight loss.

    Methods: We developed a novel glaucoma detection algorithm that segments and analyses colour photographs to quantify optic nerve rim consistency around the whole disc at 15-degree intervals. This provides a profile of the cup/disc ratio, in contrast to the vertical cup/disc ratio in common use. We introduce a spatial probabilistic model to account for the optic nerve shape, then use this model to derive a disc deformation index and a decision rule for glaucoma. We tested our algorithm on two separate image datasets (ORIGA and RIM-ONE).

    Results: The spatial algorithm accurately distinguished glaucomatous and healthy discs on internal and external validation (AUROC 99.6% and 91.0%, respectively). It achieves this using a dataset 100 times smaller than that required for deep learning algorithms, is flexible to the type of cup and disc segmentation (automated or semi-automated), utilises images with missing data, and is correlated with disc size (p = 0.02) and with the rim-to-disc at the narrowest rim (p
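    The whole-disc profile idea can be sketched as follows (an illustration under assumed inputs, not the authors' released code): given disc and cup masks sharing a centre, sample the cup/disc ratio along rays at 15-degree intervals.

```python
import math

def radial_extent(mask, cx, cy, angle_rad, r_max=500):
    """Distance from (cx, cy) to the farthest mask pixel along one ray."""
    extent = 0
    for r in range(1, r_max):
        x = round(cx + r * math.cos(angle_rad))
        y = round(cy + r * math.sin(angle_rad))
        if 0 <= y < len(mask) and 0 <= x < len(mask[0]) and mask[y][x]:
            extent = r
    return extent

def cdr_profile(disc_mask, cup_mask, cx, cy, step_deg=15):
    """Cup/disc ratio sampled at each angle; returns 360/step_deg values."""
    profile = []
    for deg in range(0, 360, step_deg):
        a = math.radians(deg)
        d = radial_extent(disc_mask, cx, cy, a)
        c = radial_extent(cup_mask, cx, cy, a)
        profile.append(c / d if d else 0.0)
    return profile

# Toy example: concentric circular disc (radius 10) and cup (radius 5).
size, cx, cy = 31, 15, 15
disc = [[(x - cx) ** 2 + (y - cy) ** 2 <= 100 for x in range(size)] for y in range(size)]
cup = [[(x - cx) ** 2 + (y - cy) ** 2 <= 25 for x in range(size)] for y in range(size)]
profile = cdr_profile(disc, cup, cx, cy)
print(len(profile), profile[0])  # 24 sectors; ratio 0.5 along the horizontal ray
```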

  20. Moorfields DMO Dataset 003

    • healthdatagateway.org
    Updated Feb 4, 2023
    INSIGHT Health Data Hub (2023). Moorfields DMO Dataset 003 [Dataset]. https://healthdatagateway.org/en/dataset/101
    Dataset authored and provided by
    INSIGHT Health Data Hub
    License

    https://www.insight.hdrhub.org/researcher-area

    Description

    The Moorfields DMO Dataset encompasses all patients who have received at least one injection of either Lucentis (ranibizumab) or Eylea (aflibercept) to treat diabetic macular oedema (DMO) at Moorfields Eye Hospital, a leading provider of eye health services in the UK and a world-class centre of excellence for ophthalmic research and education.

    These therapies began at Moorfields in 2013; however, the dataset includes any imaging or clinical metadata that is available for these patients prior to that time (for example, in patients who were initially monitored for early forms of the disease before receiving treatment). Also of note, this dataset includes data from both eyes in each case; for example, it includes data from fellow eyes that are not receiving injections. For these reasons, the dataset includes longitudinal data spanning a wide range of diabetic eye disease.

    Clinical metadata includes information regarding:
    - patient demographics
    - visual acuities (predominantly measured with Early Treatment Diabetic Retinopathy Study (ETDRS) charts)
    - diabetic retinopathy grading
    - intravitreal therapies and ocular surgeries

    Additional information is provided in the ‘technical details’ tab.

    The DMO dataset includes eye imaging modalities such as:
    - Optical coherence tomography (CSO, Heidelberg, Optos, Topcon, Zeiss)
    - Colour fundus photographs (Topcon, Zeiss)
    - Ultra-wide field photographs (Optos, Zeiss)
    - Iris photographs (CSO, Zeiss)
    - Keratoscope topography (CSO)
    - Infrared photographs (Heidelberg, Topcon, Zeiss)
    - Fluorescein angiography (Heidelberg, Optos, Topcon, Zeiss)
    - Indocyanine green angiography (Heidelberg, Optos, Topcon)
    - Fundus autofluorescence (Heidelberg, Optos, Zeiss)

    Imaging data from CSO is subject to additional approvals.

    As of July 2024, the dataset consisted of 5,873 eyes receiving Lucentis or Eylea for DMO, over 58,808 injection episodes, and over 1,304,395 ophthalmic images. This is one of the largest single-centre databases of patients with DMO and covers more than a decade of follow-up for these patients.
