New optical coherence tomography biomarkers identified through deep learning for risk stratification in patients with age-related macular degeneration (2022 - 2024)

Competition: Demonstrative Experimental Project - PED2021
Contract number: 616PED/2022
Project code: PN-III-P2-2.1-PED-2021-2709
Research field:
Title : New optical coherence tomography biomarkers identified through deep learning for risk stratification in patients with age-related macular degeneration
Acronym: DeLArMaD
Project start date: June 20, 2022
Project end date: June 20, 2024
Duration (months): 24
Total budget: 600,000 RON
State budget: 600,000 RON
Project webpage:
Coordinating institution: Iuliu Hațieganu University of Medicine and Pharmacy
Project director: Prof. Simona Delia Nicoara
Project partner 1 (P1): Technical University of Cluj-Napoca

Team

Iuliu Hațieganu University of Medicine and Pharmacy

Technical University of Cluj-Napoca

Project objectives

Objective 1. Training and evaluation of deep learning-based screening (DL): Architectures based on Convolutional Neural Networks (CNN) and Capsule Networks (CapsNet) will be trained to classify Optical Coherence Tomography (OCT) scans of a patient, collected in a single visit, as urgent or non-urgent. This classification will help identify patients who need immediate treatment.

We used deep learning to classify retinal conditions from OCT, AngioOCT, and fundus images, and estimated the treatment necessity for patients with AMD [16]. We developed decision support systems that signal relevant biomarkers to the specialist physician (e.g., fluid level [4]) or provide a 3D visualization of neovascularization [1].
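As a rough sketch of the screening step in Objective 1, the following Python example fine-tunes a pre-trained CNN to label OCT B-scans as urgent or non-urgent; the folder layout, backbone, and hyperparameters are illustrative assumptions rather than the configuration actually used in [16].

```python
# Minimal sketch: binary urgent / non-urgent screening of OCT B-scans with a
# pre-trained CNN backbone. Paths and hyperparameters are placeholders.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, models, transforms

transform = transforms.Compose([
    transforms.Grayscale(num_output_channels=3),   # OCT B-scans are grayscale
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])

# Hypothetical folder layout: oct_scans/urgent/*.png, oct_scans/non_urgent/*.png
train_set = datasets.ImageFolder("oct_scans", transform=transform)
loader = DataLoader(train_set, batch_size=16, shuffle=True)

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, 2)      # urgent vs. non-urgent

device = "cuda" if torch.cuda.is_available() else "cpu"
model = model.to(device)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

model.train()
for epoch in range(5):
    for images, labels in loader:
        images, labels = images.to(device), labels.to(device)
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
```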

Objective 2. Training, evaluation, and explanation of DL-based diagnosis as normal or stages of AMD: The models will be trained using patient data, including OCT scans, to diagnose normal conditions and various stages of age-related macular degeneration (AMD) - early, intermediate, or advanced. To collect relevant training data, patients with different stages of AMD will be monitored monthly. The diagnosis should be explained in terms of the presence and progression of known imaging biomarkers.

We formalized the stages of AMD in the AMD ontology [12]. We developed models for explaining medical cases [11]. For patient-directed explanations, we developed a method to identify false medical information on the Internet [14]. Additionally, we evaluated the capability of large language models to answer patients’ questions related to AMD [21], [9]. We improved white-box learning algorithms to generate description logic axioms from training data [22]. We used classifier ensembles to detect diabetic retinopathy from fundus images [23].

Objective 3. Training, evaluation, and explanation of DL-based prediction of conversion from intermediate to advanced stage.

We evaluated the explanations generated for the patient or specialist physician based on the explainability evaluation metrics analyzed in  and the explanation ontology developed in [11]. We developed a method to verify whether the medical information provided to the patient is medically accurate [14].

Objective 4: Evaluation of treatment efficacy: Using the trained models for diagnosis and conversion prediction, the response of neovascular AMD (nAMD) to treatment will be monitored. The evaluation method should provide explanations regarding the evolution of imaging biomarkers or other characteristics identified by the models used for Objectives 2 and 3.

We developed an application that helps ophthalmology residents identify biomarkers and monitor the progression of conditions. The application assigns cases to each resident based on pedagogical rules [10]. We predicted the evolution of visual acuity (e.g., positive vs. negative) in patients with and without treatment [17].

Objective 5. Developing a program for anti-VEGF injections: Using the trained models for diagnosis and conversion prediction, a model will be developed to predict the progression of AMD under different injection programs. The data will be used to train models capable of anticipating the progression of known or new, highly predictive biomarkers. Based on the trusted model used for decision-making, the system will issue recommendations accompanied by explanations.

The method developed for generating the treatment plan is described in [5]. We developed a method to generate recommendations based on the knowledge identified and saved in the ontology. We analyzed the predictive capabilities of AI-based methods regarding the progression of AMD [19].

Objective 6. Identification of new biomarkers and their causal relationship with AMD. By interpreting the trained models used to achieve objectives 1, 2, 3, and 5, new OCT biomarkers and patterns for AMD progression will be identified.

We have developed a system that automatically extracts causal relationships from scientific articles related to AMD [3], [18]. We analyzed potential biomarkers in OCT images [8]. We extended visualization in conditions such as vitrectomy and lensectomy to facilitate the identification of biomarkers [13]. We used contrastive learning to identify areas with potential biomarkers. We developed algorithms for retina segmentation [24]. We detected biomarkers from OCT images [29], [30].

Technological Maturity of Research Results

The AI-based modules developed have been included in a customized training system application for ophthalmology residents. The developed system assigns patients who come to the clinic to residents, ensuring that each resident interacts with various retinal diseases over the course of a year of study. Patient allocation is performed according to the steps in Fig. 1.

Fig. 1 - Patient allocation algorithm for residents

The allocation is based on rules specified by the human agent, as illustrated in Fig. 2.

Fig. 2 - Examples of rules for patient allocation to residents

The application is described at https://users.utcluj.ro/~chiorana/resident.html. It offers two processing workflows: one for residents and one for ophthalmology specialists. In the residents’ workflow, the diagnosis of a case is based on sheets specific to the condition chosen by the resident for the current case. Residents can mark biomarkers directly on the fundus image (Fig. 3). The specialist physician evaluates each case completed by the residents based on an evaluation sheet; scores are given for the accuracy of the diagnosis and of the biomarker identification (Fig. 4). Both residents and the specialist physician can track progress, globally and in detail for each condition: residents their own, and the specialist that of each resident (Fig. 5).


Fig. 3 - Residents can signal biomarkers and visualise previous images from the same patient


Fig. 4 - The ophthalmologist assesses the diagnostics made by residents


Fig. 5 - Monitoring the progress of residents

The application was tested in an experiment with 4 residents over 4 weeks, during which they received cases from a predefined set of 200 images. The specialist doctor’s evaluation was carried out throughout the experiment. At the end of the experiment, a workshop was organized during which the results were analyzed together with all the participants. The feedback was positive, both in terms of the functionalities offered and the positive impact on learning.

From the perspective of the medical specialist, the major advantage was the ability to identify the types of errors residents make, especially when interpreting changes across images. The application was used from October 2023 to March 2024 on real cases in the practice of the County Hospital, with a group of 4 residents. Having been tested in clinical conditions on real cases, the application has reached TRL 6.

Scientific Results Obtained

Development of an ontology for AMD.

The ontology formalizes biomarkers for age-related macular degeneration [12]. Biomarkers are formalized in description logic according to definitions from the medical protocols used in the diagnosis and treatment of AMD. Examples of such definitions appear in Fig. 6. The development of the ontology was based on existing medical ontologies such as the Anatomy Ontology, Human Disease Ontology, Experimental Factor Ontology, SNOMED, Biological Spatial Ontology, Relation Ontology, and Symptom Ontology. With this ontology, biomarkers from OCT images or fundus images can be formalized (Fig. 7).

Fig. 6 - Formalizing severity levels for AMD in description logic
Fig. 7 - Using the AMD ontology to classify images
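For illustration, the following owlready2 sketch shows how AMD severity levels of the kind shown in Fig. 6 can be expressed as description logic class definitions; the class and property names are simplified placeholders, not the actual OntoAMD axioms from [12].

```python
# Minimal sketch of formalizing AMD severity in description logic with
# owlready2. Class and property names are illustrative only.
from owlready2 import Thing, ObjectProperty, get_ontology

onto = get_ontology("http://example.org/amd_sketch.owl")

with onto:
    class Biomarker(Thing): pass
    class MediumDrusen(Biomarker): pass
    class LargeDrusen(Biomarker): pass
    class PigmentaryAbnormality(Biomarker): pass

    class Eye(Thing): pass
    class hasBiomarker(ObjectProperty):
        domain = [Eye]
        range = [Biomarker]

    # Simplified AREDS-style definitions (illustrative, not the real axioms):
    class EarlyAMD(Eye):
        equivalent_to = [Eye & hasBiomarker.some(MediumDrusen)]
    class IntermediateAMD(Eye):
        equivalent_to = [Eye & (hasBiomarker.some(LargeDrusen)
                                | (hasBiomarker.some(MediumDrusen)
                                   & hasBiomarker.some(PigmentaryAbnormality)))]

onto.save(file="amd_sketch.owl")
```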

Identifying the challenges of using AI in clinical settings.

The research [19] also includes an analysis of the challenges associated with using AI-based tools in practice. These challenges include: (i) the lack of data for patients with AMD who have had multiple follow-up visits; (ii) the variety of devices and image acquisition protocols for OCT; (iii) the quality of data annotation (e.g., in diagnosis, the average accuracy of medical experts varies between 80% and 90%, while the average error rate among radiologists is around 30%; the number of annotators required to obtain correct labels with 95% confidence grows with the error rate: for an error rate of 20%, 10 annotators are needed, while for an error rate of 30%, at least 25 are necessary); (iv) the lack of clinical validation; (v) opacity. The analysis of these challenges and possible solutions is discussed in [19].

Detecting the severity of diabetic retinopathy from fundus images.

The proposed method has three stages [23]. In the first stage, the available images are preprocessed through adaptive equalization, color normalization, and Gaussian filtering (Fig. 8); the optic disc and blood vessels are then removed from the image (Fig. 9). In the second stage, the images are segmented based on relevant biomarkers and features are extracted from the fundus images (Fig. 10). In the third stage, an ensemble of classifiers is applied and the confidence in the obtained result is evaluated through machine learning.

Fig. 8 - Image processing: adaptive equalization, color normalization, Gaussian filtering
Fig. 9 - Processing fundus images: removing the optic disc and blood vessels
Fig. 10 - Identifying biomarkers: hard exudates or soft exudates
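A minimal OpenCV sketch of the first preprocessing stage is given below; the CLAHE, normalization, and filter parameters are illustrative and not the exact values used in [23].

```python
# Sketch of the first-stage fundus preprocessing described above
# (adaptive equalization, color normalization, Gaussian filtering).
import cv2
import numpy as np

def preprocess_fundus(path):
    bgr = cv2.imread(path)

    # Adaptive histogram equalization (CLAHE) on the luminance channel
    lab = cv2.cvtColor(bgr, cv2.COLOR_BGR2LAB)
    l, a, b = cv2.split(lab)
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
    lab = cv2.merge((clahe.apply(l), a, b))
    equalized = cv2.cvtColor(lab, cv2.COLOR_LAB2BGR)

    # Simple color normalization: zero mean, unit variance per channel
    img = equalized.astype(np.float32)
    normalized = (img - img.mean(axis=(0, 1))) / (img.std(axis=(0, 1)) + 1e-6)

    # Gaussian filtering to suppress high-frequency noise
    smoothed = cv2.GaussianBlur(normalized, (5, 5), sigmaX=1.0)
    return smoothed
```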

Segmentation of fluid for AMD from OCT images.

The objective was to identify biomarkers for the AMD condition by segmenting optical coherence tomography (OCT) images of the retina. We proposed an original hybrid solution that combines deep learning-based retinal segmentation with feature-based image classification [4]. The method distinguishes between a specific fluid area and the adjacent retinal zones. In this context, we performed a comparative analysis of various semantic segmentation techniques for segmenting the retinal layers. Due to the sparse representation of fluids in the considered reference dataset, we also addressed and enhanced their recognition through texture classification based on regions of interest (Fig. 11).

Fig. 11 - Segmenting OCT images: identifying retinal layers and areas with fluid

Identification of the neovascular membrane using classic image processing methods.

The objective was to identify the area where the neovascular membrane appears using classic image processing methods, based on binarization algorithms and contour extraction.


Fig. 12 - Identifying the neovascular membrane in OCT images: (a) original image; (b) ground-truth mask; (c, d) prediction: blue - true positive, green - false positive, red - false negative
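A compact OpenCV sketch of this binarization-plus-contour pipeline follows; the Otsu threshold, morphological closing, and largest-contour heuristic are assumptions made for illustration.

```python
# Rough sketch of the classic pipeline: binarization followed by contour
# extraction to delineate a candidate neovascular membrane region.
import cv2
import numpy as np

def extract_membrane_contour(oct_bscan_path):
    gray = cv2.imread(oct_bscan_path, cv2.IMREAD_GRAYSCALE)
    blurred = cv2.GaussianBlur(gray, (5, 5), 0)

    # Global binarization with Otsu's threshold
    _, binary = cv2.threshold(blurred, 0, 255,
                              cv2.THRESH_BINARY + cv2.THRESH_OTSU)

    # Morphological closing to fill small gaps inside the candidate region
    kernel = np.ones((5, 5), np.uint8)
    closed = cv2.morphologyEx(binary, cv2.MORPH_CLOSE, kernel)

    # Keep the largest contour as the membrane candidate
    contours, _ = cv2.findContours(closed, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return gray, None
    membrane = max(contours, key=cv2.contourArea)

    overlay = cv2.cvtColor(gray, cv2.COLOR_GRAY2BGR)
    cv2.drawContours(overlay, [membrane], -1, (0, 0, 255), 2)
    return overlay, membrane
```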

Predicting treatment plans for patients with AMD.

The architecture used for prediction appears in Fig. 13 [5], [16]. To facilitate explanations, the system is extended with visualization functionalities (Figs. 14 and 15).

Fig. 13 - Architecture used to predict the treatment for the AMD condition
Fig. 14 - Grad-CAM visualization of a model for a patient with CNV
Fig. 15 - 2D UMAP visualization of the learned model. Each B-scan is colored according to the most likely prediction class (left) or based on the need for CNV treatment (right)
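The Grad-CAM visualization in Fig. 14 can in principle be reproduced with the short sketch below; the ResNet backbone and the choice of the last convolutional block are illustrative stand-ins for the actual model.

```python
# Minimal Grad-CAM sketch for inspecting which image regions drive a CNN
# prediction. Model and layer choice are illustrative.
import torch
import torch.nn.functional as F
from torchvision import models

def grad_cam(model, image, target_class, conv_layer):
    activations, gradients = {}, {}

    def fwd_hook(_, __, output):
        activations["value"] = output
    def bwd_hook(_, grad_in, grad_out):
        gradients["value"] = grad_out[0]

    h1 = conv_layer.register_forward_hook(fwd_hook)
    h2 = conv_layer.register_full_backward_hook(bwd_hook)

    model.eval()
    scores = model(image.unsqueeze(0))         # image: (3, H, W)
    model.zero_grad()
    scores[0, target_class].backward()
    h1.remove(); h2.remove()

    acts = activations["value"]                # (1, C, h, w)
    grads = gradients["value"]                 # (1, C, h, w)
    weights = grads.mean(dim=(2, 3), keepdim=True)
    cam = F.relu((weights * acts).sum(dim=1, keepdim=True))
    cam = F.interpolate(cam, size=image.shape[1:], mode="bilinear",
                        align_corners=False)
    cam = (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)
    return cam.squeeze().detach()

# Example with a ResNet backbone; layer4 is its last convolutional block.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
heatmap = grad_cam(model, torch.rand(3, 224, 224), target_class=0,
                   conv_layer=model.layer4)
```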

Detection of biomarkers for diabetic retinopathy from OCT.

The ability of recent architectures (EfficientNetV2B1, ConvNeXt) to differentiate healthy from diseased retinas, even without modifying their pre-trained parameters, has been validated. This indicates that the features extracted from the dense representation contain relevant information for diagnosis. A model was constructed to identify the presence of any of the biomarkers (hyperreflective foci, detachment, edema) without using pixel-level annotated data [29].
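The sketch below illustrates the frozen-backbone, multi-label setup using image-level labels only; torchvision's efficientnet_v2_s stands in for EfficientNetV2B1, and the data layout and hyperparameters are assumptions.

```python
# Sketch of multi-label biomarker detection on OCT B-scans with a frozen
# pre-trained backbone. Labels and layout are illustrative.
import torch
import torch.nn as nn
from torchvision import models

BIOMARKERS = ["hyperreflective_foci", "detachment", "edema"]

backbone = models.efficientnet_v2_s(weights=models.EfficientNet_V2_S_Weights.DEFAULT)
for p in backbone.parameters():
    p.requires_grad = False                    # keep pre-trained features fixed
in_features = backbone.classifier[1].in_features
backbone.classifier = nn.Linear(in_features, len(BIOMARKERS))  # trainable head

criterion = nn.BCEWithLogitsLoss()             # independent per-biomarker decisions
optimizer = torch.optim.Adam(backbone.classifier.parameters(), lr=1e-3)

# One training step on a dummy batch (replace with the real OCT loader):
images = torch.rand(8, 3, 224, 224)
targets = torch.randint(0, 2, (8, len(BIOMARKERS))).float()
loss = criterion(backbone(images), targets)
loss.backward()
optimizer.step()
```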

Automatic detection of images with uncertain annotations.

An automatic method for identifying images with uncertain annotations has been developed. In the medical field, the precise annotation of all details, for example, the presence of all biomarkers in every image of an OCT volume, is challenging. A correct diagnosis can be made by a human specialist based on the precise analysis of a subset of images from the volume, but for training automatic models, imprecisely annotated images can negatively affect the final quality of the model. The proposed method provides two functionalities: firstly, identifying uncertain images that could later be re-annotated by a physician, and secondly, building a model that is not affected by such uncertain images, even without their re-verification by a specialist. The method was tested on a natural-image dataset in which the annotation was intentionally altered, and on the OCT dataset used in [29].
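As a generic illustration of the idea (not the exact method developed in the project), the sketch below flags the images with the highest per-sample loss under a trained model as candidates for re-annotation.

```python
# Generic sketch: flag potentially mislabeled / uncertain images by ranking
# per-sample losses of a trained classifier.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader

def flag_uncertain(model, dataset, top_fraction=0.05, device="cpu"):
    """Return indices of the samples with the highest loss under the model."""
    model.eval().to(device)
    criterion = nn.CrossEntropyLoss(reduction="none")
    losses = []
    loader = DataLoader(dataset, batch_size=32, shuffle=False)
    with torch.no_grad():
        for images, labels in loader:
            images, labels = images.to(device), labels.to(device)
            losses.append(criterion(model(images), labels).cpu())
    losses = torch.cat(losses)
    k = max(1, int(top_fraction * len(losses)))
    return torch.topk(losses, k).indices.tolist()   # candidates for re-annotation
```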

Creating a 3D model of the blood vessels in the neovascularization area.

We succeeded in isolating the neovascularized area, identifying the blood vessels, and constructing the 3D model of the area of interest. We also calculated its volume and compared two scans of the same patient acquired at different time intervals (Fig. 16) [1].


Fig. 16 - 3D visualization of the neovascular area and comparison of two scans at different intervals
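A minimal sketch of the reconstruction and volume comparison is given below; the binary mask, voxel spacing, and co-registration of the two scans are assumed inputs, not the project's actual processing chain.

```python
# Sketch: reconstruct a 3D surface of the segmented neovascular region from
# an OCT/AngioOCT volume and estimate its volume.
import numpy as np
from skimage import measure

def reconstruct_and_measure(mask, voxel_spacing=(0.05, 0.01, 0.01)):
    """mask: 3D boolean array marking neovascular voxels;
    voxel_spacing: (z, y, x) size of a voxel in millimetres (assumed)."""
    # Triangulated surface for 3D visualization (e.g., with matplotlib or VTK)
    verts, faces, normals, values = measure.marching_cubes(
        mask.astype(np.float32), level=0.5, spacing=voxel_spacing)

    # Volume estimate: number of foreground voxels times voxel volume
    voxel_volume = np.prod(voxel_spacing)
    volume_mm3 = mask.sum() * voxel_volume
    return verts, faces, volume_mm3

# Comparing two scans of the same patient (masks assumed co-registered):
# _, _, v1 = reconstruct_and_measure(mask_visit1)
# _, _, v2 = reconstruct_and_measure(mask_visit2)
# print(f"Neovascular volume change: {v2 - v1:.3f} mm^3")
```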

Impact of the Obtained Results

The articles resulting from the project had accumulated over 10 citations by the time of writing this report. Table 1 presents some examples.

Table 1: Sample of citations for the published papers
Paper | Citations | Example
[19] | 8 | Mares, V., Nehemy, M.B., Bogunovic, H., Frank, S., Reiter, G.S., Schmidt-Erfurth, U. AI-based support for optical coherence tomography in age-related macular degeneration. Int J Retin Vitr 10, 31 (2024).
[20] | 1 |
[17] | 1 |

Knowledge transfer to an economic operator.

Through the Transilvania Digital Innovation Hub, we applied for funding to design a telemedicine kiosk in ophthalmology for the period of December 2023 to September 2024. The knowledge acquired during this project will be transferred to an economic operator for the development of such a kiosk. The medical device for capturing fundus images purchased within the project will be used in designing an ophthalmologic kiosk. The application will be designed according to the HIPAA standard and the new AI Act regulations. The targeted artificial intelligence modules will help identify whether the retina images (fundus and OCT) have been correctly captured, signaling cases of defocus, artifacts, or excessive brightness. Additionally, artificial intelligence will be used to flag possible retina conditions. Technically, it is a binary classification problem of retina images into two classes: healthy eye vs. eye with possible issues.
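As a minimal sketch of the planned capture-quality checks, the example below flags defocus via the variance of the Laplacian and excessive brightness via mean intensity; the thresholds are illustrative assumptions, not calibrated values.

```python
# Sketch of capture-quality checks for the kiosk: a defocused fundus image
# has low Laplacian variance, and over-exposure shows up as high mean intensity.
import cv2

def capture_quality(path, blur_threshold=100.0, brightness_threshold=220.0):
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)

    focus_measure = cv2.Laplacian(gray, cv2.CV_64F).var()
    mean_brightness = float(gray.mean())

    issues = []
    if focus_measure < blur_threshold:
        issues.append("defocus")
    if mean_brightness > brightness_threshold:
        issues.append("excessive brightness")
    return {"focus_measure": focus_measure,
            "mean_brightness": mean_brightness,
            "retake_recommended": bool(issues),
            "issues": issues}
```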

Creating a medical image labeling center.

Through the Transilvania Digital Innovation Hub, we obtained funding for establishing a medical image labeling center in the ophthalmologic field. The challenges are related to the fact that medical data must be labeled according to methodologies and quality assurance standards. Correct labeling is achieved by staff who will undergo certification courses for medical data annotators. We aim to enhance the skills of residents, master’s students, and Ph.D. candidates. Additionally, data quality evaluation is a necessary step to validate the training phases of learning algorithms. The vision for this center is to create a chain effect in the national development of AI-based systems using medical data. The experience gained in the current project will be applied within this Ophthalmologic Data Labeling Center. We also seek to foster interaction between this center and the European Health Data Spaces (Common European Data Spaces), with the Romanian AI Hub (HRIA), and with the Institute of Artificial Intelligence of the Technical University of Cluj-Napoca, which includes the medical field as a strategic research domain.

Exploitation and Dissemination of Results

The results have also been disseminated in seminars (Table 2).

Table 2: Dissemination of results at seminars
Researcher | Presentation | Event
A. Groza | Eyes on A.I. | Progress and Innovation in Ophthalmology, Cluj-Napoca, 6 December 2023
A. Marginean | Enhancing OCT and Eye Fundus Image Interpretation with Deep Learning | Progress and Innovation in Ophthalmology, Cluj-Napoca, 6 December 2023
R. R. Slavescu | Detection of nAMD using Transformer Architectures - Preliminary Experiments | Progress and Innovation in Ophthalmology, Cluj-Napoca, 6 December 2023
I. Damian | Morphological characteristics of choroidal neovascular membrane in age-related macular degeneration: a spectral-domain optical coherence tomography angiography study | Progress and Innovation in Ophthalmology, Cluj-Napoca, 6 December 2023
A. Groza | Interleaving machine learning with reasoning for identifying retinal conditions | Smart Diaspora 2023, Workshop "Human-centered approaches for trustworthy AI (Trust-AI)", 10-13 April 2023, Timisoara
A. Groza | Interleaving machine learning with reasoning for identifying retinal conditions | Applications of Artificial Intelligence in Medicine (AIAM), 30 March 2023, Bucharest, Romania
R. R. Slavescu | Allocation of cases to resident physicians | Zilele UMF, 5 December 2022
A. Groza | Artificial Intelligence in Ophthalmology | RePatriot, Artificial Intelligence applied in healthcare, 21 November 2022
A. Groza | Ongoing Research at Intelligent Systems Group | Transilvania Digital Innovation Hub, Cluj-Napoca, Romania, 24 November 2022

Analysis of AI prediction capabilities on OCT images. This review aims to identify and evaluate approaches that apply artificial intelligence (AI) to optical coherence tomography (OCT) images to predict the progression of age-related macular degeneration (AMD). The search targeted seven databases up to January 1, 2022, using the PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-Analyses) guidelines, identifying 1800 records. After initial evaluation, 48 articles were selected for full-text retrieval, and finally, 19 articles were included.

Of these 19 articles, 4 articles focused on predicting anti-VEGF requirement in neovascular AMD (nAMD), 4 articles focused on predicting anti-VEGF efficacy in nAMD patients, 3 articles predicted conversion from early or intermediate AMD (iAMD) to nAMD, 1 article predicted conversion from iAMD to geographic atrophy (GA), 1 article predicted conversion from iAMD to both nAMD and GA, 3 articles predicted GA growth, and 3 articles predicted visual acuity (VA) after anti-VEGF treatment in nAMD patients. Since the use of AI methods to predict AMD progression is only in its initial phase, such a review establishes the context of existing work and can serve as a starting point for future research. (ISI article, Q2 [19])

Allocation of patients to residents. Resident training centers face challenges in trying to create balanced residency programs, as the cases encountered by residents are not always distributed efficiently from a didactic point of view. We propose a personalized residency training system in ophthalmology. The system is built on two components: (1) a deep learning (DL) model and (2) a case allocation algorithm based on expert systems. The DL model is trained on publicly available datasets through contrastive learning and can classify retinal diseases from fundus images (CFP). The CFP image is interpreted by the DL model, which provides a presumptive diagnosis. This diagnosis is then transmitted to a case allocation algorithm that selects the resident who would benefit most from the specific case, based on the respective resident’s history and performance. At the end of each case, the supervising expert physician evaluates the resident’s performance based on standardized examination records, and the results are immediately updated in their portfolio. Our approach offers a framework for future precision medical education in ophthalmology. (ISI Article, Q2 [20])
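A much-simplified sketch of the allocation idea follows; the single rule shown (least exposure to the predicted condition, ties broken by lower average score) is only an illustrative example, not the full rule base of the system in [10], [20].

```python
# Simplified sketch: the DL model's presumptive diagnosis is passed to a
# rule-based selector that favours the resident with the least exposure to
# that condition.
from dataclasses import dataclass, field

@dataclass
class Resident:
    name: str
    history: dict = field(default_factory=dict)  # condition -> cases handled
    scores: dict = field(default_factory=dict)   # condition -> average score

def allocate_case(diagnosis: str, residents: list[Resident]) -> Resident:
    """Pick the resident who has seen this condition least often;
    break ties by the lower average score (more room to improve)."""
    return min(residents,
               key=lambda r: (r.history.get(diagnosis, 0),
                              r.scores.get(diagnosis, 0.0)))

residents = [Resident("R1", {"AMD": 5, "DR": 1}, {"AMD": 8.5}),
             Resident("R2", {"AMD": 2, "DR": 4}, {"AMD": 6.0})]
chosen = allocate_case("AMD", residents)   # -> R2, fewer AMD cases so far
chosen.history["AMD"] = chosen.history.get("AMD", 0) + 1
```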

Verification of medical information. We address the issue of false medical information distributed on the internet. We have developed a tool for verifying medical information based on four technologies: (i) the Suggested Upper Merged Ontology (SUMO) for knowledge representation, (ii) the Vampire theorem prover for fact-checking, (iii) WordNet, and (iv) GPT (Generative Pre-trained Transformer) for learning and aligning concepts. (DBLP indexed Workshop [14])

Development of white-box learning algorithms to provide explanations to the specialist physician. In the medical field, it is important that the machine learning algorithms used are transparent to explain the proposed decision. Thus, we have developed "white-box" machine learning algorithms. We focused on algorithms for learning axioms in description logic. We extended the Class Expression Learning for Ontology Engineering (CELOE) algorithm, part of the DL-Learner tool. The approach uses multiple search trees and refinement operators for the learned concepts to divide the search space into smaller subspaces. We introduce the operation of conjunction of the best class expressions from each tree, retaining the results that provide the most information. The aim is to stimulate exploration from a diverse set of starting concepts and to simplify the process of finding definitions for concepts in ontologies. The concepts learned from data can be analyzed by the ophthalmologist to validate or invalidate the hypotheses identified by the machine learning algorithms. (Scopus Paper [22])

Methods for extending the DMLV ontology based on scientific articles using large language models. Expert knowledge in the field of ophthalmology can be formalized in medical ontologies. This expert knowledge can be extracted directly from scientific articles. We investigated the possibility of enriching medical ontologies by automatically translating natural language sentences into Description Logic. Since Large Language Models (LLMs) are the best tools available for translations, we fine-tuned a GPT-3 model to convert sentences into OWL Functional Syntax. We provided translation examples to train the model on: instances, class subsumption, domain and range of relations, relations, disjoint classes, and cardinality restrictions. The resulting axioms are used to enrich an ontology, supervised by a human agent. The developed tool is available as a plugin for the Protégé editor. We aim to extend the DMLV ontology with information extracted from scientific articles. (ISI Paper, IEEEXplore [18])

Detection of diabetic retinopathy severity levels. In the case of diabetic retinopathy, the blood vessels in the retina deteriorate, leading to bleeding, swelling, and other changes that affect vision. We proposed a method for detecting the severity levels of diabetic retinopathy. In the first stage, the available images are processed through: adaptive equalization, color normalization, Gaussian filtering, removal of the optic disc, and blood vessels. In the second stage, we perform image segmentation for relevant biomarkers and extract features from fundus images. In the third stage, we apply an ensemble of classifiers and assess the confidence in the results obtained through machine learning. (DBLP Paper [23])

Identification of biomarkers for AMD through OCT image segmentation. The classification of medical images is considered solved from a technical perspective: given a sufficient number of images—in terms of quantity, quality, and distribution—machine learning algorithms can classify various conditions. However, a drawback is that most approaches directly target disease detection without considering the medical protocols or guidelines used by clinicians. We address the gap between clinical protocols and AI-provided solutions. Instead of pursuing disease detection with deep learning tools, we aim to identify only the biomarkers sought by physicians according to medical protocols for AMD. The proposed solution combines deep learning-based retinal segmentation with feature-based image classification. The method can distinguish between a specific fluid area and adjacent retinal areas. Due to the underrepresentation of certain fluids in the considered reference dataset, we also address and improve their recognition through texture classification based on regions of interest. The proposed approach represents a step towards developing AI systems in alignment with ongoing medical protocols. (ISI Paper [4])
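The texture step can be illustrated with GLCM (gray-level co-occurrence matrix) features computed over candidate regions of interest; the feature set and the random-forest classifier below are assumptions, not the exact configuration from [4].

```python
# Sketch of the texture step: describe a candidate region of interest with
# GLCM features and let a simple classifier decide "fluid" vs. "adjacent retina".
import numpy as np
from skimage.feature import graycomatrix, graycoprops
from sklearn.ensemble import RandomForestClassifier

def glcm_features(roi):
    """roi: 2D uint8 patch cropped around a candidate fluid area."""
    glcm = graycomatrix(roi, distances=[1, 2], angles=[0, np.pi / 2],
                        levels=256, symmetric=True, normed=True)
    props = ["contrast", "homogeneity", "energy", "correlation"]
    return np.hstack([graycoprops(glcm, p).ravel() for p in props])

# Hypothetical training data: lists of ROI patches with fluid / no-fluid labels
def train_texture_classifier(patches, labels):
    X = np.vstack([glcm_features(p) for p in patches])
    clf = RandomForestClassifier(n_estimators=100, random_state=0)
    clf.fit(X, labels)
    return clf
```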

Ontology for AMD. We aim to support the monitoring of current guidelines and scientific information in the management of age-related macular degeneration (AMD) to assist retina specialists in developing a clinical protocol for AMD therapeutic approaches. Firstly, we developed an ontology for the AMD condition using information from the literature, related medical ontologies, and domain knowledge from ophthalmologists. Secondly, we populated and enriched the AMD ontology using structured knowledge extracted from the medical literature with the help of the GPT-3 language model. Thirdly, we applied reasoning algorithms on the formalized knowledge in the ontology to highlight to the ophthalmologist any differences or inconsistencies between various clinical studies, protocols, or therapeutic approaches specific to AMD. (DBLP indexed Workshop [12])

Method for generating explanations. In the medical field, the ability of an AI-based system to explain how it arrived at a decision is important for gaining trust in the system, understanding its functioning, and potentially gaining new and more generalizable knowledge. With artificial intelligence agents becoming increasingly complex and widespread, there is a growing interest in equipping them with the ability to explain themselves. However, explanations are more than just a matter of integrating techniques to approximate machine learning models with simpler methods; explanations are interactive acts of communication that need to be tailored to the interests of the person seeking explanations. To facilitate the representation of the communication goals behind an explanation, we propose the "Explanation Interchange Format" ontology, which serves multiple purposes. Firstly, it formally describes the act of communicating explanations and its structure based on existing theoretical work on scientific explanations. Secondly, the ontology allows reasoning to construct explanations, for example, by selecting explanatory structures appropriate to the interests of the inquirer. (SpringerLink book chapter [11])

Method for administering anti-VEGF injections. Optical Coherence Tomography (OCT) is a non-invasive examination that can help clinicians diagnose multiple retinal abnormalities, including AMD, and monitor disease progression. There are still needs regarding personalized treatment for patients suffering from AMD, as it is a disease that presents individual diversity in its progression and outcomes. We propose a method that will utilize deep learning technologies to analyze disease progression in patients and potential outcomes. The proposed method relies solely on OCT scans performed during the initial two examinations. We propose an architecture that will assist ophthalmologists in administering anti-VEGF injections, the standard treatment for advanced neovascular AMD. The architecture is based on features extracted from the B-scans of an OCT volume and, through transfer learning practices, will predict the total number of injections required for a patient under treatment, the decision for the next injection administration visit, as well as visual acuity at the next visit. (ISI proceedings paper [5])

Methods for expanding annotated medical image datasets. In order to create an extensive set of medical images, including the associated masks, a model based on FCBFormer was trained, resulting in two promising versions. The first one is based on the AdaBelief optimizer for training, and uses a weighted sum between binary cross entropy and the Dice coefficient as the loss function. On the tests performed, Precision and Dice coefficient values surpass those offered by the base FCBFormer architecture. The second version uses the Adam optimizer for training and binary cross entropy as the loss function. Better values for Recall and Precision (0.95 and 0.998, respectively) are obtained, at the cost of a slight decrease in the Dice score (from 0.93 to 0.92). Although initially aimed at the automatic detection of polyps, the method is general enough to be used in other types of images. (ISI proceedings paper [28], poster [24])
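A sketch of the combined loss mentioned above (a weighted sum of binary cross entropy and a soft Dice term) is shown below; the weighting value is an illustrative assumption, not the one used with FCBFormer in [28].

```python
# Sketch of a weighted BCE + soft Dice loss for binary mask prediction.
import torch
import torch.nn as nn

class BCEDiceLoss(nn.Module):
    def __init__(self, bce_weight=0.5, smooth=1.0):
        super().__init__()
        self.bce = nn.BCEWithLogitsLoss()
        self.bce_weight = bce_weight
        self.smooth = smooth

    def forward(self, logits, targets):
        bce = self.bce(logits, targets)

        probs = torch.sigmoid(logits)
        probs, targets_f = probs.flatten(1), targets.flatten(1)
        intersection = (probs * targets_f).sum(dim=1)
        dice = (2 * intersection + self.smooth) / (
            probs.sum(dim=1) + targets_f.sum(dim=1) + self.smooth)
        dice_loss = 1 - dice.mean()

        return self.bce_weight * bce + (1 - self.bce_weight) * dice_loss

# loss = BCEDiceLoss(bce_weight=0.5)(predicted_logits, ground_truth_masks)
```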

Classification of retinal conditions. Age-related macular degeneration (AMD), a leading cause of vision loss in older adults, represents a significant challenge in developing personalized treatment strategies due to its diverse progression patterns and outcomes. Optical Coherence Tomography (OCT), a non-invasive diagnostic tool, provides clinicians with a valuable means of identifying various retinal abnormalities, including AMD, and tracking disease progression. Differentiating between healthy and AMD or diabetic retinopathy in OCT is not as challenging for current pre-trained vision architectures. This work explores the use of transfer learning to move from B-scans to volumes and sequences of volumes. The transfer is from a general classification task between Normal, CNV, and Drusen to a specific task of recognizing early signs indicating future disease progression. Different types of base features and aggregation are combined with EfficientNetV2 architectures on two datasets: one dataset includes volumes with annotations at the B-scan and volume levels, and another dataset includes 94 patients and their treatment histories. The first two examinations from the latter dataset are considered for estimating anti-VEGF injection administration. Experiments suggest that pre-trained architectures can capture not only the presence of the disease but also elements relevant to disease progression, without requiring complex annotations such as those needed in semantic segmentation. When networks can identify aspects they were not explicitly trained to recognize, the discovery of new biomarkers, better disease understanding, and personalized treatments become more feasible. (ISI proceedings paper [16])

Extraction of causal relationships from scientific articles related to AMD [3]. The contributions are as follows: (i) a corpus of medical abstracts annotated with named entities and causal relationships for the condition AMD; (ii) the CausalAMD ontology used in the annotation process and the automatic construction of the LLM prompt; (iii) a tool capable of extracting causal relationships from medical abstracts (Fig. 17); (iv) a RAG-guided chatbot specialized in AMD. (Scopus-indexed conference paper, under review)

Fig. 17 - Causal knowledge graph extracted from a scientific article about AMD
Table 3: Ophthalmology datasets developed within the project
Dataset | Description | Ref.
ReziOCT | 10,592 fundus images classified into 39 classes, taken from public datasets (ODIR, RFMID, JSIEC). Annotations were aligned by a specialist doctor. | [20]
BioMs DR | 2,660 OCT images from 52 patients with information about the presence of one or more biomarkers of diabetic retinopathy: edema in 1,615, detachment in 330, foci in 2,096, and 510 images with a healthy aspect. | [29]
CausalAMD | 70 abstracts of scientific articles related to AMD (average length 350 words). The abstracts were annotated using the Prodigy tool with 9 causal relationships and the corresponding named entities. | [3]
DLMV FollowUp | Visual acuity values for 98 patients at 4 successive visits: baseline (LTFU_0) and follow-ups at +3, +6, and +12 months. For each patient, 16 characteristics are collected (e.g., sex, age, environment, occupation, OD severity, OS severity, vitrectomy). | [17]
Table 4: Utilizing acquired expertise and continuing collaboration in subsequent research
Project Title | Partner | Call | Status
1. Creation of an ophthalmic image labeling center | SCJ Cluj | TDIH | accepted
2. Evaluation of pediatric emergencies using artificial intelligence | Peditech | TDIH | accepted
3. Utilization of deep learning and knowledge representation for personalized residency training in ophthalmology | Easeful S.R.L. | PED | under review
4. AI-assisted telemedicine for improving the treatment of retinal diseases: screening, biomarkers, diagnosis suggestion, and patient monitoring | Brainoble Appsmart and UMF | PTE | under review
5. Development of a personalized AI virtual assistant using smartwatches for patients with Alzheimer’s or dementia | UEFISCDI and TUBITAK | RO–TR mobility | under review

Dissemination to society and the public.

At the end of the project (May 2024), we organized two dissemination events for the public. The 3D reconstruction module of neovascularization in AngioOCT images for NAMD was presented at the Night of Museums (Fig. 18), on May 24, 2024, at HUB UTCN, Cluj-Napoca (Fig. 19). The module for classifying retinal conditions based on fundus images was presented at the public event AI@UTCN and disseminated on the X and Instagram platforms.

Presentation at the public event “Night of Museums” of the ophthalmic telemedicine kiosk and the 3D reconstruction module of neovascularization in AngioOCT images for NAMD
Presentation of the module for classifying retinal conditions based on fundus images at the AI@UTCN event, open to citizens
Planned deliverables/indicators | Nr. | Achieved deliverables/indicators | Nr.
ISI Articles | 5 | [19], [20], [17], [29], [13], [8] | 6
Journal Articles | 0 | [30] | 1
Book Chapters | 0 | [11] | 1
Conference Papers | 9 | [18], [5], [16], [28], [4], [24], [26], [27], [1], [9], [3], [25], [14], [12], [10] | 15
Technical (RO, EN) and Financial Reports | 5 | 3 annual reports, 1 usage report of the application for residents, 1 technical report | 5
User Guide for the Application | 1 | Link | 1
Organization of Scientific Workshops | 0 | Zilele UMF |
Organization of Public Events | 0 | Night of Museums, AI@UTCN | 2
Ophthalmology Datasets | 1 | Described in [20], [3] | 4
Ontologies | 0 | OntoAMD [12], [14], [2], CausalAMD [3] | 2
Utilization of Expertise: Projects Won | | Data Labeling Center, Evaluation of Pediatric Emergencies | 2
Utilization of Expertise: Projects Under Review | 0 | PED2023, PTE2023 | 2

Papers