Discover the numerous projects of our doctoral researchers from 2020.
Imaging & Diagnostics

Scalable Cell-Tracking
Scalable Cell-Tracking with Learnable Combinatorial Optimization
Doctoral researcher: Stefan Haller
Institution: UNI HD
Data science PI: Carsten Rother
Life science PI: Gerd Ulrich Nienhaus
Cell segmentation and tracking is the problem of processing a time series of (3D) images showing the development of an organism (e.g. Drosophila) at the cellular level, that is, the growth, movement, division and death of cells. Existing methods for this problem work reasonably well in simple cases with relatively few cells per time frame and relatively small temporal changes between frames.
The goal of the project is to develop a new cell-tracking method that can cope with the problem at later developmental stages, where existing approaches hit their limit. The key new components of the method are: (a) a scalable combinatorial solver that can efficiently deal with millions of variables and constraints, as cell-tracking can be cast as a large-scale combinatorial problem; and (b) a training technique to learn the parameters of the solver and fine-tune it to different types of input data.
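To illustrate the combinatorial structure only (this is not the project's scalable solver): in the simplest setting, linking detected cells between two consecutive frames can be posed as a minimum-cost assignment problem. The sketch below uses synthetic 3D centroids and a plain Euclidean-distance cost as illustrative assumptions; the actual model additionally handles divisions, appearances, disappearances and millions of variables.

```python
# Toy sketch: frame-to-frame cell linking posed as a minimum-cost assignment.
# Synthetic 3D centroids stand in for detected cells; the project's solver
# handles a far richer model at much larger scale.
import numpy as np
from scipy.optimize import linear_sum_assignment

rng = np.random.default_rng(0)
cells_t0 = rng.uniform(0, 100, size=(5, 3))          # centroids at frame t
cells_t1 = cells_t0 + rng.normal(0, 2, size=(5, 3))  # slightly moved at frame t+1

# Cost of linking cell i (frame t) to cell j (frame t+1): Euclidean distance.
cost = np.linalg.norm(cells_t0[:, None, :] - cells_t1[None, :, :], axis=-1)

rows, cols = linear_sum_assignment(cost)             # minimum-cost matching
for i, j in zip(rows, cols):
    print(f"cell {i} at t  ->  cell {j} at t+1  (cost {cost[i, j]:.2f})")
```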

Integration of omics data to discover biomarker signatures
Integration of omics data to discover biomarker signatures for hypoxia and radioresistance
Doctoral researcher: Verena Bitto
Institution: DKFZ
Data science PI: Benedikt Brors
Life science PI: Michael Baumann
Tumour hypoxia, a state of low oxygen levels in certain tissue regions, appears to play a prognostic role for loco-regional tumour control. Hypoxic cells are associated with resistance to radiotherapy, which allows them to survive standard treatment. Both phenomena, tumour hypoxia and radioresistance, are however highly heterogeneous between patients and within a single tumour.
The aim of the project is therefore to study both tumour hypoxia and radioresistance in more detail. We plan to integrate different omics data in order to find regulatory pathways and to combine them with results from the analysis of MRI scans.

Visualization for MRI-Based Psychiatric Diagnosis
Visualization for MRI-Based Psychiatric Diagnosis
Doctoral researcher: Philipp Wimmer
Institution: UNI HD
Data science PI: Filip Sadlo
Life science PI: Daniel Durstewitz
Since reliable biomarkers for psychiatric diagnosis and treatment indication are lacking, such diagnostics and prognostics are mainly based on structured interviews with the patient. Although abnormalities in non-invasive measurements, including structural magnetic resonance imaging (MRI) and functional MRI (fMRI), are associated with psychiatric conditions, the effects are often small and very heterogeneous, and thus do not directly allow for medically reliable classification of psychiatric conditions or subtypes.
The group of PI Durstewitz has suggested that derailed cortical attractor dynamics, rooted in known biophysical (synaptic) alterations, could be a hallmark feature of many psychiatric illnesses. Identifying attractor dynamics in brain recordings is, however, highly challenging, and only recently has progress in statistical machine learning and deep learning made it possible to extract crucial dynamical-systems features from neural recordings, including fMRI.
Major challenges for translating these methods and observations into clinical practice include their extension to additional data sources for greater robustness, effective and efficient analysis of the resulting very high-dimensional state spaces, identification and extraction of those dynamical features most relevant to psychiatric diagnosis and treatment, and, last but not least, presentation of the results, together with context information, in a format that is accessible to clinicians.
The goal of this project is to develop advanced visual data analysis techniques addressing these challenges. On the one hand, we develop techniques that reveal topological features, e.g., invariant manifolds in the high-dimensional phase space of the inferred dynamical systems, support navigation of these spaces, and bring them into context with additional data sources. On the other hand, we include structural magnetic resonance imaging data, develop respective local feature definitions, and investigate the integration of these spatial data with the high-dimensional phase space of the inferred dynamical systems. Finally, we combine the obtained techniques into an overall approach that enables effective application and interpretation by clinicians.
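As a minimal, self-contained illustration of what "identifying attractor dynamics" can mean in practice (a toy sketch, not the group's actual inference pipeline): given a system x_{t+1} = F(x_t) inferred from recordings, candidate attractors can be located as fixed points F(x*) = x* and classified via the Jacobian at x*. Here F is a hand-made two-dimensional map standing in for an inferred model.

```python
# Toy sketch: locate a fixed point of a discrete-time map and test its
# stability numerically. F is an illustrative stand-in for a model inferred
# from neural or fMRI recordings.
import numpy as np
from scipy.optimize import minimize

A = np.array([[0.5, -0.3], [0.2, 0.4]])

def F(x):
    # hand-made nonlinear map with a stable fixed point at the origin
    return np.tanh(A @ x)

def residual(x):
    r = F(x) - x
    return float(r @ r)

x_star = minimize(residual, x0=np.array([0.8, -0.5])).x

# Numerical Jacobian of F at the candidate fixed point.
eps = 1e-6
J = np.column_stack([(F(x_star + eps * e) - F(x_star - eps * e)) / (2 * eps)
                     for e in np.eye(2)])
stable = np.max(np.abs(np.linalg.eigvals(J))) < 1.0   # discrete-time stability criterion
print("fixed point:", np.round(x_star, 4), "stable:", stable)
```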
Surgery & Intervention 4.0

Human Rules - AI brains
Human Rules - AI brains: Automated CTV delineation for head and neck cancers
Doctoral researcher: Alexandra Walter
Institution: KIT / DKFZ
Data science PI: Martin Frank
Life science PI: Oliver Jäkel
The precise spatial delineation of cancerous and healthy tissue in radiation therapy is necessary to prevent side effects and the recurrence of the tumor. The sites, sizes and shapes of tumors vary widely, which makes the identification of the clinical target volume (CTV) difficult for humans as well as for machine learning algorithms. Expert guidelines were formulated to establish a best-in-class delineation standard. The goal of this project is to establish a reliable, fully automated pipeline for CTV delineation by combining the achievements of convolutional neural networks on medical image segmentation tasks with human expert guidelines.
First, state-of-the-art ML solutions will set a baseline for the delineation accuracy on the CT scans of the investigated head and neck cancer cohort. A divide-and-conquer strategy will then be used to formulate the expert guidelines as individual constraints. Finally, these constraints will be built into the learning routines, either by translating them into the target function or by encoding them in the network architecture.
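A hedged sketch of one way guideline knowledge could enter the target function: a hypothetical rule of the form "the CTV must stay within an anatomically allowed region" is added as a differentiable penalty on the predicted probabilities. The mask, the loss weighting and the guideline itself are illustrative assumptions, not the project's actual formulation.

```python
# Sketch: segmentation loss plus a penalty for violating a hypothetical
# guideline constraint (CTV probability mass outside an allowed mask).
import torch

def soft_dice_loss(probs, target, eps=1e-6):
    inter = (probs * target).sum()
    return 1.0 - (2.0 * inter + eps) / (probs.sum() + target.sum() + eps)

def guideline_penalty(probs, allowed_mask):
    # penalise predicted CTV probability mass outside the allowed region
    return (probs * (1.0 - allowed_mask)).mean()

def total_loss(probs, target, allowed_mask, lam=1.0):
    return soft_dice_loss(probs, target) + lam * guideline_penalty(probs, allowed_mask)

# toy volumes standing in for network output (probabilities) and labels
probs = torch.rand(1, 32, 32, 32, requires_grad=True)
target = (torch.rand(1, 32, 32, 32) > 0.7).float()
allowed = (torch.rand(1, 32, 32, 32) > 0.2).float()   # hypothetical guideline mask

loss = total_loss(probs, target, allowed)
loss.backward()                       # gradients flow through the guideline penalty
print(float(loss), probs.grad.shape)
```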

Cooperative multi-agent reinforcement learning
Cooperative multi-agent reinforcement learning for next-generation cognitive robotics in laparoscopic surgery
Doctoral researcher: Paul Maria Scheikl
Institution: KIT
Data science PI: Franziska Mathis-Ullrich
Life science PI: Martin Wagner
Laparoscopic surgery is a team effort. A surgeon and her assistant(s) collaborate on a shared task, working both individually and as a team, to achieve a successful surgical outcome. However, our society faces an increasing shortage of skilled surgeons and assistants, especially in rural areas. This shortage may be alleviated by cognitive surgical robots that automate certain tasks. Our project addresses the highly interdependent behavior of surgeon and assistant(s) as a multi-agent system of human and artificial agents, using methods of cooperative multi-agent reinforcement learning (cMARL). In contrast to previous work, this project aims to train multiple decentralized artificial agents that cooperatively solve a shared, robot-assisted laparoscopic task.
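The cooperative setting can be illustrated with a deliberately tiny example (an assumption for illustration, not the project's method): two independent tabular Q-learners receive a single shared reward in a one-step matrix game and must coordinate on the joint action that maximizes it. The actual project trains decentralized deep RL agents in a simulated robot-assisted laparoscopic task.

```python
# Minimal cooperative multi-agent example: two independent learners,
# one shared team reward, decentralized action selection.
import numpy as np

rng = np.random.default_rng(0)
n_actions = 3
# shared payoff: the team is rewarded only for coordinated joint actions
payoff = np.array([[1.0, 0.0, 0.0],
                   [0.0, 0.5, 0.0],
                   [0.0, 0.0, 0.2]])

q = [np.zeros(n_actions), np.zeros(n_actions)]   # one Q-table per agent
alpha, epsilon = 0.1, 0.2

for episode in range(2000):
    actions = [int(rng.integers(n_actions)) if rng.random() < epsilon else int(np.argmax(qa))
               for qa in q]                      # epsilon-greedy, decentralized choice
    r = payoff[actions[0], actions[1]]           # single shared team reward
    for qa, a in zip(q, actions):
        qa[a] += alpha * (r - qa[a])             # independent update per agent

print("learned joint action:", int(np.argmax(q[0])), int(np.argmax(q[1])))
```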

Data-driven prediction of complications in surgery
Robust data-driven prediction of complications in minimally-invasive surgery
Doctoral researcher: Lucas-Raphael Müller
Institution: DKFZ
Data science PI: Lena Maier-Hein
Life science PI: Hannes Kenngott
Death within 30 days after surgery has recently been found to be the third-leading cause of death worldwide. In this context, anastomotic leakage (AL) has been recognised as one major cause of intraoperative and postoperative complications. Anastomoses in medicine refer to surgical connections between two tubular structures, such as blood vessels or intestines; leaks may occur as late as days or even weeks postoperatively. Recent results suggest that the risk of anastomotic leakage varies considerably from hospital to hospital and can reach up to 16% (1).
The long-term goal of this project is to develop the first fully automatic approach to anastomotic leakage prediction based on laparoscopic video and multi-modal sensor data recorded throughout the surgical intervention. The core methodological challenge is to develop a representation of the multi-modal data that generalises across specific acquisition hardware and settings and thus serves as a foundation for clinical translation and multi-centre application of our methodology.
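As a rough sketch of the multi-modal fusion idea (the encoders, feature dimensions and fusion strategy are illustrative assumptions, not the project's representation): per-modality encoders map precomputed video features and sensor features into a shared representation from which a leak-risk score is predicted.

```python
# Sketch of late fusion for multi-modal risk prediction with hypothetical
# feature dimensions; the project's representation learning is more involved.
import torch
import torch.nn as nn

class LateFusionRisk(nn.Module):
    def __init__(self, video_dim=512, sensor_dim=16, hidden=64):
        super().__init__()
        self.video_enc = nn.Sequential(nn.Linear(video_dim, hidden), nn.ReLU())
        self.sensor_enc = nn.Sequential(nn.Linear(sensor_dim, hidden), nn.ReLU())
        self.head = nn.Linear(2 * hidden, 1)     # logit for leak / no leak

    def forward(self, video_feat, sensor_feat):
        z = torch.cat([self.video_enc(video_feat), self.sensor_enc(sensor_feat)], dim=-1)
        return self.head(z).squeeze(-1)

model = LateFusionRisk()
logit = model(torch.randn(4, 512), torch.randn(4, 16))   # 4 toy cases
print(torch.sigmoid(logit))                               # predicted leak probabilities
```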
Models for Personalized Medicine

Explainable Artificial Intelligence in Life Science
Explainable Artificial Intelligence in Life Science: An Application to Omics Data
Doctoral researcher: Philipp Toussaint
Institution: KIT
Data science PI: Ali Sunyaev
Life science PI: Matthias Schlesner
As it is becoming progressively challenging to wholly analyse the ever-increasing amounts of generated biomedical data (e.g., CT scans, X-ray images, omics data) by means of conventional analysis techniques, researchers and practitioners are turning to artificial intelligence (AI) approaches (e.g., deep learning) to analyse their data.
Although the application of AI to biomedical data in many cases promises improved performance and accuracy, extant AI approaches often suffer from opacity. Their sub-symbolic representation of state is often inaccessible and non-transparent to humans, limiting our ability to fully understand, and therefore trust, the produced outputs. Explainable AI (XAI) describes a recent trend in AI research that aims to address the opacity of contemporary AI approaches by producing (more) interpretable AI models whilst maintaining high levels of performance and accuracy.
The objective of the XAIOmics research project is to design, develop, and evaluate an XAI approach to biomedical (i.e., omics) data. In particular, we will identify biomedical use cases and current, viable approaches in the domain of XAI and apply and adapt them to the identified use cases. Given the highly interdisciplinary nature of the field, a central research hurdle will be developing an understanding of the different kinds of biomedical data and the subsequent feature engineering in the context of designing the AI algorithms.
In doing so, this project will not only aid researchers and physicians in obtaining a better understanding of the outputs of contemporary AI approaches for biomedical data but also create more transparency, which will support the building of trust in AI-based treatment and diagnosis decisions in personalized medicine.
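As a concrete and deliberately simple example of the kind of explanation such an approach could produce (an illustrative baseline, not the project's final XAI method), the sketch below computes permutation feature importance for a classifier trained on synthetic omics-like data.

```python
# Illustrative XAI baseline: permutation feature importance yields
# per-feature relevance scores a domain expert can inspect.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=300, n_features=50, n_informative=5, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
imp = permutation_importance(clf, X_te, y_te, n_repeats=10, random_state=0)

top = np.argsort(imp.importances_mean)[::-1][:5]
for i in top:
    print(f"feature {i}: importance {imp.importances_mean[i]:.3f}")
```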

Bayesian Deep Learning for Radiogenomics Analysis of Cancer
Bayesian Deep Learning for Radiogenomics Analysis of Cancer
Doctoral researcher: Alejandra Jayme
Institution: UNI HD
Data science PI: Vincent Heuveline
Life science PI: Heinz-Peter Schlemmer
In recent years, biomedical data have become increasingly available. In particular, the costs for procuring omics data, including genome and exome sequences as well as protein and transcriptomic data, have dramatically decreased. Together with histological images of tumors, these are important sources of high-dimensional data beneficial for cancer diagnostic procedures and treatment optimization.
We look to big data analysis to derive clinical knowledge on cancer evolution: identifying patterns and structures that signal critical events in cancer development in order to aid the development of adequate diagnosis, prevention and treatment for all kinds of tumors. This concept of data-driven modeling (DDM) unifies statistics, data analysis and machine learning in order to understand the underlying phenomena from data. Moreover, we couple the model with uncertainty quantification (UQ) methods to address the impact of uncertainties inherent in the data and the mathematical models.
The aim of this project is to develop and analyze new and innovative computational approaches, combining DDM and UQ, for solving current challenges in the field of hereditary cancer research. The derived model should enhance existing diagnostics and prevention strategies and allow for better understanding of cancer evolution.
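One widely used way to couple a deep model with uncertainty quantification is Monte Carlo dropout; the sketch below illustrates this general technique under toy assumptions and is not necessarily the Bayesian machinery the project will adopt.

```python
# Sketch: Monte Carlo dropout as a simple UQ technique. Keeping dropout
# active at inference time and averaging repeated stochastic forward passes
# yields a mean prediction and an uncertainty estimate per case.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(20, 64), nn.ReLU(), nn.Dropout(0.5), nn.Linear(64, 1))

x = torch.randn(8, 20)              # toy stand-in for radiogenomic feature vectors
model.train()                       # keep dropout stochastic at inference
with torch.no_grad():
    samples = torch.stack([model(x) for _ in range(100)])   # 100 MC passes

mean = samples.mean(dim=0).squeeze(-1)
std = samples.std(dim=0).squeeze(-1)    # per-case predictive uncertainty
print(mean, std)
```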

ML based parameterization
ML based parameterization to simulate tissue and tumor development as emerging property from single cell events
Doctoral researcher: Julian Herold
Institution: KIT
Data science PI: Achim Streit
Life science PI: Alexander Schug
A deep understanding of tissue growth as emergent behavior from single-cell events might lead to new insights for different scientific fields, ranging from fundamental biology to in-silico testing of treatment regimes for personalized medicine. We aim to simulate mm-sized virtual tissues, such as embryonic brain tissue or cancerous tumors, with more than a million µm-resolved individual cells. Such a model requires many parameters, which have to be adjusted in order to reproduce experimental data. We deploy a combination of traditional optimization techniques and deep learning approaches to find a reliable and flexible way to adapt these parameters.
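The parameter-adaptation idea can be sketched on a toy surrogate (an assumption for illustration; the project's agent-based tissue simulator is far larger): a cheap two-parameter growth model is tuned so that its output matches observed summary statistics, here with a global optimizer.

```python
# Toy sketch of simulation parameter fitting: tune a two-parameter logistic
# growth curve to noisy "observed" data by minimizing the discrepancy.
import numpy as np
from scipy.optimize import differential_evolution

t = np.linspace(0, 10, 50)

def simulate(params):
    growth_rate, capacity = params
    return capacity / (1.0 + np.exp(-growth_rate * (t - 5.0)))   # logistic growth

observed = simulate((0.9, 1000.0)) + np.random.default_rng(0).normal(0, 10, t.size)

def discrepancy(params):
    return float(np.mean((simulate(params) - observed) ** 2))

result = differential_evolution(discrepancy, bounds=[(0.1, 2.0), (100.0, 2000.0)], seed=0)
print("recovered parameters:", result.x)
```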

Data Mining and Uncertainty Quantification
Data Mining and Uncertainty Quantification in disease diagnosis
Doctoral researcher: Elaine Zaunseder
Institution: UNI HD
Data science PI: Vincent Heuveline
Life science PI: Stefan Kölker
The accurate diagnosis of patients with rare diseases is important but challenging, since their prevalence is very low. This is especially relevant for diseases where quick identification can lead to efficient therapies and treatments that can positively change the outcome and severity of the disease. In this respect, the analysis of patient-specific data from disease and control cohorts can help to improve diagnosis.
Thanks to advances in data mining and machine learning, as well as in the computing landscape, new opportunities have emerged in recent years to examine large data sets with high-dimensional feature spaces. In particular, Knowledge Discovery in Databases (KDD) methods can help to discover new clues about unknown causal relations as well as novel metabolic patterns. A classical approach is to use classification schemes, which rely on mathematical models describing the underlying relationships. When developing such classification models, real-world data must be processed, so both the medical input data and the mathematical models are subject to measurement inaccuracies and uncertainties.
In our project, we want to analyze and develop innovative methods for mathematical uncertainty quantification (UQ) to describe and quantify noise in the data as well as to assess model inaccuracies, in order to obtain reliable classification results. In that context, it is important not only to ensure the reliability of the model, but also to make the model understandable and interpretable for practitioners. Specifically, we want to extend techniques of Explainable AI (XAI) in the context of disease diagnosis. The overall goal is to analyze and develop reliable and interpretable models with high diagnostic predictive power to support the diagnosis of rare diseases.
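As a minimal illustration of attaching uncertainty to a diagnostic classifier (a sketch under toy assumptions, not the project's UQ methodology): an ensemble of models trained on bootstrap resamples yields a spread of predicted probabilities, which can be used to flag uncertain cases for expert review.

```python
# Sketch: bootstrap ensemble of classifiers; the spread of predicted
# probabilities serves as a simple uncertainty measure for triage.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.utils import resample

X, y = make_classification(n_samples=500, n_features=20, weights=[0.95], random_state=0)

probs = []
for seed in range(20):                            # bootstrap ensemble
    Xb, yb = resample(X, y, random_state=seed)
    clf = LogisticRegression(max_iter=1000).fit(Xb, yb)
    probs.append(clf.predict_proba(X[:5])[:, 1])  # first 5 cases as examples
probs = np.array(probs)

mean_risk, spread = probs.mean(axis=0), probs.std(axis=0)
for m, s in zip(mean_risk, spread):
    flag = "refer to expert" if s > 0.05 else "confident"
    print(f"predicted risk {m:.2f} +/- {s:.2f}  ->  {flag}")
```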