Deep learning for rapid and robust fluorescence lifetime imaging

A deep-learning approach has the potential to unlock fluorescence lifetime imaging for clinical applications.


Fluorescence lifetime imaging microscopy (FLIM) is a widely used tool for biomedical imaging that offers many unique advantages over typical intensity-based fluorescence microscopy. FLIM is advancing fundamental biological research by enabling direct observation of cellular processes in live cells.1 The technique can quantitatively measure dynamic cellular events that would otherwise be imperceptible with standard intensity-based fluorescence imaging. Notably, the clinical potential of FLIM has recently been demonstrated by identifying tumor-generated exosomes and circulating cancer cells in blood.2,3 For widespread clinical adoption, diagnostics must be fast and reliable. Unfortunately, current systems require long acquisition times and are unreliable in the photon-starved conditions expected with clinical samples.4

The Yeh Lab at The University of Texas at Austin has developed a deep learning-based method termed flimGANE (fluorescence lifetime imaging based on Generative Adversarial Network Estimation) that can rapidly generate accurate, high-quality FLIM images even in photon-starved conditions. The team demonstrated a 258-fold increase in speed over the leading method (time-domain least-squares estimation) while also improving the accuracy of cellular structure visualization and metabolic state analysis.5
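To make the baseline concrete, the sketch below illustrates the conventional time-domain least-squares approach that flimGANE is benchmarked against: fitting a mono-exponential decay model I(t) = A·exp(−t/τ) to a photon-arrival histogram to recover the lifetime τ. This is not the authors' code; the parameter values (time window, bin count, lifetime) are hypothetical, chosen only for illustration.

```python
# Hypothetical sketch of time-domain least-squares lifetime fitting,
# the conventional baseline for FLIM analysis (not the flimGANE method).
import numpy as np
from scipy.optimize import curve_fit

def mono_exp(t, amplitude, tau):
    """Mono-exponential fluorescence decay model I(t) = A * exp(-t / tau)."""
    return amplitude * np.exp(-t / tau)

def fit_lifetime(t_bins, counts):
    """Least-squares estimate of the fluorescence lifetime (same units as t_bins)."""
    p0 = (counts.max(), 1.0)  # crude initial guess for (amplitude, tau)
    popt, _ = curve_fit(mono_exp, t_bins, counts, p0=p0)
    return popt[1]

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    true_tau = 2.5                        # ns, assumed ground-truth lifetime
    t = np.linspace(0.0, 12.5, 256)       # 256 time bins over a 12.5 ns window
    ideal = mono_exp(t, 1000.0, true_tau)
    noisy = rng.poisson(ideal).astype(float)  # photon-counting (Poisson) noise
    print(f"estimated lifetime: {fit_lifetime(t, noisy):.2f} ns")
```

In photon-starved conditions the histogram counts become sparse and this per-pixel fit grows slow and unstable, which is exactly the regime where a learned estimator such as flimGANE is reported to help.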

With its advantages in speed and reliability, flimGANE can be used to develop clinical diagnostics and ultra-fast research applications for fluorescence lifetime imaging.


1. Summers, P.A. et al. Nat. Commun. 12, 162 (2021).

2. Lee, D. et al. Lab Chip 18, 1349–1358 (2018).

3. Li, N. et al. Anal. Chem. 91, 15308–15316 (2019).

4. Datta, R. et al. J. Biomed. Opt. 25, 071203 (2020).

5. Chang, C. et al. bioRxiv (2020).