We build generative models to simulate anatomical structure and motion,
enabling personalized understanding and clinical insight.
In this study, we propose MeshHeart, a conditional generative model that learns the population distribution of 3D+t cardiac anatomy and motion using geometric encoders and temporal transformers. The model enables personalized simulation, quantifies deviations from normal patterns via a latent delta metric, and demonstrates strong disease classification performance in a large-scale UK Biobank study.
Published at: Nature Machine Intelligence, 2025
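To illustrate how a latent deviation score of this kind can work, here is a minimal sketch. It is not MeshHeart's actual architecture or metric; the function name, the dimensionality, and the unit-Gaussian normative model are all hypothetical. The idea is to compare a subject's latent code against a condition-matched normative distribution and report the root-mean-square of the per-dimension z-scores.

```python
import math
import random

def latent_delta(subject_latent, normative_mean, normative_std):
    """Hypothetical deviation score (not MeshHeart's actual metric):
    RMS of per-dimension z-scores between a subject's latent code and
    a condition-matched normative latent distribution."""
    z = [(s - m) / sd
         for s, m, sd in zip(subject_latent, normative_mean, normative_std)]
    return math.sqrt(sum(v * v for v in z) / len(z))

# Toy example: 16-dim latents scored against a unit-Gaussian normative model.
random.seed(0)
mean, std = [0.0] * 16, [1.0] * 16
typical = [random.gauss(0.0, 1.0) for _ in range(16)]   # near the norm
atypical = [random.gauss(3.0, 1.0) for _ in range(16)]  # shifted away
assert latent_delta(atypical, mean, std) > latent_delta(typical, mean, std)
```

A larger score indicates a latent code further from the normative pattern for that subject's clinical covariates, which is the sense in which such a metric can flag deviations from normal anatomy and motion.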
We propose CHeart, a conditional generative model that learns 4D cardiac anatomy sequences and their associations with clinical factors such as age, sex, and disease status. CHeart enables controllable generation and completion of cardiac motion patterns and outperforms state-of-the-art baselines in sequence prediction and synthesis.
Published at: IEEE Transactions on Medical Imaging, 2023
In this work, we introduce EchoDiffusion, a diffusion-based framework that generates ultrasound video sequences from static images and clinical parameters. Unlike prior methods that require video input, EchoDiffusion enables single-frame-to-sequence synthesis, addressing real-world limitations in clinical documentation. Applied to echocardiograms, the model effectively captures variation in left ventricular ejection fraction (LVEF) and outperforms existing sequence-to-sequence methods by a large margin.
Published at: MICCAI, 2023
In this work, we propose a conditional generative model that learns how the heart changes structurally with age, enabling simulation of cardiac aging trajectories. The model integrates clinical factors such as age and sex, and demonstrates strong performance in modelling both cross-sectional and longitudinal cardiac anatomical variation.
Published at: STACOM, 2023
Luma Lab
Copyright © 2025 Luma Lab - All Rights Reserved.