LUMA

Structure-aware Learning

We develop learning methods that embed anatomical structure into AI models, improving representation, generalization, and interpretability.

Featured work

🫀 Mesh4D: A Motion-Aware Multi-View Variational Autoencoder for 3D+t Mesh Reconstruction

Mesh4D is a motion-aware deep generative model for reconstructing high-resolution, temporally smooth 3D+t cardiac meshes directly from multi-view cardiac MRI. The method integrates a multi-view cross-attention encoder, transformer-based variational latent dynamics, and a continuous deformation decoder for anatomically consistent and physiologically plausible 4D heart reconstruction.
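As an illustration of the first of these components, the sketch below shows one common way a multi-view cross-attention encoder can fuse feature tokens from several MRI views into a single latent vector: a learnable query attends over the pooled tokens of all views. This is a minimal, hypothetical NumPy sketch of the general technique, not the authors' Mesh4D implementation; the function name, shapes, and single-query design are assumptions for illustration.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax over the given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def multi_view_cross_attention(views, query):
    """Fuse per-view feature tokens into one latent via cross-attention.

    views: (n_views, n_tokens, d) array of feature tokens, one row of
           tokens per cardiac MRI view (shapes are illustrative).
    query: (d,) learnable query vector that attends over all view tokens.
    Returns a (d,) fused latent: an attention-weighted sum of tokens.
    """
    n_views, n_tokens, d = views.shape
    keys = views.reshape(-1, d)            # flatten views: (n_views*n_tokens, d)
    scores = keys @ query / np.sqrt(d)     # scaled dot-product attention logits
    weights = softmax(scores)              # attention over every token of every view
    return weights @ keys                  # weighted sum -> fused latent, shape (d,)

# Toy usage: 3 MRI views, 16 tokens each, 32-dim features.
rng = np.random.default_rng(0)
views = rng.standard_normal((3, 16, 32))
query = rng.standard_normal(32)
z = multi_view_cross_attention(views, query)
```

In a full model, the fused latent `z` would feed the variational latent dynamics over time; here a single query is used for simplicity, whereas practical encoders typically use multi-head attention with learned key/value projections.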

Status: Accepted at MICCAI 2025


Luma Lab

Copyright © 2025 Luma Lab - All Rights Reserved.
