Application
Software for multimodal MR (magnetic resonance) image-to-image translation using only a single MRI (magnetic resonance imaging) modality.
Key Benefits
- Can generate an MR image of any desired modality from any other MR image.
- Potential clinical tool for fast multimodal MR image generation with a single scan.
- Reduces scan time, increases patient comfort, increases MR scanner throughput, and reduces MR image motion artifacts.
Market Summary
Multimodal MRI could facilitate medical diagnosis and treatment planning. Currently marketed MRI systems are used for diagnosis and disease detection across a range of presentations, including head trauma, stroke, brain aneurysms, and aneurysms or tears of the heart. However, long scanning times increase patient discomfort and introduce motion artifacts.
Technical Summary
Emory researchers have developed a unified generative adversarial network (StarGAN) that can generate any arbitrary MR image from any other MR image. For example, a front image may be used to generate a profile or back image. StarGAN is a novel and scalable approach that allows multimodal (i.e., multi-scope) MR image-to-image translation using only a single MRI modality (i.e., view). StarGAN produces translated images of superior quality compared to existing models and adds the capability of flexibly translating an input image to any desired target domain and perspective. The proposed method is a promising tool to reduce scan time, increase patient comfort, increase MR scanner throughput, and reduce MR image motion artifacts.
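For illustration only, the sketch below shows the core idea behind a StarGAN-style translator: a single generator receives an input MR image together with a one-hot target-modality label, so one network can map any acquired modality to any requested modality. The modality names, layer sizes, and module names are assumptions made for this sketch and do not describe the researchers' actual implementation.

```python
# Minimal StarGAN-style sketch (PyTorch): one generator conditioned on a
# target-modality label translates a single acquired MR image into any
# requested modality. All sizes and modality names below are illustrative.
import torch
import torch.nn as nn

MODALITIES = ["T1", "T2", "FLAIR"]  # assumed target domains

class StarGANGenerator(nn.Module):
    def __init__(self, img_channels=1, num_domains=len(MODALITIES), base=64):
        super().__init__()
        # Input = image channels + one spatial map per target-domain label
        self.net = nn.Sequential(
            nn.Conv2d(img_channels + num_domains, base, 7, 1, 3),
            nn.InstanceNorm2d(base, affine=True),
            nn.ReLU(inplace=True),
            nn.Conv2d(base, base, 3, 1, 1),
            nn.InstanceNorm2d(base, affine=True),
            nn.ReLU(inplace=True),
            nn.Conv2d(base, img_channels, 7, 1, 3),
            nn.Tanh(),  # outputs scaled to [-1, 1]
        )

    def forward(self, x, target_label):
        # target_label: (N, num_domains) one-hot, tiled over the spatial grid
        label_map = target_label[:, :, None, None].expand(
            -1, -1, x.size(2), x.size(3)
        )
        return self.net(torch.cat([x, label_map], dim=1))

# Usage: translate acquired single-modality slices into a requested modality.
g = StarGANGenerator()
scan = torch.randn(2, 1, 128, 128)            # e.g., acquired T1 slices
target = torch.eye(len(MODALITIES))[[1, 1]]   # request "T2" for both slices
synthetic = g(scan, target)                   # (2, 1, 128, 128) translated images
```

Because the target domain is supplied as an input label rather than baked into the network, the same generator serves every modality pair, which is what makes the approach scalable compared with training one model per translation direction.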
Developmental Stage
Currently available online.
Publication: MRI reconstruction using deep learning, generative adversarial network and acquisition signal model. US Patent Application Pub. No. US 2019/0369191 A1, published Dec. 5, 2019.