Software that can automatically segment the right ventricle in a live ultrasound image.
- Approximately 89% and 84% volume overlap between this method and the gold standard of manual segmentation for the epicardial and endocardial boundaries of the right ventricle, respectively.
- Automatic segmentation with a level of accuracy that enables quantified characterization of the right ventricle.
Reliable evaluation of the structure and function of the heart ventricles from echocardiographic images is essential for clinical examination and diagnosis. Although a number of methods have been developed to segment images of the left ventricle, they cannot be directly applied to the right ventricle, which has an irregular geometry and is typically imaged at poorer quality. The right ventricle is therefore often segmented manually by experts, but this process is time-consuming: hundreds of images in a single series must be segmented in a region that is already difficult to delineate.
Emory researchers have developed a method and algorithm to automatically segment both the epicardial and endocardial boundaries of the right ventricle from a continuous echocardiography series by combining a sparse matrix transform (SMT), training models, and a level set algorithm. Briefly, the SMT extracts the main motion regions of the myocardium, and a training model of the right ventricle is registered to the extracted regions. The registered training model is then deformed to provide an adapted initialization for segmentation, and finally a localized region-based level set algorithm is applied to segment both the epicardial and endocardial boundaries of the right ventricle. The mean Dice scores, a volume overlap measurement between this method and the gold standard, are 89.1% ± 2.3% for the epicardial boundary and 83.6% ± 7.3% for the endocardial boundary, respectively. This automatic segmentation method based on SMT and level set can provide a useful tool for quantitative cardiac imaging.
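The Dice score reported above is a standard overlap measure between an automatic segmentation and a manual (gold standard) one. As a minimal sketch of how such a score is computed from two binary masks (this is an illustrative implementation, not the researchers' code):

```python
import numpy as np

def dice_score(seg, gt):
    """Dice similarity coefficient between two binary masks:
    2 * |A intersect B| / (|A| + |B|)."""
    seg = seg.astype(bool)
    gt = gt.astype(bool)
    intersection = np.logical_and(seg, gt).sum()
    return 2.0 * intersection / (seg.sum() + gt.sum())

# Toy example with two overlapping 6x6 squares in a 10x10 image:
auto = np.zeros((10, 10), dtype=bool)
auto[2:8, 2:8] = True      # 36 pixels
manual = np.zeros((10, 10), dtype=bool)
manual[4:10, 4:10] = True  # 36 pixels; overlap is 4x4 = 16 pixels

print(dice_score(auto, manual))  # 2*16 / (36+36) = 0.444...
```

A Dice score of 1.0 means perfect agreement between the two segmentations; the ~89% epicardial score reported here thus indicates close agreement with manual expert tracing.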
The method has been evaluated and validated on eight echocardiography series, randomly selected from different human subjects' data, containing 400 images in total.