The Open Medical Informatics Journal

ISSN: 1874-4311
Quantification of Epicardial Fat by Cardiac CT Imaging



Giuseppe Coppini1, Riccardo Favilla1, Paolo Marraccini1, Davide Moroni*, 2, Gabriele Pieri2
1 Institute of Clinical Physiology (IFC), Italian National Research Council (CNR), Pisa, Italy
2 Institute of Information Science and Technologies (ISTI), Italian National Research Council (CNR), Pisa, Italy

Abstract

The aim of this work is to introduce and design image processing methods for the quantitative analysis of epicardial fat by using cardiac CT imaging.

Indeed, epicardial fat has recently been shown to correlate with cardiovascular disease, cardiovascular risk factors and metabolic syndrome. However, many concerns still remain about the methods for measuring epicardial fat, its regional distribution on the myocardium and the accuracy and reproducibility of the measurements.

In this paper, a method is proposed for the analysis of single-frame 3D images obtained by the standard acquisition protocol used for coronary calcium scoring. In the design of the method, much attention has been paid to the minimization of user intervention and to reproducibility issues.

In particular, the proposed method features a two-step segmentation algorithm suitable for the analysis of epicardial fat. In the first step of the algorithm, an analysis of the epicardial fat intensity distribution is carried out in order to define suitable thresholds for a first rough segmentation. In the second step, a variational formulation of level set methods, including a specially designed region homogeneity energy based on Gaussian mixture models, is used to recover spatial coherence and smoothness of fat depots.

Experimental results show that the introduced method may be efficiently used for the quantification of epicardial fat.

Keywords: Image segmentation, level set methods, epicardial fat, cardiac CT.


Article Information


Identifiers and Pagination:

Year: 2010
Volume: 4
First Page: 126
Last Page: 135
Publisher Id: TOMINFOJ-4-126
DOI: 10.2174/1874431101004010126

Article History:

Received Date: 5/12/2009
Revision Received Date: 24/2/2010
Acceptance Date: 1/3/2010
Electronic publication date: 27/7/2010
Collection year: 2010

© Coppini et al.; Licensee Bentham Open.

open-access license: This is an open access article licensed under the terms of the Creative Commons Attribution Non-Commercial License (http://creativecommons.org/licenses/by-nc/3.0/) which permits unrestricted, non-commercial use, distribution and reproduction in any medium, provided the work is properly cited


* Address correspondence to this author at the Istituto di Scienza e Tecnologie dell'Informazione ISTI-CNR, Via G. Moruzzi 1, 56124, Pisa, Italy; Tel: +39-050-3153130; Fax: +39-050-3152810; E-mail: davide.moroni@isti.cnr.it





1. INTRODUCTION

Epicardial fat, like other visceral fat localizations, is correlated with cardiovascular disease, cardiovascular risk factors and metabolic syndrome [1, 2]. The important characteristic of epicardial fat is its anatomical localization and its functional relationship with the coronary arteries and myocardium [3]. It is postulated that epicardial fat may work as an endocrine organ secreting hormones, cytokines and chemokines that may play a role in atherogenesis [4, 5]. This hypothesis is partially supported by a few epidemiological studies. Moreover, many concerns remain about the methods for measuring epicardial fat, its regional distribution on the myocardium and the accuracy and reproducibility of the measurements. At present, three techniques appear suitable for the quantification of epicardial fat, namely echocardiography, Magnetic Resonance Imaging (MRI) and Computed Tomography (CT). All of them have been used in medical studies (see e.g. [6-9]).

With respect to other imaging modalities, however, CT may provide a more accurate evaluation of fat tissue due to its higher spatial resolution compared to ultrasound and MRI.

In addition, CT is widely used for evaluating the coronary calcium score, which is an independent predictor of cardiovascular prognosis [10], and to obtain a non-invasive study of coronary morphology. Current MultiDetector Computed Tomography (MDCT) systems allow the visualization of the entire heart volume during an entire heart cycle with high density resolution. MDCT imaging thus lends itself to a wide spectrum of cardiac applications. As pointed out by other researchers [11], different pieces of information about coronary morphology, myocardium structure and function may be collected simultaneously, leading to a more complete assessment of the functional status of the heart. Quantitative evaluation of epicardial fat may add prognostic value to cardiac CT examinations, with a potential improvement of their cost-effectiveness. Unfortunately, evaluating epicardial fat by direct user interaction is a long and tedious task prone to strong inter- and intra-observer variability. As a matter of fact, this heavily hampers accurate and reproducible measurements on reasonably large populations.

Although the quantitative analysis of epicardial fat may thus have important clinical implications, the medical imaging literature on this topic is still limited. Indeed, the extraction of clinical parameters for the evaluation of epicardial fat poses several non-trivial problems in the areas of image segmentation, image reconstruction and shape modeling for the geometrical and densitometric characterization of the 3D models [12] representing fat depots.

Although the normal CT attenuation range for fat tissue is known to lie in the interval −190 < HU < −30 [13], such information alone is not sufficient to obtain an accurate segmentation of fat depots. Indeed, this interval depends somewhat on the CT scanner and also varies across individuals. For this reason, the attenuation range for fat should be assessed on the specific study under examination.

In addition, care must be taken when using the attenuation range for threshold-based segmentation, since spurious regions must first be eliminated.

For this reason, in [14] a method was proposed for the segmentation of abdominal fat tissue from CT images. The method classifies each pixel into three classes, corresponding to fat, non-fat and background, using a feature-vector paradigm. In particular, a wide range of Laws' features and Gabor texture features are extracted. After assessing the discriminative power of each feature, a restricted subset was chosen and employed for segmentation through a hierarchical multi-class fuzzy affinity method.

Very similarly, in [15] the work of [14] is extended to the quantification of pericardial fat. The authors added a stage for the ad hoc labeling of organs and tissues in the cardiac CT (namely lungs, heart, descending aorta and subcutaneous regions). A major limitation of the approach is that it is mainly 2D-based and does not provide any insight into the regional distribution of the pericardial fat.

In [16, 17], a computer-aided method for the quantification of pericardial fat using non-contrast CT images is presented. The method uses a preprocessing step, based on a region-growing segmentation strategy, to remove all structures other than the heart. Then, an experienced user is required to scroll through the slices between the upper and lower heart limits and, if the pericardium is visible, to place 5 to 7 control points. Catmull-Rom cubic spline functions are then automatically generated to obtain a smooth closed pericardial contour. Identification of the fat inside this contour is finally achieved by thresholding.

The recent contribution [18] uses the same preprocessing step presented in [16], but adds a high-level step for the identification of the pericardium, thus avoiding manual placement of spline control points. Indeed, in each slice, the segmentation algorithm sweeps the anterior region of the heart from 0 to 180 degrees with rays originating at the heart centroid. The highest-intensity points along each ray are stored and used to reconstruct the epicardial contour, employing a robust estimation method to deal with the considerable amount of noise and outliers. Although this approach may be interesting (and, indeed, may be integrated with the methods presented in this paper), the reported results are not impressive (with only 4 out of 10 images correctly segmented in a fully automatic way), so some improvement is still necessary.

As regards the estimation of visceral fat by other imaging modalities, in [19] an approach for abdominal fat estimation from MRI data is presented. The approach is based on the two-point Dixon imaging technique and consists of three stages: a) 3D phase unwrapping, b) image intensity inhomogeneity correction and c) final morphon-based registration and segmentation of fat tissue.

With respect to the related works presented above, the aim of this paper is to focus on image processing methods for the quantitative analysis of epicardial fat from volumetric non-contrast CT data acquired for calcium scoring, extending our previous work [20]. Particular care is given throughout the paper to the identification of possible sources of inter- and intra-observer variability. Besides the description of a procedure for identifying the Volume of Interest (VoI) in the CT scan, the main contribution of the paper is a two-step segmentation method suitable for the analysis of epicardial fat. In the first step of this method, an analysis of the epicardial fat intensity distribution is carried out in order to define suitable thresholds for a first rough segmentation. In the second step, an advanced segmentation method, based on level sets, is used to recover spatial coherence and smoothness of fat depots. Among several level set frameworks, the geodesic active contour framework [21] has been selected, exploiting a variational formulation without contour re-initialization. Since intensity and region homogeneity are crucial for classifying voxels as belonging or not to fat tissue, a specially designed region homogeneity term based on Gaussian mixture models has been incorporated in the framework. Experimental results show that the introduced segmentation method may be efficiently used for the quantification of epicardial fat.

The paper is organized as follows. In Section 2, the steps leading to a user-assisted segmentation of epicardial fat and to the extraction of the proposed parameters for its quantification are described; in Section 3, after describing the considered acquisition protocol, the results of the processing are reported step by step. Section 4 ends the paper with remarks and directions for future work.

2. METHODS

The procedure that has been devised for the measurement of epicardial fat consists of four steps, namely Identification of the VoI (Section 2.1), Segmentation of Epicardial Fat (Section 2.2), Segmentation Refinement (Section 2.3) and, finally, Extraction of Clinical Parameters (Section 2.4).

2.1. Identification of the Volume of Interest (VoI)

The goal of this section is to identify the Volume of Interest (VoI) on which to perform the quantitative analysis of epicardial fat. In particular, the first stage is to identify the anatomical structures in the CT scan corresponding to lungs, liver, diaphragm, sternum and ribs, in order to remove them from the VoI, since they do not contribute to epicardial fat. In addition, since the basal part of the heart (i.e. the volume containing the atria and the great vessels) is known to be more complex, its inclusion for the analysis of epicardial fat would require either strong manual intervention (leading to scarcely reproducible results) or extremely refined and knowledge-based segmentation methods. Thus, for the scopes of the present work, we focus on the analysis of the epicardial fat localized below the atrioventricular sulcus, i.e. in the regions corresponding to left and right ventricle. Indeed, the atrioventricular sulcus is an easily discernible pseudo-planar anatomical feature that can be certainly identified by experts, with very low inter-observer variability.

It is thus assumed that i) the acquired CT scan has been preprocessed by an expert, ii) the atrioventricular sulcus has been identified and iii) the volumetric dataset has been resampled along planes parallel to the sulcus. Formally, denoting with I = I(x, y, z) the volumetric dataset (where (x, y, z) ranges through the domain, namely a rectangular box), it is assumed that the planes with equation z = const are parallel to the atrioventricular sulcus.

Under this assumption, some assisted image editing is still necessary for the identification of the VoI: the region corresponding to the heart should be separated from the nearby tissues. In computed tomography, the attenuation coefficient of lung tissue is well separated from those of muscle and fat, so the lungs do not pose any problem and can be disregarded by simple thresholding. On the other hand, the intensity values of the liver and diaphragm are similar to the values found in the VoI, so they cannot be separated merely on the basis of intensity. A similar problem is the separation of fat depots near the chest wall from the epicardial fat. Also in this case, the separation cannot be based just on the local image appearance, but would require a more complex vision system.

For the scopes of the present work, a simple form of manual intervention is introduced for these separation problems. Namely, an expert observer is required to scroll the slices between the atrioventricular sulcus and the apex and to place some control points on the pericardium. Such control points are automatically used to draw a natural cubic spline, which represents the closed pericardium contour, on each slice. A rasterization procedure is finally applied to convert the stack of splines into the volume they bound. The volume V represents the identified VoI.
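As an illustrative sketch of this editing step, the per-slice contour can be obtained by interpolating the control points with a closed cubic spline and rasterizing the enclosed area; stacking the slice masks then yields the VoI. The sketch below assumes a Python environment with NumPy, SciPy and scikit-image, uses a periodic spline in place of the natural cubic spline mentioned above, and the function names are hypothetical.

    import numpy as np
    from scipy.interpolate import splprep, splev
    from skimage.draw import polygon2mask

    def slice_mask_from_control_points(control_points, slice_shape, n_samples=400):
        """Fit a closed cubic spline through user-placed pericardial control points
        and rasterize the enclosed area into a binary slice mask."""
        pts = np.asarray(control_points, dtype=float)  # (N, 2) array of (row, col)
        tck, _ = splprep([pts[:, 0], pts[:, 1]], s=0, per=True)  # closed contour
        rows, cols = splev(np.linspace(0.0, 1.0, n_samples), tck)
        return polygon2mask(slice_shape, np.stack([rows, cols], axis=1))

    def build_voi(control_points_per_slice, volume_shape):
        """Stack the per-slice masks into the 3D Volume of Interest (VoI)."""
        voi = np.zeros(volume_shape, dtype=bool)
        for k, pts in control_points_per_slice.items():
            voi[:, :, k] = slice_mask_from_control_points(pts, volume_shape[:2])
        return voi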

2.2. Segmentation of Epicardial Fat

After having defined the VoI, a first rough estimation of epicardial fat is obtained by applying a threshold-based approach.

As reported in Section 1, absolute Hounsfield Unit (HU) values of voxels correspond directly to tissue properties and, thus, epicardial fat may be identified by thresholding the VoI with the interval corresponding to adipose tissue (i.e. −190 < HU < −30). However, such a nominal interval should be corrected on the basis of both the actual CT scanner and the acquisition protocol (especially in the case of contrast-enhanced images). In addition, it has been reported that the attenuation range also varies across the population [15]. Thus, a tool has been designed for guiding the selection of a suitable interval [T0, T1] to achieve the segmentation of epicardial fat, expressed as the following binary image J:


(1)  J(x, y, z) = 1 if T0 ≤ I(x, y, z) ≤ T1, and J(x, y, z) = 0 otherwise

In particular, if the segmentation obtained with the default interval ([T0, T1] = [−190, −30], using values in HU) is not satisfactory, the cardiologist may proceed to fine-tune the interval, for example by “probing” a patch of the image corresponding to epicardial fat. Indeed, the analysis of the histogram of the patch may be used to define a custom interval [T0, T1]. More precisely, the following procedure is used:

  1. Select a voxel P = (x0, y0, z0) in the interior of the region corresponding to epicardial fat.
  2. Consider a cube C0 centered in P with edge length ℓ.
  3. Compute the mean µ and the standard deviation σ of the volumetric dataset I restricted to the cube C0.
  4. Define the thresholds in the interval [T0, T1] by:

     T0 = µ − ξσ,    T1 = µ + ξσ

     where ξ is a positive real number. Notice that the interval is centered at µ with width proportional to σ (a sketch of this probing procedure is given after this list).
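The probing procedure can be sketched as follows (a minimal NumPy sketch; the function names are hypothetical and the cube is specified in voxels rather than millimetres):

    import numpy as np

    def fat_thresholds_from_probe(volume, probe_voxel, half_edge=5, xi=2.0):
        """Estimate a patient-specific HU interval [T0, T1] from a probed fat patch
        (steps 1-4 above); half_edge is half the cube edge length, in voxels."""
        x0, y0, z0 = probe_voxel
        cube = volume[x0 - half_edge:x0 + half_edge + 1,
                      y0 - half_edge:y0 + half_edge + 1,
                      z0 - half_edge:z0 + half_edge + 1]
        mu, sigma = cube.mean(), cube.std()
        return mu - xi * sigma, mu + xi * sigma

    def threshold_fat(volume, voi_mask, t0=-190.0, t1=-30.0):
        """Rough fat segmentation of Equation 1, restricted to the VoI."""
        return ((volume >= t0) & (volume <= t1) & voi_mask).astype(np.uint8)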

2.3. Segmentation Refinement

The thresholding operated by Equation 1 provides a segmentation of epicardial fat which may suffer from spurious oscillations in the contour. A level set method is used to achieve both contour regularization and better adherence to image edges. Among several level set frameworks, we work slice-by-slice in the geodesic active contour framework [21], exploiting a variational formulation without contour re-initialization. Since the ultimate goal is to classify the voxels inside the VoI with the labels fat and non-fat on the basis of their intensity, regional information is crucial and, thus, a region homogeneity term has been incorporated in the framework. More formally, consider, for each slice z = zk = const, the region of interest R (the intersection of the VoI with the slice) and let Ik be the induced two-dimensional image. Let {R+, R−} be a partition of R into two disjoint regions corresponding to non-fat and fat respectively. Notice that the slice index k has been omitted to avoid cumbersome notation. Let C be the contour separating one region from the other. We look for an optimal C by minimizing a suitable energy functional E.

In the level set formulation, the contour C is represented as the zero-level set of a scalar function Φ defined on R:


(2)  C = {(x, y) ∈ R : Φ(x, y) = 0}

In addition, the region R+ (resp. R−) is precisely the set on which Φ assumes positive (resp. negative) values.

Artificial time is then introduced to evolve the contour towards a minimum value of the energy E. Taking as initial guess the partition given by the thresholding in Equation 1, we set:


(3)  Φ0(x, y) = +d((x, y), C) if (x, y) ∈ R+,   Φ0(x, y) = −d((x, y), C) if (x, y) ∈ R−, where d(·, C) denotes the Euclidean distance to the initial contour

Thus, the initial value Φ0 at time t = 0 is set equal to the Signed Distance Function (SDF) from the contour. Of course, several other choices of the function Φ0 may be used to encode the same initial contour; however, as is well known, initializing the level set function with a SDF is suitable for achieving stability in numerical approximations.

For example, a SDF is differentiable almost everywhere and satisfies |∇Φ| = 1 wherever it is differentiable (as we will see in Section 2.3.3), reducing the numerical inaccuracies that may show up during contour evolution.
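A common way to realize the initialization of Equation 3 is via Euclidean distance transforms. The following sketch (assuming SciPy; the function name is hypothetical) builds the signed distance function from the binary fat mask produced by thresholding, positive in the non-fat region R+ and negative inside the fat region R−:

    import numpy as np
    from scipy.ndimage import distance_transform_edt

    def signed_distance_from_mask(fat_mask):
        """Initial level set of Equation 3: positive in the non-fat region R+,
        negative inside the fat region R-, (approximately) zero on the contour."""
        fat = fat_mask.astype(bool)
        dist_to_fat = distance_transform_edt(~fat)       # > 0 outside the fat depots
        dist_to_non_fat = distance_transform_edt(fat)    # > 0 inside the fat depots
        return dist_to_fat - dist_to_non_fat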

A composite energy term was associated with Φt, expressing the goodness of contour fitting to the image data and inner conditions of homogeneity and stability:


(4)  E(Φ) = E_GAC(Φ) + E_RH(Φ) + E_SDF(Φ)

Each of the summands in Equation 4 achieves a special effect and, for the sake of clarity, they will be discussed separately in the sections below.

2.3.1. The Geodesic Active Contour Term

As usual, let the edge indicator function g be defined as:


(5)  g = 1 / (1 + |∇(Gσ ∗ Ik)|²)

where Gσ is a Gaussian kernel with variance σ² and ∗ denotes convolution.

The first summand E_GAC in Equation 4 is defined as:


(6)  E_GAC(Φ) = α ∫R g δ(Φ) |∇Φ| dx dy

where the integral is extended to the whole region R, α is a positive constant and δ is the Dirac delta function. The term E_GAC corresponds to the length of the contour C weighted by the edge indicator function. Requiring that the length of the contour be minimal is a well-known condition for achieving smoothness, since spurious contour oscillations are wiped out. Minimizing the g-weighted length instead achieves contour smoothness and adherence to image data simultaneously. In particular, notice that a piece of contour crossing a homogeneous region is discouraged, since such a region is characterized by high values of g (namely g ≈ 1). By contrast, a piece of contour lying on an ideal edge costs nothing, since g = 0 there.
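The edge indicator can be computed with a few lines of NumPy/SciPy; the sketch below follows the standard geodesic-active-contour form given in Equation 5, and the smoothing scale sigma is a free parameter rather than a value prescribed in the paper:

    import numpy as np
    from scipy.ndimage import gaussian_filter

    def edge_indicator(slice_image, sigma=1.5):
        """Edge indicator g of Equation 5: close to 1 in homogeneous regions,
        close to 0 on strong image edges."""
        smoothed = gaussian_filter(slice_image.astype(float), sigma=sigma)
        gy, gx = np.gradient(smoothed)
        return 1.0 / (1.0 + gx ** 2 + gy ** 2)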

2.3.2. The Region Homogeneity Term

The term E_RH is specially designed to favor the selection of homogeneous regions. Namely, we suppose that the intensities of voxels belonging to R+ and R− are drawn from two distinct probability density functions, denoted p+ and p− respectively.

After a careful analysis of the histograms of the region of interest, it has been judged convenient to model both distributions by using Gaussian Mixture Models (GMM), with N+ and N- components respectively:


(7)  p±(v) = Σi=1,…,N± wi± N(v; µi±, σi±)

where µi± and σi± are the mean and standard deviation of the i-th Gaussian component and wi± are the weights of the mixture.
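In practice, the two mixtures can be fitted with any EM-based estimator. The sketch below assumes scikit-learn, whose GaussianMixture class is fitted with EM (consistent with the algorithm cited in Section 3.2); the wrapper names are hypothetical. It fits p+ and p− to the intensities of the current non-fat and fat regions of a slice and evaluates the per-pixel log-densities used by the region homogeneity term:

    import numpy as np
    from sklearn.mixture import GaussianMixture

    def fit_intensity_gmms(slice_image, fat_mask, n_plus=2, n_minus=2):
        """Fit the two mixtures of Equation 7 to the intensities of the current
        non-fat (R+) and fat (R-) regions of one slice."""
        mask = fat_mask.astype(bool)
        gmm_plus = GaussianMixture(n_components=n_plus).fit(
            slice_image[~mask].reshape(-1, 1))
        gmm_minus = GaussianMixture(n_components=n_minus).fit(
            slice_image[mask].reshape(-1, 1))
        return gmm_plus, gmm_minus

    def log_density_maps(slice_image, gmm_plus, gmm_minus):
        """Per-pixel log p+ and log p-, as used by the region homogeneity term."""
        flat = slice_image.reshape(-1, 1).astype(float)
        log_p_plus = gmm_plus.score_samples(flat).reshape(slice_image.shape)
        log_p_minus = gmm_minus.score_samples(flat).reshape(slice_image.shape)
        return log_p_plus, log_p_minus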

Assuming that voxels are independent and identically distributed, the conditional probability of the image Ik given a partition {R+, R-} is given by:


(8)  p(Ik | {R+, R−}) = Π(x,y)∈R+ p+(Ik(x, y)) · Π(x,y)∈R− p−(Ik(x, y))

Applying the logarithm to Equation 8, we have:


(9)  log p(Ik | {R+, R−}) = Σ(x,y)∈R+ log p+(Ik(x, y)) + Σ(x,y)∈R− log p−(Ik(x, y))

This in turn suggests defining the following energy term for region homogeneity (see also [22]):


(10)  E_RH(Φ) = −β ∫R [ H(Φ) log p+(Ik) + (1 − H(Φ)) log p−(Ik) ] dx dy

where β is a positive constant and H denotes the Heaviside function, defined as:


(11)  H(s) = 1 if s ≥ 0, and H(s) = 0 otherwise
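In numerical practice, the crisp Heaviside of Equation 11 (and the corresponding Dirac delta appearing in the other energy terms) is replaced by a mollified version; Section 2.4 indeed mentions a mollified Heaviside. The specific arctangent mollifier used below is an assumption, not a choice stated in the paper:

    import numpy as np

    def smooth_heaviside(phi, eps=1.5):
        """Mollified Heaviside; tends to the crisp H of Equation 11 as eps -> 0."""
        return 0.5 * (1.0 + (2.0 / np.pi) * np.arctan(phi / eps))

    def smooth_dirac(phi, eps=1.5):
        """Mollified Dirac delta, the derivative of smooth_heaviside."""
        return (eps / np.pi) / (eps ** 2 + phi ** 2)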

2.3.3. The SDF Constraint Term

For the introduction of the last, technical summand E_SDF, recall that we chose Φ0 to be a SDF, for several numerical advantages. In the usual level set formulation, Φt ceases to be a SDF as time evolves. This can lead to less reliable estimates of the derivatives in the usual finite difference schemes and, eventually, the functions Φt may develop shocks. To avoid this, it is customary to periodically re-initialize Φt to a SDF. Several strategies exist for achieving this goal, for example by applying an iso-contour algorithm and then computing the SDF again, or by solving the so-called re-initialization equation, i.e. another partial differential equation. However, none of these methods is appealing, because it is difficult to decide when and how often to perform re-initialization. The strategy we adopted consists in softly constraining Φt to be a SDF during the whole evolution [23].

To discuss this issue, we first recall that a SDF Ψ satisfies:


(12)  |∇Ψ| = 1

at every point having one and only one closest point on the contour and, thus, generically. It is not difficult to show that the converse also holds, i.e. any function Ψ satisfying Equation 12 is a SDF. This differential characterization of signed distance functions is suitable for the present purpose. Actually, defining


(13)  E_SDF(Φ) = γ ∫R ½ (|∇Φ| − 1)² dx dy

we have a measure of how close Φ is to a signed distance function. The positive constant γ manages the tradeoff with the previous energy terms. During contour evolution, the E_SDF term prevents Φt from moving away from the space of signed distance functions. In this way, numerical inaccuracies are avoided, retaining the good features of a SDF without any need for other numerical remedies.

2.3.4. Energy Minimization

The minimization of the functional E is performed by gradient descent, using the initialization described in Equation 3:


(14)  ∂Φ/∂t = −δE/δΦ

where δE/δΦ is the first variation of the functional E. By linearity:


(15)  δE/δΦ = δE_GAC/δΦ + δE_RH/δΦ + δE_SDF/δΦ

It is not difficult to compute these first variations, which are given by:


(16)  δE_GAC/δΦ = −α δ(Φ) div( g ∇Φ / |∇Φ| )


(17)  δE_RH/δΦ = −β δ(Φ) ( log p+(Ik) − log p−(Ik) )


(18)  δE_SDF/δΦ = −γ ( ΔΦ − div( ∇Φ / |∇Φ| ) )

In the numerical discretization, all these quantities are approximated by a standard first-order finite difference scheme.
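Putting the pieces together, the slice-wise gradient descent of Equations 14-18 can be sketched as follows (a NumPy sketch using central differences via numpy.gradient; the parameter defaults follow the values reported in Section 3.2, the helper names are hypothetical, and smooth_dirac repeats the mollified delta introduced after Equation 11):

    import numpy as np

    def smooth_dirac(phi, eps=1.5):
        return (eps / np.pi) / (eps ** 2 + phi ** 2)

    def div_normalized_gradient(phi, weight=None, tiny=1e-8):
        """div( weight * grad(phi) / |grad(phi)| ), via central differences."""
        gy, gx = np.gradient(phi)
        norm = np.sqrt(gx ** 2 + gy ** 2) + tiny
        nx, ny = gx / norm, gy / norm
        if weight is not None:
            nx, ny = weight * nx, weight * ny
        return np.gradient(ny, axis=0) + np.gradient(nx, axis=1)

    def laplacian(phi):
        gy, gx = np.gradient(phi)
        return np.gradient(gy, axis=0) + np.gradient(gx, axis=1)

    def evolve_level_set(phi, g, log_p_plus, log_p_minus,
                         alpha=2.0, beta=1e-3, gamma=0.2, dt=3.0, n_iter=30):
        """Gradient descent of Equation 14 on the composite energy of Equation 4."""
        for _ in range(n_iter):
            delta = smooth_dirac(phi)
            # Geodesic active contour force (Equation 16).
            gac = alpha * delta * div_normalized_gradient(phi, weight=g)
            # Region homogeneity force (Equation 17): pushes phi positive where the
            # non-fat model is more likely, negative where the fat model is.
            region = beta * delta * (log_p_plus - log_p_minus)
            # Soft signed-distance constraint (Equation 18).
            sdf = gamma * (laplacian(phi) - div_normalized_gradient(phi))
            phi = phi + dt * (gac + region + sdf)
        return phi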

2.4. Extraction of Clinical Parameters

After having applied the previous method to each slice, the direct computation of global parameters for the evaluation of epicardial fat is straightforward. In particular, the area Ak of the region corresponding to epicardial fat may be computed in the slice z = zk by the formula:


(19)  Ak = ∫R H(−Φ) dx dy

where Φ is the optimal function found by minimizing the functional in Equation 4 and H is the Heaviside function defined in Equation 11. Notice that there is no need to define a crisp segmentation, e.g. by extracting a binary mask from Φ. Using a mollified Heaviside function, partial volume effects near the interface may also be controlled. Let ∆ be the interslice spacing of the dataset. The total epicardial fat volume Vfat may be estimated by the sum:


(20)  Vfat = ∆ Σk Ak

where the summation runs over all the slices between the atrioventricular sulcus and the heart apex.

For clinical applications, it is convenient to normalize the total fat volume by the Body Surface Area (BSA), thus defining a fat index:


(21)  Fat Index = Vfat / BSA
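A minimal sketch of Equations 19-21 follows (NumPy; the pixel and slice spacings default to the acquisition described in Section 3.1, and the body surface area bsa_m2 must be supplied for the actual patient):

    import numpy as np

    def fat_area_mm2(phi, pixel_spacing_mm=(0.625, 0.625), eps=1.5):
        """Fat area in one slice (Equation 19): integral of H(-phi), fat being the
        region where phi < 0; a mollified Heaviside is used, as discussed above."""
        h = 0.5 * (1.0 + (2.0 / np.pi) * np.arctan(-phi / eps))
        return h.sum() * pixel_spacing_mm[0] * pixel_spacing_mm[1]

    def fat_volume_and_index(phi_per_slice, slice_spacing_mm=2.5, bsa_m2=1.9):
        """Total fat volume in cm^3 (Equation 20) and the BSA-normalized fat index
        in cm^3/m^2 (Equation 21)."""
        total_area_mm2 = sum(fat_area_mm2(p) for p in phi_per_slice)
        volume_cm3 = slice_spacing_mm * total_area_mm2 / 1000.0  # mm^3 -> cm^3
        return volume_cm3, volume_cm3 / bsa_m2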

Besides total volume, we started to analyze the regional distribution of epicardial fat depots. To this end, the volume of fat surrounding each ventricle is estimated. For this analysis, the user is asked to locate, in the original images, the plane separating the left from the right ventricle. Such a plane can be easily and reproducibly identified according to a well-defined anatomical landmark, i.e. the interventricular sulcus.

3. EXPERIMENTAL SECTION

3.1. Experimental Data

The procedure was tested on CT images obtained during standard calcium scoring CT studies. Cardiac CT examinations were performed using a GE LightSpeed 64 VCT scanner. Patients were pretreated with propranolol (2 mg i.v.) and isosorbide dinitrate (0.6 mg i.v.) as clinically indicated. After the scout view of the thorax, the heart volume was scanned to obtain the images for calcium scoring. Slices with voxel size 0.625 × 0.625 × 2.5 mm were thus obtained without contrast medium injection. Data from ten patients were used for testing the procedures described in this paper. Examples of acquired images are given in Fig. (1).

Fig. (1)

Typical slices acquired in the calcium scoring case. In the slice (a) some portion of atria and great vessels are visible, while slice (b) shows a separation problem between the heart and the chest wall. Hounsfield values between and are represented.



Fig. (2)

Typical views of the dataset obtained after resampling in planes parallel to the atrioventricular sulcus: (a) sagittal view and (b) axial view. Notice that in (a) the atrioventricular sulcus is easily discernible.



Fig. (3)

Axial views of the identified VoI: (a) apical, (b) mid and (c) basal slice planes. The green curve is the natural cubic spline whose control points have been placed by an experienced observer.



Fig. (4)

First segmentation of the epicardial fat by thresholding: (a) apical, (b) mid and (c) basal slice planes. The red curves encircle epicardial fat depots.



Fig. (5)

Intensity distribution of the non-fat region from Fig. (4b) and plot of the two Gaussian distributions from the estimated GMM.



Fig. (6)

Level set refinement of the epicardial fat segmentation: (a) apical, (b) mid and (c) basal slice planes. The red curves encircle fat depots. The improvement in smoothness with respect to Fig. (4) may be appreciated.



Fig. (7)

Views of the global segmented volume: (a) axial, (b) coronal and (c) sagittal views. Epicardial fat is shaded in blue, while internal heart is shaded in pink.



Fig. (8)

3D reconstruction of the segmented volume.



3.2. Results

After the expert has identified the plane H on which the atrioventricular sulcus approximately lies (see Section 2.1), the original images acquired by the CT scanner were resampled along planes parallel to H. The volume was discretized by a grid of size (nx , ny , nz ) = (512, 512, 62) . Since the spatial resolution of the datasets is high, no significant artefact was introduced by resampling. Examples of the resampled volumetric dataset are shown in Fig. (2).

The volume was then edited as described in Section 2.1, obtaining as a result a VoI containing all the voxels that may contribute to epicardial fat volume. Some views of the VoI are shown in Fig. (3), together with the natural cubic spline curve delineated by an experienced observer.

The plane separating the left and right ventricle, identified through the interventricular sulcus, is also traced with some standard user interaction.

After identification of the VoI, the threshold-based segmentation was performed. For the choice of the interval [T0, T1], a voxel in the interior of the epicardial fat region was selected. A cube C0 having edge length ℓ = 6.25 mm was considered, obtaining the values µ = −115 HU and σ ≈ 35 HU. The parameter ξ was manually set to 2. As expected, µ turned out to be robust with respect to the choice of the voxel and of the length ℓ. Fig. (4) shows the segmentation obtained from the slices in Fig. (3) by thresholding according to the determined interval [T0, T1] = [−185, −45] HU.

The level set refinement method was then applied slice-by-slice. Using the initial guess provided by thresholding, GMMs with N+ = N− = 2 components were fitted to the data using the Expectation-Maximization algorithm [24]. An example is shown in Fig. (5).

Visual evaluation of the segmentation results led to the choice of the parameters α, β, γ involved in the summands of Equation 4. In particular, the following values were used throughout the study:

α = 2    β = 10⁻³    γ = 0.2

A time step ∆t = 3 was deemed suitable, and 30 iterations of the gradient descent led to the results reported in Fig. (6). After applying such refinement to all the slices, the final segmentation of the epicardial fat could be inspected using the conventional views reported in Fig. (7), where the tissues of interest have been shaded. The geometry of the epicardial fat was also reconstructed by applying the marching cubes isosurfacing algorithm. An example is shown in Fig. (8). For this particular patient, Vfat = 78 cm³ was computed, which leads to a fat index of 41 cm³/m².

4. CONCLUSIONS AND FURTHER WORK

In this paper we have introduced image processing tools for the assisted segmentation and analysis of epicardial fat. Epicardial fat has recently been shown to correlate with cardiovascular disease and metabolic syndrome; therefore, there is a pressing interest in the medical community in developing reproducible and quantitative analysis methodologies.

In this work, after describing methods for the identification of the volume of interest and for its segmentation, parameters for the evaluation of epicardial fat volume and regional distribution have been discussed. The presented preliminary results are quite encouraging.

In the framework of this research, the introduced tools are being refined and efforts are being spent both to improve the automation of the processing steps and to enrich the quantitative description by performing a regional analysis of fat thickness.

In addition, quantitative evaluation of the obtained results is currently in progress. In particular, a phantom is being considered with the purpose both of calibrating the CT scanner and of gathering some ground-truth information.

Finally, the main aim will be to correlate the extracted parameters with other clinically significant data, in order to better understand the association between epicardial fat and cardiovascular risk factors.

ACKNOWLEDGEMENTS

This work has been partially supported by European Project HEARTFAID “A knowledge based platform of services for supporting medical-clinical management of the heart failure within the elderly population” (IST-2005-027107).

REFERENCES

[1] Sacks HS, Fain JN. Human epicardial adipose tissue: A review Am Heart J 2007; 153: 907-17.
[2] Rosito GA, Massaro JM, Hoffmann U, et al. Pericardial fat, visceral abdominal fat, cardiovascular disease risk factors, and vascular calcification in a community-based sample: the framingham heart study Circulation 2008; 117: 605-13.
[3] Wang TD, Lee WJ, Shih FY, et al. Relations of epicardial adipose tissue measured by multidetector computed tomography to components of the metabolic syndrome are region-specific and independent of anthropometric indexes and intraabdominal visceral fat J Clin Endocrinol Metab 2009; 94(2): 662-9.
[4] Mazurek T, Zhang L, Zalewski A, et al. Human epicardial adipose tissue is a source of inflammatory mediators Circulation 2003; 108(20): 2460-6.
[5] Baker A, Silva N, Quinn D, et al. Human epicardial adipose tissue expresses a pathogenic profile of adipocytokines in patients with cardiovascular disease Cardiovasc Diabetol 2006; 5: 1.
[6] Iacobellis G, Assael F, Ribaudo MC, et al. Epicardial fat from echocardiography: a new method for visceral adipose tissue prediction Obesity 2003; 11(2): 304-10.
[7] Iacobellis G. Echocardiographic epicardial fat: A new tool in the white coat pocket Nutr Metab Cardiovasc Dis 2008; 18(8): 519-22.
[8] Chaowalit N, Somers VK, Pellikka PA, Rihal CS, Lopez-Jimenez F. Subepicardial adipose tissue and the presence and severity of coronary artery disease Atherosclerosis 2006; 186(2): 354-9.
[9] Abbara S, Desai JC, Cury RC, Butler J, Nieman K, Reddy V. Mapping epicardial fat with multi-detector computed tomography to facilitate percutaneous transepicardial arrhythmia ablation Eur J Radiol 2006; 57(3): 417-22.
[10] Detrano R, Guerci AD, Carr JJ, et al. Coronary calcium as a predictor of coronary events in four racial or ethnic groups N Engl J Med 2008; 358(13): 1336-45.
[11] Mahnken AH, Mühlenbruch G, Günter GW, Wildberger JE. Cardiac CT: coronary arteries and beyond Eur Radiol 2007; 17: 994-1008.
[12] Moroni D, Perner P, Salvetti O. A general approach to shape characterisation for biomedical problems Int J Signal Imaging Syst Eng 2008; 1: 30-5.
[13] Yoshizumi T, Nakamura T, Yamane M, et al. Abdominal fat: standardized technique for measurement at CT Radiology 1999; 211(1): 283-6.
[14] Pednekar A, Bandekar AN, Kakadiaris IA, Naghavi M. Automatic Segmentation of Abdominal Fat from CT Data In: 7th IEEE Workshop on Applications of Computer Vision / IEEE Workshop on Motion and Video Computing (WACV/MOTION 2005); Breckenridge, CO, USA. 2005; pp. 308-15.
[15] Bandekar AN, Kakadiaris IA. Automated pericardial fat quantification in CT data In: Proc. 28th Annual International Conference of the IEEE Engineering in Medicine and Biology Society; New York, USA. 2006; pp. 932-6.
[16] Dey D, Suzuki Y, Suzuki S, et al. Automated quantitation of pericardiac fat from noncontrast CT Invest Radiol 2008; 43(2): 45-53.
[17] Dey D, Wong ND, Tamarappoo B, et al. Computer-aided non-contrast CT-based quantification of pericardial and thoracic fat and their associations with coronary calcium and metabolic syndrome Atherosclerosis 2009; in press.
[18] Figueiredo B, Barbosa JG, Bettencourt N, Tavares JMRS. Semi-automatic quantification of the epicardial fat in CT images In: VipIMAGE 2009 - II ECCOMAS Thematic Conference on Computational Vision and Medical Image Processing; Porto, Portugal: Taylor and Francis. 2009; pp. 247-50.
[19] Leinhard D, Johansson A, Rydell J, et al. Quantitative abdominal fat estimation using MRI In: Proceedings of 19th International Conference on Pattern Recognition; Tampa, FL, USA: IAPR;. 2008.
[20] Coppini G, Favilla R, Lami E, Marraccini P, Moroni D, Salvetti O. Regional epicardial fat measurement: computational methods for cardiac CT imaging Trans Mass-Data Anal Images Signals 2009; 1(1): 101-.
[21] Caselles V, Kimmel R, Sapiro G. Geodesic active contours Int J Comput Vis 1997; 22: 61-79.
[22] Riklin-Raviv T, Sochen NA, Kiryati N, et al. Propagating Distributions for Segmentation of Brain Atlas In: IEEE International Symposium on Biomedical Imaging; 2007; pp. 1304-7.
[23] Li C, Xu C, Gui C, Fox MD. Level set evolution without re-initialization: a new variational formulation In: Proceedings of the 2005 Conference on Computer Vision and Pattern Recognition (CVPR'05); 2005; pp. 430-6.
[24] Dempster AP, Laird NM, Rubin DB. Maximum likelihood from incomplete data via the EM algorithm J R Stat Soc Ser B Stat Methodol 1977; 39(1): 1-38.