3D Texture Analysis of MRI Brain Datasets

Vassili A. Kovalev, Frithjof Kruggel, Hermann-Josef Gertz, and D. Yves von Cramon

Abstract—A method is proposed for 3D texture analysis of MRI brain datasets. It is based on extended, multi-sort co-occurrence matrices that employ intensity, gradient, and anisotropy image features in a uniform way. Basic properties of the matrices as well as their sensitivity and dependence on spatial image scaling are evaluated. The discriminative power of the suggested 3D texture descriptors is demonstrated on non-trivial tasks of classifying pathologic findings in brain datasets.

Index terms—3D texture, MRI, co-occurrence, neurodegenerative diseases


I.     Introduction

It has long been recognized that textural features play an important role in a wide variety of image analysis problems. Depending on the texture type and the specific goal of the problem at hand, various types of 2D texture descriptors have been proposed and studied in the literature, ranging from general Wold texture features [1] to co-occurrence matrices [2] and the tree-structured wavelet transform [3]. Reviews of 2D texture analysis methods can be found in [4, 5]. The recent proliferation of 3D sensor technology and the continuous increase in spatial resolution of neuroimaging techniques call for new, natively 3D texture analysis methods. Although slice-by-slice 2D approaches are still possible, they suffer from the drawback that some important information contained in the original image data is ignored.

While 2D texture analysis has been extensively studied, very little work has been done on the characterization and analysis of 3D (volumetric) textures. So far, the extension of 2D gray scale methods to 3D has largely been confined to 3D edge detectors (e.g., [6]) and to one particular property of 3D textures, spatial anisotropy [7].

There is a strong practical need for image analysis methods that derive quantitative measures of tissue characteristics. Such measures may then be incorporated into statistics comparing clinical features with MRI findings. The work proposed here is part of a larger project to develop automatic methods that help to understand and quantify the diffuse pathologic processes that occur with neurodegenerative diseases. Understanding these disease processes involves detecting and describing pathologic changes as well as monitoring these changes over time. Lesions such as diffuse white matter hypointensities [8, 9, 10], periventricular hypointensities [11] and enlarged periventricular spaces [12, 13] are typically faint, unsharp, and “cloudy” (without sharp borders). It is expected that these lesions might be segmented and characterized quantitatively with the help of texture analysis methods. The unsharp nature of non-focal brain lesions gives us no chance to define their precise borders. It appears more realistic to calculate brain lesion probability maps instead. We may then evaluate visually how well such a map fits the experts' expectations, use the map to segment the lesion at any given probability threshold, and derive quantitative volumetric and textural descriptors of a lesion.

In this work we propose a new method for 3D texture analysis based on extended co-occurrence matrices. Gray level co-occurrence matrices and gray level run length matrices were suggested by Haralick et al. [2] and Galloway [14]. These classical 2D texture analysis tools have been successfully used in their original form for texture description, classification, and segmentation in a number of applications, including brain image analysis (e.g., [15]). In the context of this work, there are at least three different reasons for extending the co-occurrence approach.

(a) Consideration of the change of original image data dimensionality from 2D to 3D. This is an obvious and rather technical point. To adapt traditional co-occurrence matrices designed for 2D images to 3D, there is no need to change the original definitions except that neighboring voxels are considered on the 3D voxel lattice. Precisely speaking, the increase of the spatial dimensionality of N-dimensional arrays defined on the same set of discrete gray values from N to N+1 changes the neighborhood relations and, therefore, changes the mutual relationships between the sets of possible N-dimensional “images” and their co-occurrence representations. However, in this paper we focus on the analysis of 3D brain datasets and do not elaborate on these general theoretical problems.

(b) Increasing the sensitivity and specificity of co-occurrence descriptors. Traditional co-occurrence matrices perform well for the characterization and discrimination of “general”, distinct textural classes. Typical examples of such textures are given in the Brodatz album [16], containing natural patterns of reptile skin, leaves, brick walls, etc. Because the variation of brain texture is not very salient, we need to increase the sensitivity and specificity of co-occurrence features in order to enable the detection and separation of rather faint and not well defined textural differences related to various normal and pathological structures. To achieve this, we increase the co-occurrence matrix dimensionality (number of axes) by combining “elementary” image features of different sorts (e.g., intensity, gradient magnitude, mutual orientation) as described in [17]. In this respect we may say that traditional co-occurrence matrices are single-sort matrices, namely, intensity co-occurrence matrices. We increase the sensitivity and specificity of these descriptors by the use of multi-sort and, accordingly, multi-dimensional co-occurrence matrices. Such an extension provides the means for dealing with different features in a systematic and uniform manner. It is important to note that there is no relation between image dimensionality and co-occurrence matrix dimensionality. The image dimensionality corresponds to the spatial dimensionality of the input data, while the matrix dimensionality reflects the number of image characteristics and inter-voxel relations under consideration.

(c) Rotation and reflection invariance. According to the definition (e.g., [18]), classical co-occurrence matrices use a positional operator to define the direction and distance for the pixel pairs under examination. The selection of these directions is not critical when we are dealing with uniform, anisotropic textures or, just the opposite, analyzing strongly isotropic objects placed at the image origin in a pre-defined way. In all other cases the co-occurrence matrices computed for original and rotated/reflected images may be very different. To overcome this drawback we do not use pre-defined directions for voxel pairs in 3D. Instead, all possible voxel pairs (with no repetition) are considered within a certain distance range. Hence, such a co-occurrence matrix describes the internal structure of a given image or an arbitrarily-shaped 3D gray scale segment, while being independent of its orientation relative to the image origin.

In section 2 we define extended multi-sort co-occurrence matrices formally and consider their basic properties. In section 3 we discuss issues related to matrix sensitivity and the dependence on spatial image scaling. Section 4 provides examples of application of the suggested method. We draw our conclusions and outline future work in section 5.

II.     Methodology

A.     General Approach

Following the basic concept of elementary image structures [17], we assume that the image of any object can be considered as a composition of voxel pairs. The elements of a pair carry some characteristics (e.g., gray value, gradient magnitude, semantic label) and have some relations (e.g., distance between them, relative gradient angle, intensity difference). To represent the voxel pairs that constitute an image, we use an N-dimensional co-occurrence matrix where each of the characteristics and relations corresponds to a different dimension of the matrix. Thus, an N-dimensional co-occurrence matrix is an N-dimensional histogram W whose elements have the general form w(c1, c2, …, cM; rM+1, rM+2, …, rN), where ci takes all possible (quantized) values of characteristic i, and rj all possible values of relation j defined on a voxel pair. The value w of a matrix element is the frequency of occurrence of a given voxel pair in the image, i.e., the frequency of a certain combination of image characteristics.

The selection of an appropriate matrix type, i.e., the type of elementary voxel characteristics and relations used, is a mathematically intractable problem. It depends on the purpose of the analysis, intrinsic image properties, and the way the image classes under comparison manifest their differences. As a general approach to multidimensional co-occurrence matrix construction, one may follow the principle of “orthogonal” sets of elementary image features associated with different matrix axes. According to studies of human texture perception conducted by Rao and Lohse [19] and Tamura et al. [20], such features capture visual properties expressed by the terms “granularity” (coarseness, contrast), “repetitiveness” (periodicity or randomness), and “directionality” (anisotropy). Based on these results and experience using traditional intensity co-occurrence matrices, we suggest the following three basic characteristics: intensity, gradient magnitude, and relative orientation of gradient vectors. Considered together with their variations in the spatial domain, these image features can be accepted as an “orthogonal basis” for co-occurrence matrix axes.

For a formal definition of the corresponding co-occurrence matrix, let us consider an arbitrary voxel pair (i,k) defined on the discrete voxel lattice by the voxel indices i and k, with the Euclidean distance d(i,k) between them. Let us denote the intensities of these voxels by I(i) and I(k), their local gradient magnitudes by G(i) and G(k), and the angle between their 3D gradient vectors by a(i,k). Then the general, six-dimensional co-occurrence matrix can be defined as:

  W = { w( I(i), I(k), G(i), G(k), a(i,k), d(i,k) ) }        (1)
Gradient magnitudes G(i), G(k) and the angle a(i,k) between the gradient vectors can be calculated as:

  G(i) = |∇I(i)|,   G(k) = |∇I(k)|,   a(i,k) = arccos( ⟨n(i), n(k)⟩ )        (1.1)

where ⟨·,·⟩ is the dot vector product and n(i), n(k) are the normalized gradient vectors. The gradient vector components along the X, Y, and Z axes can be calculated by any suitable 3D operator. Since we are dealing with high-frequency textures, we use a filter with a small 3×3×3 window proposed by Zucker and Hummel [6].
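As a sketch of this step (not the authors' implementation), the gradient magnitudes and unit gradient vectors can be computed with central differences standing in for the 3×3×3 Zucker–Hummel operator; the helper names `gradient_features` and `gradient_angle` are ours:

```python
import numpy as np

def gradient_features(volume):
    """Per-voxel gradient magnitude G and unit gradient vectors n.

    np.gradient (central differences) stands in here for the 3x3x3
    Zucker-Hummel operator used in the paper.
    """
    g = np.stack(np.gradient(volume.astype(float)), axis=-1)  # (X, Y, Z, 3)
    mag = np.linalg.norm(g, axis=-1)                          # G = |grad I|
    # Normalize where the gradient is non-zero; zero vectors stay zero.
    n = np.divide(g, mag[..., None],
                  out=np.zeros_like(g), where=mag[..., None] > 0)
    return mag, n

def gradient_angle(n_i, n_k):
    """Angle a(i,k) in degrees between two unit gradient vectors."""
    return np.degrees(np.arccos(np.clip(np.dot(n_i, n_k), -1.0, 1.0)))
```

Clipping the dot product guards against round-off pushing it marginally outside [-1, 1] before `arccos`.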

Denoting integer intensity bins by indices bI=1,…,BI, gradient magnitude bins by bG=1,…,BG, relative gradient angle bins by ba=1,…,Ba, and distance bins by bd=1,…,D, the matrix element w(I(i), I(k), G(i), G(k), a(i,k), d(i,k)) can be formally defined as:

  w(bIi, bIk, bGi, bGk, ba, bd) = card{ (i,k) :
      I(i), I(k) fall into intensity bins bIi, bIk;
      G(i), G(k) fall into gradient magnitude bins bGi, bGk;
      a(i,k) falls into angle bin ba;
      round( sqrt( x² + y² + z² ) ) = bd;
      (z > 0) ∨ (z = 0 ∧ y > 0)
      ∨ (z = 0 ∧ y = 0 ∧ x > 0) }        (1.2)

 where x, y, and z are offsets on the X, Y, and Z axes measured in image raster units. The last two lines of the definition formalize the requirement of selecting all possible voxel pairs with no repetition. When calculating the matrices, we always follow the original image raster and round the Euclidean distances d(i,k) to integer matrix bins in order to avoid incorporating non-existing intensity values caused by interpolation. Therefore the round operator in (1.2) is defined in the common sense, i.e., as rounding to the nearest integer value. According to the axis types, we call this kind of co-occurrence matrix IIGGAD for brevity.
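A brute-force sketch of accumulating the matrix under this definition follows; the function name, the assumption that intensity and gradient magnitude have already been quantized to integer bins, and the equal-width angle binning are ours:

```python
import numpy as np
from itertools import product

def iiggad_matrix(ibin, gbin, ndir, n_bins):
    """Brute-force sketch of the six-dimensional IIGGAD matrix (1).

    ibin, gbin : 3D arrays of pre-quantized intensity / gradient bins
    ndir       : (X, Y, Z, 3) array of unit gradient vectors
    n_bins     : (B_I, B_G, B_a, D) bin counts; Euclidean distances are
                 rounded to integer bins 1..D as in (1.2).
    Every pair is counted once by pairing each voxel only with
    raster-subsequent ones; the intensity sub-section is kept triangular
    by swapping the pair when ibin(i) > ibin(k).
    """
    B_I, B_G, B_a, D = n_bins
    W = np.zeros((B_I, B_I, B_G, B_G, B_a, D))
    offsets = [(dx, dy, dz)
               for dx, dy, dz in product(range(-D, D + 1), repeat=3)
               if (dx, dy, dz) > (0, 0, 0)   # raster-subsequent half only
               and 1 <= round((dx*dx + dy*dy + dz*dz) ** 0.5) <= D]
    X, Y, Z = ibin.shape
    for dx, dy, dz in offsets:
        bd = round((dx*dx + dy*dy + dz*dz) ** 0.5) - 1
        for x, y, z in product(range(X), range(Y), range(Z)):
            x2, y2, z2 = x + dx, y + dy, z + dz
            if not (0 <= x2 < X and 0 <= y2 < Y and 0 <= z2 < Z):
                continue
            cosa = float(np.clip(np.dot(ndir[x, y, z], ndir[x2, y2, z2]),
                                 -1.0, 1.0))
            ba = min(int(np.degrees(np.arccos(cosa)) * B_a / 180.0), B_a - 1)
            ii, ik = ibin[x, y, z], ibin[x2, y2, z2]
            gi, gk = gbin[x, y, z], gbin[x2, y2, z2]
            if ii > ik:  # keep the intensity sub-section triangular
                ii, ik, gi, gk = ik, ii, gk, gi
            W[ii, ik, gi, gk, ba, bd] += 1
    return W
```

A vectorized implementation would replace the inner loop with array shifts, but the pairing and binning logic is the same.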

In many practical situations, reduced versions of the above general IIGGAD matrix can be used as well. In particular, it is worth considering the following single-sort intensity (IID), gradient magnitude (GGD) and gradient angle (gAD) matrices:

  W1 = { w1( I(i), I(k), d(i,k) ) }        (2)

  W2 = { w2( G(i), G(k), d(i,k) ) }        (3)

  W3 = { w3( a(i,k), d(i,k) ) }        (4)
It is easy to see that the single-sort matrices W1–W3 can always be derived from the basic IIGGAD matrix by summation along the corresponding axes. They may be considered as reduced, low-resolution versions of W. Conversely, the original IIGGAD matrix cannot be restored from the W1–W3 matrices. Therefore no combination of “low-resolution”, particular co-occurrence features can describe the image structure in as much detail as the IIGGAD matrix.
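The reduction by summation is easy to check numerically; the random toy matrix below stands in for a real IIGGAD count matrix:

```python
import numpy as np

# Toy stand-in for an IIGGAD count matrix with axes
# (I_i, I_k, G_i, G_k, angle, distance).
rng = np.random.default_rng(1)
W = rng.integers(0, 10, size=(4, 4, 3, 3, 6, 5)).astype(float)

# Single-sort matrices (2)-(4) fall out by summing over the unused axes;
# the reverse reconstruction of W from them is not possible.
W1_IID = W.sum(axis=(2, 3, 4))       # intensity-intensity-distance
W2_GGD = W.sum(axis=(0, 1, 4))       # gradient-gradient-distance
W3_gAD = W.sum(axis=(0, 1, 2, 3))    # angle-distance
```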

Note that the IID co-occurrence matrix W1 is a simple 3D version of the traditional co-occurrence matrix. It describes the image spatial structure based solely on intensity information, with no regard for other important features. Indeed, all image voxel pairs at a given distance bd whose intensity levels fall into bins bIi and bIk fall into a single matrix bin w1(bIi, bIk, bd), while in multi-sort IIGGAD matrices they are additionally separated along the gradient magnitude and angle axes according to the local intensity slopes and mutual orientations.

B.     Matrix Properties and Related Practical Issues

In this section we briefly discuss important properties of extended co-occurrence matrices and some aspects of their practical use for brain image analysis.

1)     Resolution of the co-occurrence representation

The “resolution” property of an image descriptor can be evaluated by considering cases where two or more different images have exactly the same descriptor. According to the definitions (1)-(4), any given image and its rotated/reflected versions have the same co-occurrence matrix. In the context of brain image analysis, with the characteristic “reflected” intensity distribution in the left and right hemispheres and unpredictable sulcal variability, this is a highly desirable property. Applying transforms such as rotation, reflection, and translation to an arbitrary image segment does not change the corresponding co-occurrence matrix (provided the segment is surrounded by a uniform background with a thickness greater than the maximum considered distance D). Therefore, in the general case, one co-occurrence matrix may correspond to a set of different images, which are indistinguishable by the given kind of descriptor.

2)     Statistical properties of the inverse matrix-image transform

As mentioned earlier, finding the inter-relations between the direct and inverse co-occurrence transforms is a hard combinatorial problem, which we do not study theoretically in this work. Statistically, this problem is closely related to the synthesis (reconstruction) of textures with given properties (e.g., [21, 22, 23]). In order to examine co-occurrence properties relevant to the practical aspects of the analysis of brain datasets, we adapt the texture synthesis method suggested by Lohmann [23]. The method is based on an iterative stochastic procedure that reconstructs a 2D textural image from a given intensity co-occurrence matrix defined in the classical way. Note that we use a 2D version of the IID matrix defined by equation (2). Along with visual estimation of the “equivalent” images, the main goals were to find an appropriate way of normalizing the matrix, to select a suitable distance range, and to consider texture anisotropy.

Fig. 1 shows an original MRI-T1 image slice (Fig. 1.a) and images reconstructed from its IID co-occurrence matrix under different conditions. For calculating the original matrix we do not limit ourselves to the intracranial cavity; background pixels are also considered for demonstration purposes. From Fig. 1.b it can be seen that for a relatively small distance range of d=1,…,4, the IID co-occurrence matrix describes local textural properties reasonably well while the “global” image structure is almost ignored. Increasing the distance range to a maximum of D=30 (Fig. 1.c) results in the opposite behavior: the global brain shape is similar to the original one, but local textural properties are completely lost. This is because the number of possible pixel pairs at greater distances is much larger than in a local neighborhood, so global spatial relations strongly dominate the local ones. Matrix normalization is relevant for comparing objects of different size (e.g., the brains of two individuals) but does not help to avoid this large-distance domination. The natural solution is to normalize the matrix for every distance bin separately (see Fig. 1.d).


Fig. 1.  Example of 2D images with the same intensity co-occurrence matrices. (a) Original image slice. (b)-(c) Images reconstructed from the original intensity co-occurrence matrix with distance ranges d=1,…,4 and d=1,…,30, respectively. (d) Same as (c) but with normalization of the co-occurrence matrix for every distance bin separately.
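The per-distance-bin normalization can be sketched as follows; the distance axis is assumed to be the last matrix axis, and the function name is ours:

```python
import numpy as np

def normalize_per_distance(W):
    """Normalize a co-occurrence matrix separately for every distance bin
    (last axis), so that the far more numerous long-distance voxel pairs
    do not dominate the local ones."""
    totals = W.sum(axis=tuple(range(W.ndim - 1)), keepdims=True)
    # Divide each distance slice by its own total; empty slices stay zero.
    return np.divide(W, totals, out=np.zeros_like(W, dtype=float),
                     where=totals > 0)
```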

3)     Consideration of texture anisotropy

The consideration of texture anisotropy properties based on co-occurrence descriptors is a more complicated issue. It depends both on the anisotropy properties of the analyzed image and on the type of co-occurrence tools employed. First, let us conditionally categorize image texture anisotropy into the following three classes:

  • isotropic,
  • anisotropic, uni-directional,
  • anisotropic, multi-directional.

In the case of classical co-occurrence matrices [2, 18], anisotropy is not considered directly. However, since a positional operator is used to define the directions for pixel pairs, it should be synchronized with the uni-directional texture orientation to avoid this “undesirable” property. In contrast, the directional dependence is emphasized in a sub-class of these matrices, the so-called Gray Level Difference Histograms (GLDH), where all possible directions are examined. With such a modification, this kind of matrix is used explicitly for the anisotropy analysis of 2D images [24, 25]. A comparative study of 3D versions of GLDH-based and gradient-based approaches showed that the GLDH-based approach is much less sensitive and too robust for brain image analysis [7]. When applied to anisotropic, multi-directional textures (e.g., brain sulci), both approaches produce results that largely depend on the individual orientation structure of the image.

In the definition of the 3D IID matrices (2) we have emphasized their rotation invariance. Indeed, we consider all possible voxel pairs regardless of the direction in which they appear. Thus, we may expect that these matrices do not capture (i.e., are independent of) image anisotropy. This is tested experimentally by the 2D inverse co-occurrence transform (see Fig. 2) in the way described in the previous section. Anisotropy histograms were computed by the gradient-based method described in [7].


Fig. 2. Intensity co-occurrence matrices and texture anisotropy. (a) Original texture patch selected from a slice of a brain MR-T2 image. (b) Texture reconstructed from the co-occurrence matrix. (c)-(d) Anisotropy histograms of original and reconstructed textures that plot cumulative directionality according to local texture gradients.

In this work we suggest using gradient angle co-occurrence in the form of separate gAD matrices (4), or fusing anisotropy together with other texture features in general IIGGAD matrices (1). Both are reflection/rotation invariant because they only take relative orientations into account. Another useful property is that these descriptors are insensitive to the global (low frequency) shape of a multi-directional 3D image pattern if the range of inter-voxel distances is relatively small. For smooth, laminar 3D structures, such (locally defined) matrices produce similar descriptors independent of their mutual orientations in space. This is not the case for other approaches, where local orientations are summed up over pre-defined solid angle bins [24, 7].

C.     Technical Remarks

Let us close this section by enumerating particular points critical for implementing this approach.

1) Anisotropic image sampling

So far, we have supposed that high-resolution 3D image data with cubic voxels (say, about 1 mm³) are available. In the case of non-cubic voxels the calculation of the co-occurrence matrix must be corrected. First, the weights of the 3D filter used should be appropriately scaled by the relative sampling rates along the X, Y, and Z axes. Second, in equations (1)-(4) the image raster unit should be associated with the smallest physical voxel dimension, distances should be measured in true physical units (e.g., mm), and voxel pairs exceeding a pre-defined maximum distance D should be truncated accordingly. Note that these geometrical corrections do not avoid artificially high gradients along image axes with low spatial resolution (typically, the inter-plane axis).
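For non-cubic voxels the distance correction amounts to converting raster offsets to physical units before binning; a minimal sketch (the function name and the None-for-truncated convention are ours):

```python
import numpy as np

def physical_distance_bin(offset_vox, voxel_size_mm, max_d_mm):
    """Distance bin of a voxel pair under anisotropic sampling.

    The raster offset is converted to millimetres using the physical
    voxel dimensions; pairs beyond max_d_mm are truncated (None)."""
    d = np.linalg.norm(np.asarray(offset_vox, float)
                       * np.asarray(voxel_size_mm))
    b = round(float(d))
    return b if 1 <= b <= max_d_mm else None
```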

2) Rotation/reflection invariance

In practice, co-occurrence matrices are exactly the same for reflected images and for images rotated by angles that are multiples of 90 degrees. For arbitrary rotations they are slightly different. The differences depend on the intensity interpolation algorithm, not on the texture feature computation.

3) Matrix normalization

Normalizing the matrix elements for every distance bin separately is an appropriate normalization when the whole brain image and/or reasonably large segments (brain compartments, brain lobes, etc.) are considered. In the case of characterization/comparison of small, equal-sized volumes (e.g., 5×5×5 mm VOIs), there is no need for normalization at all. Moreover, the above-mentioned “standard” normalization procedure would emphasize the relatively rare voxel pairs located at opposite VOI edges.

4) Binning of matrix axes

The number of bins for every matrix dimension should be relatively small. First, bin sizes smaller than the noise level are useless because feature values cannot be measured reliably at that precision. Second, since we use fairly large multidimensional matrices, combining multi-sort features normally characterizes objects much better than more “precise” measurements of a single (noisy) feature. Experimentation revealed suitable bin counts of 4-16 for the feature axes and a distance range of 1-5 mm.
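Equal-width quantization into a small number of bins can be sketched as follows; `quantize` is a hypothetical helper, and the equal-width scheme is our assumption:

```python
import numpy as np

def quantize(values, n_bins, lo, hi):
    """Quantize feature values into n_bins equal-width integer bins
    0..n_bins-1. Coarse bins (wider than the noise level) keep the
    multidimensional matrix compact and its cells reliably populated."""
    edges = np.linspace(lo, hi, n_bins + 1)[1:-1]   # interior bin edges
    return np.digitize(values, edges)
```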

5) Implementation details

There are two key points in the implementation of the co-occurrence matrix calculation algorithm. First, the condition that all possible voxel pairs are considered with no repetition can be satisfied by counting the pairs formed by every current image voxel and the subsequent ones (in image raster order). For instance, the number of different voxel pairs in the nearest 3D voxel neighborhood (radius D=1) is 26/2=13. The second point is the elimination of an undesirable dependence of the matrix content on the specific order and direction of the program loops over the image raster axes (equivalent to image reflections and rotations by 90 degrees). Such independence can be achieved by keeping each single-sort 2D matrix sub-section (e.g., intensity-intensity) triangular, that is, with all elements above the leading diagonal equal to zero. The simplest way is to swap the corresponding bin indices (e.g., bIi and bIk) before counting, if bIi > bIk.
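The first point, pairing each voxel only with raster-subsequent neighbors, can be expressed as a lexicographic comparison of offsets (a sketch of the pairing rule, not the authors' code):

```python
from itertools import product

# Of the 26 unit-radius neighbors, exactly half are "later" in raster
# order; pairing each voxel only with these counts every pair once
# (26/2 = 13). Tuple comparison gives the lexicographic raster order.
half_neighborhood = [off for off in product((-1, 0, 1), repeat=3)
                     if off > (0, 0, 0)]
```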

The typical execution time for calculating a general IIGGAD co-occurrence matrix (d=1,…,4, MRI dataset with about 10^6 brain voxels) on a mid-range PC workstation with an 800 MHz processor is 3 minutes.

III.     Experimental Study

Two important issues are addressed in this section. First, we compare the sensitivity of the proposed co-occurrence matrices. The second issue concerns spatial image scaling, which is closely related to image registration problems.

A. Sensitivity of Co-occurrence Descriptors

The sensitivity of co-occurrence descriptors can be evaluated by quantitatively comparing their change with respect to some pre-defined change in an image. For this purpose we take an MRI-T1 image of a normal healthy subject and add Gaussian noise with zero mean and standard deviation (STD) at increasing levels of 5 to 25 image intensity units (Fig. 3.a-c). All suggested descriptors were calculated for the original image and its noisy versions with the same binning parameters (10 feature bins, distance range d=1,…,4 mm) and normalized. The L1 distance from each noisy descriptor to the corresponding noiseless descriptor in feature space is used as a sensitivity measure. The matrix cells of a co-occurrence descriptor are considered as a feature vector with N non-negative elements. Thus, for any two vectors A = (a1, …, aN) and B = (b1, …, bN), the L1 distance is calculated as:

  L1(A, B) = (1/2) · Σ_{i=1,…,N} | a_i − b_i |        (5)

The distance L1 is symmetric, ranges from 0 to 1 and expresses the dissimilarity of two descriptors.
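With unit-sum descriptors, the 0-to-1 range implies the 1/2 scaling in (5); a minimal sketch (the function name is ours):

```python
import numpy as np

def l1_dissimilarity(a, b):
    """L1 dissimilarity (5) between two co-occurrence descriptors.

    Both descriptors are normalized to unit sum, so the halved absolute
    difference ranges from 0 (identical) to 1 (disjoint support)."""
    a = np.ravel(a) / np.sum(a)
    b = np.ravel(b) / np.sum(b)
    return 0.5 * np.sum(np.abs(a - b))
```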

Note that original intensity co-occurrence matrices are traditionally not used directly. Instead, various integral features based on these matrices are computed (e.g., [2, 15]). In order to compare the sensitivity of such features with co-occurrence descriptors, we compute the four most commonly used texture features based on IID matrices in 3D: homogeneity, local homogeneity, contrast, and entropy. The integral texture features were normalized similarly to the other descriptors.
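For a single distance bin of an IID matrix, these four integral features can be computed as below; the formulas follow the classical definitions after Haralick et al. [2], and the dictionary layout is our own:

```python
import numpy as np

def integral_features(P):
    """Classical integral texture features of a co-occurrence matrix P
    (one distance bin), after Haralick et al."""
    P = P / P.sum()                      # normalize to a joint probability
    i, j = np.indices(P.shape)
    nz = P[P > 0]                        # avoid log(0) in the entropy term
    return {
        "homogeneity": np.sum(P ** 2),   # angular second moment
        "local_homogeneity": np.sum(P / (1.0 + (i - j) ** 2)),
        "contrast": np.sum((i - j) ** 2 * P),
        "entropy": -np.sum(nz * np.log(nz)),
    }
```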


Fig. 3. Sensitivity of different image descriptors to changes caused by additive Gaussian noise. (a) Example slice of the original image. (b)-(c) Same slice corrupted by Gaussian noise with STD 5 and 25 intensity units. (d) L1 distances between the original and the noisy images measured by different descriptors.

The results of the sensitivity evaluation are shown in Fig. 3.d. As expected, general IIGGAD matrices demonstrate the highest sensitivity because they combine all kinds of elementary image features (intensity, gradient magnitude, and mutual orientation angles). Apparently, integral texture features, which “average out” specific image variations, are too robust for comparative brain image analysis. The other three single-sort descriptors have an intermediate sensitivity between these extremes. However, it is important to note that sensitivity is not the only criterion for choosing suitable descriptors in a particular image analysis task. For instance, intensity co-occurrence matrices IID appear clearly better than gradient angle matrices gAD (Fig. 3.d). However, in this experiment all 6 images are well intensity-adjusted, which is generally not the case. Adequate intensity fitting may be problematic, and IID matrices may then lead to unstable and even incorrect results. In contrast, gAD matrices are independent of the intensity range and even of the rate of spatial intensity variation (i.e., gradient magnitude). They can be used for estimating the amount of brain structure “disorder” in comparison with control regions and are considered to be more suitable for measuring the severity of lesions.

B.     Texture and Spatial Scaling

Inter-subject image registration is one of the common steps when analyzing brain images. Therefore it is important to estimate brain texture distortions caused by spatial scaling and/or registration to a standard atlas representation.

We need a reference image that represents a “typical case” of an image class or population under consideration. For this purpose we adopt the idea of an average representative. In the context of texture analysis, the straightforward way of computing an “average” image by registration and spatial averaging is questionable. Instead, we use a mean co-occurrence descriptor that represents the center of a given image class in feature space. The validity of this approach follows from the fact that our descriptors are independent of the specific object size and are translation, rotation, and reflection invariant. The “mean descriptor” technique is tested by comparing object deviations from the mean matrix with deviations from the matrix of a real image that is closest to the mean one in feature space. Another consequence of these properties is that spatial 3D image patterns of the left and right brain hemispheres may be compared without reflection and/or registration.
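The mean descriptor and its nearest real image can be sketched as follows; the helper names are ours, and the descriptors are assumed to be pre-normalized:

```python
import numpy as np

def mean_descriptor(descriptors):
    """Cell-by-cell average of normalized co-occurrence descriptors:
    the center of an image class in feature space."""
    return np.mean(np.stack(descriptors), axis=0)

def nearest_to_mean(descriptors):
    """Index of the image whose descriptor lies closest (L1) to the
    class mean; it can serve as the class representative."""
    m = mean_descriptor(descriptors)
    return int(np.argmin([0.5 * np.sum(np.abs(d - m))
                          for d in descriptors]))
```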

Changes of textural brain image properties with scaling were evaluated by comparing IIGGAD matrices computed for a scaled image with the “natural” differences observed in datasets that match the scaled images by brain volume. Accordingly, two experiments were performed. In the first one, we took an MRI-T1 reference image of a healthy subject with 1 mm³ voxel size and scaled it from 85% up to 115% in 5% steps (Fig. 4.a-b), which corresponds to changes in brain volume from 640 to 1540 cm³. Textural changes are measured as the L1 distance to the IIGGAD descriptor of the smallest volume. The original image is excluded to guarantee the homogeneity of the image set and to ensure that the measured differences are related explicitly to scaling and not to differences between interpolated and non-interpolated versions of the dataset. Comparisons were performed for the left and right brain hemispheres separately (see Fig. 4.c). In addition, for every scaled image we also computed the individual difference between the left and right hemispheres (Fig. 4.c, thin solid line with triangle points).


Fig. 4. Change of brain textural properties with respect to image scaling and natural brain volume variability. (a)-(b) Example slices of a control brain image scaled to 85% and 115% of its original size. (c) Change of textural properties due to image scaling measured as L1 distance to the mean IIGGAD co-occurrence matrix. (d)-(e) Example slices of brain images of two individuals matched with the corresponding scaled images depicted in (a) and (b) by the brain volume. (f) Same as (c) but with respect to natural brain volume changes in six controls.

For the second experiment, we selected 6 MRI-T1 images of different subjects (see Fig. 4.d-e) where brain volumes correspond to scaled versions of the reference image (mean volume deviation is 3.8%). These 6 subjects matched the reference one by age, health status, and image resolution. The corresponding measurements to the average descriptor are shown in Fig. 4.f.

The first experiment (Fig. 4.c) strongly suggests that spatial image scaling in the range of natural brain volume variability leads to dramatic changes in the textural properties of brain datasets (the correlation of the dissimilarity measure L1 with brain volume is r=0.99, p<0.0001). Most likely, such a transform substantially changes the spatial frequency of the texture and other related properties. Because these changes are similar for both hemispheres, we suppose that they are not associated with brain shape and sulcal variability. Individual differences between the left and right hemispheres are almost preserved under image scaling (p=0.01) and seem to be an individual, “constant” characteristic of a subject's brain.

The second experiment did not reveal any dependency between textural properties and brain volume. The ANOVA method reported F=1.23, p=0.38 for the left hemisphere and F=0.20, p=0.69 for the right one. To avoid selection effects, this hypothesis was additionally tested in the same way on a large sample of 210 MRI-T1 brain datasets of young healthy subjects, including 103 males (group A) and 107 age-matched females (group B) with a mean age of 24.8 and STD of 3.9 years. Again, no statistically significant correlation of textural properties with brain volume was found (group A: p=0.35 and p=0.69, group B: p=0.32 and p=0.51 for the left and right hemispheres, respectively). Fig. 5 illustrates these findings for all controls in the form of a scatterplot of the L1 distance to the average brain IIGGAD descriptor versus brain volume (r=0.62, p=0.19).


Fig. 5. Dissimilarity of brain texture of 210 normal controls vs. brain volume.

IV.     Examples of Application

Now we demonstrate how the proposed method can be used for the analysis of MRI brain datasets. Two different examples are given: a brain pathology classification task and the segmentation of diffuse brain lesions.

A.     Classification of Brain Datasets

This example is related to the characterization and clustering of patients with (clinically apparent) mild cognitive disturbances (MCD) and normal, healthy elderly subjects. Note that we do not focus on the extraction of specific, particular features of the image classes. Instead, we use the six-dimensional IIGGAD co-occurrence matrices (1), which incorporate intensity, gradient, and anisotropy information, as a single descriptor. Although such an approach may be considered “too straightforward”, the elimination of the feature extraction stage provides an unbiased analysis without the risk of (possibly involuntary) preferences. Nevertheless, once IIGGAD matrices are computed for every image, the corresponding particular features can always be derived from these matrices and considered separately.

Forty-three volumetric MRI-T1 brain datasets (28 patients and 15 controls) obtained on a 1.5T Siemens scanner (MPRAGE sequence, TR 11.4 ms, TE 4.4 ms, 128 slices, matrix 256×256, voxel size 0.9×0.9×1.5 mm) were used for the experiment. Patients were diagnosed clinically as suffering from white matter encephalopathy and/or Alzheimer's disease, while controls were healthy elderly individuals (patients' spouses). Images were interpolated to an isotropic resolution of 1 mm and aligned with the stereotactical coordinate system using fourth-order b-spline interpolation [26]. The intracranial cavity was segmented by the method described in [8]. Only the part of the brain that lies above the plane defined by the anterior and posterior commissures (AC-PC plane) is used. Since intensity and gradient magnitude are both scaling-dependent, the intensity of the peeled brain images was re-scaled prior to computing the co-occurrence matrices. This re-scaling is performed by calculating the histogram over the intracranial cavity and cutting off 0.5% of “noise” voxels from both ends. Examples of pre-processed images are given in Fig. 6.a-b.
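The intensity re-scaling step can be sketched as below; the quantile-based cut and the 256-level target are assumptions consistent with the 0.5% tail cut described above:

```python
import numpy as np

def rescale_intensity(volume, mask, tail=0.005, n_levels=256):
    """Re-scale intensities over the intracranial cavity: cut 0.5% of
    'noise' voxels from both ends of the histogram, then map the
    remaining range to integer levels 0..n_levels-1."""
    lo, hi = np.quantile(volume[mask], [tail, 1.0 - tail])
    clipped = np.clip(volume, lo, hi)
    return np.round((clipped - lo) / (hi - lo) * (n_levels - 1)).astype(int)
```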


Fig. 6. Clustering of patients with mild cognitive disturbances due to white matter encephalopathy and/or Alzheimer’s disease and normal, healthy elderly subjects. Upper two rows: example slices of a patient (a) and a normal control (b). (c) L1 distances of all datasets to the average IIGGAD co-occurrence matrix of the controls, computed for the left and right hemispheres separately.

IIGGAD co-occurrence matrices were computed for the left and right brain hemispheres separately, with a deliberately low resolution along every dimension: 8 intensity bins (32 units each), 6 gradient magnitude bins (160 gradient units each), 6 angle bins (30 degrees each) and 4 distance bins (d=1-4 mm). Classification was done by measuring the L1 distance to the “typical control” for each brain dataset. As the typical control in feature space we consider the average IIGGAD co-occurrence matrix calculated cell-by-cell over the 15 controls for the left and right hemispheres separately. Classification results are shown in Fig. 6.c, where patients are clearly separated from controls. Values for controls follow the diagonal line of the scatterplot closely, because their deviation from the typical control is similar for both hemispheres. In pathological cases (shown as triangles), however, textural changes manifest differently in the two hemispheres. Note that the reduced co-occurrence descriptors defined by equations (2)-(4) do not provide such a good separation of classes in feature space. For instance, K-means clustering based on gradient angle co-occurrence matrices gAD resulted in 5 misclassified cases, i.e., a classification accuracy of 88.4%. This is not surprising, because these descriptors capture particular features only.
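As a rough illustration of how such a matrix may be accumulated and compared, the sketch below bins intensity, gradient magnitude, gradient angle and distance with the bin widths quoted above; it samples only axis-aligned voxel pairs and uses finite-difference gradients, which are simplifying assumptions on our part (the full definition is given by equation (1) of the paper):

```python
import numpy as np

def iiggad_matrix(vol, shape=(8, 8, 6, 6, 6, 4)):
    """Simplified sketch of a 6D IIGGAD co-occurrence matrix: two intensity
    axes (8 bins of 32 units), two gradient-magnitude axes (6 bins of 160
    units), one gradient-angle axis (6 bins of 30 degrees) and one distance
    axis (d = 1-4). Only axis-aligned voxel pairs are counted here."""
    ni, _, ng, _, na, nd = shape
    g0, g1, g2 = np.gradient(vol.astype(np.float64))
    gmag = np.sqrt(g0**2 + g1**2 + g2**2)
    gvec = np.stack([g0, g1, g2], axis=-1)
    ibin = np.clip(vol.astype(int) // 32, 0, ni - 1)
    gbin = np.clip(gmag.astype(int) // 160, 0, ng - 1)
    M = np.zeros(shape)
    for d in range(1, nd + 1):
        for axis in range(3):
            a = [slice(None)] * 3
            b = [slice(None)] * 3
            a[axis] = slice(None, -d)
            b[axis] = slice(d, None)
            a, b = tuple(a), tuple(b)
            # angle between the gradient vectors of the two voxels
            dot = np.sum(gvec[a] * gvec[b], axis=-1)
            denom = gmag[a] * gmag[b] + 1e-12
            ang = np.degrees(np.arccos(np.clip(dot / denom, -1.0, 1.0)))
            abin = np.clip(ang.astype(int) // 30, 0, na - 1)
            np.add.at(M, (ibin[a], ibin[b], gbin[a], gbin[b], abin, d - 1), 1)
    return M

def distance_to_typical(matrix, control_matrices):
    """L1 distance to the 'typical control': the cell-by-cell
    average of the control matrices."""
    typical = np.mean(control_matrices, axis=0)
    return np.abs(matrix - typical).sum()
```

Computing `distance_to_typical` for each hemisphere of each dataset yields the two coordinates of the scatterplot in Fig. 6.c.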

B.     Segmentation of Diffuse Brain Lesions

This example demonstrates how the proposed descriptors can be used for 3D texture segmentation. Changes in the cerebral hemispheric white matter (WM) are often detected in MRI brain datasets of elderly persons [9]. The pathogenesis, clinical significance and morphological substrate of these changes are incompletely understood [10-13]. In order to deduce the clinical significance of these findings, it is necessary to derive quantitative descriptors for them. The usual clinical practice is to evaluate images visually on the basis of a semi-quantitative rating scale [27]. Common types of WM lesions are diffuse white matter hypointensities (DWMH), periventricular hypointensities and enlarged periventricular spaces. One of the open questions is whether the first two are distinct pathological features at all. Using texture features to segment DWMH appears to be a worthwhile and attractive approach, which we discuss as an example of texture-based segmentation in the following.

Based on the co-occurrence texture descriptors, the segmentation can be performed in four steps.

(a) Calculation of the typical (representative) descriptor of an “elementary” VOI of the lesion to be segmented. A VOI training set that includes sample lesion regions and corresponding control regions is constructed. Selection of a proper VOI size is a compromise: it should be big enough to capture the typical structure of the lesion, but small enough to provide a reasonably precise segmentation of the lesion borders. Since all image VOIs are of the same size, there is no need to normalize the co-occurrence descriptors. Similar to the previous example, the representative descriptor can be calculated as an average matrix over the lesion VOIs of the training set.

(b) Fitting of a mapping function based on distances measured on the training set. The purpose of this step is to select and tune a suitable function that maps distances between the current and representative VOI descriptors into a lesion probability. A suitable choice is a simple linear mapping that assigns a non-zero probability to distances found for lesion samples and zero to distances close to those of control VOIs. Note that control VOI samples are used for this tuning process only.

(c) Segmentation. Segmentation is performed by scanning the image with a given VOI size, calculating the distance from the current VOI to the representative one in feature space and assigning the corresponding probability label to the central voxel of the current VOI. Thus, a voxel of the probability map encodes the similarity of its neighborhood to the lesion VOI.

(d) Post-processing of the probability map. It is unlikely that all image VOIs within the box-shaped scanning area are consistent with respect to the lesion similarity criteria. Therefore some post-processing is necessary to remove false-positive map labels outside of the typical lesion space (here, the white matter compartment).
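Under stated assumptions, the four steps can be combined into a single scan. In the sketch below, `descriptor` stands for any VOI-level co-occurrence descriptor, `rep` for the representative matrix from step (a), `d_hi`/`d_lo` for the two distance anchors of the linear mapping in step (b), and the scan stride is our own addition:

```python
import numpy as np

def segment_lesions(vol, wm_mask, descriptor, rep, d_hi, d_lo, voi=7, step=2):
    """Sketch of steps (a)-(d): scan the volume with a cubic VOI, measure
    the L1 distance of its descriptor to the representative one, map the
    distance linearly to a 0-255 label (d_hi -> 255, d_lo -> 0), and
    finally suppress labels outside the white matter compartment."""
    r = voi // 2
    prob = np.zeros(vol.shape, dtype=np.uint8)
    zs, ys, xs = vol.shape
    for z in range(r, zs - r, step):
        for y in range(r, ys - r, step):
            for x in range(r, xs - r, step):
                box = vol[z - r:z + r + 1, y - r:y + r + 1, x - r:x + r + 1]
                d = np.abs(descriptor(box) - rep).sum()  # L1 distance
                label = 255.0 * (d_lo - d) / (d_lo - d_hi)
                prob[z, y, x] = int(np.clip(label, 0, 255))
    prob[~wm_mask] = 0  # step (d): post-processing
    return prob
```

The probability map retains the voxel grid of the input, so any threshold on it directly yields a candidate lesion segmentation.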

IIGGAD co-occurrence matrices were employed as texture descriptors for segmenting DWMH regions. Experimentation revealed a VOI of 7×7×7 mm to be adequate. The dimensions of the matrix axes are the same as in the previous example, except that the number of intensity bins was reduced to 4 to provide better reliability for the relatively small image VOIs. For the calculation of the representative DWMH descriptor and the fitting of a mapping function, a VOI training set was formed from 3 MRI-T1 patient datasets and 4 control datasets described in the previous subsection. The training set included 130 VOIs: 65 control samples and 65 DWMH samples. Fig. 7 shows typical examples of lesion VOIs (upper row) and L1 distances to the representative descriptor as a bar plot on the bottom. To compute DWMH probability maps, a linear mapping of the L1 distance to the label range 0-255 was used, where the maximal value 255 was associated with L1=0.42 (the mean distance over the 65 DWMH VOIs) and the minimal label 0 with L1=0.69 (the median distance between the two VOI classes of the training set).
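The mapping quoted above (L1=0.42 to label 255, L1=0.69 to label 0, with clipping outside that range) can be written out as a small helper; the function name is ours:

```python
import numpy as np

def dwmh_label(d, d_mean=0.42, d_sep=0.69):
    """Linearly map an L1 distance to a 0-255 label: the mean lesion
    distance (0.42) maps to 255, the median between-class distance
    (0.69) maps to 0, and values outside are clipped."""
    return int(np.clip(255.0 * (d_sep - d) / (d_sep - d_mean), 0, 255))
```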


Fig. 7.  Calculation of a representative IIGGAD co-occurrence matrix for the segmentation of diffuse white matter hypointensities (DWMH). Top: a typical example of a DWMH sample region defined as a 7×7×7 mm VOI, with its center marked by the crossing lines. Bottom: distances to the mean lesion co-occurrence matrix for 65 lesion and 65 control VOIs.


Fig. 8.  Result of the DWMH segmentation. (a) Original image slice. (b) Probability map of the lesion for the slice depicted in (a). (c)-(d) Two rendered views of 3D lesion map.

Fig. 8 shows an example segmentation result as a 2D section of the probability map for a reference image slice (upper row) and two directly rendered views [28] of the map in 3D (bottom). Lesion maps produced by this technique have a voxel-by-voxel correspondence with the original brain dataset and can be used for a quantitative estimation of lesion severity. A detailed description and validation of the use of these maps, however, is the subject of a separate neurobiological paper.

V.     Conclusion and Future Work

In this paper we have suggested a new method for 3D texture analysis of MRI brain datasets. The method is based on extended, multi-sort co-occurrence matrices that combine intensity, gradient and anisotropy image features in a systematic and consistent way. Depending on the given problem, reduced versions of the general 6D co-occurrence matrices can be employed as well. The suggested co-occurrence descriptors are natively 3D, reflection and translation invariant and, to some extent, rotation-insensitive. Normalization of the co-occurrence descriptors provides a basis for inter-subject analysis and for comparisons of brain regions of different sizes.

A comparative study revealed that the general 6D matrices are the most sensitive texture descriptors. Traditional integral texture features appear too robust (i.e., insensitive) for analyzing faint, poorly defined textural changes in the brain. Another important issue is the dependence of textural properties on spatial image scaling, which renders this operation unacceptable in neurological research involving texture measurements.

We have demonstrated that the extended co-occurrence descriptors can be used as an efficient tool in various MRI brain image analysis tasks such as the classification of brain datasets and the segmentation of diffuse brain lesions. In general, the problem of choosing basic features (matrix axes) depends on the image modality and the specific analysis to be performed. The process of matrix design and the selection of appropriate bin sizes can be partly formalized by the use of a suitable statistical procedure. For instance, one may take all “promising” features, evaluate their usefulness for the problem at hand separately, and design the corresponding multi-sort co-occurrence matrix so that it combines the useful features as matrix axes. The bin sizes can then be tuned based on a priori knowledge about the reliability of the feature measurements and/or a criterion of optimal separation of test objects in feature space.

Our future work will concern the quantitative characterization of the textural properties of anatomical brain datasets acquired from normal subjects. In addition, textural properties are quantitative features of brain tissues that may be compared statistically with clinical features such as cognitive abilities measured on performance scales.


Acknowledgments

The authors would like to thank the anonymous reviewers for their professionalism and fast review. They also wish to thank Dr. M. Svensen for proof-reading the paper.


References

[1]     F. Liu and R.W. Picard, “Periodicity, directionality, and randomness: Wold features for image modeling and retrieval,” IEEE Trans. Pattern Analysis Mach. Intell., vol. 18, No. 7, July, pp. 722-733, 1996.

[2]     R.M. Haralick, K. Shanmugam, and I. Dinstein, “Textural features for image classification,” IEEE Trans. Systems, Man, and Cybernetics, vol. SMC-3, No. 6, pp. 610-621, 1973.

[3]     T. Chang and C.-C. J. Kuo, “Texture analysis and classification with tree-structured wavelet transform,” IEEE Trans. Image Processing, vol. 2, No. 4, Oct., pp. 429-441, 1993.

[4]     L. Van Gool, P. Dewaele, and A. Oosterlinck, “Texture analysis anno 1983,” Comp. Vision Graph. Image Proc., vol. 29, pp. 336-357, 1985.

[5]     T. Randen and J.H. Husoy, “Filtering for texture classification: a comparative study,” IEEE Trans. Pattern Analysis Mach. Intell., vol. 21, No. 4, pp. 291-310, 1999.

[6]     S.W. Zucker and R.A. Hummel, “A 3D edge operator,” IEEE Trans. Pattern Analysis Mach. Intell., vol. 3, pp. 324-331, 1981.

[7]     V.A. Kovalev, M. Petrou, and Y.S. Bondar, “Texture Anisotropy in 3D Images,” IEEE Trans. on Image Processing, vol. 8, No. 3, pp. 346-360, 1999.

[8]     S.A. Hojjatoleslami, F. Kruggel, and D.Y. von Cramon, “Segmentation of white matter lesions from volumetric MR images,” In: Medical Image Computing and Computer-Assisted Intervention – MICCAI’99, 2nd Int. Conf., Sept. 1999, Cambridge, UK, C. Taylor and A. Colchester (Eds.), LNCS, vol. 1679, Springer Verlag, pp. 52-61, 1999.

[9]     L. Pantoni and J.H. Garcia, “The significance of cerebral white matter abnormalities 100 years after Binswanger’s report: A review,” Stroke, vol. 26, pp. 1293-1301, 1995.

[10] M.M.B. Breteler, N.M. van Amerongen, J.C. van Swieten, J.J. Claus, D.E. Grobbee, J. van Gijn, A. Hofman, F. van Harskamp, “Cognitive correlates of ventricular enlargement and cerebral white matter lesions on magnetic resonance imaging: The Rotterdam study,” Stroke, vol. 25, pp. 1109-1115, 1994.

[11] D. Leifer, F.S. Buonanno, E.P. Richardson, “Clinicopathologic correlations of cranial magnetic resonance imaging of periventricular white matter,” Neurology, vol. 40, pp. 911-918, 1990.

[12] L.A. Heier, C. J. Bauer, L. Schwarz, R.D. Zimmerman, S. Motello and M.D.F. Deck, “Large Virchow-Robin Spaces: MR-clinical correlation,” American Journal of Neuroradiology, vol. 10, pp. 929-936, 1989.

[13] A.D. Elster and D.N. Richardson, “Focal high signal on MR scans of the midbrain caused by enlarged perivascular spaces: MR-pathologic correlation,” American Journal of Neuroradiology, vol. 11, pp. 1119-1122, 1990.

[14] M.M. Galloway, “Texture analysis using gray level run length,” Comp. Graphics Image Processing, vol. 4, pp. 172-179, 1975.

[15] P.A. Freeborough and N.C. Fox, “MR image texture analysis applied to the diagnosis and tracking of Alzheimer’s disease,” IEEE Trans. on Medical Imaging, vol. 17, No. 3, June, pp. 475-479, 1998.

[16] P. Brodatz, Textures: a photographic album for artists and designers, Dover, New York, 1966.

[17] V. Kovalev and M. Petrou, “Multidimensional Co-occurrence Matrices for Object Recognition and Matching,” Graph. Models Image Proc., vol. 58, No. 3, May, pp. 187-197, 1996.

[18] R.C. Gonzalez and R.E. Woods, Digital image processing, Addison-Wesley, Reading, MA, 1992, pp. 508-510.

[19] A.R. Rao and G.L. Lohse, “Towards a texture naming system: Identifying relevant dimensions of texture,” In IEEE Conf. On Visualization, San Jose, CA, IEEE Comp. Soc. Press, pp. 220-227, 1993.

[20] H. Tamura, S. Mori, and T. Yamawaki, “Textural features corresponding to visual perception,” IEEE Trans. Sys., Man and Cybern., SMC-8 (6), pp. 460-473, 1978.

[21] S. De Ma and A. Gagalowicz, “Sequential synthesis of natural textures,” Comp. Vision Graph. Image Proc., No. 30, pp. 289-315, 1985.

[22] A. Aubert and D. Jeulin, “Estimation of the influence of second- and third-order moments on random sets reconstructions,” Pattern Recognition, vol. 33, pp. 1083-1104, 2000.

[23] G. Lohmann, “Analysis and synthesis of textures: a co-occurrence-based approach,” Comput. & Graphics, vol. 19, No. 1, pp. 29-36, 1995.

[24] D. Chetverikov, “GLDH-based analysis of texture anisotropy and symmetry: an experimental study,” In: Proc. 12th Int. Conf. Pattern Rec., Jerusalem, Israel, Oct. 9-13, IEEE Comp. Soc. Press, vol. 1, pp. 444-448, 1994.

[25] D. Chetverikov and R.M. Haralick, “Texture anisotropy, symmetry, regularity: recovering of structure and orientation from interaction maps,” In: Proc. Brit. Machine Vision Conf., Sept. 11-14, BMVA, vol. 1, pp. 57-66, 1995.

[26] F. Kruggel and D.Y. von Cramon, “Alignment of magnetic-resonance brain datasets with the stereotactical coordinate system,” Medical Image Analysis, vol. 3, No. 2, pp. 175-185, 1999.

[27] P. Scheltens, T. Erkinjunti, D. Leys, L.-O. Wahlund, D. Inzitari, T. del Ser, F. Pasquier, F. Barkhof, R. Maentylae, J. Bowler, A. Wallin, J. Ghika, F. Fazekas, and L. Pantoni, “White matter changes on CT and MRI: an overview of visual rating scales,” European Neurology, vol. 39, No. 2, pp. 80-89, 1998.

[28] M. Levoy, “Efficient ray tracing of volume data,” ACM Trans. on Graphics, vol. 9, No. 3, pp. 245-261, 1990.