
January 1, 2011

Direct Volume Rendering of Segmented Data

A colleague asked me if I could make a visualization for the brain structures she was examining. The structures of interest were the ventricles, hippocampus, amygdala, pallidum, putamen, caudate, thalamus and cerebellum.

The data we had were MRI scans that had been processed in FreeSurfer in order to obtain a 3D segmentation of brain structures. In particular, the data was sampled on a 256 x 256 x 256 regular grid where each voxel was about 1 mm^3. The segmentation process produced a file (aseg.mgz) containing an index for each voxel (x, y, z) indicating which brain structure the voxel belonged to (the indices are listed in FreeSurferColorLUT.txt, where index 9 is Left-Thalamus, 11 is Left-Caudate, etc.).
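As an illustration, such a segmentation can be read in Python with the nibabel library, which understands FreeSurfer's .mgz format (the file name is from above; the loading code itself is just one possible way to do it):

```python
import numpy as np
import nibabel as nib  # can read FreeSurfer's MGH/MGZ format

# Load the FreeSurfer segmentation volume (256 x 256 x 256 voxels).
aseg = nib.load("aseg.mgz")
labels = np.asarray(aseg.dataobj).astype(np.uint8)

# Each voxel holds an index from FreeSurferColorLUT.txt,
# e.g. 9 = Left-Thalamus, 11 = Left-Caudate.
print(labels.shape, np.unique(labels)[:10])
```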

In order to visualize these structures I decided to experiment with Direct Volume Rendering (DVR). An alternative would have been marching cubes, but I decided against it since I felt DVR gave more flexibility for use in other contexts.

For me the conceptual challenge in applying DVR to segmented data was how to transform the indexes into something that could be smoothed and interpolated properly. In the end I decided to convert the segments of interest into binary data sets, each of which could be smoothed using 3D convolution. One problem with this approach is that quite large amounts of data need to be stored. In particular, the original segmented data set is 256 x 256 x 256 8-bit unsigned chars, totaling about 17 MB. When convolving k = 16 such individual segments (8 brain structures for each of the two brain hemispheres) and storing the results as float values, the result is a 256 x 256 x 256 x 16 dataset of about one GB.
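A minimal sketch of this preprocessing step, assuming NumPy/SciPy and a Gaussian kernel (the post does not specify the kernel; the function name and sigma value are illustrative):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def smooth_segments(labels, segment_labels, sigma=1.0):
    """Convert each label of interest into a binary volume and smooth
    it with a 3D Gaussian (one possible choice of convolution kernel)."""
    k = len(segment_labels)
    smoothed = np.empty(labels.shape + (k,), dtype=np.float32)
    for i, lab in enumerate(segment_labels):
        mask = (labels == lab).astype(np.float32)  # binary data set
        smoothed[..., i] = gaussian_filter(mask, sigma=sigma)
    return smoothed  # 256 x 256 x 256 x k floats, ~1 GB for k = 16
```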

So in principle, for each step I take along a ray originating from the eye into the dataset, I interpolate these k pre-convolved datasets separately. If one or more of the interpolated intensities is greater than a threshold value, e.g. 0.25, then I consider the segment with the greatest interpolated intensity as hit and apply the color assigned to that segment.
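The per-sample classification could look something like the following sketch (trilinear interpolation via scipy.ndimage.map_coordinates; the function name and structure are my own, not taken from the post):

```python
import numpy as np
from scipy.ndimage import map_coordinates

def classify_sample(smoothed, point, threshold=0.25):
    """At one sample point along a ray, trilinearly interpolate all k
    pre-convolved segment volumes and pick the strongest one above the
    threshold. Returns the segment index, or None for a miss."""
    x, y, z = point
    k = smoothed.shape[-1]
    values = np.array([
        map_coordinates(smoothed[..., i], [[x], [y], [z]], order=1)[0]
        for i in range(k)
    ])
    best = int(np.argmax(values))
    return best if values[best] > threshold else None
```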

A simple optimization would be to store the voxelwise maximum over the k smoothed segments in a separate volume, and then use this to determine whether any segment could be hit. This saves computation time since most voxels in the data set do not belong to any segment and can thus be skipped quickly. Only if the interpolated value of this maximum volume exceeds the threshold does the algorithm proceed as before, interpolating all k individual segments at that position to determine which segment was hit.
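Building on the sketch above, the early-out test could be expressed like this. The shortcut is sound because trilinear interpolation is a convex combination: since the max volume dominates every individual segment voxelwise, its interpolated value bounds each segment's interpolated value from above.

```python
# Precompute the voxelwise maximum over all k smoothed segments once.
max_volume = smoothed.max(axis=-1)

def classify_sample_fast(smoothed, max_volume, point, threshold=0.25):
    """Early-out version: test the single max volume first; only if it
    exceeds the threshold do we interpolate all k individual segments."""
    x, y, z = point
    m = map_coordinates(max_volume, [[x], [y], [z]], order=1)[0]
    if m <= threshold:
        return None  # empty space; skip the k per-segment lookups
    return classify_sample(smoothed, point, threshold)
```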

The segments seen from the left front side of the head.


Finally, the process proceeds as usual by estimating normals at the hit point and calculating Phong shading. An image of size 1024 x 1024 takes about 2 hours to render on a relatively slow laptop computer.
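For completeness, here is one way this shading step could be done: normals estimated by central differences of the smoothed segment volume, followed by the standard Phong model. All lighting coefficients are illustrative defaults, not values from the post.

```python
import numpy as np
from scipy.ndimage import map_coordinates

def phong_shade(volume, point, light_dir, view_dir, color,
                ambient=0.2, diffuse=0.6, specular=0.2, shininess=32.0):
    """Estimate the surface normal from central differences of the
    smoothed volume at `point` (a float NumPy array), then apply the
    Phong lighting model. `color` is an RGB array for the segment."""
    def sample(p):
        return map_coordinates(volume, [[p[0]], [p[1]], [p[2]]], order=1)[0]

    eps = 1.0  # one voxel
    g = np.array([
        sample(point + [eps, 0, 0]) - sample(point - [eps, 0, 0]),
        sample(point + [0, eps, 0]) - sample(point - [0, eps, 0]),
        sample(point + [0, 0, eps]) - sample(point - [0, 0, eps]),
    ])
    # Intensity increases toward the inside of a structure, so the
    # outward surface normal points against the gradient.
    n = -g / (np.linalg.norm(g) + 1e-12)

    l = light_dir / np.linalg.norm(light_dir)  # surface -> light
    v = view_dir / np.linalg.norm(view_dir)    # surface -> eye
    diff = max(np.dot(n, l), 0.0)
    r = 2.0 * np.dot(n, l) * n - l             # reflected light direction
    spec = max(np.dot(r, v), 0.0) ** shininess if diff > 0.0 else 0.0
    return color * (ambient + diffuse * diff) + specular * spec
```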