
January 28, 2011

Partially Transparent Brush Strokes

A fundamental unit when creating digital images in most popular digital drawing software is the brush. The brush enables the artist to create strokes with different properties such as color, thickness or weight by repeated application of the brush along a path (which may be a simple point, line or a curve).

One way to represent a brush is as a simple bitmap. When the bitmap-brush is applied to draw a point, the bitmap is simply rendered at the specified location using an "over"-operator to merge the colors. This is seen below where a round and partially transparent red brush is applied to two points over a white and gray chessboard background.
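The merging itself is standard alpha compositing, i.e. the Porter-Duff "over" operator. A minimal sketch in Python/NumPy of how a single brush stamp might be composited, assuming straight (non-premultiplied) RGBA values in [0, 1] and ignoring clipping at the image borders:

```python
import numpy as np

def over(src, dst):
    """Porter-Duff "over": composite src on top of dst.
    Both are float arrays with values in [0, 1] and shape (..., 4)."""
    src_rgb, src_a = src[..., :3], src[..., 3:4]
    dst_rgb, dst_a = dst[..., :3], dst[..., 3:4]
    out_a = src_a + dst_a * (1.0 - src_a)
    out_rgb = (src_rgb * src_a + dst_rgb * dst_a * (1.0 - src_a)) / np.maximum(out_a, 1e-8)
    return np.concatenate([out_rgb, out_a], axis=-1)

def stamp(image, brush, x, y):
    """Composite a small RGBA brush bitmap over `image`, top-left corner at (x, y)."""
    h, w = brush.shape[:2]
    image[y:y+h, x:x+w] = over(brush, image[y:y+h, x:x+w])
```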

 
Lines (and other curves) can be created by repeatedly painting the same brush over the background along the path. In the picture below the path between the two points is a simple line.


Notice that the repeated application of the same partially transparent brush to nearby points causes the transparent parts to become less transparent. This happens because the over-operator merges the next (slightly offset) brush over a background which has already been affected by the previous brush stroke(s). In this way the effect of the brush is "amplified" and transparency is lost. The result may be perceived as "unintuitive" by the user, since the intensity and transparency pattern of the resulting stroke looks quite different from that of the original brush.
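To make this buildup concrete, here is a small sketch (reusing the `stamp` helper from above; the spacing parameter is made up) that stamps the same brush at closely spaced points along a line segment. Each stamp composites over pixels already touched by earlier stamps, so overlapping alpha accumulates exactly as described:

```python
import numpy as np

def stroke_line(image, brush, p0, p1, spacing=1.0):
    """Stamp `brush` repeatedly along the straight line from p0 to p1 (pixel coordinates)."""
    p0, p1 = np.asarray(p0, float), np.asarray(p1, float)
    steps = max(int(np.linalg.norm(p1 - p0) / spacing), 1)
    for t in np.linspace(0.0, 1.0, steps + 1):
        x, y = np.round(p0 + t * (p1 - p0)).astype(int)
        stamp(image, brush, x, y)   # "over" onto pixels already covered by earlier stamps
```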

For Pipedream I figured out the following algorithm as an alternative when rendering strokes with partially transparent brushes:
  • Rather than painting directly over the image background (the chessboard here), I first render the stroke over a separate new, fully transparent background. 
  • During this rendering I only update the RGBA value of this new image if the alpha component A of the corresponding brush position is greater than the alpha value already stored there. That is, the stored pixel color will be the one corresponding to the largest A-value of the brush that has "touched" that pixel. In this sense the algorithm can be considered a color version of a "maximum value over-operator".
  • Finally this separately rendered line stroke is merged with the original (chessboard) background, giving the result below.


Comparing this stroke to the original brush, the result may be closer to what the user expects.
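A sketch of how this approach could be implemented, following the steps above and reusing the `over` helper from the first sketch (coordinate handling and spacing are again simplified):

```python
import numpy as np

def stamp_max_alpha(layer, brush, x, y):
    """Write brush pixels into `layer` only where the brush alpha is greater than
    the alpha already stored, keeping the color of the strongest stamp."""
    h, w = brush.shape[:2]
    region = layer[y:y+h, x:x+w]
    mask = brush[..., 3] > region[..., 3]
    region[mask] = brush[mask]

def stroke_line_max(background, brush, p0, p1, spacing=1.0):
    """Render the stroke on a fully transparent layer with the max-alpha rule,
    then merge the whole layer over the background in a single "over" step."""
    layer = np.zeros_like(background)
    p0, p1 = np.asarray(p0, float), np.asarray(p1, float)
    steps = max(int(np.linalg.norm(p1 - p0) / spacing), 1)
    for t in np.linspace(0.0, 1.0, steps + 1):
        x, y = np.round(p0 + t * (p1 - p0)).astype(int)
        stamp_max_alpha(layer, brush, x, y)
    return over(layer, background)
```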

January 1, 2011

Direct Volume Rendering of Segmented Data

A colleague asked me if I could make a visualization for the brain structures she was examining. The structures of interest were the ventricles, hippocampus, amygdala, pallidum, putamen, caudate, thalamus and cerebellum.

The data we had were MRI scans that had been processed in FreeSurfer in order to obtain a 3D segmentation of brain structures. In particular, the data was sampled on a 256 x 256 x 256 regular grid where each voxel was about 1 mm^3. The segmentation process gave a file (aseg.mgz) containing an index for each voxel (x,y,z) indicating which brain structure the voxel belonged to, with the indices defined in the file FreeSurferColorLUT.txt (where index 9 is Left-Thalamus, 11 is Left-Caudate, etc.).
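For reference, one possible way to read such a segmentation in Python is via nibabel, which can load FreeSurfer's .mgz format; the label value used here (11 for Left-Caudate) is taken from FreeSurferColorLUT.txt, and the file name is the one mentioned above:

```python
import nibabel as nib
import numpy as np

aseg = nib.load("aseg.mgz")                          # FreeSurfer segmentation volume
labels = np.asarray(aseg.dataobj, dtype=np.uint8)    # 256 x 256 x 256 label indices

LEFT_CAUDATE = 11                                    # index from FreeSurferColorLUT.txt
caudate_mask = labels == LEFT_CAUDATE                # binary mask for one structure
print(labels.shape, int(caudate_mask.sum()), "voxels labeled Left-Caudate")
```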

In order to visualize these structures I decided to experiment with Direct Volume Rendering (DVR). Another alternative would have been to go with marching cubes, but I decided against it since I felt DVR gave more flexibility for use in other contexts.

For me the conceptual challenge with applying DVR to segmented data was how to transform the indices into something that could be smoothed and interpolated properly. In the end I decided to convert the segments of interest into binary data sets, each of which could be smoothed using 3D convolution. One problem with this approach is that quite large amounts of data need to be stored. In particular, the original segmented data set is 256 x 256 x 256 8-bit unsigned chars, totaling about 17 MB. When convolving k=16 such individual segments (8 brain structures for each of the two brain hemispheres) and storing the results as float values, the result is a 256 x 256 x 256 x 16 dataset of about one GB.
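A minimal sketch of this preprocessing step, using SciPy's Gaussian filter as the 3D smoothing convolution (the choice of kernel and its width are assumptions, not necessarily what Pipedream uses):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def smooth_segments(labels, structure_ids, sigma=1.0):
    """Turn each label of interest into a binary volume and smooth it with a
    3D Gaussian, returning a float32 array of shape (X, Y, Z, k)."""
    out = np.empty(labels.shape + (len(structure_ids),), dtype=np.float32)
    for i, sid in enumerate(structure_ids):
        binary = (labels == sid).astype(np.float32)
        out[..., i] = gaussian_filter(binary, sigma=sigma)
    return out
```

With k=16 structures this yields the roughly one GB of float data mentioned above.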

So in principle, for each step I take along a ray originating from the eye into the dataset, I interpolate these k pre-convolved datasets separately. If one or more of the interpolated intensities is greater than a threshold value, e.g. 0.25, then I consider the segment with the greatest interpolated intensity as hit and apply the color assigned to that segment.
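Per ray sample this classification might look like the following sketch, where trilinear interpolation is done with scipy.ndimage.map_coordinates (order=1); the threshold is the 0.25 from the text, while the color lookup is a placeholder:

```python
import numpy as np
from scipy.ndimage import map_coordinates

def classify_sample(smoothed, pos, segment_colors, threshold=0.25):
    """Interpolate all k smoothed segment volumes at the continuous position
    `pos`; if the strongest one exceeds the threshold, return its color,
    otherwise None (no hit)."""
    coords = np.asarray(pos, dtype=np.float64).reshape(3, 1)
    intensities = np.array([
        map_coordinates(smoothed[..., i], coords, order=1)[0]   # trilinear sample
        for i in range(smoothed.shape[-1])
    ])
    best = int(np.argmax(intensities))
    return segment_colors[best] if intensities[best] > threshold else None
```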

A simple optimization is to store the voxelwise maximum over the k smoothed segments in a separate volume, and then use this to determine whether any segment is hit at all. This can save computation time since most voxels in the data set do not belong to any segment and can thus be ignored quickly. Only if this maximum volume indicates that some segment is hit does the algorithm proceed as before, interpolating all k individual segments at that position to determine which one.
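In code the optimization is little more than one extra precomputed volume and a cheap rejection test before the full classification; a sketch reusing `classify_sample` from above:

```python
import numpy as np
from scipy.ndimage import map_coordinates

# Precomputed once after smoothing: per-voxel maximum over the k segments.
# max_volume = smoothed.max(axis=-1)

def classify_sample_fast(smoothed, max_volume, pos, segment_colors, threshold=0.25):
    """Test the precomputed maximum volume first; only if it exceeds the
    threshold is the full per-segment classification performed."""
    coords = np.asarray(pos, dtype=np.float64).reshape(3, 1)
    if map_coordinates(max_volume, coords, order=1)[0] <= threshold:
        return None   # empty space: skip interpolating all k segments
    return classify_sample(smoothed, pos, segment_colors, threshold)
```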

The segments seen from the left front side of the head.


Finally, the process proceeds as usual by estimating a normal at the hit point and computing Phong shading. An image of size 1024x1024 takes about 2 hours to render on a relatively slow laptop computer.
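A sketch of that last step, estimating the normal by central differences on the hit segment's smoothed volume and applying a basic Phong model (the light and material parameters are made-up placeholders):

```python
import numpy as np
from scipy.ndimage import map_coordinates

def estimate_normal(volume, pos, eps=1.0):
    """Normal as the normalized negative gradient, estimated by central differences."""
    pos = np.asarray(pos, dtype=np.float64)
    grad = np.empty(3)
    for axis in range(3):
        offset = np.zeros(3)
        offset[axis] = eps
        hi = map_coordinates(volume, (pos + offset).reshape(3, 1), order=1)[0]
        lo = map_coordinates(volume, (pos - offset).reshape(3, 1), order=1)[0]
        grad[axis] = (hi - lo) / (2.0 * eps)
    normal = -grad
    return normal / (np.linalg.norm(normal) + 1e-12)

def phong(normal, view_dir, light_dir, color,
          ambient=0.2, diffuse=0.6, specular=0.2, shininess=32.0):
    """Basic Phong shading for a single light; all direction vectors are unit length."""
    n_dot_l = max(float(np.dot(normal, light_dir)), 0.0)
    reflect = 2.0 * n_dot_l * normal - light_dir
    spec = max(float(np.dot(reflect, view_dir)), 0.0) ** shininess
    return np.clip(np.asarray(color) * (ambient + diffuse * n_dot_l) + specular * spec, 0.0, 1.0)
```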

December 4, 2010

Pipedream

Pipedream is the name of my software project. I plan to use it for things that I consider cool, interesting or useful. This blog will document what I learn as Pipedream grows.