To trace the path of a neuron, we need to determine the direction the neuron is "going". To find this direction, we implemented two methods. This page describes the first one we tried: the Kernel Response Lookup Table (KRLT) method.

The procedure of this method can be divided into the five steps described below:

        1. Load the original image;

        2. Extend the size of the image;

        3. Rotate the extended image;

        4. Create the detection kernel, cross-correlate it with each rotated image, and store the results in matrices of the same size;

        5. Rotate the result matrices back.

Basic Idea

The basic idea of this method is to apply a line-detection kernel at every position of the image, in every direction we need. It is practically impossible to test the kernel at all 360 degrees, and it would also be useless to do so. We therefore decided to use 16 directions for the kernel, dividing the 360 degrees into 16 parts of 22.5 degrees each. The directions are plotted below (Figure 2.1):

Figure 2.1_16 Directions

Comparing this with one of the original images (Figure 2.2), we believe that 16 directions are enough to capture the direction of a neuron, and you will see below that this assumption turns out to be true.

Figure 2.2_One of the Original Images

Kernel Design

We design a kernel that, after being cross-correlated with the image, returns a numeric result telling how "likely" it is that the neuron runs in a certain direction.

The kernel we use is shown below (Figure 2.3):

Figure 2.3_the Kernel

The kernel is a matrix that looks like the figure above: 4 columns, each equal to [-1 -2 0 2 1 … 0 … 1 2 0 -2 -1]. In the figure the "…" part is left empty, but in practice it is filled with zeros to fit the actual data images. We cross-correlate this kernel with the image and store the result at the location of the kernel's center. The design idea comes from (Jennifer, Lan, Diane, 2013): with this kind of kernel, the numeric result is largest when the neuron runs in the same direction as the kernel, which here means horizontally. It is easy to see that if the neuron is slanted, the result becomes small, because some points of the neuron are correlated with zero or negative weights. The neurons in our data images are 3 to 4 pixels thick, which fits the thickness of this kernel well. For the same reason, we chose the kernel length to be 4, so that the kernel does not span so much of the neuron that the neuron changes direction within it.
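As a rough sketch in Matlab (the gap width is a free parameter here, not a value taken from the actual program):

    % Sketch of the kernel construction. nGap is a placeholder for the
    % number of zeros in the "..." part; in practice it is chosen to fit
    % the 3-to-4-pixel neuron thickness in the data images.
    nGap       = 3;                                          % assumed value
    colProfile = [-1 -2 0 2 1, zeros(1, nGap), 1 2 0 -2 -1]';
    kernel     = repmat(colProfile, 1, 4);                   % 4 identical columns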

Rotate the Image or the Kernel?

After deciding on the kernel, we had to consider whether to rotate the image into the 16 directions or to rotate the kernel. In this method we decided to rotate the image, because the kernel is very small and is represented in Matlab as a matrix; rotating it gives one of two possible results. The first is shown in the figure below (Figure 2.4):

Figure 2.4_One Possible Result of Rotation Kernel

In this result, the kernel has been rotated 22.5 degrees clockwise. However, it has become too large and too different from the original kernel, which would bias its cross-correlation result compared with the kernels for the other directions.

The other possible result is obtained by forcing the rotated kernel to keep exactly the same size. But from the first result we can already tell that the kernel would then suffer heavy distortion; such a small matrix cannot be rotated cleanly.

Because of these two results, we decided to rotate the images instead: they are much bigger and therefore easier to rotate.
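The size problem is easy to reproduce with Matlab's built-in imrotate (using an all-ones stand-in for the kernel's footprint; imrotate requires the Image Processing Toolbox):

    % Rotating a small matrix with the default 'loose' bounding box
    % enlarges it, which is the bias problem described above.
    K   = ones(13, 4);          % stand-in for the kernel's footprint
    K22 = imrotate(K, 22.5);    % 22.5 degrees counter-clockwise
    size(K22)                   % noticeably larger than size(K)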

Extend the Image

Now we want to rotate the image, but we face another problem: because matrices in Matlab have a fixed size, rotating the image cuts off some parts of it, as in (Figure 2.5):

Figure 2.5_Image Being Cut Off (Jennifer, Lan, Diane, 2013)

The figure above actually shows an image in the frequency domain, but the situation is the same for a real image. To deal with this problem, we extend the size of the original image so that it looks like (Figure 2.6):

Figure 2.6_Image Not Being Cut Off (Jennifer, Lan, Diane, 2013)

Because our data images are square, we simply extend the side length of the image by a factor of sqrt(2) to implement this.
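A minimal sketch of this extension step, assuming the image is square and is padded symmetrically with zeros (black):

    % Pad the n-by-n image to about sqrt(2)*n per side, so that no
    % rotation can push any part of it outside the matrix.
    n   = size(img, 1);                  % original side length
    m   = ceil(sqrt(2) * n);             % extended side length
    pad = floor((m - n) / 2);            % offset of the original image
    extended = zeros(m, m);
    extended(pad+1 : pad+n, pad+1 : pad+n) = img;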

Rotate the Image

The first rotation method we tried was to transform the image with the Fourier Transform, rotate it in the frequency domain, and then transform it back. The reason is that rotating the image directly may lose information and introduce distortion; moreover, in (Jennifer, Lan, Diane, 2013), Fourier rotation is shown to be the best of four rotation methods: nearest-neighbor, bilinear, bicubic, and Fourier.

So let us walk through it. First, we apply Matlab's built-in function "fft2" to perform the fast Fourier Transform on the two-dimensional image. The result is shown in (Figure 2.7):

Figure 2.7_Fourier Transformed Image
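A minimal sketch of this step (the fftshift, which moves the zero frequency to the center for display and for the mask below, is our assumption about how the figure was produced):

    % 2-D FFT of the extended image, zero frequency moved to the center.
    F = fftshift(fft2(extended));
    figure, imagesc(log(1 + abs(F))), colormap gray;  % magnitude display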

Then, for the same cut-off reason as above, and because different rotation angles would lose different amounts of information, we put a mask on the Fourier-transformed image. In this way we give up the same amount of less-important high-frequency information in every direction, so no direction is biased against another. The masked image is shown in (Figure 2.8):

Figure 2.8_Masked Image
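A sketch of the masking step, assuming a centered circular mask whose radius is the largest circle that fits in the spectrum (the exact radius used is not recorded here):

    % Keep only the frequencies inside a centered circle; the corner
    % content that different rotations would lose unequally is discarded
    % equally for every direction.
    [X, Y]  = meshgrid(1:m, 1:m);
    c       = (m + 1) / 2;                        % center of the spectrum
    mask    = (X - c).^2 + (Y - c).^2 <= (m/2)^2;
    Fmasked = F .* mask;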

Now we rotate the masked image into all 16 directions. For one direction, say 22.5 degrees, we compute the rotation matrix for that angle, shown as (Formula 2.1):

Formula 2.1_Rotation Matrix
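For reference, the standard counter-clockwise rotation of a point (x, y) by an angle θ is:

    x' = x cosθ - y sinθ
    y' = x sinθ + y cosθ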

So for every point (x, y) on the masked image, we calculate the destination point (x’, y’) in the rotated image using this rotation matrix.
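A minimal sketch of this forward mapping, assuming rotation about the image center and using the "floor" rounding discussed below (the variable names are ours):

    % Forward mapping: send each source pixel to its rotated position.
    theta = 22.5 * pi / 180;
    R     = [cos(theta) -sin(theta); sin(theta) cos(theta)];
    c     = (m + 1) / 2;                      % rotate about the center
    rot   = zeros(m, m);
    for y = 1:m
        for x = 1:m
            p  = R * [x - c; y - c];          % rotated coordinates
            xp = floor(p(1) + c);             % force onto the integer grid
            yp = floor(p(2) + c);
            if xp >= 1 && xp <= m && yp >= 1 && yp <= m
                rot(yp, xp) = Fmasked(y, x);  % some targets are never hit
            end
        end
    end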

At this point we met a problem that we could not solve before the project deadline, so we changed the rotation method in order to get a result for this part.

Basically, the problem is as follows. Here are two pairs of rotation outputs, in the frequency domain and in the space domain (transformed back with Matlab's built-in function "ifft2"), for counter-clockwise rotations of 22.5 degrees and 90 degrees (Figures 2.9 to 2.12):

Figure 2.9_Rotation of 22.5 Degrees (Frequency Domain)
Figure 2.10_Rotation of 22.5 Degrees (Space Domain)
Figure 2.11_Rotation of 90 Degrees (Frequency Domain)
Figure 2.12_Rotation of 90 Degrees (Space Domain)

As you can see, the space-domain image for 22.5 degrees shows serious aliasing, while the image for 90 degrees does not. This aliasing is unacceptable for our program: since the rotated image is aliased, the cross-correlation results are wrong, and the final output image cannot be used to trace the neuron's path with this rotation method.

As for the cause of the aliasing, we think it is this: because we use a rotation matrix to rotate the image, the computed rotated position (x', y') is not guaranteed to be an integer; in general it is a floating-point number. On the matrix grid, however, positions must be integers, so we applied the "floor" function to x' and y' to get the new position. Because of this, some points of the rotated image receive no value at all: the values that should land on them are floored onto neighboring positions instead. This shows up in (Figure 2.9) as many black points, and in (Figure 2.10) as severe aliasing. For 90 degrees, by contrast, the rotation maps integer positions exactly onto integer positions (and, since our data image is square, back onto the same grid), so (Figure 2.11) has no black points like those in (Figure 2.9), and (Figure 2.12) shows no aliasing. So we believe we found the cause of the aliasing, but we could not fix it within the rotation-matrix approach, and we therefore decided to change our rotation method.

Rotate in Space Domain

Since we could not solve the problem above, we changed our rotation method: we now rotate the image directly in the space domain.

Using the same code as above, minus the fft2 and ifft2 parts, we rotate the images in the space domain. The output is shown in (Figure 2.13):

Figure 2.13_Space Rotated Images

You can see that the output is very good in the sense that there is no more aliasing, since everything stays in the space domain. However, except for the 90-, 180-, 270-, and 360-degree plots in the fourth column, the images still contain many black points. This is due to the same cause discussed before: because we "floor" the positions, some target positions are missed, their values having been floored onto nearby positions. On a large scale, though, we can still hope that the output will be useful.
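For reference, Matlab's built-in imrotate would avoid the black points entirely, because it uses inverse mapping with interpolation (every output pixel looks up a source value) instead of the forward mapping sketched above; we note it as an alternative, not as what our program does:

    % Inverse-mapped rotation with bilinear interpolation; 'crop' keeps
    % the output the same size as the input.
    rotated = imrotate(extended, 22.5, 'bilinear', 'crop');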

Calculate the Number Result and Rotate the Matrix Back

After that, we apply the kernel at every position of each rotated image and store the cross-correlation results in one matrix per direction. This gives 16 matrices, each the size of the extended image, which we then need to rotate back.
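A minimal sketch of this step for one direction; Matlab's filter2 performs exactly this cross-correlation and returns a result of the same size:

    % Cross-correlate the kernel with one of the 16 rotated images; the
    % response is stored at the position of the kernel's center.
    response = filter2(kernel, rotated, 'same');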

Applying the same method as before, we rotate them all back to the orientation of the original image. Since this step also uses "floor" to compute the new positions, it can introduce further data errors, and these errors may make the final output wrong at some points; indeed, some errors are visible in the final output.

Also, we must not forget to resize the matrices back to the size of the original image, knowing that all positions outside the original image should be black.
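A sketch of the resize, reusing the pad offset from the extension step (responseBack is our name for one rotated-back result matrix):

    % Undo the sqrt(2) extension: keep only the region corresponding to
    % the original n-by-n image.
    KRLT_resize = responseBack(pad+1 : pad+n, pad+1 : pad+n);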

These data matrices all hold values indicating "how likely the neuron is to run in this direction", and the data is hard to understand by looking at a single matrix. So we do not plot the data matrices here; we show their effect in the section below.

Final Output

Finally we get 16 matrices, each the size of the original image and each representing one direction. In our data, KRLT_resize1 stands for the direction of a counter-clockwise rotation of 22.5 degrees, KRLT_resize2 stands for 45 degrees, and so on up to KRLT_resize16, which stands for 360 degrees.

Therefore, for any position on the original image, after comparing its 16 values across the 16 KRLT matrices, we know in which matrix the value is largest, and the index of that matrix indicates the direction of the neuron path.
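A minimal sketch of this lookup, assuming the 16 matrices have first been collected into a single n-by-n-by-16 array:

    % KRLT = cat(3, KRLT_resize1, KRLT_resize2, ..., KRLT_resize16);
    [bestVal, bestDir] = max(KRLT, [], 3);   % strongest response per pixel
    directionDeg = bestDir * 22.5;           % counter-clockwise, in degrees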

We designed two ways to show the effect of our data. The first is to plot point directions broadly across the original image: for every three points along a row we pick one point, mark it with a blue star, and draw its direction as a short red line next to it, as shown in (Figures 2.14 and 2.15):

Figure 2.14_A Broad View of the Data’s Effect
Figure 2.15_A Specific View of the Data’s Effect

As shown in the figures, the directions follow the neuron's path well. Our second way of showing the effect of the data is to plot the directions only of the points likely to lie on the neuron path (selected simply by the value being greater than half), as shown in (Figures 2.16 and 2.17):

Figure 2.16_A Broad View of the Data’s Effect
Figure 2.17_A Specific View of the Data’s Effect

These directions also follow the neuron's path well. We can say that our assumption has been borne out, and that the function we wanted this part of the program to perform is implemented.
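A sketch of the first display method (the stride of 3 comes from the description above; the line length L and the plotting style are assumptions; the second method would simply skip points below the threshold):

    % Overlay sampled direction marks on the original image. Note that
    % the image y-axis points down, hence the minus sign below.
    L = 2;                                       % red line length (assumed)
    figure, imshow(img, []), hold on;
    for y = 1:3:n
        for x = 1:3:n
            th = bestDir(y, x) * 22.5 * pi/180;  % direction at this point
            plot(x, y, 'b*');                    % the point itself
            plot([x, x + L*cos(th)], [y, y - L*sin(th)], 'r-');
        end
    end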

Some Problems

In this part we still face some problems. One of them is that at some points the KRLT table outputs the wrong direction, as shown in (Figure 2.18):

Figure 2.18_Wrong Output Example

As shown in the image above, the directions at the center of the image are obviously wrong: they point toward nearby neurons rather than along the path. One possible reason is that our kernel is not good enough, so that when it is cross-correlated with the image it takes adjacent neuron paths into account and scores their direction higher than the right one. Another possible reason is the one mentioned before: due to the "floor" function, some points are missing values in some KRLT matrices, so the KRLT matrices sometimes miss the right direction and output a wrong answer.

Another problem, which used to be a problem but is not one anymore, is the running time. Previously this part of the program was very slow: it took more than 40 minutes to process a single image. We found that this was due to a poor choice of functions and data structures. To make the code shorter, we had used the function 'eval' inside for loops to process the differently named variables 'part_1', 'part_2', up to 'part_16', and this took a lot of time. Now we simply write out the same code 16 times, which speeds up the program but makes it too long to read comfortably.
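For reference, an alternative that avoids both 'eval' and 16 copies of the same code is to index the data instead of naming it, for example with a cell array (a sketch, not what our program currently does; rotatedImgs is a hypothetical cell array holding the 16 rotated images):

    % One loop, one code path: the k-th image and the k-th result are
    % addressed by index rather than by a constructed variable name.
    parts = cell(1, 16);
    for k = 1:16
        parts{k} = filter2(kernel, rotatedImgs{k}, 'same');
    end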

We also suspected that the rotation might be slowing things down. We tried two ways of rotating: one uses the rotation matrix to compute the destination position, while the other computes tan, sin, and cos for each point. The two differ very little in speed, which means the rotation is not the cause of the problem.
