
What I am trying to do:

Make an empty 3D image (a .dcm in this case) with image direction [1,0,0; 0,1,0; 0,0,1]. Into this image I insert an oblique trajectory, which essentially represents a cuboid. Now I wish to insert a hollow hemisphere into this cuboid (the cuboid has all white pixels - a constant value; the hemisphere can have any value as long as it is distinguishable), so that it is aligned along the axis of the trajectory.

What I am getting

So I used the general formula for a sphere:

x = x0 + r*cos(theta)*sin(alpha)
y = y0 + r*sin(theta)*sin(alpha)
z = z0 + r*cos(alpha)

where 0 <= theta <= 2*pi and 0 <= alpha <= pi/2 for a hemisphere.

What I tried to achieve this

  1. First, I thought to just get the rotation matrix between the image coordinate system and the trajectory coordinate system and multiply all points on the sphere by it. This didn't give the desired result: the rotated sphere came out scaled and translated. I don't understand why this happened, as I checked the points myself.

  2. Then I thought: why not make a hemisphere out of a sphere that is cut by a plane parallel to the y,z plane of the trajectory coordinate system. For this, I calculated the angles between the x, y and z axes of the image and those of the trajectory. Then I generated hemisphere coordinates for theta_rotated and alpha_rotated. This didn't work either: instead of a hemisphere, I got a rather weird-looking sphere.

This is without any transformations: [Image: untransformed points]

This is with the angle transformation (second try): [Image: angle-transformed points]

For reference,

The trajectory coordinate system :

[-0.4744, -0.0358506, -0.8553; -0.7049, 0.613244, 0.3892; -0.5273, -0.787537, 0.342;];

which gives angles:

x_axis angle: 2.06508 pi
y_axis angle: 2.2319 pi
z_axis angle: 1.22175 pi

Code to generate the cuboid

void getTrajectoryPoints(std::vector<Vector3d> &trajectoryPoints, Vector3d &target, Vector3d &tangent){
    double distanceFromTarget = 10;
    int targetShift = 4;
    // z_vector and y_vector are defined elsewhere (image z direction and
    // its cross product with the trajectory direction)
    target -= z_vector;
    target -= (tangent * targetShift);
    Vector3d vector_x = -tangent;
    y_vector = z_vector.cross(vector_x);
    target -= y_vector;
    Vector3d start = target - vector_x * distanceFromTarget;

    std::cout << "target = " << target << " start = " << start << std::endl;
    std::cout << "x " << vector_x << " y " << y_vector << " z " << z_vector << std::endl;

    // sweep height, width and depth to fill the cuboid with points
    double height = 0.4;
    while (height <= 1.6)
    {
        double width = 0.4;
        while (width <= 1.6){
            distanceFromTarget = 10;
            while (distanceFromTarget >= 0){
                Vector3d point = target + tangent * distanceFromTarget;
                trajectoryPoints.push_back(point + (z_vector * height) + (y_vector * width));
                distanceFromTarget -= 0.09;
            }
            width += 0.09;
        }
        height += 0.09;
    }
}

The height and width are incremented in steps of the voxel spacing.

Does anyone know how to achieve this, and what am I doing wrong? Let me know if you need any other info.

EDIT 1

After the answer from @Dzenan, I tried the following:

target = { -14.0783, -109.8260, -136.2490 }, tangent = { 0.4744, 0.7049, 0.5273 };

typedef itk::Euler3DTransform<double> TransformType;
TransformType::Pointer transform = TransformType::New();

// center of rotation -- set to the target point
double centerForTransformation[3] = { -14.0783, -109.8260, -136.2490 };

const double pi = std::acos(-1);
try {
    transform->SetRotation(2.06508 * pi, 1.22175 * pi, 2.2319 * pi);
    // transform->SetMatrix(transformMatrix);
}
catch (itk::ExceptionObject &excp) {
    std::cout << "Exception caught ! " << excp << std::endl;
    transform->SetIdentity();
}
transform->SetCenter(centerForTransformation);

Then I loop over all the points in the hemisphere and transform them using,

point = transform->TransformPoint(point);

Although I'd prefer to pass the matrix equal to the trajectory coordinate system (mentioned above), that matrix isn't orthogonal and itk won't accept it. It must be said that I used the same matrix for resampling this image and extracting the cuboid, and that worked fine. Hence, I computed the angles between x_image and x_trajectory, y_image and y_trajectory, and z_image and z_trajectory, and used SetRotation instead, which gives the following (still incorrect) result: [Image: transformed sphere]

EDIT 2

I tried to get the sphere coordinates without actually using the polar coordinates. Following discussion with @jodag, this is what I came up with:

Vector3d center = { -14.0783, -109.8260, -136.2490 };

double height = 0.4;
while (height <= 1.6)
{
    double width = 0.4;
    while (width <= 1.6){
        double distanceFromTarget = 5;
        while (distanceFromTarget >= 0){
            // Make sure the point lies along the cuboid direction vectors
            Vector3d point = center + tangent * distanceFromTarget
                           + (z_vector * height) + (y_vector * width);
            // distance of the point from the sphere center
            double x = (point - center).norm();
            if ((x <= 0.5) && (point[2] >= -136.2490))
                orientation.push_back(point);
            distanceFromTarget -= 0.09;
        }
        width += 0.09;
    }
    height += 0.09;
}

But this doesn't seem to work either.

This is the output: [Image: sphere]

jodag: Could you post the code on how you rasterize the cuboid?
Rick M.: So I generated it in C++ and plotted the points in MATLAB. I'll add it as an edit.
jodag: I believe what you want to do is perform the inverse transformation on your coordinates, then draw a hemisphere in that coordinate system. I'll post some MATLAB example code in a moment.
Rick M.: I am actually iterating, as you can see from the code, but I guess MATLAB code would be fine too. Let me know if you need any other info.

2 Answers


I'm a little confused about your first plot, because the points being displayed don't appear to be defined in image coordinates. The example I'm posting below assumes that voxels are defined in the image coordinate system.

The code below transforms the voxel coordinates of the image space into the trajectory space using an inverse transformation. It then rasterizes a 2x2x2 cube centered at the origin and a radius-0.9 hemisphere cut by the xy plane.

Rather than continuing a long discussion in the comments I've decided to post this. Please comment if you're looking for something different.

% define trajectory coordinate matrix
R = [-0.4744, -0.0358506, -0.8553;
     -0.7049, 0.613244, 0.3892;
     -0.5273, -0.787537, 0.342]

% initialize 50x50x50 3d image
[x,y,z] = meshgrid(linspace(-2,2,50));
sz = size(x);
x = reshape(x,1,[]);
y = reshape(y,1,[]);
z = reshape(z,1,[]);
r = ones(size(x));
g = ones(size(x));
b = ones(size(x));

blue = [0,0,1];
green = [0,1,0];

% transform image coordinates to trajectory coordinates
vtraj = R\[x;y;z];
xtraj = vtraj(1,:);
ytraj = vtraj(2,:);
ztraj = vtraj(3,:);

% rasterize 2x2x2 cube in trajectory coordinates
idx = (xtraj <= 1 & xtraj >= -1 & ytraj <= 1 & ytraj >= -1 & ztraj <= 1 & ztraj >= -1);
r(idx) = blue(1);
g(idx) = blue(2);
b(idx) = blue(3);

% rasterize radius 0.9 hemisphere in trajectory coordinates
idx = (sqrt(xtraj.^2 + ytraj.^2 + ztraj.^2) <= 0.9) & (ztraj >= 0);
r(idx) = green(1);
g(idx) = green(2);
b(idx) = green(3);

% plot just the blue and green voxels
green_idx = (r == green(1) & g == green(2) & b == green(3));
blue_idx = (r == blue(1) & g == blue(2) & b == blue(3));

figure(1); clf(1);
plot3(x(green_idx),y(green_idx),z(green_idx),' *g')
hold('on');
plot3(x(blue_idx),y(blue_idx),z(blue_idx),' *b')

axis([-2,2,-2,2,-2,2]);
axis('equal');
axis('vis3d');

[Image: resulting plot]


You can generate your hemisphere in some physical space, then transform it (translate and rotate) using e.g. a RigidTransform's TransformPoint method. Then use the TransformPhysicalPointToIndex method of itk::Image. Finally, use the SetPixel method to change the intensity. With this approach you will have to control the resolution of your hemisphere so that it fully covers all the voxels in the image.

An alternative approach is to construct a new image in which you create your hemisphere, then use a resample filter to create a transformed version of the hemisphere in an arbitrary image.