September 14, 2016
For the week of 9/5/16, our group met on Tuesday to discuss topics of interest, primarily robotics and programming geared toward measuring markers in and on the body, such as using computer vision or spectrophotometry to track some type of marker. On 9/9/16, we met with Dr. Tiezhi Zhang, who suggested that we develop technology to improve the current VisionRT system for tracking a patient's body orientation during CT scans. We then met with Dr. Bernard Miller on 9/10/16 to discuss methods of noninvasive biomarker measurement, including spectrometry and spectrophotometry. A meeting with Mike Sabo was planned for 9/14/16.
September 26th, 2016
During the week of 9/26/16, we researched surface reconstruction articles and source code to guide us before we attempt basic code for the Xbox Kinect, which will be picked up today from Dr. Zhang. In particular, we want to test connectivity and functionality of code with the Xbox Kinect before we begin developing the mathematics needed for our product. We also retrieved Dr. Zhang's computer vision textbooks, which had been lent to a different senior design group. These textbooks will help us understand the basics of computer vision and will take some time to read. We plan to use Git to facilitate merging of code between the three of us as we begin to work on our project.
October 6th, 2016
This week we met with Dr. Zhang to pick up his Xbox Kinect as well as a different 3D scanning system, complete with software and testing apparatus. We discussed a bit further how to start, and he recommended that we try the 3D scanner on a simple object first to get an idea of how the laser projection grid refraction works to image an object. In preparation for the preliminary report, we researched the market feasibility of our option further and obtained data about differing radiotherapy options in general, as well as our market competitors. We are still trying to decide on a specific Kinect library to use, and have set up intermediate access control to our code on github.com. Also, after reading through some papers about interfacing direct camera images into MATLAB or other scripting languages, we determined that using MATLAB directly with raw image data requires too much overhead and transformation to be viable for us. Instead, we would rather use an intermediary program or a different language.
October 21st, 2016
This week we have worked to develop the basic foundation of our webpage/designsafe. Peter developed the overall frame and wrote the mission statement and summary for our group. Eric worked to develop the contact page and compiled all weekly reports to be posted on the website. Additionally, we have been researching different source code libraries for initial setup and development of Kinect surface reconstruction. Currently, we expect that the majority of the code will be in C++. The primary source code that we have examined outputs an RGB value for each pixel during each frame, so one potential problem for our system is the variation in output color of the projected grid on different colored surfaces. We plan to hook up the Kinect and begin looking at the data outputs from this source code as soon as possible to determine whether it is usable for our purposes.
[1] http://research.microsoft.com/en-us/um/people/pkohli/papers/uist_2011.pdf
[2] https://github.com/GerhardR/kfusion
October 28th, 2016
Peter Kim finished up the design on Weebly and added images and our preliminary project scope information to the website. Eric Chao filled in the Gallery and Contact team pages, and filled in some text. Tommy Du tested the site.
Dr. Silva suggests that a higher resolution camera be used rather than the Kinect in order to achieve the error requirement of +/- 1 mm or +/- 1 degree on patient location and angle of gantries. To improve resolution, we will have to look at various methods and test the precision of the Kinect to attempt to find an optimal solution.
This week we focused on creating the website. Peter Kim worked on the initial design, and then we consolidated to upload images, work on formatting, and put contact information in. Tommy Du tested the site, and had also presented last week with the preliminary report. We took the Kinect v1 sensor that Dr. Zhang gave us to a lab and tried connecting it to a computer. There are multiple options we can choose from, and we decided to try an open-source library by Daniel Shiffman (http://shiffman.net/p5/kinect/), which suggests using an IDE called Processing.
Sources: Chang, David, et al. “Linac Quality Assurance.” Basic Radiotherapy Physics and Biology. Springer, 23 June 2014. 151-154.
November 4th, 2016
This week our team convened to lay out an initial timeline for interfacing the Kinect. Given that the Kinect has depth, infrared, and color reception capabilities, we plan to start our data collection with the depth camera and eventually proceed to color to enable feature mapping. The open-source IDE we are using, Processing, is Java-based, and we plan to continue initial prototyping in this environment. However, Shiffman's site also referenced alternate IDEs such as openFrameworks or Cinder that interface with the Kinect through C++, which could give us improved speed and data processing in later phases.
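As a concrete starting point, below is a minimal depth-stream sketch. This is a sketch only, assuming the Kinect (v1) class and its initDepth()/getDepthImage() methods from Shiffman's Open Kinect for Processing library; exact names should be verified against the library documentation.

```java
// Minimal Processing sketch for viewing the Kinect v1 depth stream.
// Assumes Shiffman's Open Kinect for Processing library is installed.
import org.openkinect.processing.*;

Kinect kinect;

void setup() {
  size(640, 480);
  kinect = new Kinect(this);  // connect to the first Kinect v1 found
  kinect.initDepth();         // start the depth stream
}

void draw() {
  // Display the raw depth data mapped to a grayscale image
  image(kinect.getDepthImage(), 0, 0);
}
```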
November 11th, 2016
This week, Tommy examined the different Kinect methods available for data collection, namely RGB, depth, and infrared (IR). Using a static scene, Tommy recorded data and determined the range of values for each type of data collection over a few frames of video in order to qualitatively examine the consistency and precision of each recording method. RGB seemed to have the least variation, followed by depth, and lastly IR. Additionally, Tommy and Peter discussed whether a projected grid is necessary for surface reconstruction, given that depth alone seems to potentially provide all the necessary data. Moreover, the Kinect does not have an attached projector for such a grid, so that method would increase the size and cost of the device. All options will be compared in a Pugh chart once we believe we have developed a sufficient list of alternatives.
The Kinect interfaces correctly through Peter Kim's laptop. The output data includes RGB video, depth-map video, and IR video, but these cannot be recorded simultaneously. Analysis of this data using surface reconstruction mathematics will be performed in the future, along with accuracy testing to determine the error in position and angle. The depth-map video is nonfunctional at a range of approximately 0.5 m or less from the camera; the mount may need to be adjusted to maximize the distance between the patient and the camera to resolve this issue.
Today, we connected the Kinect that Dr. Zhang gave us to one of the Urbauer terminals. Following one of the open-source computer vision libraries for the Kinect v1 sensor (http://shiffman.net/p5/kinect/), we tried installing Processing (an IDE with access to the library) and opening the Kinect video stream. We encountered issues with admin privileges, so we will retry on our home terminals.
November 18th, 2016
During this week, we researched people who could advise us on our other options. The Kinect seems the most viable, and we have the most access to it, so we have started interfacing with it. We decided to ask Dr. Yasutaka Furukawa for help with a manually built system that can accomplish the same thing. We also began compiling our Progress Report, but decided that some of our options were not physically accessible to us, so we asked Dr. Silva for additional input as well.
December 2nd, 2016
On December 1st our group reconvened to continue fleshing out the Progress paper. Peter constructed a new design timeline (Gantt chart) for the Spring semester and Tommy worked on solution analysis for the mount by making a Pugh chart. The team continued to analyze solutions and estimated the cost of 3D printing the mount. The team then constructed a new budget and continued finishing the paper.
This week our group met in Urbauer to outline and flesh out the Progress Report paper. We continued research into alternate avenues for solutions within the problem space, and continued analysis of these alternatives. Additionally, Tommy and Peter worked to design the mount that will interface between the Kinect and Linac machine. Eric focused on design alternatives, Pugh chart analysis, and customization.
January 27th, 2017
Peter, Eric, and Tommy met with Dr. Moran to discuss the scope of the overall project. Following our correspondence with Professor Furukawa, the group decided that it was too difficult to implement a fully computerized method of surface reconstruction using a Kinect without enough resolution to detect minor perturbations. As a result, we met with Dr. Moran to find a reasonable method of constructing the prototype within the time span of a semester using a simplified computational technique. After discussion, Dr. Moran suggested using 4 fiducials as tracking points to define the axes of the space. By finding the direction and length between different points, we can characterize the x and y directions with unit vectors, and cross these two vectors to return the z direction. Following this initial calibration procedure, if the patient moves, deformation of the skin shifts the point locations, which will be tracked and characterized in real time to return an output command to the couch that adjusts the patient's position as necessary.
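A rough illustration of this calibration step follows (the class and helper names are ours, not a settled design): the x and y unit vectors come from differences between fiducial positions, and z is their cross product.

```java
// Illustrative sketch of the 4-fiducial axis calibration described above.
public class AxisCalibration {
    static double[] unit(double[] v) {
        double n = Math.sqrt(v[0]*v[0] + v[1]*v[1] + v[2]*v[2]);
        return new double[]{v[0]/n, v[1]/n, v[2]/n};
    }

    static double[] cross(double[] a, double[] b) {
        return new double[]{a[1]*b[2] - a[2]*b[1],
                            a[2]*b[0] - a[0]*b[2],
                            a[0]*b[1] - a[1]*b[0]};
    }

    // p0 -> p1 spans the x direction, p0 -> p2 spans the y direction
    static double[][] calibrateAxes(double[] p0, double[] p1, double[] p2) {
        double[] x = unit(new double[]{p1[0]-p0[0], p1[1]-p0[1], p1[2]-p0[2]});
        double[] y = unit(new double[]{p2[0]-p0[0], p2[1]-p0[1], p2[2]-p0[2]});
        double[] z = cross(x, y);  // z completes the right-handed frame
        return new double[][]{x, y, z};
    }
}
```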
Following the meeting with Dr. Moran, Peter, Eric, and Tommy discussed the implementation of the deformation computation method using fiducials. Peter and Tommy considered a method where the undeformed and deformed states are compared using distance computations and changes in vector length and angle. Eric suggested a method using rotation matrices to characterize each of the possible transformations that could occur: translation in x, y, z or rotation in x, z. The group decided that the most appropriate method for the project is the rotation matrix method. Additionally, the group made plans to meet Dr. Widder on Monday (1/30/17) to borrow some equipment for the prototype mounting device.
February 3rd, 2017
During this week, we spoke to Dr. Widder to discuss the construction of the mount for our Kinect. Under the premise that the demo at the end of the semester will be a proof of concept for the prototype, we have decided that two ring stands and two clamps to hold the Kinect in position should suffice for the purposes of our project. Additionally, we have examined the differences between the Kinect v1 and v2 in order to maintain high enough resolution and precision to use the vector transformation method of tracking patient movement. Consequently, we have determined that the v2 is the best choice and have found the lowest price available on Amazon for the Kinect v2 and a Windows adapter.
February 17th, 2017
This week Group 29 continued its research into the various approaches to image processing for achieving robust motion detection with the Kinect v2. In particular, Eric Chao researched MATLAB image processing documentation as well as Otsu's algorithm for reducing color variation. Tommy Du and Peter Kim looked into human mechanics and the various motion types and rotation categories that we will have to consider. Next week Group 29 will continue compiling references and will soon move on to applying and prototyping the algorithm.
The particular type of rotation depicted in the image below begins at the bottom of the spine, wrapping clockwise around the z axis. As the patient rotates toward the right, the markers on their left half will begin to stretch further apart while the markers on the right side will shift closer together. In terms of the axes, the x and y axes will indeed rotate clockwise; however, the shifted positions of the left and right markers will distort the axis-vector directions significantly. These will be the indicators used to identify and categorize this type of rotation.
Given four points that form a square centered at (0,0,0) and a camera that returns both RGB and depth values for each pixel, the main motion types can be characterized. Translation is the most straightforward of the types, since it concerns movement of the entire body in the same direction. Consequently, an equal shift in the x, y, and z components of each fiducial point would characterize a translation. If the component shift is not equal, there must be some sort of rotation at play. Uniform rotations about the longitudinal axis and the antero-posterior axis are equally simple to characterize using the form Ax = B, where x is a 3 by 4 matrix containing the three components of a point in each column, B is the 3 by 4 matrix containing the new positions of each fiducial marker, and A is a 3 by 3 rotation matrix relating old and new positions. For rotation about the longitudinal axis (the y axis for our purposes), the rotation matrix is A = [cos(a), 0, sin(a); 0, 1, 0; -sin(a), 0, cos(a)], where a is the angle of rotation to be computed (positive direction is counter-clockwise). For rotation about the antero-posterior axis (the z axis for our purposes), the rotation matrix is A = [cos(a), -sin(a), 0; sin(a), cos(a), 0; 0, 0, 1], with positive a being counter-clockwise [1]. Partial body rotations with a linearly increasing angle will be examined soon, but we have not yet determined a simple matrix for partial rotation. Additionally, we would like to add another form of partial rotation besides lateral flexion. This rotation is similar to the longitudinal axis rotation; however, the lower body remains stationary, and the upper body rotates from the base of the spine up to the head with linearly increasing angle a.
[1] Swokowski, E. W. Calculus with Analytic Geometry. Taylor & Francis, 1979.
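To make the Ax = B formulation concrete, the following small worked example (ours, for illustration only) rotates a point about the longitudinal (y) axis with the matrix above and then recovers the angle from the old and new positions:

```java
// Worked example of the rotation-matrix formulation above (y-axis rotation).
public class RotationDemo {
    static double[][] rotY(double a) {  // a in radians
        return new double[][]{
            { Math.cos(a), 0, Math.sin(a)},
            { 0,           1, 0          },
            {-Math.sin(a), 0, Math.cos(a)}
        };
    }

    static double[] apply(double[][] A, double[] p) {
        double[] q = new double[3];
        for (int i = 0; i < 3; i++)
            for (int j = 0; j < 3; j++)
                q[i] += A[i][j] * p[j];
        return q;
    }

    public static void main(String[] args) {
        double[] p = {1, 0, 0};  // fiducial starting on the x axis
        double[] q = apply(rotY(Math.toRadians(10)), p);
        // Recover the angle from positions projected into the x-z plane;
        // the sign follows the matrix convention used above.
        double a = Math.atan2(p[2], p[0]) - Math.atan2(q[2], q[0]);
        System.out.printf("recovered angle: %.1f degrees%n", Math.toDegrees(a));
    }
}
```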
When characterizing the possible unit vector movements that may occur as a person shifts on the moving stage, it is important to consider the human anatomy. Generally, translations would be characterized as a shifting of the entire body, resulting in an equal shift of all fiducials in a particular direction. Within the scope of the project we will primarily be considering two types of rotation: rotation about the longitudinal axis, and rotation about the antero-posterior axis. We assume that any rotation about the horizontal axis corresponds to the patient sitting up, and may easily be corrected through a vocal command directly to the patient. Rotation about the longitudinal axis will be considered in a simple form where the body rotates uniformly about the axis. A simple rotation matrix will be used to model uniform rotation [1]. Rotation about the antero-posterior axis must be considered in two ways: uniform rotation, and lateral flexion of the spine (rotation only in the upper body) [2]. The lateral flexion involves some stretching and compression of the computed fiducial vectors, so it may be difficult to characterize. For this project, we will assume that a lateral flexion involves uniform rotation of the entire spine for simplicity. Given that we plan to utilize solely point markers, it will be difficult to determine the exact curve of the back, so we plan to use vector length differences as the primary source of characterization for this movement.
[1] https://en.wikipedia.org/wiki/Rotation_matrix
[2] https://www.physio-pedia.com/Cardinal_Planes_and_Axes_of_Movement
Image Processing Documentation
http://stackoverflow.com/questions/23999205/detect-black-dots-from-color-background (use Otsu's algorithm to reduce color variation)
https://www.mathworks.com/help/images/examples/detect-and-measure-circular-objects-in-an-image.html
https://en.wikipedia.org/wiki/Otsu%27s_method
https://www.mathworks.com/help/vision/feature-detection-and-extraction.html
February 24th, 2017
This week, our group has run into some concerns with the time frame of our prototyping. The Kinect v2 that was ordered several weeks ago has been delayed and has no estimated shipping date. Consequently, we have cancelled the order and purchased from a different vendor; the unit is expected to arrive on Saturday. Additionally, we have further characterized some of the vector mathematics necessary for relocating the patient to the optimal spot for therapy. In detail, we are using a linear model for longitudinal rotation about the y axis, and we are assuming for lateral flexion that the spine curve is perfectly semicircular. We have made plans to meet with Dr. Zhang to discuss the project scope changes and return borrowed materials.
March 3rd, 2017
This week we received the Kinect adapter and sensor, and were able to implement multiple parts of the code to test initial point tracking. By pulling from code online and a lot of trial and error, we were able to get the Kinect camera to pull both color and depth information and, while it was running, to identify sections that were a certain color (known as blob detection). After trying some libraries, we decided that configuring an existing library for a simpler application would be the fastest route, since some techniques like Hough circle detection were more advanced than we needed. We also worked heavily on the V&V report.
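For reference, a stripped-down version of the color-threshold idea looks like the sketch below. This is an illustrative Processing fragment, not our production code; the target color and tolerance are placeholder values, and the camera frame is assumed to already be drawn to the canvas.

```java
// Naive color-threshold blob detection: find the centroid of all pixels
// close to a target marker color. Illustrative only; values are placeholders.
color target;
float tol = 60;  // per-channel tolerance

void setup() {
  size(640, 480);
  target = color(255, 0, 0);  // assume red fiducial markers
}

void draw() {
  // (camera frame assumed drawn to the canvas before this point)
  loadPixels();
  float sumX = 0, sumY = 0;
  int count = 0;
  for (int y = 0; y < height; y++) {
    for (int x = 0; x < width; x++) {
      color c = pixels[y * width + x];
      if (abs(red(c) - red(target)) < tol &&
          abs(green(c) - green(target)) < tol &&
          abs(blue(c) - blue(target)) < tol) {
        sumX += x; sumY += y; count++;
      }
    }
  }
  if (count > 0) {
    ellipse(sumX / count, sumY / count, 10, 10);  // mark the blob centroid
  }
}
```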
March 10th, 2017
The group met with our client, Dr. Zhang, today to discuss updates to the project scope following the V&V report. Specifically, we explained the complete redesign of the tracking method to the 4-fiducial matrix math and received input on whether such a method would be reasonable in clinical work. Dr. Zhang primarily voiced concerns with the approximation of tumor movement, suggesting that we add a fiducial directly above the tumor location to track the x and y directional movements. We intend to compare results between our original method and the additional fifth-fiducial method to see whether accuracy can be improved in this manner.
March 24th, 2017
This week, Group 29 met to do some coding and to examine source code that returns data in useful forms for computing a transformation matrix, which is used to construct instructions for realigning the tumor. In particular, because the depth camera and color camera have different resolutions despite an equivalent field of view, we used a simple weighted average to compute the depth value associated with each pixel of the color image. Additionally, we constructed the Moore-Penrose inverse, which will be used to return the transformation matrix at each collected frame, and verified its accuracy, since the data matrix, f, multiplied by the MP inverse should return an approximation of the identity matrix.
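A small check of the pseudoinverse property we verified could look like the following. This is a sketch assuming Apache Commons Math 3 (the library we settled on; see the April 7 entry) and a toy 3x4 data matrix standing in for f.

```java
// Verify that f * pinv(f) approximates the identity, as described above.
import org.apache.commons.math3.linear.*;

public class PseudoInverseCheck {
    public static void main(String[] args) {
        RealMatrix f = MatrixUtils.createRealMatrix(new double[][]{
            {1, 0, 0, 1},
            {0, 1, 0, 1},
            {0, 0, 1, 1}
        });
        // The SVD-based solver returns the Moore-Penrose pseudoinverse
        RealMatrix pinv = new SingularValueDecomposition(f).getSolver().getInverse();
        RealMatrix check = f.multiply(pinv);  // should be close to the 3x3 identity
        System.out.println(check);
    }
}
```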
March 31st, 2017
This week the group met and continued consolidating the program. There were slight issues with the program not actually starting its computations while the camera was running, because the matrix calculations began before the paper/patient model was set up properly. We circumvented the problem by giving the program states to run in, and also discretized some of the functions so that the marker points were accessible from the global program view. Basic distance calculations were also done with a ruler to see how close and how far the depth camera can work and calculate depth positions, and this gave us a framework for our mount dimensions.
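The state-gating fix can be pictured as follows (a schematic in Processing-style Java; the constant names are ours, not the actual program's):

```java
// Schematic of gating the matrix math behind explicit program states.
final int WAITING = 0, CALIBRATING = 1, TRACKING = 2;
int state = WAITING;

void draw() {
  if (state == WAITING) {
    // camera preview only; no matrix calculations until the model is set up
  } else if (state == CALIBRATING) {
    // capture reference fiducial positions, then begin tracking
    state = TRACKING;
  } else if (state == TRACKING) {
    // per-frame transformation-matrix computation happens only here
  }
}

void keyPressed() {
  if (key == ' ') state = CALIBRATING;  // operator confirms the setup is ready
}
```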
April 7th, 2017
The week of 4/3 - 4/7, Group 29 met and dealt with three primary tasks. First and foremost was the construction of the transformation matrix, which requires a matrix-related Java library to be imported to support SVD in our linear analysis of the problem. Additionally, a Prototype Verification timeline was agreed upon and generated, as well as an initial draft of our Software Flow Diagram. To effectively compute the transformation matrix, we first tried to use JAMA, a widely used Java linear algebra package, but to no avail (see "Import of Matrix-Related Java Libraries Continued"). We will instead be using the Apache Commons Math 2 and 3 libraries (we imported the 2.2 binary tar.gz file version).
April 14th, 2017
This week, Group 29 modified some of the mathematics for simpler decomposition of the affine transformation matrix into its components, and began testing the precision and accuracy of the algorithm. In particular, we decided to use the Kabsch algorithm for computing transformation matrices, which uses a difference of centroids to return the translation and assumes all other movements are simple rotations. Precision testing began by locating the optimum distance from the Kinect's camera for obtaining useful depth values (51 cm). Using printed dots a set distance apart, we converted pixel dimensions to metric lengths by counting the number of pixels between the dots. Preliminary tests on translation accuracy were also performed.
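For reference, the core of the Kabsch step looks roughly like the following sketch using Apache Commons Math 3. The centering and reflection-correction details follow the standard algorithm, not necessarily our exact code; the centroid difference between the two point sets (not shown) gives the translation.

```java
// Kabsch algorithm sketch: optimal rotation between two centered point sets.
import org.apache.commons.math3.linear.*;

public class KabschSketch {
    // Subtract the column-wise centroid from each row of an n x 3 matrix
    static RealMatrix center(RealMatrix m) {
        int n = m.getRowDimension();
        double[] c = new double[3];
        for (int j = 0; j < 3; j++) {
            for (int i = 0; i < n; i++) c[j] += m.getEntry(i, j);
            c[j] /= n;
        }
        RealMatrix out = m.copy();
        for (int i = 0; i < n; i++)
            for (int j = 0; j < 3; j++) out.setEntry(i, j, m.getEntry(i, j) - c[j]);
        return out;
    }

    // Rotation aligning point set p onto q (rows are corresponding fiducials)
    static RealMatrix rotation(RealMatrix p, RealMatrix q) {
        RealMatrix h = center(p).transpose().multiply(center(q));
        SingularValueDecomposition svd = new SingularValueDecomposition(h);
        RealMatrix u = svd.getU(), v = svd.getV();
        RealMatrix r = v.multiply(u.transpose());
        if (new LUDecomposition(r).getDeterminant() < 0) {  // correct reflections
            for (int i = 0; i < 3; i++) v.setEntry(i, 2, -v.getEntry(i, 2));
            r = v.multiply(u.transpose());
        }
        return r;
    }
}
```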
April 21st, 2017
This week our group focused on completing the project and program so that we could also complete verification testing. We finalized our setup and made a stable mount so that we could run our program continuously. We then updated the GUI and exported the code into an executable package so it could run more easily. We finished verification testing against the proposed design specs, compiled all of our results, and then discussed the viability of our tested measurements. Finally, we compiled all of our work into a report and practiced our final presentation.