My name is Jeremy Higbee. I’m a Captain in the United States Air Force and a computer engineering master’s student at the Air Force Institute of Technology. We came in with the idea of three-dimensional sensing and three-dimensional modelling capabilities from a very inexpensive sensor. The Kinect originally started as an accessory for the Xbox 360, but through some algorithms that were released recently, it’s now possible to generate full, very high quality three-dimensional models from a very inexpensive sensor. One application is the visual inspection we do in the United States Air Force post-flight to find all those (what we call) deviations, rather than relying on people who go through and inspect the entire surface of an aircraft with their eyes.

The algorithm is called KinectFusion. It consists of four main stages. First, it takes the information coming out of the Kinect, which is just the depth between the sensor and whatever object it’s hitting in its field of view, and converts it into three-dimensional points. Then it tracks, based on what it has seen in its field of view, where it thinks the camera is in relation to that scene. Next it combines the information from two or more frames; it says, “I was here, then I was here, and these are the points I saw at each position,” and it puts them all into a single global coordinate system so they can be averaged and aggregated to build up a three-dimensional model. The final stage is called “rendering” or “ray casting,” where it actually generates the view the user would see.

The idea of being able to accurately locate where you are in relation to the aircraft based on what you’re seeing is the advanced navigation concept.
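The first and third stages described above can be sketched in a few lines. This is a minimal illustration, not the actual KinectFusion implementation: the function names are made up for this example, and the camera intrinsics (fx, fy, cx, cy) are hypothetical placeholders rather than the Kinect’s real calibration.

```python
import numpy as np

def depth_to_points(depth, fx, fy, cx, cy):
    """Back-project a depth map (meters) into 3-D points in the camera frame.

    Sketch of the first stage: each pixel's depth reading becomes a point
    (X, Y, Z) via the pinhole camera model. The intrinsics are assumptions.
    """
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))  # per-pixel coordinates
    z = depth
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    return np.stack([x, y, z], axis=-1)  # shape (h, w, 3)

def to_global(points, R, t):
    """Sketch of the fusion step: carry one frame's points into the single
    global coordinate system using the tracked camera pose (rotation R,
    translation t), so points from many frames can be averaged together."""
    return points @ R.T + t
```

In the full algorithm the pose (R, t) comes from the tracking stage, which aligns each new frame against what has already been built, and the averaged points feed the final ray-casting stage that renders the model for the user.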
If I am seeing this profile of the aircraft, and I see these particular features on it, I must be standing approximately five feet away and 30 degrees off the nose. That kind of capability doesn’t exist right now. So the ability of something as inexpensive as a Kinect, which I can hold in my hand, throw in a box, and take with me down range, to accurately say, “I’m exactly here based on what I see,” is pretty impressive.