Cooperative Localization Using Stochastic Constraints
30 Oct 2014
Research video by 2nd Lt Justin Soeder
Video Transcript
My name is Justin Soeder. I'm a Second Lieutenant in the United States Air Force, and currently I'm a master's student at the Air Force Institute of Technology. My project was a follow-on from a previous student's work. What he found was that a single vehicle by itself wasn't very good at estimating its own position. So our idea was: if we have two vehicles cooperatively navigating in an environment, what does that buy us? Can we do navigation more precisely?

It's pretty simple. The AR.Drone quadrotor vehicle that we're using comes with a custom software development kit that lets you take the telemetry stream, such as the video and the navigation data, and process it on a computer that's communicating with the vehicle wirelessly. Once that information is there, what I do is synchronize the two videos from the leading and trailing cameras, and then I have to estimate where those two cameras are in relation to each other. From there, I can estimate the 3D locations of features, track those features, and then estimate the motion of the two vehicles in a global reference frame.

One of the scenarios we could look at is a building that is deemed too dangerous for personnel to enter. What we could do with this system is put it in the building and have it autonomously navigate the building, maybe find some target areas of interest, maybe some structural deficiencies. The core task this thing needs to do is reliably navigate itself around the hallways.

I could see it easily developing into a swarm context, where you have not just two vehicles but three, four, and upwards. It could even be extended to outdoor navigation, and really the only limitation would be how you pick out features outdoors versus indoors.
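The pipeline described above, synchronizing the two camera streams, estimating the cameras' relative pose, and triangulating 3D feature locations, maps closely onto standard two-view geometry. The following is a minimal sketch of the relative-pose and triangulation steps using OpenCV; it is not the thesis implementation. The intrinsics matrix `K`, the image file paths, and the detector settings are all placeholder assumptions.

```python
import numpy as np
import cv2

# Hypothetical pinhole intrinsics for a forward-facing camera (placeholder values).
K = np.array([[560.0,   0.0, 320.0],
              [  0.0, 560.0, 180.0],
              [  0.0,   0.0,   1.0]])

# Placeholder paths: one time-synchronized frame from each vehicle's camera.
img_lead = cv2.imread("leading_frame.png", cv2.IMREAD_GRAYSCALE)
img_trail = cv2.imread("trailing_frame.png", cv2.IMREAD_GRAYSCALE)

# Detect and match features between the two views.
orb = cv2.ORB_create(nfeatures=1000)
kp1, des1 = orb.detectAndCompute(img_lead, None)
kp2, des2 = orb.detectAndCompute(img_trail, None)
matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(des1, des2)
pts1 = np.float32([kp1[m.queryIdx].pt for m in matches])
pts2 = np.float32([kp2[m.trainIdx].pt for m in matches])

# Estimate the relative pose between the two cameras from the essential
# matrix, with RANSAC rejecting bad matches.
E, inlier_mask = cv2.findEssentialMat(
    pts1, pts2, K, method=cv2.RANSAC, prob=0.999, threshold=1.0)
_, R, t, pose_mask = cv2.recoverPose(E, pts1, pts2, K, mask=inlier_mask)

# Triangulate 3D feature locations in the leading camera's frame.
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])  # leading camera at origin
P2 = K @ np.hstack([R, t])                         # trailing camera, relative pose
pts_h = cv2.triangulatePoints(P1, P2, pts1.T, pts2.T)
pts_3d = (pts_h[:3] / pts_h[3]).T                  # dehomogenize to Nx3 points
```

Note that the translation recovered this way is only known up to scale; in practice, something like the navigation data from the telemetry stream would be needed to fix the metric scale before the feature positions can be expressed in a global reference frame.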