On the TecEdge

Posted Monday, March 10, 2008


By Maj Michael J. Mendenhall, AFIT/ENG

The TecEdge is an outside-the-fence experiment under the direction of the Air Force Research Laboratory’s Sensors Directorate (AFRL/RY). The intent of the effort is to foster the collaboration of government, contractors, and academia to accomplish a common research mission, which is to demonstrate a persistent surveillance capability using real data from an urban environment. The near-term goal is to prepare a demonstration for the Scientific Advisory Board visit in Fall 2008. Members of the team include government (AFRL/RYAT), contractors (Numerica, QBase, SAIC, WOLPERT, and others), and academia. The academic component is made up of Wright State University, the Ohio State University, and the Air Force Institute of Technology, with AFIT’s Department of Electrical and Computer Engineering (ENG) being the largest academic contributor. The TecEdge has proven to be a very effective collaboration environment that fosters faculty-student interaction, student-student interaction, and interaction between government and industry. The effects on student theses are profound as the TecEdge facility provides a set of low-level tools and processes that support the students and allow them to concentrate on their research projects.

AFIT’s current research thrust in support of the TecEdge mission is primarily in small target tracking. Specifically, four MS students are working in five areas: context-aided tracking (Capt Scott Pierce), change detection (Capt Thomas Fulton), feature-aided tracking (Maj Thomas Lenz), hyperspectral-augmented tracking (Capt Neil Soliman), and track prediction (also Capt Pierce). Combined, these projects address asymmetric threats in urban environments through the exploitation of persistent surveillance systems.

[Image: tracking system]

Context-aided tracking aims to create scene context to help reduce the number of errors generated during the change detection process. An image labeled with scene context identifies areas such as buildings, roads, and terrain. These labeled areas provide an effective means of increasing the probability of detecting true tracks while reducing the probability of falsely identifying tracks. Just as important, labeling effectively reduces the area in the image that the system must process, allowing the system to concentrate resources on other parts of the tracking task. The current TecEdge effort used a three-dimensional ladar model of the urban scene, the persistent surveillance platform’s flight path, and the camera look angles to perform ray tracing from the center of the camera’s field of view to the known road locations. Aggregating the hits and misses produced a probability of detection map that was then incorporated into the change detection portion of the tracker.
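
As a rough illustration of that aggregation step, the sketch below is a minimal example rather than the project’s actual code: it assumes the ray-trace results arrive as per-frame boolean hit/miss grids aligned to the road map and simply averages them into an empirical probability of detection map.

```python
import numpy as np

def build_pd_map(hit_miss_frames):
    """Aggregate per-frame ray-trace results into a probability of detection map.

    hit_miss_frames: iterable of 2-D boolean arrays, one per frame, where True
    means the ray from the camera reached the corresponding road cell (a hit)
    and False means it was blocked, for example by a building in the ladar model.
    Returns the fraction of frames in which each cell was visible, which serves
    as a simple empirical probability of detection.
    """
    stack = np.stack([np.asarray(f, dtype=float) for f in hit_miss_frames])
    return stack.mean(axis=0)

# Toy example: three frames over a 2x2 patch of road cells.
frame1 = np.array([[1, 1], [1, 0]], dtype=bool)
frame2 = np.array([[1, 0], [1, 0]], dtype=bool)
frame3 = np.array([[1, 1], [1, 1]], dtype=bool)
print(build_pd_map([frame1, frame2, frame3]))
# [[1.         0.66666667]
#  [1.         0.33333333]]
```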

Change detection is the “front end” of the tracking process. A change detection algorithm must compare a series of images and identify areas within the images that have changed. The goal is to detect “significant” changes while rejecting “insignificant” changes. This is a difficult task, as this component of the tracking system must detect motion while minimizing the number of false alarms. False alarms occur for a number of reasons, including building parallax (the apparent motion of a building relative to the ground as the camera moves), registration errors (not perfectly aligning two images), and platform disturbances (typically from vibration and turbulence). Applying the probability of detection map to the raw detections reduced the number of insignificant detections, and filtering the remaining detections based on moving-object characteristics reduced that number further. The change detection results were then fed to a tracking system for further processing.
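
How a probability of detection map can gate the detection step is easy to sketch. The toy example below stands in for the real change detection chain, assuming registered grayscale frames held as NumPy arrays: it thresholds a simple frame difference and then discards detections that fall in poorly observed cells.

```python
import numpy as np

def detect_changes(prev_frame, curr_frame, pd_map,
                   diff_threshold=25.0, pd_threshold=0.5):
    """Toy change detector gated by a probability of detection map.

    prev_frame, curr_frame: registered grayscale images (2-D arrays).
    pd_map: per-pixel probability of detection in [0, 1], same shape.
    Returns a boolean mask of pixels that both changed significantly and
    lie where the sensor geometry gives a reasonable chance of detection.
    """
    diff = np.abs(curr_frame.astype(float) - prev_frame.astype(float))
    changed = diff > diff_threshold   # raw "significant change" mask
    visible = pd_map > pd_threshold   # keep only well-observed pixels
    return changed & visible

# Toy example: a bright mover appears in an 8x8 frame, and a spurious change
# appears in a poorly observed corner (for example, due to building parallax).
prev_frame = np.zeros((8, 8))
curr_frame = prev_frame.copy()
curr_frame[6, 1] = 200.0   # real mover
curr_frame[0, 7] = 200.0   # artifact in an occluded area
pd_map = np.ones((8, 8))
pd_map[0, 7] = 0.1         # low probability of detection at the artifact
mask = detect_changes(prev_frame, curr_frame, pd_map)
print(np.argwhere(mask))   # only [[6 1]] survives the gating
```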

Feature-aided tracking is designed to address challenging issues such as image rotation, illumination variation, partial obscurations, close proximity between two or more targets of interest, and move-stop-move transitions. The concept is to extract a set of meaningful features that are (hopefully) unique to each object. These features then provide additional information to help locate and track a target. In the case of black-and-white video data, the task is relatively demanding because only a few features are available. The current research effort used two-dimensional image intensity histograms as a source of meaningful and unique features. The project was successful in handling image rotation and move-stop-move transitions, but did not fully resolve the other challenging tracking issues. To further improve target tracking, the system should extract additional features using other methods, such as hyperspectral measurements.
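
The exact histogram construction used in the thesis is not described here, but one plausible reading of a two-dimensional intensity histogram is a joint histogram over pixel intensity and a second derived channel such as local gradient magnitude. The sketch below builds such a feature for a target chip and compares chips with histogram intersection; the chip contents and bin counts are illustrative only.

```python
import numpy as np

def joint_histogram(chip, bins=16):
    """Joint (2-D) histogram over pixel intensity and local gradient magnitude.

    chip: grayscale target chip as a 2-D array with values in [0, 255].
    Returns a normalized bins x bins array (entries sum to 1).
    """
    chip = chip.astype(float)
    gy, gx = np.gradient(chip)
    grad = np.clip(np.hypot(gx, gy), 0, 255)
    hist, _, _ = np.histogram2d(chip.ravel(), grad.ravel(),
                                bins=bins, range=[[0, 255], [0, 255]])
    return hist / hist.sum()

def histogram_intersection(h1, h2):
    """Similarity in [0, 1]; higher means the two feature sets agree more."""
    return float(np.minimum(h1, h2).sum())

# Toy example: a rotated copy of a target chip keeps essentially the same
# histogram, while an unrelated pattern does not.
target = np.zeros((32, 32))
target[4:20, 10:26] = 200.0                        # bright block on dark ground
rotated = np.rot90(target)                         # simulated in-plane rotation
other = np.tile(np.linspace(0, 255, 32), (32, 1))  # unrelated ramp pattern
h_t, h_r, h_o = (joint_histogram(c) for c in (target, rotated, other))
print(histogram_intersection(h_t, h_r))   # close to 1.0
print(histogram_intersection(h_t, h_o))   # noticeably lower
```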

Hyperspectral-augmented tracking is a form of feature-aided tracking that aims to mitigate the difficult scenarios described above. The feature sets used here are the high-dimensional (on the order of 200 dimensions) spectral characteristics of the target. The project investigated the feasibility of using a multiple-target tracking system with hyperspectral image data and considered the applicability of existing hyperspectral image acquisition systems. Results demonstrated significant gains in tracking performance compared to a kinematic-only system. Furthermore, the work explored several novel ideas, including how to associate hyperspectral measurements with established tracks. In the overall target tracking system, once tracks are established, higher-level processing can extract other useful information.
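
One simple way to picture the association step (not necessarily the method used in the thesis) is to score each candidate detection’s spectrum against a track’s stored signature with the spectral angle, a standard hyperspectral similarity measure, and gate on a maximum angle. The sketch below uses made-up 200-band spectra and a hypothetical gating threshold.

```python
import numpy as np

def spectral_angle(sig_a, sig_b):
    """Spectral angle (radians) between two spectra.

    Small angles mean the two spectra have similar shape regardless of
    overall brightness, which makes the measure robust to illumination scale.
    """
    a = np.asarray(sig_a, dtype=float)
    b = np.asarray(sig_b, dtype=float)
    cos = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
    return float(np.arccos(np.clip(cos, -1.0, 1.0)))

def associate(track_signature, candidate_signatures, max_angle=0.1):
    """Pick the candidate detection whose spectrum best matches the track.

    Returns the index of the closest candidate, or None if every candidate
    is farther than max_angle (a hypothetical gating threshold).
    """
    angles = [spectral_angle(track_signature, c) for c in candidate_signatures]
    best = int(np.argmin(angles))
    return best if angles[best] <= max_angle else None

# Toy example with 200-band spectra: candidate 1 is a brighter, slightly noisy
# copy of the track's signature, while candidate 0 is an unrelated spectrum.
rng = np.random.default_rng(2)
track_sig = rng.random(200)
candidates = [rng.random(200), 1.8 * track_sig + rng.normal(0, 0.01, 200)]
print(associate(track_sig, candidates))  # 1
```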

Track prediction attempts to learn traffic patterns at intersections so that one may predict how a target will travel through the scene. The model used known roads, where intersections create a very large number of possible trajectory combinations. Predicting a target through an intersection was modeled by compiling track histories to build a velocity-based Markov model specific to regions of the image. A transition matrix at each intersection allowed us to predict a target’s path through the intersection based on that intersection’s history and the track velocity. Iterating the prediction established track estimates at varying time intervals. These predictions can be used in conjunction with the kinematic component of a multi-target tracking system to provide more accurate track estimates. The approach also has the potential to enhance sensor resource management by anticipating camera views, modifying the aerial platform’s flight path, or tasking other sensor resources if the target is of high value.
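
A minimal sketch of the idea, under the assumption that each intersection is summarized by a single entry-to-exit transition matrix estimated from historical turns, is shown below; the road labels, counts, and smoothing choice are illustrative rather than taken from the thesis.

```python
import numpy as np

# Hypothetical four-way intersection with roads labeled N, E, S, W.
roads = ["N", "E", "S", "W"]

def transition_matrix(observed_turns, n_roads=4):
    """Estimate P(exit road | entry road) from historical (entry, exit) pairs.

    observed_turns: list of (entry_index, exit_index) tuples compiled from
    prior track histories at this intersection. Rows of the returned matrix
    are normalized counts, with add-one smoothing so unseen turns keep a
    small nonzero probability.
    """
    counts = np.ones((n_roads, n_roads))      # add-one smoothing
    for entry, exit_ in observed_turns:
        counts[entry, exit_] += 1
    return counts / counts.sum(axis=1, keepdims=True)

# Toy history: traffic entering from the north usually continues south,
# occasionally turning east.
history = [(0, 2)] * 8 + [(0, 1)] * 2 + [(1, 3)] * 5
T = transition_matrix(history)

entry = np.array([1.0, 0.0, 0.0, 0.0])        # target is entering from N
one_step = entry @ T                          # distribution over exit roads
print(dict(zip(roads, one_step.round(2))))

# Iterating the prediction (here treating each exit as the entry to another,
# identically modeled intersection) pushes the estimate further ahead in time.
two_step = one_step @ T
print(dict(zip(roads, two_step.round(2))))
```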

Future efforts include advances built on the foundation laid for hyperspectral-augmented tracking (First Lt Torsten Howard) and traffic modeling for “what-if” scenarios and target prediction throughout the scene (First Lt Richard Muster). Dismount tracking and identification will be added (Maj Abel Nunez), and the introduction of persistent synthetic aperture radar for scene exploitation (First Lt Brian Donnell) will push the state of the art in the exploitation of persistent surveillance systems.
