Automatic Target Recognition

1. Research Goal

The goal of this research is to develop real-time algorithms for moving target detection, identification, and tracking in cluttered environments. The research will focus on three topics: (1) target detection, (2) target identification, and (3) target tracking using mobile sensor platforms. The moving sensor platforms include a pan/tilt/zoom camera, a monocular camera mounted on a vehicle, and a binocular camera mounted on a vehicle or on a UAV (Unmanned Aerial Vehicle).

2. Project Objectives

The first objective is the detection of moving objects using a moving sensor platform. Detecting moving objects in a video sequence is a fundamental and critical task in many computer-vision applications. Representative methods range from simple algorithms, such as frame differencing, adaptive background subtraction, temporal models, and shading models, to more sophisticated methods, such as significance and hypothesis tests, optical flow, and genetic algorithms (GA). There are many challenges in developing a robust target detection algorithm. First, it must be robust against changes in illumination. Second, it should avoid detecting non-stationary background objects such as moving leaves, rain, snow, and shadows cast by moving objects. Finally, its internal background model should react quickly to changes in the background, such as vehicles starting and stopping. To address the limitations of existing algorithms, this project proposes a hybrid method for target detection that integrates adaptive background subtraction, optical flow, and a GA into a robust algorithm to extract the entire moving region.

The self-motion of the moving sensor platform introduces a challenging video-understanding issue: small blocks of moving pixels representing independently moving objects must be detected while the whole image is shifting due to self-motion. The key to success with an airborne sensor is to characterize and remove the self-motion from the video sequence. We propose a hybrid method that employs an affine transformation model, temporal differencing, optical flow, and temporal filtering. The affine transformation represents the apparent motion induced by the camera (i.e., the background motion); its parameters are estimated from the entire image through a statistical regularization process. To detect slowly moving or temporarily stopped objects, the temporal difference image is computed by weighted accumulation with a fixed weight for the new observation. Optical flow will be used to detect the motion, and filtering, region dilation, and region connection are used to determine the target of interest.
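As a rough illustration of the ego-motion compensation step, the following Python/OpenCV sketch estimates a partial affine model for the background motion from sparse Lucas-Kanade correspondences, warps the previous frame, and thresholds the temporal difference. The function name, thresholds, and pipeline composition are illustrative assumptions, not the project's actual implementation.

    import cv2
    import numpy as np

    def detect_moving_regions(prev_gray, curr_gray, diff_thresh=25, min_area=50):
        """Sketch: compensate camera ego-motion with an affine model, then
        threshold the temporal difference to find independently moving regions.
        Parameter values are illustrative, not tuned."""
        # Sparse corners tracked by Lucas-Kanade give point correspondences.
        pts_prev = cv2.goodFeaturesToTrack(prev_gray, maxCorners=400,
                                           qualityLevel=0.01, minDistance=7)
        pts_curr, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, curr_gray,
                                                       pts_prev, None)
        good_prev = pts_prev[status.ravel() == 1]
        good_curr = pts_curr[status.ravel() == 1]

        # A robust (RANSAC) affine fit approximates the camera-induced background motion.
        A, _ = cv2.estimateAffinePartial2D(good_prev, good_curr, method=cv2.RANSAC)
        if A is None:
            A = np.eye(2, 3, dtype=np.float32)  # fall back to a no-motion assumption

        # Warp the previous frame so the background aligns with the current frame.
        h, w = curr_gray.shape
        prev_warped = cv2.warpAffine(prev_gray, A, (w, h))

        # The temporal difference now highlights independently moving pixels.
        diff = cv2.absdiff(curr_gray, prev_warped)
        _, mask = cv2.threshold(diff, diff_thresh, 255, cv2.THRESH_BINARY)

        # Dilation and connected components stand in for region dilation/connection.
        mask = cv2.dilate(mask, np.ones((5, 5), np.uint8), iterations=2)
        n, _, stats, _ = cv2.connectedComponentsWithStats(mask)
        return [tuple(stats[i, :4]) for i in range(1, n)
                if stats[i, cv2.CC_STAT_AREA] >= min_area]  # (x, y, w, h) boxes

In a full pipeline the difference image would also be accumulated over time with a fixed weight, as described above, so that slowly moving or temporarily stopped objects remain detectable.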

The second objective is the identification of the detected objects (targets). After moving targets have been detected, the next step is object identification, to determine whether they are objects of interest. Object identification assumes that every object instance has a unique, unchanging identity, so the features used to describe it must be invariant to rotation, scale, and translation. The process consists of invariant feature extraction from the detected object and feature matching between the features extracted from the object of interest and the features of the objects registered in the dictionary (feature base). Parametric and information-theoretic feature-based inference techniques and physical models will be employed and compared for accuracy and computational complexity. Physical object discriminators such as geometrical, structural, and spectral features will be used to develop the physical models. Correlation measures, clustering algorithms, and artificial neural networks (ANN) will be used for classification and identification.
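A minimal sketch of invariant feature matching is given below: log-scaled Hu moments, which are invariant to translation, scale, and rotation, are extracted from a detected object mask and compared with a dictionary of stored feature vectors by nearest-neighbour distance. The feature choice, dictionary layout, and function names are assumptions made for illustration; the project also considers correlation measures, clustering, and ANN classifiers.

    import cv2
    import numpy as np

    def hu_features(mask):
        """Log-scaled Hu moments: invariant to translation, scale, and rotation."""
        m = cv2.moments(mask, binaryImage=True)
        hu = cv2.HuMoments(m).ravel()
        # Log scaling compresses the large dynamic range of the raw moments.
        return -np.sign(hu) * np.log10(np.abs(hu) + 1e-12)

    def identify(mask, dictionary):
        """Nearest-neighbour match against a feature base of labelled exemplars.
        `dictionary` is assumed to map a class label to a stored feature vector."""
        query = hu_features(mask)
        best_label, best_dist = None, np.inf
        for label, feat in dictionary.items():
            d = np.linalg.norm(query - feat)
            if d < best_dist:
                best_label, best_dist = label, d
        return best_label, best_dist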

The third objective is the tracking of an object of interest. After an object of interest has been identified, the next task is to track it as it moves. Tracking involves predicting the new location of the targets and correcting their current location estimate. The tracker must function as follows: if the targets are detected by the target acquisition program, the tracking algorithm estimates their position; if target detection fails, the tracking algorithm predicts their probable location and velocity from the current state. In other words, tracking requires an estimation and prediction mechanism. The Extended and Unscented Kalman Filters are powerful recursive estimators: they need only the previous state and the current measurements to estimate the current state and, in contrast to batch estimation techniques, do not require the history of observations and estimates. In this research, parametric Kalman filters and non-parametric particle filters will be used to predict the target's motion, and a reliable and robust target tracking algorithm will be developed.
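The predict/correct cycle can be illustrated with a plain linear Kalman filter using a constant-velocity motion model (the project itself targets Extended/Unscented Kalman filters and particle filters); the state layout, noise levels, and class name below are illustrative assumptions.

    import numpy as np

    class ConstantVelocityKF:
        """Minimal linear Kalman filter with a constant-velocity motion model.
        State is [x, y, vx, vy]; the measurement is [x, y]. Noise levels are
        illustrative placeholders."""

        def __init__(self, dt=1.0, process_var=1e-2, meas_var=1.0):
            self.F = np.array([[1, 0, dt, 0],
                               [0, 1, 0, dt],
                               [0, 0, 1, 0],
                               [0, 0, 0, 1]], dtype=float)   # motion model
            self.H = np.array([[1, 0, 0, 0],
                               [0, 1, 0, 0]], dtype=float)   # observe position only
            self.Q = process_var * np.eye(4)                  # process noise
            self.R = meas_var * np.eye(2)                     # measurement noise
            self.x = np.zeros(4)                              # state estimate
            self.P = np.eye(4)                                # state covariance

        def predict(self):
            """Propagate the state; used on its own when the detector loses the target."""
            self.x = self.F @ self.x
            self.P = self.F @ self.P @ self.F.T + self.Q
            return self.x[:2]

        def correct(self, z):
            """Fuse a new detection (x, y) into the state estimate."""
            y = np.asarray(z, dtype=float) - self.H @ self.x   # innovation
            S = self.H @ self.P @ self.H.T + self.R            # innovation covariance
            K = self.P @ self.H.T @ np.linalg.inv(S)           # Kalman gain
            self.x = self.x + K @ y
            self.P = (np.eye(4) - K @ self.H) @ self.P
            return self.x[:2]

In use, predict() would run every frame, while correct() would be called only when the detector reports a position, matching the predict-only fallback described above for missed detections.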

3. Experimental Results

Video clip for target detection from a moving platform
Rotomotion SR20 Demo