Robust Visual Tracking via Pixel Classification and Integration
- Cha Zhang,
- Yong Rui
We propose a novel framework for tracking non-rigid objects via pixel classification and integration (PCI). Given a new input frame, the tracker first performs object classification on each pixel and then finds the region that has the highest integral of scores. There are several key advantages of the proposed approach: it is computationally very efficient; it finds a global, instead of local (e.g., mean-shift), optimal solution within a search range; and it is inherently robust to different object scales with minimal extra computation. Within this framework, a mixture of long-term and short-term appearance models is further introduced to perform PCI. As a result, the tracker is able to adapt to both slow and rapid appearance changes without drifting. Challenging video sequences are presented to illustrate how the proposed tracker handles large motion, dramatic shape changes, scale variations, illumination variations, and partial occlusions.
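For illustration only, the sketch below shows one way the score-integration step could be realized: per-pixel classification scores (here random placeholders) are accumulated into an integral image so that every fixed-size candidate window can be evaluated in constant time, and the window with the highest summed score within the search range is returned. The function name `best_window`, the window-size parameters, and the use of NumPy are assumptions of this sketch; the pixel classifier and the long-/short-term appearance mixture described in the abstract are omitted.

```python
# Minimal sketch of the score-integration step, assuming per-pixel
# classification scores are already available as a 2D array and that
# candidate regions are axis-aligned rectangles of a fixed size.
import numpy as np

def best_window(scores: np.ndarray, win_h: int, win_w: int):
    """Return (top, left, total) of the win_h x win_w window whose
    summed classification score is maximal, using an integral
    (summed-area) image so each candidate window costs O(1)."""
    h, w = scores.shape
    # Integral image padded with a leading row/column of zeros.
    ii = np.zeros((h + 1, w + 1))
    ii[1:, 1:] = np.cumsum(np.cumsum(scores, axis=0), axis=1)

    best = (-np.inf, 0, 0)
    for top in range(h - win_h + 1):
        for left in range(w - win_w + 1):
            bottom, right = top + win_h, left + win_w
            total = (ii[bottom, right] - ii[top, right]
                     - ii[bottom, left] + ii[top, left])
            if total > best[0]:
                best = (total, top, left)
    total, top, left = best
    return top, left, total

# Example: random scores standing in for per-pixel object likelihoods.
scores = np.random.rand(120, 160)
print(best_window(scores, win_h=40, win_w=30))
```

Because every window is evaluated (each in constant time via the integral image), the returned location is globally optimal over the search range rather than a local optimum of an iterative scheme such as mean-shift; evaluating additional window sizes for scale handling reuses the same integral image.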