Computational Eye Movement Model Based on an Adaptive Saliency Map
Journal of Vision
Computational modeling of eye movements has received much attention in the last decade, with several initiatives directed toward modeling saliency in terms of low-level scene features (Itti & Koch, 1998); others have tried to integrate scene information with cognitive goals (Renninger et al., 2004). We attempt to model the eye movements of an observer casually browsing a scene in terms of the low-level feature characteristics of localized regions as well as the correlations between these regions. We base our work on Itti’s model but argue that the choice of each subsequent saccade target is driven not only by the saliency of a location but also by several other factors, such as the effort needed to foveate it (proportional to the distance between successive fixations) and the probability of finding new information there (represented in an information-theoretic framework). We propose an adaptive saliency map in which the probabilities of prospective foveation targets change as a function of the current fixation. We have implemented a model that follows this approach and are in the process of verifying it against empirical findings.
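The selection rule described above can be sketched in a minimal form. The abstract states only that the effort cost is proportional to inter-fixation distance and that the map adapts after each fixation; the linear cost weight, the Gaussian suppression of the chosen region (an inhibition-of-return stand-in for the reduced chance of new information there), and all parameter values below are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def next_fixation(saliency, current, w_dist=0.01, ior_sigma=5.0):
    """Pick the next fixation on an adaptive saliency map.

    Each candidate location is scored by its saliency minus an effort
    cost proportional to its distance from the current fixation
    (hypothetical linear cost; the abstract states only
    proportionality). The map is then adapted in place by suppressing
    a Gaussian neighborhood of the chosen target, modeling the reduced
    probability of finding new information at an already-foveated spot.
    """
    h, w = saliency.shape
    ys, xs = np.mgrid[0:h, 0:w]
    dist = np.hypot(ys - current[0], xs - current[1])
    score = saliency - w_dist * dist            # saliency vs. foveation effort
    target = np.unravel_index(np.argmax(score), score.shape)
    # Adaptive step: attenuate the map around the selected target.
    suppress = np.exp(-((ys - target[0]) ** 2 + (xs - target[1]) ** 2)
                      / (2.0 * ior_sigma ** 2))
    saliency *= (1.0 - suppress)
    return target, saliency
```

Calling this repeatedly yields a scanpath in which a slightly less salient but nearer location can win over a distant peak, and previously visited regions fade, so the probability landscape of prospective targets changes with each fixation.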