An Active Learning Framework for Content-Based Information Retrieval
In this paper, we propose a general active learning framework for content-based information retrieval and use it to guide hidden annotations in order to improve retrieval performance. For each object in the database, we maintain a list of probabilities, each indicating the probability that the object has a particular attribute. During training, the learning algorithm samples objects in the database and presents them to the annotator, who assigns attributes to them. For each sampled object, each probability is set to one or zero depending on whether or not the corresponding attribute is assigned by the annotator. For objects that have not been annotated, the learning algorithm estimates their probabilities with kernel regression. Furthermore, the standard kernel regression is modified into a biased kernel regression, so that an object that is far from every annotated object receives an estimate close to the prior probability. This reflects our basic assumption that an annotation should not propagate too far in the feature space unless we can guarantee that the feature space is good. A knowledge gain criterion is then defined to determine which of the unannotated objects the system is most uncertain about, and that object is presented to the annotator as the next sample. During retrieval, the list of probabilities serves as a feature vector for computing the semantic distance between two objects, or between the user query and an object in the database. The overall distance between two objects is a weighted sum of the semantic distance and the low-level feature distance. The algorithm is tested on both a synthetic database and a real database. In both cases the retrieval performance of the system improves rapidly with the number of annotated samples. Furthermore, we show that active learning outperforms learning based on random sampling.
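The sketch below illustrates the main ideas described above: a biased kernel-regression estimate of attribute probabilities for unannotated objects, an uncertainty-driven choice of the next object to annotate, and a weighted combination of semantic and low-level distances. It is a minimal illustration rather than the paper's exact formulation; the Gaussian kernel, the entropy-based uncertainty measure, and the parameters `sigma`, `bias`, and `alpha` are assumptions made for the example.

```python
import numpy as np

def gaussian_kernel(x, y, sigma=1.0):
    """RBF kernel on low-level feature vectors (illustrative choice)."""
    return np.exp(-np.sum((x - y) ** 2) / (2.0 * sigma ** 2))

def biased_kernel_regression(features, annotations, prior, sigma=1.0, bias=1.0):
    """Estimate attribute probabilities for every object in the database.

    features:    (n, d) array of low-level feature vectors
    annotations: dict {object index: 0/1 attribute vector of length k}
    prior:       length-k prior probability of each attribute
    The constant `bias` acts like a pseudo-observation of the prior, so an
    object far from all annotated objects falls back to the prior estimate.
    """
    prior = np.asarray(prior, dtype=float)
    n, k = features.shape[0], prior.shape[0]
    probs = np.tile(prior, (n, 1))
    if not annotations:
        return probs
    ann_idx = sorted(annotations)
    ann_labels = np.array([annotations[i] for i in ann_idx], dtype=float)
    for i in range(n):
        if i in annotations:
            probs[i] = annotations[i]      # annotated objects are fixed to 0/1
            continue
        w = np.array([gaussian_kernel(features[i], features[j], sigma) for j in ann_idx])
        probs[i] = (w @ ann_labels + bias * prior) / (w.sum() + bias)
    return probs

def next_object_to_annotate(probs, annotated):
    """Pick the unannotated object the system is most uncertain about
    (here: largest total binary entropy of its attribute probabilities)."""
    p = np.clip(probs, 1e-12, 1 - 1e-12)
    entropy = -(p * np.log(p) + (1 - p) * np.log(1 - p)).sum(axis=1)
    entropy[list(annotated)] = -np.inf     # never re-sample annotated objects
    return int(np.argmax(entropy))

def overall_distance(pi, pj, fi, fj, alpha=0.5):
    """Weighted sum of the semantic distance (between probability vectors)
    and the low-level feature distance; `alpha` and the Euclidean metric
    are illustrative choices."""
    return alpha * np.linalg.norm(pi - pj) + (1 - alpha) * np.linalg.norm(fi - fj)
```

A small usage example with random data, purely to show how the pieces fit together:

```python
rng = np.random.default_rng(0)
X = rng.normal(size=(50, 8))                    # 50 objects, 8-dim low-level features
prior = np.full(3, 0.2)                         # 3 attributes with prior probability 0.2
ann = {0: np.array([1, 0, 0]), 7: np.array([0, 1, 1])}
P = biased_kernel_regression(X, ann, prior)
print(next_object_to_annotate(P, ann.keys()))   # index of the next object to annotate
print(overall_distance(P[3], P[4], X[3], X[4])) # combined distance between objects 3 and 4
```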