Fine-Grained Visual Categorization by Localizing Object Parts With Single Image

Date: 27-05-2021

Zheng, Xiangtao; Qi, Lei; Ren, Yutao; Lu, Xiaoqiang

Fine-grained visual categorization (FGVC) refers to assigning fine-grained labels to images that belong to the same basic-level category. Because of the high inter-class similarity, it is challenging to distinguish fine-grained images from different subcategories. Recently, researchers have proposed to first localize key object parts within images and then find discriminative clues on those parts. To localize object parts, existing methods train detectors for different kinds of object parts. However, because the same kind of object part often varies greatly in appearance across images, these methods face two shortcomings: 1) training part detectors for object parts with diverse appearances is laborious; and 2) discriminative parts with unusual appearances may be missed by the trained detectors. To localize key object parts efficiently and accurately, a novel FGVC method is proposed in this paper. Its main novelty is that it localizes the key object parts within each image depending only on that single image, thereby avoiding the influence of part appearance variations across images. The proposed method consists of two key steps. First, the key parts in each image are localized independently: potential object parts are identified in each image and then merged to generate the final representative object parts. Second, two kinds of features are extracted to simultaneously describe the discriminative clues within each part and the relationships between parts. In addition, a part-based dropout learning technique is adopted to further boost classification performance. The proposed method is evaluated in comparison experiments, and the results show that it achieves comparable or better performance than state-of-the-art methods.
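The announcement gives no implementation details for the localization step, but the identify-then-merge idea can be sketched as follows. This is a minimal NumPy illustration, assuming candidate parts are proposed from peaks of CNN channel activation maps and fused by box overlap; all function names, box sizes, and thresholds here are hypothetical placeholders, not the paper's actual algorithm.

```python
import numpy as np

def candidate_parts(feat, box_size=3):
    """Propose one candidate part per channel: a fixed-size box centred
    on that channel's peak activation (an illustrative criterion)."""
    C, H, W = feat.shape
    boxes = []
    for c in range(C):
        y, x = np.unravel_index(np.argmax(feat[c]), (H, W))
        half = box_size // 2
        boxes.append((max(0, y - half), max(0, x - half),
                      min(H, y + half + 1), min(W, x + half + 1)))
    return boxes

def iou(a, b):
    """Intersection-over-union of two (y0, x0, y1, x1) boxes."""
    y0, x0 = max(a[0], b[0]), max(a[1], b[1])
    y1, x1 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, y1 - y0) * max(0, x1 - x0)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union else 0.0

def merge_parts(boxes, iou_thresh=0.5):
    """Greedily fuse overlapping candidates into representative parts."""
    merged = []
    for box in boxes:
        for i, kept in enumerate(merged):
            if iou(box, kept) > iou_thresh:
                # Fuse by taking the enclosing box of the pair.
                merged[i] = (min(kept[0], box[0]), min(kept[1], box[1]),
                             max(kept[2], box[2]), max(kept[3], box[3]))
                break
        else:
            merged.append(box)
    return merged

# Example: a 512-channel, 14x14 feature map from a CNN backbone.
parts = merge_parts(candidate_parts(np.random.rand(512, 14, 14)))
```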
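Under the same assumptions, the two kinds of features could look like this: average-pooled descriptors for the discriminative clues within each part, and pairwise centre offsets as a crude stand-in for whichever inter-part relationship feature the paper actually uses.

```python
import numpy as np

def part_features(feat, parts):
    """Within-part clues: average-pool the responses inside each box.
    Part relationships: pairwise offsets between part centres
    (an illustrative stand-in, not the paper's relation feature)."""
    descs = np.stack([feat[:, y0:y1, x0:x1].mean(axis=(1, 2))
                      for y0, x0, y1, x1 in parts])
    centres = [((y0 + y1) / 2.0, (x0 + x1) / 2.0)
               for y0, x0, y1, x1 in parts]
    relations = np.array([(ci[0] - cj[0], ci[1] - cj[1])
                          for i, ci in enumerate(centres)
                          for j, cj in enumerate(centres) if i != j],
                         dtype=float).reshape(-1, 2)
    return descs, relations
```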
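Part-based dropout can likewise be sketched as randomly suppressing one localized part during training, so the classifier cannot over-rely on a single discriminative part; the paper's exact formulation may differ.

```python
import numpy as np

def part_dropout(feat, parts, drop_prob=0.3, rng=None):
    """With probability drop_prob, zero the feature responses inside one
    randomly chosen part. (The 0.3 rate is an arbitrary placeholder,
    not the paper's setting.)"""
    rng = rng or np.random.default_rng()
    if parts and rng.random() < drop_prob:
        y0, x0, y1, x1 = parts[rng.integers(len(parts))]
        feat = feat.copy()
        feat[:, y0:y1, x0:x1] = 0.0
    return feat
```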

The result was published in IEEE Transactions on Multimedia. DOI: 10.1109/TMM.2020.2993960