A Joint Relationship Aware Neural Network for Single-Image 3D Human Pose Estimation

Date: 12-06-2020

This paper studies 3D human pose estimation from a single RGB image, a task that is challenging because a single image provides no depth information.


Recently, many deep learning methods have been proposed and have achieved great improvements thanks to their strong representation learning ability.


However, most existing methods ignore the relationships between joint features. In this paper, a joint relationship-aware neural network is proposed by a research team led by Prof. Dr. LU Xiaoqiang from the Xi'an Institute of Optics and Precision Mechanics (XIOPM) of the Chinese Academy of Sciences (CAS) to take both global and local joint relationships into consideration.


First, a whole feature block representing all human body joints is extracted by a convolutional neural network. A Dual Attention Module (DAM) is then applied to the whole feature block to generate attention weights; through this attention module, the global relationship among all joints is encoded. Second, the weighted whole feature block is divided into individual joint features. To capture salient joint features, each individual joint feature is refined by its own DAM, as sketched below.
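To make this two-stage attention design concrete, the following PyTorch sketch assumes a DAM built from a channel-attention branch and a spatial-attention branch, and a channel-wise split of the whole feature block into per-joint features. The module names, channel sizes, joint count, and regression heads are illustrative assumptions, not the paper's exact implementation.

```python
import torch
import torch.nn as nn


class DualAttentionModule(nn.Module):
    """Sketch of a DAM: a channel-attention branch followed by a
    spatial-attention branch (an assumed design, not the paper's exact one)."""

    def __init__(self, channels, reduction=4):
        super().__init__()
        # Channel branch: squeeze (global pool) -> excite -> per-channel weights
        self.channel_gate = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels // reduction, kernel_size=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, kernel_size=1),
            nn.Sigmoid(),
        )
        # Spatial branch: 1x1 conv -> per-location weights
        self.spatial_gate = nn.Sequential(
            nn.Conv2d(channels, 1, kernel_size=1),
            nn.Sigmoid(),
        )

    def forward(self, x):
        x = x * self.channel_gate(x)   # reweight channels
        x = x * self.spatial_gate(x)   # reweight spatial locations
        return x


class JointRelationNet(nn.Module):
    """A global DAM on the whole feature block, then one DAM per joint.
    The channel-wise split and the per-joint heads are assumptions."""

    def __init__(self, num_joints=16, per_joint_channels=16):
        super().__init__()
        self.num_joints = num_joints
        self.global_dam = DualAttentionModule(num_joints * per_joint_channels)
        self.joint_dams = nn.ModuleList(
            DualAttentionModule(per_joint_channels) for _ in range(num_joints)
        )
        self.heads = nn.ModuleList(
            nn.Linear(per_joint_channels, 3) for _ in range(num_joints)
        )

    def forward(self, feat):
        # feat: (B, num_joints * per_joint_channels, H, W) from a CNN backbone
        feat = self.global_dam(feat)                 # encode global joint relationship
        chunks = feat.chunk(self.num_joints, dim=1)  # split into per-joint features
        coords = []
        for chunk, dam, head in zip(chunks, self.joint_dams, self.heads):
            refined = dam(chunk)                     # refine salient joint features
            pooled = refined.mean(dim=(2, 3))        # global average pooling
            coords.append(head(pooled))              # regress (x, y, z) per joint
        return torch.stack(coords, dim=1)            # (B, num_joints, 3)


# Example: 16 joints with 16 channels each on an 8x8 feature map
net = JointRelationNet()
poses = net(torch.randn(2, 256, 8, 8))  # -> shape (2, 16, 3)
```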


Finally, a joint angle prediction constraint is proposed to take the local joint relationships into account; a sketch of one possible form of such a constraint follows below. Quantitative and qualitative experiments on 3D human pose estimation benchmarks demonstrate the effectiveness of the proposed method.
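The exact formulation of the angle constraint is not reproduced in this article, so the following sketch assumes one plausible variant: the angles between adjacent bones, computed from predicted and ground-truth 3D joints, are encouraged to agree. The function name, joint indices, and skeleton triples are hypothetical.

```python
import torch
import torch.nn.functional as F


def joint_angle_loss(pred, gt, angle_triplets):
    """Sketch of a joint-angle consistency term (an assumed variant,
    not necessarily the paper's exact constraint).

    pred, gt:       (B, J, 3) predicted / ground-truth 3D joint coordinates.
    angle_triplets: (parent, joint, child) index triples; the angle at
                    `joint` is spanned by the bones joint->parent and
                    joint->child.
    """
    def cos_angles(pose):
        cosines = []
        for a, b, c in angle_triplets:
            u = pose[:, a] - pose[:, b]  # bone vector joint -> parent
            v = pose[:, c] - pose[:, b]  # bone vector joint -> child
            cosines.append(F.cosine_similarity(u, v, dim=-1))
        return torch.stack(cosines, dim=1)  # (B, num_angles)

    # Penalize the squared difference between predicted and true cosines
    return torch.mean((cos_angles(pred) - cos_angles(gt)) ** 2)


# Hypothetical (hip, knee, ankle) and (shoulder, elbow, wrist) triples
# for a 16-joint skeleton; the indices are illustrative only.
triplets = [(1, 2, 3), (4, 5, 6), (10, 11, 12), (13, 14, 15)]
loss = joint_angle_loss(torch.randn(2, 16, 3), torch.randn(2, 16, 3), triplets)
```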

Overview of the proposed method. (Image by XIOPM)


(Original research article: "A Joint Relationship Aware Neural Network for Single-Image 3D Human Pose Estimation," IEEE Transactions on Image Processing (2020), http://dx.doi.org/10.1109/TIP.2020.2972104)