Chaolong Li (李超龙)

Master's Student

 

Affective Information Processing Lab,

Key Laboratory of Child Development and Learning Science of Ministry of Education,

School of Biological Sciences and Medical Engineering,

Southeast University, Nanjing, Jiangsu Province, China.

 

Supervisors: Prof. Wenming Zheng & Prof. Zhen Cui

 

Email: lichaolong[at]seu.edu.cn



Biography

I received my B.Sc. degree in Science Education from Southeast University in June 2016. Since September 2016, I have been a master's student at the Affective Information Processing Lab (AIPL), Key Laboratory of Child Development and Learning Science of Ministry of Education, School of Biological Sciences and Medical Engineering, Southeast University, under the supervision of Prof. Wenming Zheng and Prof. Zhen Cui.

Research Interests

My main research interests include affective computing, pattern recognition, computer vision, and deep learning, with a particular focus on facial expression recognition, deep learning on graphs, and its applications to skeleton-based action recognition.

Publications

Journal Article

  1. Chaolong Li, Zhen Cui, Wenming Zheng, Chunyan Xu, Rongrong Ji, Jian Yang, “Action-Attending Graphic Neural Network,” IEEE Transactions on Image Processing (TIP), vol. 27, no. 7, pp. 3657-3670, 2018. [Project] [Paper] [Abstract] [BibTex] [CCF-A] (IF: 4.828)

     Abstract: The motion analysis of human skeletons is crucial for human action recognition, which is one of the most active topics in computer vision. In this paper, we propose a fully end-to-end action-attending graphic neural network (A²GNN) for skeleton-based action recognition, in which each irregular skeleton is structured as an undirected attribute graph. To extract high-level semantic representations from skeletons, we perform local spectral graph filtering on the constructed attribute graphs, analogous to the standard image convolution operation. Considering that not all joints are informative for action analysis, we design an action-attending layer to detect salient action units (AUs) by adaptively weighting skeletal joints. Here, the filtering responses are parameterized into a weighting function that is independent of the order of the input nodes. To further encode continuous motion variations, the deep features learnt from skeletal graphs are gathered along consecutive temporal slices and then fed into a recurrent gated network. Finally, the spectral graph filtering, action-attending, and recurrent temporal encoding are integrated and jointly trained for robust action recognition as well as the intelligibility of human actions. To evaluate our A²GNN, we conduct extensive experiments on four benchmark skeleton-based action datasets, including the large-scale challenging NTU RGB+D dataset. The experimental results demonstrate that our network achieves state-of-the-art performance.

    @article{li2018action,
        title={Action-Attending Graphic Neural Network},
        author={Li, Chaolong and Cui, Zhen and Zheng, Wenming and Xu, Chunyan and Ji, Rongrong and Yang, Jian},
        journal={IEEE Transactions on Image Processing},
        volume={27},
        number={7},
        pages={3657--3670},
        year={2018},
        publisher={IEEE}
    }
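
     As an illustration only (not the exact formulation in the paper), the local spectral graph filtering described in the abstract can be sketched as a polynomial filter in the normalized graph Laplacian applied to per-joint features; the function names and the simple power-series parameterization below are my own assumptions.

     ```python
     import numpy as np

     def normalized_laplacian(adj):
         """Symmetric normalized Laplacian L = I - D^{-1/2} A D^{-1/2}."""
         deg = adj.sum(axis=1)
         d_inv_sqrt = np.where(deg > 0, deg ** -0.5, 0.0)
         return np.eye(len(adj)) - (adj * d_inv_sqrt[None, :]) * d_inv_sqrt[:, None]

     def spectral_graph_filter(x, adj, weights):
         """Apply a K-order polynomial spectral filter: sum_k w_k L^k x.

         x: (N, F) joint features; adj: (N, N) skeleton adjacency;
         weights: list of K+1 filter coefficients.
         """
         L = normalized_laplacian(adj)
         out = np.zeros_like(x, dtype=float)
         Lk = np.eye(len(adj))  # L^0 = I
         for w in weights:
             out += w * (Lk @ x)
             Lk = Lk @ L
         return out
     ```

     A filter of order K mixes information from each joint's K-hop neighborhood on the skeleton graph, which is what makes the operation behave like a local convolution on irregular structure.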

Conference Paper

  1. Chaolong Li, Zhen Cui, Wenming Zheng, Chunyan Xu, Jian Yang, “Spatio-Temporal Graph Convolution for Skeleton Based Action Recognition,” In Proc. AAAI, Feb. 2018, pp. 3482-3489. [Paper] [Abstract] [BibTex] [CCF-A] [Spotlight]

     Abstract: Variations of human body skeletons may be considered as dynamic graphs, which are a generic data representation for numerous real-world applications. In this paper, we propose a spatio-temporal graph convolution (STGC) approach that combines the success of local convolutional filtering with the sequence-learning ability of autoregressive moving-average models. To encode dynamic graphs, the constructed multi-scale local graph convolution filters, consisting of matrices of local receptive fields and signal mappings, are recursively applied to structured graph data in the temporal and spatial domains. The proposed model is generic and principled, as it can be generalized to other dynamic models. We theoretically prove the stability of STGC and provide an upper bound on the signal transformation to be learnt. Furthermore, the proposed recursive model can be stacked into a multi-layer architecture. To evaluate our model, we conduct extensive experiments on four benchmark skeleton-based action datasets, including the large-scale challenging NTU RGB+D. The experimental results demonstrate the effectiveness of our proposed model and its improvement over the state of the art.

    @inproceedings{li2018spatio,
        title={Spatio-Temporal Graph Convolution for Skeleton Based Action Recognition},
        author={Li, Chaolong and Cui, Zhen and Zheng, Wenming and Xu, Chunyan and Yang, Jian},
        booktitle={Proceedings of the Thirty-Second AAAI Conference on Artificial Intelligence},
        pages={3482--3489},
        year={2018},
        organization={AAAI Press}
    }
  2. Tong Zhang, Wenming Zheng, Zhen Cui, Chaolong Li, “Deep Manifold-to-Manifold Transforming Network,” In Proc. ICIP, 2018. [Paper] [Abstract] [BibTex] [CCF-C]
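
As a rough sketch of the idea behind STGC (again my own simplification, not the paper's exact recursion), each frame's joint features are filtered over the skeleton graph and combined with an autoregressive term carried over from the previous frame; all function and parameter names here are illustrative.

    ```python
    import numpy as np

    def normalize_adjacency(adj):
        """Row-normalized adjacency with self-loops: A_hat = D^{-1}(A + I)."""
        a = adj + np.eye(len(adj))
        return a / a.sum(axis=1, keepdims=True)

    def stgc_step(x_t, h_prev, adj_norm, w_spatial, w_temporal):
        """One recursive step: spatial graph filtering of the current frame
        plus an autoregressive term on the previous hidden state."""
        spatial = adj_norm @ x_t @ w_spatial    # local graph convolution
        temporal = h_prev @ w_temporal          # carry-over from frame t-1
        return np.tanh(spatial + temporal)

    def run_sequence(frames, adj, w_spatial, w_temporal):
        """Roll the recursion over T frames of (N, F) joint features."""
        adj_norm = normalize_adjacency(adj)
        h = np.zeros((frames.shape[1], w_spatial.shape[1]))
        for x_t in frames:
            h = stgc_step(x_t, h, adj_norm, w_spatial, w_temporal)
        return h
    ```

The recursion over the hidden state is what plays the role of the moving-average component, while the graph filtering handles the spatial domain; stacking several such layers gives the multi-layer architecture mentioned in the abstract.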

Research Project

  • Design of a Scientific Literacy Assessment Platform Based on Sensors and Android, Student Innovation and Entrepreneurship Training Program of Jiangsu Province, 2015-2016, PI.

Honors and Awards

  • Chien-Shiung Wu · BME Scholarship (2018)
  • Third Prize in the Thirteenth National Post-Graduate Mathematical Contest in Modeling (2016)
  • Merit Student of Southeast University (2014)
  • National Scholarship for Encouragement (2014)
  • Merit Student of Southeast University (2013)
  • National Scholarship for Encouragement (2013)
  • Zhang Zhiwei Scholarship (2013)
  • Outstanding Communist Youth League Member of Southeast University (2013)

Correspondence

Room 318 (Middle), Liwenzheng Building, Southeast University, Sipailou 2#, Nanjing, Jiangsu Province, 210096, P. R. China.

 
 
Last Modified: 2018-06-06
