Cross-Database Micro-Expression Recognition Project
Tong Zhang1, Yuan Zong2, Wenming Zheng2, C. L. Philip Chen1, Xiaopeng Hong3, Chuangao Tang2, Zhen Cui4, and Guoying Zhao5
1 School of Computer Science and Engineering, South China University of Technology, Guangzhou, China
2 Affective Information Processing Lab (AIPL), Southeast University, Nanjing, China
3 Xi’an Jiaotong University, China
4 School of Computer Science and Engineering, Nanjing University of Science and Technology, China
5 Center for Machine Vision and Signal Analysis (CMVS), University of Oulu, Finland
Cross-database micro-expression recognition (CDMER) is a recently emerging and interesting problem in micro-expression analysis. CDMER is more challenging than conventional micro-expression recognition (MER) because the training and testing samples come from different micro-expression databases, which results in inconsistent feature distributions between the training and testing sets. In this project, we first establish a CDMER experimental evaluation protocol so that researchers can conveniently work on this topic, and provide a standard platform for evaluating their proposed methods. Second, we conduct benchmark experiments using NINE state-of-the-art domain adaptation (DA) methods and SIX popular spatiotemporal descriptors, investigating the CDMER problem from two different perspectives. Third, we propose a novel DA method called region selective transfer regression (RSTR) to deal with the CDMER task. The major motivation of this work is to attract and encourage more researchers to join this challenging but interesting topic, and to make it convenient for them to get started. For this reason, we have released all the data and code involved in CDMER on this project website. Please note that the data and source code are freely downloadable for academic research purposes only.
1. CDMER Protocol
We use CASME II and SMIC micro-expression databases to design two types of CDMER experiments.
- Data Preparation and Preprocessing
- Micro-Expression Feature Extraction
- CDMER Tasks
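Each CDMER task above follows the same pattern: train on all samples of the source database and test on all samples of the target database. The following is a minimal sketch of that loop using synthetic stand-in features and labels (real experiments would use descriptors such as LBP-TOP extracted from CASME II and SMIC); the linear SVM and the two metrics here are illustrative choices, not the protocol's prescribed ones.

```python
# Minimal sketch of one CDMER task: train on the source database, test on
# the target database. All data here is a synthetic stand-in.
import numpy as np
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score, recall_score

rng = np.random.default_rng(0)

def make_db(n_samples, dim, shift):
    """Synthetic stand-in for one micro-expression database."""
    y = rng.integers(0, 3, size=n_samples)   # e.g. negative/positive/surprise
    X = rng.normal(shift, 1.0, size=(n_samples, dim)) + y[:, None] * 0.5
    return X, y

X_src, y_src = make_db(150, 64, shift=0.0)    # source database (training only)
X_tgt, y_tgt = make_db(160, 64, shift=0.3)    # target database (testing only)

clf = SVC(kernel="linear").fit(X_src, y_src)  # plain SVM baseline, no adaptation
pred = clf.predict(X_tgt)

acc = accuracy_score(y_tgt, pred)                 # recognition accuracy
uar = recall_score(y_tgt, pred, average="macro")  # unweighted average recall
print(f"accuracy={acc:.3f}  UAR={uar:.3f}")
```

The deliberate shift between `X_src` and `X_tgt` mimics the distribution mismatch between databases that makes CDMER harder than within-database MER; the DA methods evaluated in this project all aim to reduce exactly this mismatch.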
- Evaluated Methods (please click the hyperlinks to download the corresponding data and code; UPDATING…)
- Spatiotemporal Features Used for Describing Micro-Expressions: LBP-TOP [R3P8 for DA experiments][Other Parameters][Fast LBPTOP Implementation], LBP-SIP [Feature][Code], LPQ-TOP [Feature1][Feature2][Official Code], HOG-TOP [Feature][Code], HIGO-TOP [Feature][Code], and C3D [Feature][Code].
- DA Methods (LBP-TOP with R3P8 serves as the feature): SVM [libSVM][Codes for 12 Exps.], IW-SVM [Official Code][Codes for 12 Exps.], TCA [Our Implementation][Codes for 12 Exps.], GFK [Official Code][Codes for 12 Exps.], SA [Official Code][Codes for 12 Exps.], STM [Code][Codes for 12 Exps.], TKL [Official Code][Codes for 12 Exps.], TSRG [Codes][Codes for 12 Exps.], DRFS-T [Codes][Codes for 12 Exps.], DRLS [Codes][Codes for 12 Exps.], and RSTR [Codes][Codes for 12 Exps.].
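Among the descriptors listed, LBP-TOP is the most widely used baseline: it computes local binary pattern histograms on the three orthogonal planes (XY, XT, YT) of a video volume and concatenates them, so appearance and horizontal/vertical motion are each captured by one histogram. Below is a rough numpy-only sketch of that idea, simplified to radius-1, 8-neighbour LBP rather than the R3P8 setting used in the experiments; it is an illustration of the structure, not the official implementation linked above.

```python
import numpy as np

def lbp_codes(img):
    """Basic 8-neighbour LBP at radius 1 (a simplification of R3P8)."""
    c = img[1:-1, 1:-1]  # centre pixels
    neighbours = [img[:-2, :-2], img[:-2, 1:-1], img[:-2, 2:], img[1:-1, 2:],
                  img[2:, 2:],   img[2:, 1:-1],  img[2:, :-2], img[1:-1, :-2]]
    codes = np.zeros_like(c, dtype=np.uint8)
    for bit, n in enumerate(neighbours):
        codes |= (n >= c).astype(np.uint8) << bit  # one bit per neighbour
    return codes

def lbp_top(volume):
    """Concatenate normalised LBP histograms from the XY, XT and YT planes."""
    T, H, W = volume.shape
    plane_sets = (
        [volume[t] for t in range(T)],        # XY planes: appearance
        [volume[:, y, :] for y in range(H)],  # XT planes: horizontal motion
        [volume[:, :, x] for x in range(W)],  # YT planes: vertical motion
    )
    hists = []
    for planes in plane_sets:
        h = np.zeros(256)
        for p in planes:
            h += np.bincount(lbp_codes(p.astype(np.int16)).ravel(), minlength=256)
        hists.append(h / h.sum())             # normalise each plane's histogram
    return np.concatenate(hists)              # 3 x 256 = 768-dim descriptor

rng = np.random.default_rng(0)
clip = rng.integers(0, 256, size=(10, 16, 16))  # tiny synthetic grey-level clip
desc = lbp_top(clip)
print(desc.shape)                                # (768,)
```

In practice the face is usually divided into blocks and a histogram is extracted per block, which multiplies the descriptor length accordingly.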
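Of the DA methods listed, subspace alignment (SA) is perhaps the simplest to reproduce: fit a PCA subspace to each domain, then map the source subspace onto the target one with a single linear transform before classification. The sketch below runs on synthetic stand-in features; the subspace dimension and the nearest-neighbour classifier are illustrative choices, not those of the official code linked above.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(0)

# Synthetic stand-ins for source/target feature matrices (e.g. LBP-TOP vectors).
Xs = rng.normal(0.0, 1.0, (120, 50))
ys = rng.integers(0, 3, 120)
Xs += ys[:, None] * 0.8                            # weak class structure
mix = np.eye(50) + rng.normal(0.0, 0.1, (50, 50))  # mild domain distortion
Xt = Xs[:100] @ mix + 0.5                          # "target" database
yt = ys[:100]

d = 10                                             # subspace dimension
Ps = PCA(n_components=d).fit(Xs).components_.T     # (50, d) source basis
Pt = PCA(n_components=d).fit(Xt).components_.T     # (50, d) target basis

M = Ps.T @ Pt    # alignment matrix between the two subspaces
Zs = Xs @ Ps @ M  # source features, aligned to the target subspace
Zt = Xt @ Pt      # target features in their own subspace

clf = KNeighborsClassifier(n_neighbors=1).fit(Zs, ys)
acc = (clf.predict(Zt) == yt).mean()
print(f"target accuracy after alignment: {acc:.3f}")
```

The appeal of SA as a baseline is that the adaptation step is a single closed-form matrix product; the heavier methods in the list (TCA, STM, TSRG, RSTR, …) instead learn the transfer jointly with regression or kernel objectives.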
2. Implementation and Results
3. References
- Tong Zhang, Yuan Zong, Wenming Zheng, C. L. Philip Chen, Xiaopeng Hong, Chuangao Tang, Zhen Cui, and Guoying Zhao. “Cross-Database Micro-Expression Recognition: A Benchmark,” submitted to IEEE Transactions on Knowledge and Data Engineering, 2020. [arXiv Version]
- Yuan Zong, Wenming Zheng, Xiaohua Huang, Jingang Shi, Zhen Cui, and Guoying Zhao. “Domain Regeneration for Cross-Database Micro-Expression Recognition,” IEEE Transactions on Image Processing, Vol. 27, No. 5, pp. 2484–2498, 2018.
- Yuan Zong, Xiaohua Huang, Wenming Zheng, Zhen Cui, and Guoying Zhao. “Learning a Target Sample Re-Generator for Cross-Database Micro-Expression Recognition,” in ACM Multimedia, 2017.
4. Contacts
- Tong Zhang (tony[AT]scut[DOT]edu[DOT]cn)
- Yuan Zong (xhzongyuan[AT]seu[DOT]edu[DOT]cn)
- Wenming Zheng (wenming_zheng[AT]seu[DOT]edu[DOT]cn)
- C. L. Philip Chen (Philip.Chen@ieee[DOT]org)
- Xiaopeng Hong (xiaopeng[DOT]hong[AT]oulu[DOT]fi)
- Chuangao Tang (tcg2016[AT]seu[DOT]edu[DOT]cn)
- Zhen Cui (zhen[DOT]cui[AT]njust[DOT]edu[DOT]cn)
- Guoying Zhao (guoying[DOT]zhao[AT]oulu[DOT]fi)