Browse by Author "Chen, Luefeng"
Showing 1 - 4 of 4
Item: Convolutional Features-Based Broad Learning With LSTM for Multidimensional Facial Emotion Recognition in Human-Robot Interaction (IEEE-Inst Electrical Electronics Engineers Inc, 2024)
Authors: Chen, Luefeng; Li, Min; Wu, Min; Pedrycz, Witold; Hirota, Kaoru

Convolutional feature-based broad learning with long short-term memory (CBLSTM) is proposed to recognize multidimensional facial emotions in human-robot interaction. The CBLSTM model consists of convolution and pooling layers, broad learning (BL), and a long short-term memory network. It aims to obtain the depth, width, and time-scale information of facial emotion through the three parts of the model, so as to realize multidimensional facial emotion recognition. After the convolution and pooling layers, CBLSTM adopts the BL structure to replace the original random mapping method and extract features with stronger representation ability, which significantly reduces the computational time of the facial emotion recognition network. Moreover, incremental learning is adopted, which can quickly reconstruct the model without a complete retraining process. Experiments are conducted on three databases: CK+, MMI, and SFEW2.0. The results show that the proposed CBLSTM model using multidimensional information produces higher recognition accuracy than the model without time-scale information: 1.30% higher on the CK+ database and 1.06% higher on the MMI database. The computation time is 9.065 s, significantly shorter than that reported for the convolutional neural network (CNN). In addition, the proposed method improves on state-of-the-art methods: its recognition rate is 3.97%, 1.77%, and 0.17% higher than that of CNN-SIPS, HOG-TOP, and CMACNN on the CK+ database; 5.17%, 5.14%, and 3.56% higher than that of TLMOS, ALAW, and DAUGN on the MMI database; and 7.08% and 2.98% higher than that of CNNVA and QCNN on the SFEW2.0 database.

Item: Coupled Multimodal Emotional Feature Analysis Based on Broad-Deep Fusion Networks in Human-Robot Interaction (IEEE-Inst Electrical Electronics Engineers Inc, 2023)
Authors: Chen, Luefeng; Li, Min; Wu, Min; Pedrycz, Witold; Hirota, Kaoru

A coupled multimodal emotional feature analysis (CMEFA) method based on broad-deep fusion networks, which divides multimodal emotion recognition into two layers, is proposed. First, facial emotional features and gesture emotional features are extracted using the broad and deep learning fusion network (BDFN). Considering that the two modalities are not completely independent of each other, canonical correlation analysis (CCA) is used to analyze and extract the correlation between the emotional features, and a coupling network is established for emotion recognition of the extracted bimodal features. Both simulation and application experiments are completed. According to the simulation experiments on the bimodal face and body gesture database (FABO), the recognition rate of the proposed method is 1.15% higher than that of support vector machine recursive feature elimination (SVMRFE) (without considering the unbalanced contribution of features). Moreover, the multimodal recognition rate of the proposed method is 21.22%, 2.65%, 1.61%, 1.54%, and 0.20% higher than that of the fuzzy deep neural network with sparse autoencoder (FDNNSA), ResNet-101 + GFK, C3D + MCB + DBN, the hierarchical classification fusion strategy (HCFS), and the cross-channel convolutional neural network (CCCNN), respectively. In addition, preliminary application experiments are carried out on our developed emotional social robot system, where the robot recognizes the emotions of eight volunteers based on their facial expressions and body gestures.

Item: Coupled Multimodal Emotional Feature Analysis Based on Broad-Deep Fusion Networks in Human-Robot Interaction (Institute of Electrical and Electronics Engineers Inc., 2024)
Authors: Chen, Luefeng; Li, Min; Wu, Min; Pedrycz, Witold; Hirota, Kaoru

A coupled multimodal emotional feature analysis (CMEFA) method based on broad-deep fusion networks, which divides multimodal emotion recognition into two layers, is proposed. First, facial emotional features and gesture emotional features are extracted using the broad and deep learning fusion network (BDFN). Considering that the two modalities are not completely independent of each other, canonical correlation analysis (CCA) is used to analyze and extract the correlation between the emotional features, and a coupling network is established for emotion recognition of the extracted bimodal features. Both simulation and application experiments are completed. According to the simulation experiments on the bimodal face and body gesture database (FABO), the recognition rate of the proposed method is 1.15% higher than that of support vector machine recursive feature elimination (SVMRFE) (without considering the unbalanced contribution of features). Moreover, the multimodal recognition rate of the proposed method is 21.22%, 2.65%, 1.61%, 1.54%, and 0.20% higher than that of the fuzzy deep neural network with sparse autoencoder (FDNNSA), ResNet-101 + GFK, C3D + MCB + DBN, the hierarchical classification fusion strategy (HCFS), and the cross-channel convolutional neural network (CCCNN), respectively. In addition, preliminary application experiments are carried out on our developed emotional social robot system, where the robot recognizes the emotions of eight volunteers based on their facial expressions and body gestures.

Item: Robust control of feeding speed for coal mine tunnel drilling machines (Institute of Electrical and Electronics Engineers Inc., 2024)
Authors: Liu, Xiao; Chen, Luefeng; Wu, Min; Cao, Weihua; Lu, Chengda; Pedrycz, Witold

Changes in coal seam hardness cause fluctuations in the feed resistance at the drill bit during drilling, leading to unstable feeding speed. This paper proposes a robust dynamic output feedback controller to suppress disturbances caused by variations in coal seam hardness in the feed system. First, an unknown parameter measuring coal seam hardness is introduced, and an uncertain model of the feeding system is established based on the finite element model of the drill string. By designing weighting functions based on industrial field requirements and constructing a generalized plant, the controller achieves loop shaping, reducing the low-frequency impact of coal seam hardness variations on the feed system and suppressing the system's resonance peak. Simulation results demonstrate that the controller effectively suppresses parameter variations and external disturbances caused by changes in coal seam hardness, achieving stable control of the drilling speed.
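The CCA step described in the CMEFA abstracts above couples two feature sets (facial and gesture) by projecting each onto directions of maximal mutual correlation. A minimal NumPy sketch of textbook CCA follows; it is an illustration of the general technique, not the authors' implementation, and the feature matrices in the demo are synthetic stand-ins for real facial/gesture features.

```python
import numpy as np

def cca(X, Y, n_components=2, reg=1e-6):
    """Canonical correlation analysis via SVD of the whitened cross-covariance.

    X: (n_samples, p) first modality, Y: (n_samples, q) second modality.
    Returns projection matrices Wx, Wy and the canonical correlations.
    """
    X = X - X.mean(axis=0)
    Y = Y - Y.mean(axis=0)
    n = X.shape[0]
    # Regularized covariance and cross-covariance matrices.
    Sxx = X.T @ X / n + reg * np.eye(X.shape[1])
    Syy = Y.T @ Y / n + reg * np.eye(Y.shape[1])
    Sxy = X.T @ Y / n

    def inv_sqrt(S):
        # Inverse matrix square root of a symmetric positive-definite matrix.
        vals, vecs = np.linalg.eigh(S)
        return vecs @ np.diag(1.0 / np.sqrt(vals)) @ vecs.T

    # Singular values of the whitened cross-covariance are the correlations.
    K = inv_sqrt(Sxx) @ Sxy @ inv_sqrt(Syy)
    U, s, Vt = np.linalg.svd(K)
    Wx = inv_sqrt(Sxx) @ U[:, :n_components]
    Wy = inv_sqrt(Syy) @ Vt.T[:, :n_components]
    return Wx, Wy, s[:n_components]

# Demo: two synthetic modalities driven by a shared 2-D latent signal.
rng = np.random.default_rng(0)
z = rng.standard_normal((500, 2))
X = z @ rng.standard_normal((2, 10)) + 0.1 * rng.standard_normal((500, 10))
Y = z @ rng.standard_normal((2, 8)) + 0.1 * rng.standard_normal((500, 8))
Wx, Wy, corrs = cca(X, Y)
# The coupled bimodal feature is the pair of projections X @ Wx, Y @ Wy.
```

Because both modalities share a latent signal, the leading canonical correlations come out close to 1; the projected features would then feed a downstream coupling/classification network.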
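The incremental learning mentioned in the CBLSTM abstract above, i.e. reconstructing the model without complete retraining, is a standard device in broad learning systems: when new nodes are appended, the pseudoinverse-based output weights are updated with a block (Greville-style) formula rather than recomputed from scratch. The sketch below shows that generic update, assuming plain least squares and new-node columns whose component orthogonal to the old nodes has full rank; the matrices are synthetic, not the paper's.

```python
import numpy as np

def incremental_update(A, A_pinv, W, H, Y):
    """Update W = pinv(A) @ Y after appending new node outputs H as extra
    columns of A, reusing the old pseudoinverse instead of recomputing it."""
    D = A_pinv @ H            # coordinates of H in the range of A
    C = H - A @ D             # part of H orthogonal to the range of A
    B = np.linalg.pinv(C)     # assumes C has full column rank
    new_pinv = np.vstack([A_pinv - D @ B, B])
    new_W = np.vstack([W - D @ (B @ Y), B @ Y])
    return np.hstack([A, H]), new_pinv, new_W

# Demo: start from 10 nodes, append 5 more, update the output weights.
rng = np.random.default_rng(1)
A = rng.standard_normal((100, 10))   # existing node-output matrix
Y = rng.standard_normal((100, 3))    # training targets
A_pinv = np.linalg.pinv(A)
W = A_pinv @ Y                       # initial output weights
H = rng.standard_normal((100, 5))    # outputs of newly added nodes
A2, A2_pinv, W2 = incremental_update(A, A_pinv, W, H, Y)
```

The updated weights `W2` match what a full `pinv(np.hstack([A, H])) @ Y` recomputation would give, but the update only costs operations on the new columns, which is why node addition in broad learning avoids full retraining.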