Understanding Complex-Valued Transformer for Modulation Recognition

RPG Seminar


Zoom Link: https://hku.zoom.us/j/95380440070

Abstract

Complex-valued convolutional neural networks (CVCNNs) have recently been applied to modulation recognition (MR), due to their ability to capture the relationship between the real and imaginary parts of the received signal. On the other hand, the transformer model has been shown to excel at MR thanks to its superior capability, compared with CNNs, to extract correlations among high-dimensional signals. It is a logical next step to ask whether a fully complex-valued transformer-based neural network (CVTNN) can bring further performance gains and, if so, where these gains come from. To answer these questions, this letter designs the building blocks of the CVTNN for MR, composed of a convolution embedding module, a complete transformer encoder, and a C2R classifier, and establishes the estimation error bound of the proposed CVTNN from an inductive bias perspective. We theoretically prove that the estimation error bound of the proposed CVTNN is lower than that of the real-valued transformer-based neural network (RVTNN) for MR. Simulation results further show that the proposed CVTNN outperforms the RVTNN and other benchmarks under different settings, which corroborates the theoretical analysis.
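As a minimal illustration of the property the abstract attributes to complex-valued networks, the sketch below implements a single complex filter tap as four real convolutions, showing how the real and imaginary (I/Q) parts of the signal are coupled rather than processed independently. This is a generic illustration under our own naming (`cv_conv`), not code from the paper.

```python
import numpy as np

def cv_conv(x_re, x_im, w_re, w_im):
    """Complex convolution via real ops:
    (w_re + j*w_im) * (x_re + j*x_im)
      = (w_re*x_re - w_im*x_im) + j*(w_im*x_re + w_re*x_im).
    Note each output part mixes BOTH input parts, which is the
    real/imaginary coupling that a pair of independent real-valued
    convolutions cannot express.
    """
    y_re = np.convolve(x_re, w_re, mode="valid") - np.convolve(x_im, w_im, mode="valid")
    y_im = np.convolve(x_re, w_im, mode="valid") + np.convolve(x_im, w_re, mode="valid")
    return y_re, y_im

# Toy received I/Q samples and one complex filter tap (illustrative values)
x = np.array([1 + 2j, 3 - 1j, 0.5 + 0.5j])
w = np.array([0.8 - 0.3j])

y_re, y_im = cv_conv(x.real, x.imag, w.real, w.imag)

# Cross-check against NumPy's native complex convolution
y_ref = np.convolve(x, w, mode="valid")
assert np.allclose(y_re + 1j * y_im, y_ref)
```

In a full CVCNN or CVTNN, every layer (convolutions, attention projections, etc.) applies this kind of complex arithmetic, which is one way to view the inductive bias the talk analyzes.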

Speaker

Mr. Jingreng Lei
Department of Electrical and Electronic Engineering
The University of Hong Kong

Speaker’s Biography

Jingreng Lei received the B.Eng. degree from Sun Yat-sen University, China, in 2023. He is currently working toward the MPhil degree at The University of Hong Kong, Hong Kong. His research interests include complex-valued neural networks, distributed optimization, and wireless communications.

All are welcome!
