Research problem:

    Bayesian inference with automatic model order selection


Brief Description:
   

Model order selection is a fundamental problem in signal processing and machine learning. A model that is too small cannot fit the data well, while a model that is too large leads to overfitting. Traditionally, a regularization term is added to the loss function to strike a balance between data fitting and model complexity, with the regularization parameter tuned to obtain the best performance. However, the ``best'' parameter varies widely across datasets and applications, and may not be effective for unseen data.
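
As a concrete illustration (a generic example, not taken from any of the works listed below), sparse linear regression balances the two terms as

    \min_{\mathbf{x}} \ \| \mathbf{y} - \mathbf{A}\mathbf{x} \|_2^2 + \lambda \| \mathbf{x} \|_1 ,

where the first term measures the data fit, the second term penalizes model complexity, and the regularization parameter \lambda must be re-tuned (e.g., by cross-validation) for every new dataset.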

 

Sparse Bayesian learning provides a tuning-free alternative. The idea is that if a suitable prior distribution is specified, the relative importance of the different components in the signal can be learnt from the data. Together with the learnt noise power, one can easily determine the proper model order. While the idea is simple, the algorithm derivation is usually not straightforward. In particular, in Bayesian statistics, if we want to estimate a certain parameter, the other parameters should be integrated out from the joint posterior distribution. Unfortunately, most of the time this integration cannot be performed analytically due to the complicated nature of the posterior distribution. To handle this problem, previous Bayesian analyses have mainly relied on Monte Carlo statistical methods, such as Markov chain Monte Carlo (MCMC) and Gibbs sampling, where a large number of random samples are generated from the joint distribution and marginalization is approximated by operations on the samples. Although sampling methods can approach the true posteriors as the number of samples approaches infinity, they are computationally demanding.
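
To make the sampling approach concrete, below is a minimal sketch of a Gibbs sampler for a sparse Bayesian linear model y = Ax + n, with a zero-mean Gaussian prior on each coefficient and Gamma hyperpriors on all precisions. The model, the function name sbl_gibbs, and the hyper-parameter values are illustrative assumptions, not the algorithm of any particular publication below.

    # Minimal Gibbs sampler for sparse Bayesian linear regression y = A x + n.
    # Illustrative sketch only: the model, priors and hyper-parameter values
    # are assumptions, not taken from the cited papers.
    import numpy as np

    def sbl_gibbs(A, y, num_samples=2000, a0=1e-6, b0=1e-6, c0=1e-6, d0=1e-6, seed=0):
        rng = np.random.default_rng(seed)
        N, M = A.shape
        alpha = np.ones(M)              # per-coefficient precisions (control sparsity)
        beta = 1.0                      # noise precision
        x_samples = []
        for _ in range(num_samples):
            # Draw x | alpha, beta, y  ~  N(mu, Sigma)
            Sigma = np.linalg.inv(beta * A.T @ A + np.diag(alpha))
            mu = beta * Sigma @ A.T @ y
            x = rng.multivariate_normal(mu, Sigma)
            # Draw alpha_i | x  ~  Gamma(a0 + 1/2, rate = b0 + x_i^2 / 2)
            alpha = rng.gamma(a0 + 0.5, 1.0 / (b0 + 0.5 * x**2))
            # Draw beta | x, y  ~  Gamma(c0 + N/2, rate = d0 + ||y - A x||^2 / 2)
            resid = y - A @ x
            beta = rng.gamma(c0 + 0.5 * N, 1.0 / (d0 + 0.5 * (resid @ resid)))
            x_samples.append(x)
        return np.array(x_samples)      # marginals are approximated by these samples

Each unknown is drawn in turn from its conditional distribution, and marginal quantities such as the posterior mean of x are approximated by averaging the stored samples, which is why a large number of samples, and hence substantial computation, is needed.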

 

Variational inference is another way to approximate posterior inference. It seeks a variational distribution that closely approximates the true posterior distribution. Under the commonly used mean-field constraint, the variational distribution takes a form in which marginalization can be easily carried out. This reduces the computational complexity significantly, as each update is in closed form.
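
For the same illustrative model, imposing the mean-field factorization q(x, alpha, beta) = q(x) q(alpha) q(beta) leads to the closed-form coordinate updates sketched below. As before, the model, the function name sbl_vi, and the hyper-parameter values are assumptions for illustration, not the exact algorithm of the publications listed later.

    # Minimal mean-field variational inference sketch for the same sparse
    # Bayesian linear model y = A x + n (illustrative assumptions only).
    import numpy as np

    def sbl_vi(A, y, num_iters=200, a0=1e-6, b0=1e-6, c0=1e-6, d0=1e-6):
        N, M = A.shape
        E_alpha = np.ones(M)            # E_q[alpha_i], per-coefficient precisions
        E_beta = 1.0                    # E_q[beta], noise precision
        for _ in range(num_iters):
            # q(x) = N(mu, Sigma): closed-form Gaussian update
            Sigma = np.linalg.inv(E_beta * A.T @ A + np.diag(E_alpha))
            mu = E_beta * Sigma @ A.T @ y
            # q(alpha_i) = Gamma(a0 + 1/2, b0 + E[x_i^2]/2), with E[x_i^2] = mu_i^2 + Sigma_ii
            E_x2 = mu**2 + np.diag(Sigma)
            E_alpha = (a0 + 0.5) / (b0 + 0.5 * E_x2)
            # q(beta) = Gamma(c0 + N/2, d0 + E[||y - A x||^2]/2)
            E_resid2 = np.sum((y - A @ mu) ** 2) + np.trace(A @ Sigma @ A.T)
            E_beta = (c0 + 0.5 * N) / (d0 + 0.5 * E_resid2)
        return mu, Sigma, E_alpha, E_beta

After convergence, coefficients whose learnt precision E_alpha[i] is very large contribute essentially nothing to the fit and can be pruned, which is how the model order is read off automatically without any tuning parameter.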

 

We have applied variational inference with automatic model order selection to various signal processing problems. The first is iterative joint doubly-selective channel estimation and data detection in wireless communication systems, where the model order corresponds to the channel length and the Doppler shift. The second is distributed estimation of the system state in power grids. The third is subspace identification for channel estimation in massive MIMO systems, where the model order is the number of paths the signal takes. Recently, we further extended the Bayesian inference framework to tensor decompositions, where the model order is the unknown tensor rank. This latest framework finds applications in blind CDMA receiver design, face image classification, object tracking in surveillance video, fluorescence data analysis, and email data mining.
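
The same pruning principle carries over to tensors: when a Bayesian CP model attaches a learnt precision to each column of the factor matrices, the columns that survive pruning determine the tensor rank. The toy post-processing step below illustrates the idea; the function name estimate_cp_rank, its inputs, and the threshold value are hypothetical.

    # Toy illustration of rank selection by pruning (hypothetical names and inputs).
    import numpy as np

    def estimate_cp_rank(factors, col_precisions, threshold=1e6):
        # factors: list of factor matrices sharing R columns
        # col_precisions: learnt per-column precisions of length R
        keep = col_precisions < threshold               # columns with non-negligible energy
        pruned_factors = [F[:, keep] for F in factors]  # reduced-rank factor matrices
        return int(np.sum(keep)), pruned_factors        # estimated rank and factors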

 
Related Publications:

 

Data detection in doubly-selective channels:

 

1. Jingrong Zhou, Jiayin Qin and Yik-Chung Wu, ``Variational Inference-based Joint Interference Mitigation and OFDM Equalization under High Mobility,'' IEEE Signal Processing Letters, Vol. 22, no. 11, pp. 1970-1974, Nov 2015.

2. Ke Zhong, Yik-Chung Wu, and Shaoqian Li, ``Signal Detection for OFDM-Based Virtual MIMO Systems under Unknown Doubly Selective Channels, Multiple Interferences and Phase Noises,'' IEEE Trans. on Wireless Communications, Vol. 12, no. 10, pp. 5309-5321, Oct 2013.

3. Lanlan He, Yik-Chung Wu, Shaodan Ma, Tung-Sang Ng and H. Vincent Poor, ``Superimposed Training Based Channel Estimation and Data Detection for OFDM Amplify-and-Forward Cooperative Systems under High Mobility,'' IEEE Trans. on Signal Processing, Vol. 60, no. 1, pp. 274-284, Jan 2012.

4. Lanlan He, Shaodan Ma, Yik-Chung Wu, Yiqing Zhou, Tung-Sang Ng, and H. Vincent Poor, ``Pilot-Aided IQ Imbalance Compensation for OFDM Systems Operating over Doubly Selective Channels,'' IEEE Trans. on Signal Processing, Vol. 59, no. 5, pp. 2223-2233, May 2011.

 

Power system state estimation:

5. Jian Du, Shaodan Ma, Yik-Chung Wu, and H. Vincent Poor, ``Distributed Hybrid Power State Estimation under PMU Sampling Phase Errors,'' IEEE Trans. on Signal Processing, Vol. 62, no. 16, pp. 4052-4063, Aug 2014.

Channel estimation in massive MIMO systems:

6. Lei Cheng, Chengwen Xing, and Yik-Chung Wu, ``Irregular Array Manifold Aided Channel Estimation in Massive MIMO Communications,'' IEEE Journal of Selected Topics in Signal Processing, Vol. 13, no. 5, pp. 974-988, Sep 2019.

7. Lei Cheng, Yik-Chung Wu, Jianzhong (Charlie) Zhang, and Lingjia Liu, ``Subspace Identification for DOA Estimation in Massive / Full-dimension MIMO System: Bad Data Mitigation and Automatic Source Enumeration,'' IEEE Trans. on Signal Processing, Vol. 63, no. 22, pp. 5897-5909, Nov 2015.

General tensor decompositions:

8. Lei Cheng, Xueke Tong, Shuai Wang, Yik-Chung Wu, and H. Vincent Poor, ``Learning Nonnegative Factors from Tensor Data: Probabilistic Modeling and Inference Algorithm,'' IEEE Trans. on Signal Processing, Vol. 68, pp. 1792-1806, 2020, doi: 10.1109/TSP.2020.2975353.

9. Lei Cheng, Yik-Chung Wu, and H. Vincent Poor, ``Scaling Probabilistic Tensor Canonical Polyadic Decomposition to Massive Data,'' IEEE Trans. on Signal Processing, Vol. 66, no. 21, pp. 5534-5548, Nov 2018.

10. Lei Cheng, Yik-Chung Wu, and H. Vincent Poor, ``Probabilistic Tensor Canonical Polyadic Decomposition with Orthogonal Factors,'' IEEE Trans. on Signal Processing, Vol. 65, no. 3, pp. 663-676, Feb 2017.