Towards theoretically-founded learning-based denoising
01 January 2019
Denoising a stationary process $(X_i)_{i\in\mathbb{Z}}$ corrupted by additive white Gaussian noise $(Z_i)_{i\in\mathbb{Z}}$, i.e., recovering $X^n$ from $Y^n = X^n + Z^n$, is a classic and fundamental problem in information theory and statistical signal processing. Theoretically founded and computationally efficient denoising algorithms that are applicable to general sources are yet to be found. In a Bayesian setup, given the distribution of $X^n$, a minimum mean square error (MMSE) denoiser computes $\mathrm{E}[X^n \mid Y^n]$. However, for general sources, computing $\mathrm{E}[X^n \mid Y^n]$ is computationally very challenging, if not infeasible. In this paper, starting from a Bayesian setup, a novel denoiser, namely the quantized maximum a posteriori (Q-MAP) denoiser, is proposed and its asymptotic performance is analyzed. Both for memoryless sources and for structured first-order Markov sources, it is shown that, asymptotically, as $\sigma^2$ (the noise variance) converges to zero, $\frac{1}{\sigma^2}\mathrm{E}\left[\left(X_i - \hat{X}_i^{\text{Q-MAP}}\right)^2\right]$ converges to the information dimension of the source. For the studied memoryless sources, this limit is known to be optimal. A key advantage of the Q-MAP denoiser is that, unlike an MMSE denoiser, it highlights the key properties of the source distribution that are to be used in its denoising. This naturally leads to a learning-based denoising algorithm. Using the ImageNet database for training, initial simulation results exploring the performance of such a learning-based denoiser in image denoising are presented.
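As a toy illustration of the asymptotic quantity above (a sketch of the general idea, not the paper's Q-MAP denoiser): for a memoryless Bernoulli(1/2) source, which is discrete and hence has information dimension 0, the scalar MMSE denoiser $\mathrm{E}[X \mid Y]$ has a closed form, and a Monte Carlo estimate shows the normalized MMSE $\frac{1}{\sigma^2}\mathrm{E}[(X - \mathrm{E}[X \mid Y])^2]$ shrinking toward 0 as $\sigma^2 \to 0$. The function name and parameters here are illustrative choices, not from the paper.

```python
import numpy as np

def normalized_mmse(sigma, n=200_000, seed=0):
    """Monte Carlo estimate of E[(X - E[X|Y])^2] / sigma^2 for a
    Bernoulli(1/2) source X observed through Y = X + Z, Z ~ N(0, sigma^2)."""
    rng = np.random.default_rng(seed)
    x = rng.integers(0, 2, size=n).astype(float)
    y = x + sigma * rng.normal(size=n)
    # Posterior P(X=1 | Y=y) for equiprobable {0,1}: the likelihood ratio
    # exp(-(y-1)^2/(2 sigma^2)) / exp(-y^2/(2 sigma^2)) gives a logistic in y.
    p1 = 1.0 / (1.0 + np.exp(-(2.0 * y - 1.0) / (2.0 * sigma**2)))
    # E[X|Y=y] = P(X=1|Y=y), so the squared error is (x - p1)^2.
    return np.mean((x - p1) ** 2) / sigma**2

# For a discrete source the normalized MMSE should approach the
# information dimension, which is 0, as the noise variance vanishes.
print(normalized_mmse(0.5))
print(normalized_mmse(0.1))
```

In this sketch the normalized MMSE at $\sigma = 0.1$ is far below its value at $\sigma = 0.5$, consistent with convergence to the information dimension 0 of a discrete source.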