Interpolation by Convolutional Codes, Overload Distortion, and the Erasure Channel.

01 January 1999


This paper is motivated by sequence-based methods of quantization, specifically trellis coded quantization as described by Marcellin and Fischer. In this method, n bits specify one of 2^n different quantizers, and consecutive outputs of a rate k/n convolutional code specify admissible sequences of quantizers. We investigate how closely randomly generated binary source sequences can be matched by codewords from a convolutional code. What distinguishes this paper from prior work is that a randomly selected subsequence with density lambda is to be approximated as closely as possible. This subsequence of marked bits might indicate when a source sample is generated from the tail of a distribution (an overload point). The capacity of a convolutional code to interpolate the marked sequence is a measure of the effectiveness with which the trellis code handles overload distortion. We quantify this capacity by means of a Markov chain on subsets of trellis states. We also investigate the effect of memory on the probability of perfect interpolation, and we calculate the residual rate on the unmarked bits. The same Markov chain methods serve to analyze the performance of convolutional codes on the pure erasure channel.
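To make the state-subset idea concrete, here is a minimal sketch, not taken from the paper, that checks whether a toy rate-1/2, memory-2 convolutional code (assumed generators 7 and 5 in octal) can perfectly interpolate the marked bits of a random binary sequence. It tracks the subset of trellis states consistent with every marked bit seen so far; the evolution of that subset is the kind of Markov chain on subsets of trellis states mentioned above, and the Monte Carlo loop estimates the probability of perfect interpolation as a function of the marking density lambda. The code choice and parameters are illustrative assumptions only.

```python
# Sketch (not from the paper): can a rate-1/2, memory-2 convolutional code
# with assumed generators (7, 5) octal interpolate the marked bits of a
# random binary sequence?  We track the subset of trellis states reachable
# while agreeing with the source at every marked position; perfect
# interpolation succeeds iff this subset never becomes empty.
import random

G = (0b111, 0b101)      # generator polynomials (7, 5) octal, memory 2
M = 2                   # number of state (memory) bits

def step(state, u):
    """Shift input bit u into the register; return (next_state, output bits)."""
    reg = (u << M) | state                     # newest bit u, then the M state bits
    out = tuple(bin(reg & g).count("1") % 2 for g in G)
    return reg >> 1, out                       # drop the oldest bit

def interpolates(source, marked):
    """True iff some codeword matches `source` at every marked position."""
    states = {0}                               # start in the all-zero state
    for t in range(0, len(source), 2):         # 2 output bits per trellis step
        nxt = set()
        for s in states:
            for u in (0, 1):                   # try both input bits
                s2, out = step(s, u)
                if all(not marked[t + i] or out[i] == source[t + i]
                       for i in range(2)):
                    nxt.add(s2)
        if not nxt:                            # subset empty: interpolation fails
            return False
        states = nxt
    return True

# Monte Carlo estimate of the probability of perfect interpolation
# as a function of the marking density lambda (illustrative parameters).
random.seed(1)
T, trials = 100, 1000                          # trellis steps, trials per lambda
for lam in (0.3, 0.5, 0.7):
    hits = 0
    for _ in range(trials):
        src = [random.randint(0, 1) for _ in range(2 * T)]
        mrk = [random.random() < lam for _ in range(2 * T)]
        hits += interpolates(src, mrk)
    print(f"lambda = {lam:.1f}: P(perfect interpolation) ~ {hits / trials:.3f}")
```

The per-sequence check is only a subset-reachability computation; averaging over random sources and random markings is what turns the subset evolution into a Markov chain whose absorption behavior governs the interpolation capacity. The same bookkeeping, with marked positions read as unerased symbols, describes decoding on the pure erasure channel.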