Rate vs. Fidelity for the Binary Source
01 March 1977
By S. P. LLOYD

(Manuscript received May 5, 1976)

Errors are deliberately introduced into the output of a binary message source to reduce its entropy rate. The errors depend on the source sequence in a deterministic, shift-invariant manner. The tradeoff between the error rate permitted and the reduction of entropy rate is of interest. It is shown that the ideal bound cannot be attained. If the errors are required to be produced causally, then a bound stronger than the ideal bound takes over. The quantities of interest are found explicitly for the example: change all 0's in 0-runs of length 1 to 1's.

If a transmission channel has capacity C bits/second and a message source has entropy rate H bits/second satisfying H ≤ C, then the source can be encoded, fed to the channel, decoded at the channel output, and recovered essentially without error after such handling. Rate-distortion theory is concerned with the case H > C; we try to minimize some measure of the errors that are necessarily present.¹ We treat here a special class of systems in which the errors are deliberately introduced before submission to the channel, to reduce the entropy rate to that of the channel; the mutilated source is then handled without further error by the channel. The usual treatment involves the use of block codes, but we will examine the more interesting sliding (or shift-invariant) codes.

The source in Fig. 1 emits letters x_n, −∞ < n < ∞, at rate 1 per unit time. The letters are drawn from the alphabet A = {0, 1} with probability distribution P{x_n = 0} = P{x_n = 1} = 1/2, the same for all n, and the draws are statistically independent.
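The example transformation named in the abstract — change every 0 that forms a 0-run of length 1 into a 1 — can be simulated directly. The following is a minimal Python sketch, not from the paper: it applies the sliding (shift-invariant) map to a finite stretch of i.i.d. equiprobable bits and measures the empirical error rate. For this source, a given position is an isolated 0 (a 1, then a 0, then a 1) with probability (1/2)^3 = 1/8, so the error rate should come out near 0.125.

```python
import random

def mutilate(bits):
    """Sliding, shift-invariant code from the paper's example:
    change every 0 in a 0-run of length 1 (an isolated 0 flanked
    by 1's) into a 1. The decision at position n depends only on
    the window (x_{n-1}, x_n, x_{n+1})."""
    out = list(bits)
    for n in range(1, len(bits) - 1):
        if bits[n] == 0 and bits[n - 1] == 1 and bits[n + 1] == 1:
            out[n] = 1
    return out

# Source model of Fig. 1: i.i.d. letters with P{x_n = 0} = P{x_n = 1} = 1/2.
rng = random.Random(0)  # fixed seed for reproducibility
x = [rng.randrange(2) for _ in range(100_000)]
y = mutilate(x)

# Fraction of positions where an error was introduced.
err = sum(a != b for a, b in zip(x, y)) / len(x)
```

Note that the map only changes 0's to 1's, never the reverse, and it is not causal: the decision at position n looks one letter ahead, which is why the paper's causal variant is treated separately.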