State Learning and Mixing in Entropy of Hidden Markov Processes and the Gilbert-Elliott Channel
01 January 1999
Hidden Markov processes such as the Gilbert-Elliott channel have an infinite dependency structure; therefore, computing entropy and channel capacity requires knowledge of the infinite past. In practice, such calculations are often approximated with a finite past. It is commonly assumed that the required past grows without bound as the memory of the underlying Markov chain increases; we show that this is not necessarily true. We derive an exponentially decreasing upper bound on the accuracy of the finite-past approximation that is much tighter than existing upper bounds when the Markov chain does not mix at all. Our methods are demonstrated on the Gilbert-Elliott channel, where we prove that a prescribed finite-past accuracy is quickly reached, independently of the Markovian memory. We conclude that the past can be used either to learn the channel state when the memory is high or to wait until the states mix when the memory is low. Implications for computing and achieving capacity on the Gilbert-Elliott channel are discussed.
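As a concrete illustration of the finite-past approximation discussed above, the following is a minimal sketch that computes H_k = H(Z_{k+1} | Z_1^k), the conditional entropy of the next channel error symbol given a length-k past, for a Gilbert-Elliott channel by exhaustive enumeration. The transition matrix A, the per-state error probabilities p_err, and all names are illustrative assumptions, not values or code from the paper. For a stationary process, H_k decreases monotonically to the entropy rate as k grows; the paper's contribution concerns how fast that gap closes.

```python
import itertools
import numpy as np

# Gilbert-Elliott channel: a two-state hidden Markov error process.
# Parameter values below are hypothetical, chosen only for illustration.
A = np.array([[0.99, 0.01],     # state transitions: Good -> {Good, Bad}
              [0.05, 0.95]])    #                    Bad  -> {Good, Bad}
p_err = np.array([0.01, 0.30])  # bit-error probability in Good / Bad state

# Stationary distribution of the state chain (left Perron eigenvector of A).
evals, evecs = np.linalg.eig(A.T)
pi = np.real(evecs[:, np.argmax(np.real(evals))])
pi /= pi.sum()

def emission(z):
    """Diagonal emission matrix for error symbol z in {0, 1}."""
    probs = p_err if z == 1 else 1.0 - p_err
    return np.diag(probs)

def block_entropy(m):
    """H(Z_1^m) in bits, by enumerating all 2^m error patterns."""
    if m == 0:
        return 0.0
    H = 0.0
    for zs in itertools.product([0, 1], repeat=m):
        # Forward recursion: alpha_t[j] = P(z_1..z_t, S_t = j).
        alpha = pi @ emission(zs[0])
        for z in zs[1:]:
            alpha = (alpha @ A) @ emission(z)
        p = alpha.sum()           # P(Z_1^m = zs)
        H -= p * np.log2(p)
    return H

# Finite-past approximation H_k = H(Z_1^{k+1}) - H(Z_1^k); it decreases
# to the entropy rate of the error process as the past length k grows.
for k in range(8):
    print(f"H_{k} = {block_entropy(k + 1) - block_entropy(k):.6f} bits")
```

Exhaustive enumeration costs O(2^k), so this sketch is only usable for short pasts; it serves to show the quantity being approximated, with the exponential decay of H_k toward the entropy rate being exactly the behavior the paper's bounds quantify.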