Ph.D. Thesis


Ph.D. Student: Tamir Ran
Subject: Typical Random Codes
Department: Department of Electrical and Computer Engineering
Supervisor: Prof. Neri Merhav
Full Thesis text: English Version


Abstract

We define the error exponent of the typical random code as the long-block limit of the negative normalized expectation of the logarithm of the error probability of the random code, as opposed to the traditional random coding error exponent, which is the limit of the negative normalized logarithm of the expectation of the error probability. We believe that the error exponent of the typical random code is the more relevant performance metric, because it captures the true exponential behavior of the probability of error, as opposed to the random coding error exponent, which is dominated by the relatively poor codes of the ensemble.
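
In symbols (a sketch in our own notation, where $\mathcal{C}_n$ denotes a randomly drawn code of block length $n$, $R$ its rate, and $P_e(\mathcal{C}_n)$ its error probability), the two exponents read

\[
E_{\mathrm{trc}}(R)=\lim_{n\to\infty}-\frac{1}{n}\,\mathbb{E}\left[\ln P_e(\mathcal{C}_n)\right],
\qquad
E_{\mathrm{r}}(R)=\lim_{n\to\infty}-\frac{1}{n}\ln\mathbb{E}\left[P_e(\mathcal{C}_n)\right].
\]

Since the logarithm is concave, Jensen's inequality gives $\mathbb{E}[\ln P_e(\mathcal{C}_n)]\le\ln\mathbb{E}[P_e(\mathcal{C}_n)]$, and hence $E_{\mathrm{trc}}(R)\ge E_{\mathrm{r}}(R)$: the typical code can only be better than what the ensemble average suggests.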

Moreover, in the random coding scenario, the code is selected randomly only once and then remains fixed from that point onward. Under such circumstances, it is natural to ask what the error exponent associated with the typical, randomly selected code would be. This thesis contains three main contributions concerning the error exponent of the typical random code. The first is a large-deviations analysis of the logarithmic error probability. It is proved that the probability of randomly picking a code whose exponent is strictly lower than the error exponent of the typical random code converges to zero exponentially fast, while the probability of randomly drawing a code whose exponent is strictly higher than the error exponent of the typical random code converges to zero double-exponentially fast.
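
Schematically, writing $E(\mathcal{C}_n)$ for the exponent of the realized code (again our notation; the rate functions $a(\epsilon)$ and $b(\epsilon)$ below are illustrative placeholders rather than the thesis's exact expressions), this dichotomy reads

\[
\Pr\left\{E(\mathcal{C}_n)\le E_{\mathrm{trc}}(R)-\epsilon\right\}\approx e^{-n\,a(\epsilon)},
\qquad
\Pr\left\{E(\mathcal{C}_n)\ge E_{\mathrm{trc}}(R)+\epsilon\right\}\approx e^{-e^{n\,b(\epsilon)}},
\]

for every $\epsilon>0$ and some positive $a(\epsilon)$ and $b(\epsilon)$.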

Next, we investigate the error exponent of the typical random code in the scenario of source coding with side information at the decoder. We study the semi-deterministic code ensemble, which is a certain variant of the ordinary random binning code ensemble. We show that the performance under optimal decoding can also be attained by certain universal decoders, such as the stochastic likelihood decoder with an empirical entropy metric. We also discuss the trade-offs between the error exponent and the excess-rate exponent for the typical random semi-deterministic code and characterize its optimal rate function.
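
To illustrate the kind of universal decoder meant here (a sketch under our own assumptions, not necessarily the exact rule analyzed in the thesis): given the side information sequence $y$ and the bin $\mathcal{B}$ that contains the compressed source sequence, a stochastic likelihood decoder with an empirical entropy metric draws its estimate according to

\[
\Pr\left\{\hat{x}\,\middle|\,y,\mathcal{B}\right\}\propto e^{-n\beta\hat{H}(\hat{x}|y)},
\qquad \hat{x}\in\mathcal{B},
\]

where $\hat{H}(\hat{x}|y)$ is the conditional empirical entropy induced by the joint empirical distribution of $(\hat{x},y)$ and $\beta\ge 0$ is a design parameter; letting $\beta\to\infty$ recovers a deterministic minimum conditional empirical entropy decoder.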

In the third part of this thesis, we discuss error exponents in the bee identification problem, which is the problem of correctly identifying a massive amount of data that has been shuffled and corrupted by noise. We derive various error exponents under two different decoding rules. Under naive decoding, which decodes each bee independently of the others, we analyze a general discrete memoryless channel and a relatively wide family of stochastic decoders. Bounds on the random coding error exponent are derived and proved to coincide at relatively high coding rates. We propose a lower bound on the error exponent of the typical random code and derive a third bound, which is related to expurgated codes.
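
To make the naive rule concrete (a sketch in our own notation): with $m$ bees carrying codewords $x_1,\ldots,x_m$ and noisy observations $y_1,\ldots,y_m$ arriving in an unknown order, a naive maximum likelihood decoder matches each observation independently,

\[
\hat{i}(j)=\operatorname*{arg\,max}_{1\le i\le m} W(y_j|x_i),
\qquad j=1,\ldots,m,
\]

where $W$ is the channel; a stochastic decoder instead draws $\hat{i}(j)$ at random with probabilities tilted by a similar metric. The resulting assignment need not form a valid permutation, which is precisely the structure that a decoder which decodes all bees jointly can exploit.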

Moving further, we derive error exponents under optimal decoding, focusing on the relatively wide family of symmetric channels and on the maximum likelihood decoder. We first propose a random coding lower bound, and then an improved bound that stems from an expurgation process.