The rapid development of memory technologies has introduced challenges to the continued scaling of memory devices in density and access speed. Many of these challenges can be mitigated by coding techniques, which optimize the representation of data within these memories. Hence, this research focuses on the analysis and design of error-correcting codes for high-speed memory devices and systems. To this end, we develop three models that allow an efficient analysis of coding gains in such devices and systems, using tools from information theory and coding theory.
The first model we develop is termed the q-ary partial erasure channel (QPEC). The QPEC has a q-ary input, and its output is either the input symbol or a set of M (2<=M<=q) symbols containing the input symbol. This channel generalizes the binary erasure channel and mimics situations in which a channel output symbol is known only partially; that is, the output symbol contains some ambiguity but is not fully erased. Our investigation concentrates on the performance of GF(q) low-density parity-check (LDPC) codes used over this channel, for which we propose an efficient low-complexity decoder. We derive the exact QPEC density-evolution equations that govern the decoding process, and suggest a cardinality-based approximation as a proxy. We then provide several bounds and approximations on the proxy density-evolution equations, and verify their tightness through numerical experiments. Finally, we provide tools for the practical design of LDPC codes for use over the QPEC.
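The QPEC transition behavior can be sketched as a small simulation. This is a minimal illustration, not the thesis's formal definition: the partial-erasure probability `eps` and the uniform choice of the other M-1 candidate symbols are assumptions introduced here for concreteness.

```python
import random

def qpec_output(x, q, M, eps):
    """Simulate one use of the q-ary partial erasure channel (QPEC).

    With probability 1 - eps the input symbol x is received intact
    (a singleton output set); with probability eps it is partially
    erased, and the receiver sees a set of M symbols containing x.
    The M - 1 extra symbols are drawn uniformly here (an assumption).
    """
    if random.random() >= eps:
        return {x}  # no erasure: the symbol is fully known
    others = random.sample([s for s in range(q) if s != x], M - 1)
    return {x} | set(others)
```

Note that setting M = q recovers a conventional full erasure (the output carries no information), while M = 2 with q = 2 reduces to the binary erasure channel.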
The second model is the q-ary multi-bit channel (QMBC). In the QMBC, one symbol bit is provided in each measurement step, starting from the most significant bit. An error event occurs when not all of the symbol bits are known. To deal with such error events, we use GF(q) LDPC codes and analyze their decoding performance. We begin with iterative-decoding threshold analysis and derive edge-label distributions that maximize the decoding threshold. We then move to finite-length iterative-decoding analysis and propose an edge-labeling algorithm for improved decoding performance. Next, we provide finite-length maximum-likelihood decoding analysis for both the standard non-binary random ensemble and LDPC ensembles. Finally, we demonstrate by simulations that the proposed edge-labeling algorithm improves finite-length decoding performance by orders of magnitude.
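The QMBC measurement process above can be illustrated as follows. This sketch assumes q is a power of two and takes the number of completed measurement steps `t` as a given parameter (in practice the number of successful reads may itself be random); both are simplifying assumptions for illustration.

```python
def qmbc_output(x, q, t):
    """Sketch of the q-ary multi-bit channel (QMBC).

    The stored symbol x has m = log2(q) bits, read one per measurement
    step starting from the most significant bit. After t completed
    steps, the t most significant bits of x are known; the function
    returns the set of symbols consistent with those revealed bits.
    An error event corresponds to t < m (the set is not a singleton).
    """
    m = q.bit_length() - 1      # bits per symbol, assuming q = 2^m
    known = x >> (m - t)        # the t most significant bits of x
    lo = known << (m - t)       # smallest symbol matching those bits
    return set(range(lo, lo + (1 << (m - t))))
```

For example, with q = 8 and t = 2, the receiver has narrowed the symbol down to two candidates sharing the same two most significant bits; only t = 3 resolves it fully.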
In the last part of the thesis, we propose a new model of a coded switch. In this model, packets are coded upon write to the switch memory, thus giving more flexibility at read time to achieve higher utilization of the memory. We present a detailed study of coded network switches, and in particular how to design them to maximize their throughput advantage over standard uncoded switches. Toward that objective, we contribute a variety of algorithmic and analytical tools to improve and evaluate throughput performance. The most interesting finding is that the placement of packets in the switch memory is the key to both high performance and algorithmic efficiency.
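The read-time flexibility gained by coding on write can be sketched with a toy XOR example. This is a hypothetical three-bank layout chosen here for illustration, not the switch design studied in the thesis: two packets and their XOR parity are spread across banks so that any two available banks suffice to recover both packets.

```python
def xor_bytes(a, b):
    """Bytewise XOR of two equal-length packets."""
    return bytes(x ^ y for x, y in zip(a, b))

def coded_write(a, b):
    """Write two packets coded across three memory banks:
    the packets themselves plus an XOR parity packet."""
    return {"bank0": a, "bank1": b, "bank2": xor_bytes(a, b)}

def coded_read(banks, available):
    """Recover both packets from any two available banks.
    This is the read-time flexibility: if one bank is busy
    serving another request, the read can still proceed."""
    if "bank0" in available and "bank1" in available:
        return banks["bank0"], banks["bank1"]
    if "bank0" in available:  # bank1 busy: rebuild b from parity
        return banks["bank0"], xor_bytes(banks["bank0"], banks["bank2"])
    # bank0 busy: rebuild a from parity
    return xor_bytes(banks["bank1"], banks["bank2"]), banks["bank1"]
```

Even in this toy setting, which packets land in which banks determines which bank-busy patterns a read can tolerate, echoing the finding that packet placement drives both performance and algorithmic efficiency.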