This lesson introduces you to the steps involved in converting a signal from analog form, that is, continuous in both time and amplitude, to a form that can be manipulated with a digital computer. Digital computers represent numbers using a finite number of bits, which implies that only a finite number of signal levels can be represented. An analog-to-digital converter samples the signal, quantizes its amplitude, and codes the quantized amplitude in a binary format for further manipulation by a computer. The errors introduced by quantization can have a dramatic effect on signal processing, especially when a relatively small number of levels is used to represent the amplitude.
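To make the effect of quantization concrete, the sketch below (a hypothetical helper, not code from this lesson) uniformly quantizes a sampled sine wave using only 3 bits, i.e. 8 amplitude levels, and measures the resulting error. With so few levels the quantization error is an appreciable fraction of the signal's amplitude:

```python
import numpy as np

def uniform_quantize(x, bits, xmin=-1.0, xmax=1.0):
    """Quantize samples to 2**bits uniformly spaced levels over [xmin, xmax]."""
    levels = 2 ** bits
    step = (xmax - xmin) / levels            # quantization step size
    # Round each sample to the nearest level index, then map back to amplitude.
    idx = np.clip(np.round((x - xmin) / step), 0, levels - 1)
    return xmin + idx * step

# Sample a 1 Hz sine wave at 1 kHz, then quantize with 3 bits (8 levels).
t = np.arange(0.0, 1.0, 1e-3)
x = np.sin(2 * np.pi * t)
xq = uniform_quantize(x, bits=3)
err = x - xq

# With only 8 levels the error is large; it is bounded by the step size.
step = 2.0 / 8
print(f"max |error| = {np.abs(err).max():.4f}, step size = {step:.4f}")
```

Increasing `bits` halves the step size with each extra bit, which is why quantization error shrinks rapidly as more levels are used.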