The Signal Processing Foundations series of 16 lessons begins with the philosophy of the field. Next you will discover the basic notation and terminology, and the series also introduces methods for describing the interaction between signals and signal-processing systems. Understanding the philosophy of signal processing will help you follow the context and rationale for the different methods you encounter later. Signal processing has developed its own language for clearly communicating important concepts, and Signal Processing Foundations will teach you the cornerstone vocabulary of the field. You will also be introduced to several mathematical tools for relating the input of a signal-processing system to its output; different tools provide different perspectives on this interaction and play different roles in signal processing.
This series of lessons builds on several of the concepts introduced in the Foundations series. It concerns time-domain descriptions for the characteristics of linear, time-invariant (LTI) systems. You will learn the origins and properties of convolution for describing LTI systems in terms of the impulse response and a procedure for evaluating convolution. You will also learn how differential and difference equations are used to represent LTI systems and what they reveal about system behavior. If you have no prior experience with LTI systems, then this series is designed to efficiently teach you the knowledge you need for future study.
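As a small preview of convolution in action (the 3-tap moving-average filter here is an illustrative choice, not one from the lessons), the output of an LTI system is the convolution of its impulse response with the input:

```python
import numpy as np

# Impulse response of a hypothetical 3-tap moving-average filter
h = np.array([1/3, 1/3, 1/3])

# Input signal: a unit step of length 8
x = np.ones(8)

# LTI system output: y[n] = sum over k of h[k] * x[n - k]
y = np.convolve(x, h)

# Once the filter window fills with ones, the moving average settles at 1
```

The transient at the start of `y` (values 1/3 and 2/3) is the filter "filling up" with the step, a detail the impulse-response view makes easy to predict.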
This series of lessons reviews the basics of Fourier transforms and series. You will learn the details of how to represent signals in the frequency domain and the properties of Fourier representations. You will also gain an understanding of how to use Fourier methods to analyze interactions between signals and systems. This series is designed to efficiently teach you the knowledge you need to use and understand Fourier methods in signal processing.
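A minimal sketch of the frequency-domain view (the signal length and frequency are illustrative choices): a sinusoid whose frequency lands exactly on an FFT bin concentrates all of its energy at that bin and its mirror image.

```python
import numpy as np

N = 64
n = np.arange(N)
x = np.cos(2 * np.pi * 5 * n / N)   # exactly 5 cycles in N samples

X = np.fft.fft(x)
peak_bin = np.argmax(np.abs(X))     # bin 5, with magnitude N/2
```

The two nonzero bins (5 and N-5) reflect that a real cosine is a sum of two complex exponentials at frequencies ±5.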
All signals in the physical world, e.g., light, sound, seismic waves, and so on, have continuous independent variables. These signals must be sampled to convert them to a sequence of numerical values prior to computer-based signal processing. The Sampling and Reconstruction series of 15 lessons introduces you to the requirements on sampling in order to ensure a unique representation. You will learn to use the Fourier transform as a tool for analyzing the effect of sampling in the frequency domain. Much of the series will teach you practical issues associated with sampling and techniques for addressing them, including anti-aliasing, oversampling, anti-imaging, upsampling, and downsampling. Finally, you will learn how to model the apparent noise that is introduced when representing the amplitude of each sample with a finite number of bits.
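Aliasing is easy to demonstrate numerically. In this sketch (the rates are illustrative choices), a 7 Hz sine sampled at 10 Hz violates the Nyquist criterion and produces exactly the same samples as a -3 Hz sine, since 7 = 10 - 3:

```python
import numpy as np

fs = 10.0           # sampling rate in Hz (illustrative)
n = np.arange(20)   # sample indices

# A 7 Hz sine sampled at 10 Hz: 7 > fs/2, so aliasing occurs
x_fast = np.sin(2 * np.pi * 7 * n / fs)

# The alias: a -3 Hz sine gives identical samples
x_alias = np.sin(2 * np.pi * (-3) * n / fs)

# The two sampled signals are indistinguishable
```

No processing after sampling can tell these two signals apart, which is why anti-aliasing filtering must happen before the sampler.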
The discrete Fourier transform or DFT is the frequency domain workhorse of signal processing. It is the only Fourier representation that can be evaluated with a computer. In this series of 13 lessons you will learn how the DFT is related to the discrete-time Fourier transform, and how the DFT can be used to approximate the Fourier transform. You will learn the principles behind the fast Fourier transform algorithm for efficiently computing the DFT. You will also learn the principles of circular convolution and how to implement filtering using the DFT. This series is essential for everyone interested in spectral analysis of data or any computational Fourier analysis application.
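The link between DFT multiplication and circular convolution can be checked in a few lines; the length-8 random signals here are arbitrary illustrative inputs:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal(8)
h = rng.standard_normal(8)

# Direct evaluation of the circular-convolution definition:
# y[n] = sum over k of h[k] * x[(n - k) mod 8]
y_direct = np.array([sum(h[k] * x[(n - k) % 8] for k in range(8))
                     for n in range(8)])

# The same result via the DFT (computed here with the FFT algorithm):
# multiplication of DFTs equals circular convolution in time
y_dft = np.real(np.fft.ifft(np.fft.fft(x) * np.fft.fft(h)))
```

The fact that the DFT gives circular rather than ordinary convolution is exactly why zero-padding is needed when implementing linear filtering with the FFT.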
The z-transform is an important signal-processing tool for analyzing the interaction between signals and systems. A significant advantage of the z-transform over the discrete-time Fourier transform is that it exists for many signals that do not have a discrete-time Fourier transform. Thus, it is a more general analysis tool. In this series of 13 lessons you will learn how to work with the z-transform and use it to characterize signal-processing systems. You will learn how the poles and zeros of a system tell us whether the system can be both stable and causal, and whether it has a stable and causal inverse system. You will also learn how the pole and zero locations of a system give us insight into the nature of its frequency and impulse responses. The insights gained with the z-transform are particularly useful for designing frequency-selective filters.
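A taste of the pole-location insight (the pole value 0.9 is an illustrative choice): a single pole close to z = 1 produces large gain at low frequencies and small gain near w = pi, i.e., a lowpass response.

```python
import numpy as np

# One-pole system H(z) = 1 / (1 - p z^-1) with pole at p = 0.9
def H(w, p=0.9):
    zinv = np.exp(-1j * w)
    return 1.0 / (1.0 - p * zinv)

# Gain at DC: 1 / (1 - 0.9) = 10; gain at Nyquist: 1 / (1 + 0.9)
gain_dc = abs(H(0.0))
gain_pi = abs(H(np.pi))
```

Moving the pole toward the unit circle sharpens the peak; moving it toward the origin flattens the response, which is the kind of geometric reasoning this series develops.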
This series of four lessons sets the context for the subsequent two series on designing frequency-selective filters. You will learn the different types of frequency-selective filters and the difference between infinite and finite impulse response (IIR and FIR) filters. You will also learn how group delay characterizes the phase distortion introduced by a filter and how to implement zero-phase filters when all the data to be filtered is already stored. This series will provide insight that helps you master IIR and FIR filter design in the next two series.
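Zero-phase filtering of stored data can be sketched with forward-backward filtering (the one-pole smoother and its coefficient are illustrative choices): filtering forward, then filtering the time-reversed result and reversing again, cancels the phase response while applying the magnitude response twice.

```python
import numpy as np

def causal_lowpass(x, a=0.7):
    # Causal one-pole smoother: y[n] = (1 - a) x[n] + a y[n-1]
    y = np.empty_like(x)
    acc = 0.0
    for i, xn in enumerate(x):
        acc = (1 - a) * xn + a * acc
        y[i] = acc
    return y

def zero_phase(x):
    # Forward pass, then backward pass over the reversed signal
    return causal_lowpass(causal_lowpass(x)[::-1])[::-1]

# An impulse comes out symmetric about its original location:
# the overall filter introduces no delay and no phase distortion
x = np.zeros(201)
x[100] = 1.0
y = zero_phase(x)
```

This trick only works offline, when all the data is available; a causal real-time filter cannot have zero phase.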
Infinite impulse response or IIR filters are designed using well-established designs for continuous-time filters. In this series of eight lessons you will learn the characteristics of the four widely used types of IIR filters and the principles of converting a continuous-time prototype filter to a discrete-time filter that satisfies your design specifications. In practice the steps of the design process are normally performed using a software package such as MATLAB. You will learn the rationale behind and limitations of IIR filter design methodology. Examples of both good and poor quality filter designs are provided so you can recognize when your design is effective and when it is problematic.
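The core conversion step can be sketched by hand for the simplest case (the sampling rate and cutoff below are illustrative choices, not values from the lessons): applying the bilinear transform to a first-order analog lowpass prototype H(s) = wc / (s + wc).

```python
import numpy as np

# Bilinear transform: substitute s = K (1 - z^-1) / (1 + z^-1), K = 2*fs
fs = 1000.0                 # sampling rate, Hz
wc = 2 * np.pi * 100.0      # analog cutoff frequency, 100 Hz
K = 2 * fs

# Resulting digital filter H(z) = (b0 + b1 z^-1) / (1 + a1 z^-1)
b0 = wc / (K + wc)
b1 = wc / (K + wc)
a1 = (wc - K) / (K + wc)

def Hz(w):
    zinv = np.exp(-1j * w)
    return (b0 + b1 * zinv) / (1.0 + a1 * zinv)

gain_dc = abs(Hz(0.0))      # unity gain at DC, as in the prototype
gain_ny = abs(Hz(np.pi))    # the transform places a zero at Nyquist
```

Software packages automate exactly this substitution (plus frequency pre-warping) for higher-order Butterworth, Chebyshev, and elliptic prototypes.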
Finite impulse response or FIR filters have the advantage of always being stable, even for very high orders, and can be designed to introduce no phase distortion. In this series of six lessons you will learn the conditions for an FIR filter to introduce no phase distortion and three different methods for FIR filter design. You will learn about the window design method, which is intuitive and can be carried out with pencil and paper. You will also learn about the Parks-McClellan method, a computer-based design method that is optimal in the minimax sense. The third method uses the technique of frequency sampling to obtain designs with arbitrary magnitude and phase responses.
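The window design method can be sketched in a few lines (the order, cutoff, and choice of a Hamming window are illustrative): truncate the ideal lowpass impulse response, delay it by M/2 for causality, and taper it with a window.

```python
import numpy as np

M = 50                       # filter order: M + 1 taps
wc = 0.4 * np.pi             # desired cutoff frequency
n = np.arange(M + 1)

# Ideal lowpass impulse response sin(wc (n - M/2)) / (pi (n - M/2)),
# written with numpy's normalized sinc, then windowed
h_ideal = (wc / np.pi) * np.sinc((wc / np.pi) * (n - M / 2))
h = h_ideal * np.hamming(M + 1)

def Hmag(w):
    # Magnitude of the frequency response at radian frequency w
    return abs(np.sum(h * np.exp(-1j * w * n)))

passband_gain = Hmag(0.0)          # close to 1 in the passband
stopband_gain = Hmag(0.9 * np.pi)  # deep in the stopband, near 0
```

The choice of window trades stopband attenuation against transition width, which is the central design decision in this method.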
The ability to deal with uncertainty in the characteristics of signals is a very important part of advanced signal processing methods. This series of seven lessons introduces you to tools from probability for describing signals that are modeled as having random characteristics. You will learn about auto- and cross-correlation for describing random signals in the time domain, and power spectra, cross spectra, and coherence for describing random signals in the frequency domain. You will also learn how to represent random signals as the output of a linear time-invariant system with white noise input using autoregressive, moving average, and autoregressive moving average models.
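The autoregressive idea can be previewed with a short simulation (the model order and coefficient are illustrative choices): filtering white noise with x[n] = a x[n-1] + w[n] produces a random signal whose autocorrelation decays geometrically with lag.

```python
import numpy as np

rng = np.random.default_rng(1)
N = 100_000
w = rng.standard_normal(N)          # white-noise input

a = 0.8                             # hypothetical AR(1) coefficient
x = np.empty(N)
x[0] = w[0]
for n in range(1, N):
    x[n] = a * x[n - 1] + w[n]      # AR(1) recursion

# Sample autocorrelation at lags 0 and 1; AR(1) theory predicts
# r[1] / r[0] = a and r[0] = 1 / (1 - a^2)
r0 = np.mean(x * x)
r1 = np.mean(x[1:] * x[:-1])
```

Estimating `a` from measured correlations like these is exactly how AR models are fit to data.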
Representing signals as a weighted sum (or integral) of certain basis signals is a powerful signal-processing tool. It is the very essence of Fourier transforms. In this series of seven lessons you will learn the general form of basis representations. You will also learn about wavelets as an alternative basis expansion to the sinusoids of Fourier methods. You will also learn about principal component analysis, a method for choosing efficient bases for random data.
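Principal component analysis can be sketched with synthetic data (the 2-D direction and noise level are illustrative): for data concentrated along one direction, the eigenvector of the sample covariance with the largest eigenvalue recovers that direction.

```python
import numpy as np

rng = np.random.default_rng(5)
t = rng.standard_normal(10_000)
# 2-D data mostly along the direction [2, 1], plus small isotropic noise
X = np.outer(t, [2.0, 1.0]) + 0.1 * rng.standard_normal((10_000, 2))

C = np.cov(X, rowvar=False)            # sample covariance matrix
eigvals, eigvecs = np.linalg.eigh(C)   # eigenvalues in ascending order
pc1 = eigvecs[:, -1]                   # first principal component
```

Projecting the data onto `pc1` captures nearly all of its variance with a single coordinate, which is what makes PCA an efficient basis choice.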
This series of eight lessons addresses the signal-processing problem of estimating properties of a random signal from measurements. Five of the eight lessons concern estimation of frequency domain characteristics such as the power spectrum and coherence. You will learn about the periodogram and why averaging is necessary to obtain acceptable estimates of the power spectral density. You will learn how the averaged periodogram or Welch's method reduces the variance of the periodogram estimator at the expense of resolution loss. In the final two lessons you will learn about maximum likelihood estimation as a general tool for estimating unknown parameters in a random signal.
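The averaging idea can be previewed with a Bartlett-style sketch (non-overlapping segments and a rectangular window, a simplified cousin of Welch's method; the lengths are illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(2)
x = rng.standard_normal(4096)        # unit-variance white noise

# Split into 16 non-overlapping segments of length 256 and average
# the per-segment periodograms |X|^2 / L
L = 256
segments = x.reshape(-1, L)
periodograms = np.abs(np.fft.rfft(segments, axis=1)) ** 2 / L
pxx = periodograms.mean(axis=0)

# White noise has a flat power spectral density equal to its variance
# (here 1); averaging pulls every bin toward that level
```

A single periodogram of the full record would have bin values scattered over roughly (0, several); averaging 16 segments cuts the standard deviation by a factor of 4 at the cost of 16 times coarser frequency resolution.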
This brief series of three lessons introduces you to the principles of signal detection or hypothesis testing. You will learn how to classify different types of hypothesis tests and the metrics used to characterize the performance of detectors such as the probability of correct detection and the receiver operating characteristic or ROC. You will learn about the likelihood ratio, which is the optimal test of simple binary hypotheses. There are no known optimal tests for more general testing scenarios, so you will learn about the generalized likelihood ratio as a principled approach for obtaining a good test.
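The likelihood ratio test can be sketched for the simplest Gaussian case (the means, variance, and threshold below are illustrative numbers): H0: x ~ N(0, 1) versus H1: x ~ N(1, 1).

```python
import numpy as np

def llr(x, mu0=0.0, mu1=1.0, var=1.0):
    # Log-likelihood ratio log[f1(x) / f0(x)] for equal-variance
    # Gaussians; it is monotone increasing in x
    return ((x - mu0) ** 2 - (x - mu1) ** 2) / (2 * var)

rng = np.random.default_rng(4)
x0 = rng.normal(0.0, 1.0, 100_000)   # measurements under H0
x1 = rng.normal(1.0, 1.0, 100_000)   # measurements under H1

# Decide H1 when the log-likelihood ratio exceeds 0 (equal priors);
# because llr is monotone, this is equivalent to x > 0.5
pfa = np.mean(llr(x0) > 0)           # probability of false alarm
pd = np.mean(llr(x1) > 0)            # probability of detection
```

Sweeping the threshold away from 0 trades `pfa` against `pd`, tracing out one point of the ROC curve per threshold value.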
This section covers a wide-ranging set of signal-processing methods for minimum mean-squared error filtering and other least-squares problems arising in estimation and imaging applications. These methods include adaptive filtering techniques such as the least-mean-square (LMS) algorithm.
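A minimal LMS sketch (the "unknown" system, step size, and signal length are illustrative choices): the adaptive filter identifies an unknown FIR system by stepping its weights along the instantaneous error gradient.

```python
import numpy as np

rng = np.random.default_rng(3)
h_true = np.array([0.5, -0.3, 0.2])   # system the filter must learn
N, M, mu = 5000, 3, 0.01              # samples, taps, step size

x = rng.standard_normal(N)            # input signal
d = np.convolve(x, h_true)[:N]        # desired signal: true system output

w = np.zeros(M)                       # adaptive weights, start at zero
for n in range(M, N):
    u = x[n:n - M:-1]                 # M most recent inputs, newest first
    e = d[n] - w @ u                  # error between desired and estimate
    w = w + mu * e * u                # LMS update: step along the gradient

# w has converged close to h_true
```

The step size `mu` trades convergence speed against steady-state accuracy, the central tuning decision for LMS in practice.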
This section enables you to both check and further your understanding with additional problems and solutions. They build on the skills you developed in the Exercises and Explorations associated with the lesson categories, so be sure to do those first. Remember, your learning benefits immensely from working problems. Wrestle with them, and if you get stuck, sneak a peek at the solution for a hint. But then finish working them on your own. Once you have completed a set, then - and only then - check your work against the solutions so you can fill any gaps in your understanding.