Estimation of the Amplitude and Phase of Sinusoids

January 12, 2015

by Barry Van Veen

[Figure: Emission Spectrum of Iron Containing Spectral Lines at Known Frequencies]

Sinusoids play an important role in describing physical phenomena. In some problems the amplitudes and phases of a sum of sinusoids in the data are of interest. This post addresses the problem of estimating the amplitudes and phases assuming the frequencies of the sinusoids are known. Estimating the amplitude and phase of sinusoids is actually a much simpler problem than estimating their frequencies. The approach we take is based on formulating the problem in matrix-vector form. Matrix-vector formulations are extremely powerful tools in signal processing problems.

An often-used approach to this problem is to take the discrete Fourier transform (DFT) of the data and identify the amplitudes and phases from the DFT coefficients. This can work well if the sinusoids are of comparable power and well separated in frequency. It can also work well if the frequencies are harmonically related, especially if the data length is chosen to contain an integral number of periods of each sinusoid; in that case each sinusoid influences only one DFT coefficient. However, if the frequencies are arbitrarily spaced or close together, or if the dynamic range is large, then the sidelobes associated with windowing cause interference between DFT coefficients.
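As a quick illustration of the leakage issue, here is a minimal numpy sketch (the data length and frequencies are arbitrary example values): a sinusoid exactly on a DFT bin occupies only two coefficients, while a sinusoid between bins spreads energy into essentially every coefficient.

```python
import numpy as np

N = 64
n = np.arange(N)

# Sinusoid exactly on a DFT bin: all of its energy lands in two coefficients.
x_on_bin = np.cos(2 * np.pi * 5 / N * n)
# Sinusoid between bins: windowing sidelobes spread energy into every coefficient.
x_off_bin = np.cos(2 * np.pi * 5.3 / N * n)

print(np.sum(np.abs(np.fft.fft(x_on_bin)) > 1e-9))    # 2  (bins 5 and N-5)
print(np.sum(np.abs(np.fft.fft(x_off_bin)) > 1e-9))   # 64 (leakage everywhere)
```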

Representing Sinusoids

The first step is to develop a "model" for the signal of interest.  We begin by assuming that the data consists of a sum of cosines with varying amplitudes a_k and phases \phi_k

x[n]=\sum_{k=1}^K a_k \cos(\omega_k n-\phi_k); \;\;\;n=0,1,2,\ldots,N-1                            (1)

We assume the \omega_k are known, and that the number of samples N is at least twice the number of sinusoids, i.e., N\ge 2K.  It is convenient to use the identity for the cosine of a difference of angles to rewrite Eqn. (1) as a sum of a cosine and a sine term at each frequency

x[n]=\sum_{k=1}^K \left[ b_k\cos(\omega_k n)+d_k\sin(\omega_k n)\right]; \;\;\;n=0,1,2,\ldots,N-1                (2)

where b_k=a_k\cos(\phi_k) and d_k=a_k\sin(\phi_k). We will estimate b_k and d_k since a_k and \phi_k are easily obtained as a_k = \sqrt{b_k^2+d_k^2} and \phi_k=\tan^{-1}(d_k/b_k).
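For reference, the conversion from (b_k, d_k) back to (a_k, \phi_k) is a one-liner in, say, numpy. The values below are made-up examples, and arctan2 is used in place of \tan^{-1}(d_k/b_k) so the phase lands in the correct quadrant.

```python
import numpy as np

# Made-up example values of b_k and d_k.
b = np.array([1.2, 0.4])
d = np.array([0.5, -0.9])

a = np.hypot(b, d)        # a_k = sqrt(b_k^2 + d_k^2)
phi = np.arctan2(d, b)    # phi_k = tan^{-1}(d_k / b_k), quadrant-aware

print(a, phi)
```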

Next we rewrite Eqn. (2) in matrix-vector form.  Collect all the data samples x[n] into an N-by-1 vector {\bf x}

{\bf x} = \left[\begin{array}{c} x[0]\\x[1]\\x[2]\\ \vdots\\ x[N-1]\end{array}\right]

We rewrite Eqn. (2) in vector form as

{\bf x} = \sum_{k=1}^K \left[ b_k{\bf c}_k+d_k{\bf s}_k\right]                                                 (3)

where we have defined an N-by-1 cosine vector {\bf c}_k and an N-by-1 sine vector {\bf s}_k as

 {\bf c}_k=\left[\begin{array}{c} \cos(0\omega_k)\\ \cos(\omega_k)\\ \cos(2\omega_k)\\ \vdots\\\cos((N-1)\omega_k)\end{array}\right],\;\;\;\;{\bf s}_k=\left[\begin{array}{c} \sin(0\omega_k)\\ \sin(\omega_k)\\ \sin(2\omega_k)\\ \vdots\\\sin((N-1)\omega_k)\end{array}\right]
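As a small sketch, the vectors {\bf c}_k and {\bf s}_k for one assumed frequency could be built in numpy as follows (N and \omega_k are arbitrary example values).

```python
import numpy as np

N = 32                     # example data length
omega_k = 0.7              # an assumed known frequency, radians per sample
n = np.arange(N)

c_k = np.cos(omega_k * n)  # the N-by-1 cosine vector c_k
s_k = np.sin(omega_k * n)  # the N-by-1 sine vector s_k
```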

Next we collect the b_k and d_k into a 2K-by-1 vector {\bf f}

{\bf f}=\left[\begin{array}{c} b_1\\d_1\\b_2\\d_2\\\vdots\\b_K\\d_K\end{array}\right]

and we collect the vectors {\bf c}_k and {\bf s}_k into an N-by-2K matrix {\bf H}

{\bf H}=\left[\begin{array}{ccccccc}{\bf c}_1 & {\bf s}_1&{\bf c}_2 & {\bf s}_2&\cdots&{\bf c}_K & {\bf s}_K\end{array}\right]

Finally, using the definitions of {\bf x}, {\bf f}, and {\bf H} we may rewrite Eqn. (3) as the simple matrix-vector product

{\bf x}={\bf Hf}                                                                                (4)

Note that the matrix {\bf H} is known since the frequencies are known.  The unknown parameters of interest are confined to the vector {\bf f}.
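Putting the pieces together, the following numpy sketch builds {\bf f} and {\bf H} for two example sinusoids (the frequencies, amplitudes, and phases are made up for illustration) and synthesizes the data via Eqn. (4).

```python
import numpy as np

N, K = 64, 2
n = np.arange(N)

omega = np.array([0.6, 0.65])    # assumed known frequencies (radians per sample)
a = np.array([1.0, 0.5])         # example amplitudes a_k
phi = np.array([0.3, -1.2])      # example phases phi_k

# f interleaves b_k and d_k: [b_1, d_1, b_2, d_2, ...]
f = np.empty(2 * K)
f[0::2] = a * np.cos(phi)        # b_k
f[1::2] = a * np.sin(phi)        # d_k

# H interleaves the cosine and sine columns: [c_1, s_1, c_2, s_2, ...]
H = np.empty((N, 2 * K))
H[:, 0::2] = np.cos(np.outer(n, omega))
H[:, 1::2] = np.sin(np.outer(n, omega))

x = H @ f                        # Eqn. (4): x = H f
```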

Estimation of the Amplitude and Phase of Sinusoids: A Matrix-Based Solution

The advantage of writing our model for the data as in Eqn. (4) is that it allows us to use simple matrix-vector manipulations to estimate {\bf f} from {\bf x}. First we need a few observations. {\bf H} is a rectangular matrix with at least as many rows (N) as columns (2K). If the frequencies \omega_k are distinct and confined to the interval 0<\omega_k<\pi, then it can be shown that the columns of {\bf H} are linearly independent. In that case the 2K-by-2K matrix {\bf H}^T{\bf H} is invertible. Here the superscript T denotes the transpose of a matrix, that is, interchanging its rows and columns. Proving linear independence and invertibility involves a bit of linear algebra and is not shown here.

Armed with these facts, first multiply both sides of Eqn. (4) by {\bf H}^T to obtain

{\bf H}^T{\bf x}={\bf H}^T{\bf Hf}

Now multiply both sides by the inverse of {\bf H}^T{\bf H} to obtain the estimate of {\bf f}

{\bf f}=\left({\bf H}^T{\bf H}\right)^{-1}{\bf H}^T{\bf x}                                                               (5)

Once we have spent the time to define the notation, the final answer is almost trivial to write down. This is the power of matrix-vector formulations of signal processing problems.

Equation (5) is an exact expression for {\bf f} in terms of {\bf x}, which implies we can obtain the amplitudes a_k and phases \phi_k exactly.  This relationship only holds for N\ge 2K, which places an upper bound on the number of sinusoids for a given data length.  Note that if N<2K, then we have more unknown quantities than we have data!
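Here is a minimal numpy sketch of Eqn. (5) for two noiseless example sinusoids (the frequencies and coefficients are illustrative values). Rather than forming \left({\bf H}^T{\bf H}\right)^{-1} explicitly, it uses numpy's least-squares solver, which solves the same equations in a numerically better-behaved way.

```python
import numpy as np

# Synthesize noiseless data from two example sinusoids, as in Eqn. (4).
N = 64
n = np.arange(N)
omega = np.array([0.6, 0.65])                 # assumed known frequencies
f_true = np.array([0.8, -0.3, 0.2, 0.45])     # [b_1, d_1, b_2, d_2]

H = np.empty((N, 2 * len(omega)))
H[:, 0::2] = np.cos(np.outer(n, omega))
H[:, 1::2] = np.sin(np.outer(n, omega))
x = H @ f_true

# Eqn. (5): f = (H^T H)^{-1} H^T x, computed via a least-squares solver.
f_hat, *_ = np.linalg.lstsq(H, x, rcond=None)

b_hat, d_hat = f_hat[0::2], f_hat[1::2]
a_hat = np.hypot(b_hat, d_hat)                # recovered amplitudes
phi_hat = np.arctan2(d_hat, b_hat)            # recovered phases
print(np.max(np.abs(f_hat - f_true)))         # rounding-level error: exact recovery
```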

Perfect Estimation?

Equation (5) suggests we can perfectly recover the amplitude and phase of sinusoids having arbitrary frequency spacings. If this sounds too good to be true, it is.  There are two effects that make perfect estimation impossible in practice: noise and errors in the assumed frequencies.  Noise is always present - even if only due to quantization errors when the data is sampled. Noise introduces errors into the estimates.  The size of the errors depends on the power of the noise, the number of sinusoids, and the frequency spacing.  This topic will be addressed in detail in the next post. Obviously, if the assumed frequencies of the sinusoids are not equal to the true frequencies, we will also incur errors.
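A quick numpy experiment (again with made-up frequencies and coefficients) illustrates the effect of additive noise: the error in the estimate of {\bf f} grows with the noise standard deviation.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 64
n = np.arange(N)
omega = np.array([0.6, 0.65])                 # assumed known frequencies
f_true = np.array([0.8, -0.3, 0.2, 0.45])     # [b_1, d_1, b_2, d_2]

H = np.empty((N, 2 * len(omega)))
H[:, 0::2] = np.cos(np.outer(n, omega))
H[:, 1::2] = np.sin(np.outer(n, omega))

for sigma in (0.0, 0.01, 0.1):                # increasing noise standard deviation
    x = H @ f_true + sigma * rng.standard_normal(N)
    f_hat, *_ = np.linalg.lstsq(H, x, rcond=None)
    print(sigma, np.max(np.abs(f_hat - f_true)))   # error grows with noise power
```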

What About the DFT?

It turns out that a special case of the procedure presented here is equivalent to the DFT.  That is, the DFT is equivalent to Eqn. (5) if and only if the frequencies \omega_k are integral multiples of 2\pi/N.  This fact is a slight extension of The Matrix Interpretation of the DFT lesson.  If the \omega_k are not integral multiples of 2\pi/N, then the DFT matrix applied to {\bf x} in Eqn. (4) does not recover {\bf f}; instead, in general each DFT coefficient contains contributions from all of the sinusoids. The mainlobe and sidelobes associated with each sinusoid interfere with the amplitude and phase of all the others.  This can be viewed as an error due to a difference between the assumed (integer multiples of 2\pi/N) and true frequencies \omega_k.  In the special case where the \omega_k are integral multiples of 2\pi/N, each sinusoid is located at zero crossings of the sinc functions associated with the other sinusoids, and the sinusoids do not interfere with one another in the DFT coefficients.

In contrast, the procedure described by Eqn. (5) applies to arbitrary sinusoid frequencies and only requires that {\bf H}^T{\bf H} be invertible.  This condition is satisfied if the data length is at least twice the number of sinusoids (N\ge 2K) and all the frequencies are distinct. Example 2 in the Maximum Likelihood Estimation Examples lesson shows how Eqn. (5) is related to the maximum likelihood estimate of {\bf f}.
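The special case is easy to check numerically. In the sketch below (the bin indices and coefficients are example values), the frequencies are integer multiples of 2\pi/N, and the DFT coefficients, which satisfy X[k]=(N/2)(b_k - j d_k) for 0<k<N/2, recover the same b_k and d_k as Eqn. (5).

```python
import numpy as np

N = 32
n = np.arange(N)
k_bins = np.array([3, 7])                     # example bin indices, 0 < k < N/2
omega = 2 * np.pi * k_bins / N                # frequencies on the DFT grid
f_true = np.array([1.0, 0.25, 0.5, -0.75])    # [b_1, d_1, b_2, d_2]

H = np.empty((N, 2 * len(omega)))
H[:, 0::2] = np.cos(np.outer(n, omega))
H[:, 1::2] = np.sin(np.outer(n, omega))
x = H @ f_true

# Estimate via Eqn. (5).
f_hat, *_ = np.linalg.lstsq(H, x, rcond=None)

# Read the same quantities off the DFT: X[k] = (N/2)(b_k - j d_k).
X = np.fft.fft(x)
b_dft = 2 * X[k_bins].real / N
d_dft = -2 * X[k_bins].imag / N

print(f_hat)            # [b_1, d_1, b_2, d_2]
print(b_dft, d_dft)     # the b_k and d_k recovered from the DFT match f_hat
```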
