*by Barry Van Veen*

In the previous post we developed a method for estimating the amplitude and phase of sinusoids from data samples, assuming the frequencies are known. This method applies to sinusoids with arbitrary frequencies. The approach we used to develop the estimator was to convert the amplitude and phase of each sinusoid into a pair of amplitude parameters that are collected in a vector $\boldsymbol{\theta}$. We then defined an $N$-by-$2P$ matrix $\mathbf{H}$ whose columns are cosine and sine vectors corresponding to the $P$ frequencies of interest. This allows the $N$-by-1 vector of data samples $\mathbf{x}$ to be expressed as

$$\mathbf{x} = \mathbf{H}\boldsymbol{\theta}$$

The matrix $\mathbf{H}$ is known since the frequencies are known. The unknown amplitudes and phases are represented by the vector $\boldsymbol{\theta}$.

We showed how to obtain $\boldsymbol{\theta}$ from $\mathbf{x}$ using the equation

$$\boldsymbol{\theta} = \left(\mathbf{H}^T\mathbf{H}\right)^{-1}\mathbf{H}^T\mathbf{x} \qquad (1)$$

All the quantities on the right-hand side of Eqn. (1) are known. This implies that we can recover the amplitude and phase exactly for arbitrary frequencies.
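Eqn. (1) can be sketched in a few lines of code (Python here; the post's own MATLAB code appears at the end of the post, and the frequencies, amplitudes, and sample count below are illustrative assumptions):

```python
import numpy as np

# Minimal sketch of Eqn. (1); all parameter values here are
# illustrative assumptions, not values from the post.
fs = 1000.0           # sampling frequency in Hz
N = 200               # number of samples
freqs = [50.0, 80.0]  # known frequencies in Hz
n = np.arange(N)

# H is N-by-2P: one cosine column and one sine column per frequency.
cols = []
for f in freqs:
    cols += [np.cos(2 * np.pi * f * n / fs), np.sin(2 * np.pi * f * n / fs)]
H = np.column_stack(cols)

# Noise-free data x = H theta for known amplitude parameters theta.
theta_true = np.array([3.0, -1.0, 0.5, 2.0])
x = H @ theta_true

# Eqn. (1): theta = (H^T H)^{-1} H^T x; solve() avoids the explicit inverse.
theta_hat = np.linalg.solve(H.T @ H, H.T @ x)
print(np.allclose(theta_hat, theta_true))
```

Without noise the amplitude parameters are recovered to machine precision, as the text asserts.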

In practice noise will introduce errors into the estimation. The effect of noise is the topic of this post.

## An Example of Estimating the Amplitude and Phase of Sinusoids

First we illustrate this procedure with an example. The MATLAB code used to generate the example is available at the end of this post. We assumed $P = 10$ sinusoids and $N$ data samples. The frequencies of the sinusoids are chosen as the prime numbers 101, 103, 107, 109, 113, 127, 137, 149, 157, and 167 Hz. Thus, the only frequency of which they are all harmonics is one Hz. The data was generated assuming integer amplitudes of 3, 2, 1, 4, 1, 3, 2, 1, 4, 1 and phases of 0, 30, 45, 60, 90, 0, 30, 45, 60, 90 degrees, respectively. The result of applying Eqn. (1) to this data is shown in Fig. 1. All the amplitudes and phases are recovered exactly.
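A Python sketch of this example ($F_s = 1000$ Hz and $N = 500$ are assumptions here; any sufficiently large $N$ gives exact recovery without noise). We use the convention $A\cos(\omega n + \phi) = A\cos\phi\cos(\omega n) - A\sin\phi\sin(\omega n)$, so each cosine/sine pair $(a_k, b_k)$ in $\boldsymbol{\theta}$ converts back via $A_k = \sqrt{a_k^2 + b_k^2}$ and $\phi_k = \operatorname{atan2}(-b_k, a_k)$:

```python
import numpy as np

# Reconstruction of the example (a sketch; the post's own code is MATLAB).
# Fs = 1000 Hz and N = 500 are assumed values for this illustration.
fs, N = 1000.0, 500
freqs = np.array([101, 103, 107, 109, 113, 127, 137, 149, 157, 167])
amps = np.array([3, 2, 1, 4, 1, 3, 2, 1, 4, 1], dtype=float)
phases = np.deg2rad([0, 30, 45, 60, 90, 0, 30, 45, 60, 90])

n = np.arange(N)
# Noise-free data: sum of A_k cos(2*pi*f_k*n/fs + phi_k)
x = sum(A * np.cos(2 * np.pi * f * n / fs + p)
        for A, f, p in zip(amps, freqs, phases))

# H pairs a cosine column with a sine column for each frequency:
# A cos(w n + phi) = (A cos phi) cos(w n) - (A sin phi) sin(w n)
cols = []
for f in freqs:
    cols += [np.cos(2 * np.pi * f * n / fs), np.sin(2 * np.pi * f * n / fs)]
H = np.column_stack(cols)

theta = np.linalg.solve(H.T @ H, H.T @ x)   # Eqn. (1)
a, b = theta[0::2], theta[1::2]             # cosine and sine amplitudes

amps_hat = np.hypot(a, b)                   # A_k = sqrt(a_k^2 + b_k^2)
phases_hat = np.arctan2(-b, a)              # phi_k = atan2(-b_k, a_k)

print(np.round(amps_hat, 6))
print(np.round(np.rad2deg(phases_hat), 6))
```

All ten amplitudes and phases come back exactly (to floating-point precision), matching Fig. 1.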

## Estimation of the Amplitude and Phase of Sinusoids in Noise

There are two effects that make exact estimation impossible in practice: noise and errors in the assumed frequencies. Here we examine the effect of noise since noise is always present - even if only due to quantization errors when the data is sampled.

Suppose we measure $y[n] = x[n] + w[n]$ where $w[n]$ is noise. In the analysis here we assume the noise is white - that is, uncorrelated from sample to sample and of constant power. This implies that the power of the noise is distributed equally across all frequencies. We write our noisy data in vector form as $\mathbf{y} = \mathbf{x} + \mathbf{w}$ where $\mathbf{y}$ and $\mathbf{w}$ are defined analogously to $\mathbf{x}$. Now apply the estimation procedure in Eqn. (1) to $\mathbf{y}$ to obtain

$$\hat{\boldsymbol{\theta}} = \left(\mathbf{H}^T\mathbf{H}\right)^{-1}\mathbf{H}^T\mathbf{y} = \boldsymbol{\theta} + \mathbf{e}$$

where $\mathbf{e} = \left(\mathbf{H}^T\mathbf{H}\right)^{-1}\mathbf{H}^T\mathbf{w}$. The hat over $\boldsymbol{\theta}$ explicitly denotes that this is an estimate and no longer the true value. We see that $\hat{\boldsymbol{\theta}}$ is a noise-corrupted version of the true $\boldsymbol{\theta}$.

The impact of the noise is typically assessed by considering the statistics of the error term $\mathbf{e}$. If the noise is zero mean, then $\mathbf{e}$ is also zero mean. Thus $\hat{\boldsymbol{\theta}}$ is an unbiased estimate of $\boldsymbol{\theta}$. The covariance of the error is

$$E\left\{\mathbf{e}\mathbf{e}^T\right\} = \left(\mathbf{H}^T\mathbf{H}\right)^{-1}\mathbf{H}^T E\left\{\mathbf{w}\mathbf{w}^T\right\}\mathbf{H}\left(\mathbf{H}^T\mathbf{H}\right)^{-1}$$

Since we are assuming the noise is white, $E\left\{\mathbf{w}\mathbf{w}^T\right\} = \sigma_w^2\mathbf{I}$ where $\sigma_w^2$ is the noise variance and $\mathbf{I}$ is the identity matrix. Substituting for the noise covariance gives the estimation error covariance as a relatively simple function of $\mathbf{H}$:

$$E\left\{\mathbf{e}\mathbf{e}^T\right\} = \sigma_w^2\left(\mathbf{H}^T\mathbf{H}\right)^{-1} \qquad (2)$$

The diagonal elements of $E\{\mathbf{e}\mathbf{e}^T\}$ tell us the error variance in the corresponding element of $\hat{\boldsymbol{\theta}}$, while the off-diagonal elements tell us the correlation between the errors in different elements of $\hat{\boldsymbol{\theta}}$.
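Eqn. (2) is easy to check empirically. The following sketch (assumed parameters: $F_s = 1000$ Hz, $N = 100$, two of the example frequencies, unit noise variance) compares the sample covariance of the error over many white-noise realizations with $\sigma_w^2(\mathbf{H}^T\mathbf{H})^{-1}$:

```python
import numpy as np

# Monte Carlo check of Eqn. (2). Parameters are assumptions for this sketch.
fs, N = 1000.0, 100
freqs = [101.0, 113.0]   # two of the example frequencies
n = np.arange(N)

cols = []
for f in freqs:
    cols += [np.cos(2 * np.pi * f * n / fs), np.sin(2 * np.pi * f * n / fs)]
H = np.column_stack(cols)

G = H.T @ H
cov_theory = np.linalg.inv(G)             # Eqn. (2) with sigma_w^2 = 1

# Since theta_hat = theta + e with e = (H^T H)^{-1} H^T w, we can draw
# the errors directly from white noise w without forming a signal.
rng = np.random.default_rng(0)
M = 20000                                 # number of noise realizations
W = rng.standard_normal((N, M))           # white noise, unit variance
E = np.linalg.solve(G, H.T @ W)           # each column is one error vector e

cov_empirical = E @ E.T / M
print(np.abs(cov_empirical - cov_theory).max())
```

The empirical covariance converges to the theoretical one as the number of realizations grows.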

The impact of the noise depends solely on the properties of the matrix $\mathbf{H}$, which in turn depend on the frequencies of interest. It turns out that if the discrete-time frequencies are separated by integer multiples of $2\pi/N$ rads (that is, integer multiples of $F_s/N$ Hz, where $F_s$ is the sampling frequency), then the columns of $\mathbf{H}$ are orthogonal and $\mathbf{H}^T\mathbf{H} = (N/2)\mathbf{I}$. Thus, $E\{\mathbf{e}\mathbf{e}^T\} = (2\sigma_w^2/N)\mathbf{I}$. This means that the errors in all components of $\hat{\boldsymbol{\theta}}$ are uncorrelated and of equal variance $2\sigma_w^2/N$. This is the lowest possible error variance. The error variance increases as the frequencies of the sinusoids become closer than $2\pi/N$ rads (or $F_s/N$ Hz). In any case, however, increasing $N$ decreases the error variance, as we illustrate in the following example.
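The orthogonality claim can be verified numerically. A sketch, assuming $F_s = 1000$ Hz and $N = 500$, so that $F_s/N = 2$ Hz and the chosen integer frequencies are separated by multiples of 2 Hz:

```python
import numpy as np

# Verify: frequencies separated by integer multiples of Fs/N give
# H^T H = (N/2) I, so the error covariance is (2 sigma_w^2 / N) I.
fs, N = 1000.0, 500                     # Fs/N = 2 Hz (assumed values)
freqs = [101.0, 103.0, 107.0, 109.0]    # all separations are multiples of 2 Hz
n = np.arange(N)

cols = []
for f in freqs:
    cols += [np.cos(2 * np.pi * f * n / fs), np.sin(2 * np.pi * f * n / fs)]
H = np.column_stack(cols)

G = H.T @ H
I = np.eye(2 * len(freqs))
print(np.allclose(G, (N / 2) * I))              # columns are orthogonal

sigma2 = 1.0
cov = sigma2 * np.linalg.inv(G)                 # Eqn. (2)
print(np.allclose(cov, (2 * sigma2 / N) * I))   # equal, uncorrelated errors
```

Both checks pass: the Gram matrix is $(N/2)\mathbf{I}$ and the error covariance collapses to $(2\sigma_w^2/N)\mathbf{I}$.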

## An Example of Estimation Error Variance

Here we consider the impact of noise on the estimation of the amplitude and phase of the ten sinusoids we considered in the previous example. Recall the frequencies are 101, 103, 107, 109, 113, 127, 137, 149, 157, and 167 Hz and the sampling frequency is $F_s = 1000$ Hz. We consider three different values for $N$: 100, 250, and 500. Note that for $N = 500$, $F_s/N = 2$ Hz, which is the minimum spacing between the sinusoid frequencies; since all the frequencies are odd integers, every spacing is an integer multiple of 2 Hz. Hence, we expect the variance of each term to be equal in this case and to have the value $2\sigma_w^2/N$.

The variances of the odd-indexed elements of $\hat{\boldsymbol{\theta}}$ (the cosine amplitude parameters in Eqn. (2) of the previous post) are shown in Fig. 2 assuming $\sigma_w^2 = 1$. These represent the variances associated with estimating the amplitudes of the cosine terms. The variance for the even-indexed elements of $\hat{\boldsymbol{\theta}}$ (associated with the sine terms) has very similar characteristics and is not shown. Note that the vertical axis is displayed in dB to capture the wide range of variances. The lower bound on the estimation error variance is $2\sigma_w^2/N$, which for $N = 500$ is $2/500$, or approximately -24 dB.

Several trends are evident. First, the sinusoids that are closely spaced (101, 103 Hz and 107, 109 Hz) have greater estimation noise variance for $N = 100$ and $N = 250$. Second, as $N$ increases, the noise variance decreases significantly.

The key factor that determines whether it is hard (i.e., large error variance) or easy (i.e., small error variance) to accurately estimate the parameters of a sinusoid is how distinct its frequency is from those of nearby sinusoids. If the frequency separation exceeds $F_s/N$ Hz, then estimation is relatively easy. Adjacent sinusoids that are closer than $F_s/N$ Hz are much more difficult to estimate accurately. For example, the sinusoids at 107 and 109 Hz have the greatest error variance. The 2 Hz separation here is significantly less than $F_s/N = 10$ Hz when $N = 100$. In contrast, all the sinusoids above 120 Hz are separated by 10 Hz or more and have much lower estimation error variance. When $N = 500$ we have $F_s/N = 2$ Hz and the sinusoids separated by 2 Hz are estimated much more accurately, although still less accurately than those above 120 Hz.
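These trends can be read directly off Eqn. (2): the error variances are the diagonal of $\sigma_w^2(\mathbf{H}^T\mathbf{H})^{-1}$. A sketch computing them for each $N$ (assuming $F_s = 1000$ Hz and $\sigma_w^2 = 1$, as above):

```python
import numpy as np

# Error variances from Eqn. (2) for the ten example frequencies.
# Fs = 1000 Hz and sigma_w^2 = 1 are assumed values for this sketch.
fs = 1000.0
freqs = np.array([101, 103, 107, 109, 113, 127, 137, 149, 157, 167])

def error_variances(N):
    """Diagonal of (H^T H)^{-1}: per-parameter error variance for unit noise."""
    n = np.arange(N)
    cols = []
    for f in freqs:
        cols += [np.cos(2 * np.pi * f * n / fs),
                 np.sin(2 * np.pi * f * n / fs)]
    H = np.column_stack(cols)
    return np.diag(np.linalg.inv(H.T @ H))

for N in (100, 250, 500):
    v = error_variances(N)
    print(N, 10 * np.log10(v.min()), 10 * np.log10(v.max()))
```

For $N = 500$ every variance equals $2/N$, while for $N = 100$ the closely spaced pairs sit far above the $2/N$ floor, mirroring Fig. 2.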

## Is It Possible to Estimate Closely Spaced Sinusoid Amplitudes and Phases?

The estimation error variance for sinusoids spaced by 2 Hz is more than 30 dB greater for $N = 100$ than for $N = 500$. The usefulness of the estimates does not depend solely on the error variance, but rather on the amplitude of the quantity of interest relative to the error variance. For example, if our error variance is 1000, we may still obtain very useful results if the parameter we are trying to estimate has an amplitude that is large relative to the error standard deviation of $\sqrt{1000} \approx 32$. On the other hand, if the parameter we are trying to estimate has amplitude 10 while the estimation error variance is 1000, then the estimation error is dominant and the estimate is likely useless.

Thus, it is possible to reliably estimate amplitudes and phases for closely spaced sinusoids provided the amplitudes of the sinusoids are sufficiently large relative to the estimation error variance. Figure 2 illustrates that as the frequency separation becomes less than $F_s/N$ Hz, the sinusoid amplitude must increase to obtain reliable estimates due to the significant increase in the error variance.

This property is intuitively sensible. If the frequencies are very similar, the two sinusoids are also very similar and separating them relies on relatively subtle differences. The amplitudes need to be large in order for these subtle differences to stand out above the background noise.

## What About the DFT?

The DFT assumes the sinusoids are located at integer multiples of $F_s/N$ Hz. This implies that estimating the amplitude and phase parameters in our example requires $F_s/N = 1$ Hz, or $N = 1000$, as all the frequencies of interest are odd integers.
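As a check (a sketch assuming $F_s = 1000$ Hz): with $N = 1000$ the DFT bins fall on integer Hz, so each sinusoid lands exactly on bin $f_k$, where $X[f_k] = (N/2)A_k e^{j\phi_k}$, and the amplitude and phase can be read off directly:

```python
import numpy as np

# DFT recovery when the frequencies sit exactly on the bin grid.
# Fs = 1000 Hz is an assumed value; N = 1000 puts bins on integer Hz.
fs, N = 1000.0, 1000
freqs = np.array([101, 103, 107, 109, 113, 127, 137, 149, 157, 167])
amps = np.array([3, 2, 1, 4, 1, 3, 2, 1, 4, 1], dtype=float)
phases = np.deg2rad([0, 30, 45, 60, 90, 0, 30, 45, 60, 90])

n = np.arange(N)
x = sum(A * np.cos(2 * np.pi * f * n / fs + p)
        for A, f, p in zip(amps, freqs, phases))

# Bin k corresponds to k Hz, and X[f_k] = (N/2) * A_k * exp(j*phi_k).
X = np.fft.fft(x)
amps_hat = 2 * np.abs(X[freqs]) / N
phases_hat = np.angle(X[freqs])

print(np.round(amps_hat, 6))
```

With this on-grid choice of $N$ the DFT recovers every amplitude and phase exactly; any other $N$ would smear each sinusoid across bins.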

In contrast, the procedure described in this and the previous post applies for arbitrary $N$ and arbitrary frequency spacings.