The beauty of physics and mathematics is that everything is somehow related to everything else, and that there are so many ways of looking at the same thing, each providing new insights. Take for instance the Fourier transform. The physics of a harmonic oscillator is described by a differential equation whose solutions are trigonometric functions. A sound wave may excite a given harmonic oscillator by periodically pushing and pulling at it. When we digitize the sound, we turn it into an N-dimensional vector. The amount of excitation this sound imparts to a particular oscillator is calculated by taking the scalar product of this vector with a special basis vector in N dimensions. This basis vector is a result of sampling a trigonometric function that describes the movement of the oscillator.
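Here's a quick numerical sketch of that scalar-product picture (the signal, the frequency, and the length N are all made-up values for illustration): a digitized tone strongly excites the oscillator whose sampled trigonometric basis vector matches it, and barely touches a mismatched one.

```python
import math

N = 64   # number of samples (an arbitrary choice)
k = 5    # oscillator frequency in cycles per N samples (made up)

# A test signal: a pure tone at the oscillator's own frequency.
signal = [math.cos(2 * math.pi * k * n / N) for n in range(N)]

# Basis vector obtained by sampling the oscillator's trig function.
basis = [math.cos(2 * math.pi * k * n / N) for n in range(N)]

# The excitation is the scalar product of signal and basis vector.
excitation = sum(s * b for s, b in zip(signal, basis))
print(excitation)  # close to N/2 for a matching frequency

# A tone at a different frequency barely excites this oscillator:
other = [math.cos(2 * math.pi * (k + 3) * n / N) for n in range(N)]
print(sum(s * b for s, b in zip(other, basis)))  # close to 0
```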

Trigonometric functions are easy to understand but difficult to handle. Fortunately, there is a better language to describe the solutions to a harmonic oscillator--complex numbers. But before we get there, let's look more closely at the differential equations that describe oscillating and non-oscillating motions.

A population of rabbits has this peculiar property that the more rabbits there are the more they multiply. The rate of growth of the population at any moment of time is proportional to its size. In mathematics, the rate of growth is described by a derivative. The larger the derivative, the faster something grows.

Let's denote the population of rabbits at time *t* by p(t). Its derivative is denoted by dp(t)/dt. What we have just said about the law of population growth of rabbits can be written in the form of a differential equation:

dp(t)/dt = α*p(t)

where α is a proportionality constant. An equation like this is solved by a function whose derivative is proportional to the function itself. The prototype of such a function is the exponential, which is equal to its own derivative:

de^{t}/dt = e^{t}

Our population of rabbits will therefore grow exponentially, that is, it will follow an exponential function:

p(t) = A*e^{αt}
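We can check this law numerically. The sketch below (with a made-up growth rate α and initial population A) grows the population in tiny steps, each time adding an increment proportional to the current size, and compares the result with the exponential formula:

```python
import math

alpha = 0.5   # growth rate (made up)
A = 1.0       # initial population (made up)
dt = 1e-4     # small time step

p, t = A, 0.0
while t < 2.0:
    p += alpha * p * dt   # rate of growth proportional to size
    t += dt

# The step-by-step result agrees with p(t) = A*e^{alpha t}.
print(p, A * math.exp(alpha * 2.0))
```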

Notice that the *second derivative* (the derivative of the derivative) of the exponential function is still equal to the function itself. If we include an arbitrary coefficient α in the exponent, we get:

d^{2}e^{αt}/dt^{2} = α^{2}e^{αt}

Now back to our oscillator. Remember the differential equation it fulfilled?

d^{2}x(t)/dt^{2} = -ω^{2}x(t)

Comparing this with the equation for the exponential, we would need a coefficient α whose square is -ω². No real number has a negative square, but if there were a number *i* whose square is -1, we could take α = iω and write the solution as:

x(t) = A*e^{iωt}

It turns out that mathematicians have thought of numbers whose squares are negative. I will not go into all the (very interesting) mathematical subtleties here. The important thing is how one performs calculations on such numbers.

It is enough to take it at face value that there is such an entity called the imaginary unit, *i* (engineers often call it *j*), whose only interesting property is that its square is -1. In other words, *i* is the square root of -1. Now that you've suspended your disbelief, you can do all kinds of calculations using *complex* numbers of the form

z = x + iy

where x and y are regular (real) numbers. The number x is called the *real part* and y the *imaginary part* of z. Given another complex number, w = v + iu, you can add the two,

z + w = (x + iy) + (v + iu) = x + v + i(y + u)

or multiply them,

z * w = (x + iy)(v + iu) = xv + iyv + ixu + (i*i)yu

= xv - yu + i(yv + xu)
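Python's built-in complex type implements exactly these rules (it spells *i* as `j`, the engineers' convention mentioned above), so we can check the formulas on a pair of example numbers:

```python
z = 3 + 4j   # x = 3, y = 4
w = 1 - 2j   # v = 1, u = -2

# Addition: (x + v) + i(y + u)
print(z + w)   # (4+2j)

# Multiplication: (xv - yu) + i(yv + xu) = (3 + 8) + i(4 - 6)
print(z * w)   # (11-2j)
```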

In order to see how division works, let's first introduce an operation called *complex conjugation*. The complex conjugate of z is z^{*} = x - iy. We just flip the sign in front of i. When you multiply a complex number by its complex conjugate, you always get a real, non-negative number, called the *modulus squared* of z. Check this out:

z z^{*} = (x + iy)(x - iy) = x^{2} + iyx - ixy -(i*i)y^{2} = x^{2} + y^{2}

Dividing by a complex number z is equivalent to multiplying by its complex conjugate divided by its modulus squared. Indeed, try multiplying z by this:

z^{*}/(z z^{*})
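Again we can let Python confirm the bookkeeping: conjugation flips the sign of the imaginary part, z times its conjugate is real, and dividing by z is the same as multiplying by z^{*}/(z z^{*}):

```python
z = 3 + 4j

# Complex conjugation flips the sign of the imaginary part.
print(z.conjugate())        # (3-4j)

# z times its conjugate is real: x^2 + y^2 = 9 + 16.
print(z * z.conjugate())    # (25+0j)

# Division by z equals multiplication by z*/(z z*).
print(1 / z)
print(z.conjugate() / (z * z.conjugate()))  # same value
```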

Now back to our differential equations. The solution of the harmonic oscillator using complex numbers is indeed:

x(t) = A*e^{iωt}

Notice that its modulus stays constant in time, because the complex exponential multiplied by its own conjugate is one:

e^{it}(e^{it})^{*} = e^{it}e^{-it} = e^{it-it} = e^{0} = 1
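Both claims are easy to test numerically. The sketch below (with made-up values of A, ω, and a sample time t) approximates the second derivative of A*e^{iωt} by finite differences and checks that it equals -ω² times the function, and that the modulus never budges:

```python
import cmath

A, omega = 2.0, 3.0   # amplitude and frequency (made up)
t, h = 0.7, 1e-5      # sample time and finite-difference step

def x(t):
    # The complex-exponential solution of the oscillator equation.
    return A * cmath.exp(1j * omega * t)

# Central-difference approximation of the second derivative.
second = (x(t + h) - 2 * x(t) + x(t - h)) / h**2

print(second)                 # approximately -omega^2 * x(t)
print(-omega**2 * x(t))
print(abs(x(t)))              # the modulus is always A
```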

And here's the *tour de force*: e^{it} is a complex number, so, by definition, it must be of the form x + iy. So what are the values of x and y? Guess what, they are the good old trigonometric functions:

e^{it} = cos(t) + i sin(t)
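Euler's formula is one line to verify with Python's `cmath` module; the angle below is arbitrary:

```python
import cmath
import math

t = 1.2345                # an arbitrary angle
z = cmath.exp(1j * t)

print(z.real, math.cos(t))   # the real part is cos(t)
print(z.imag, math.sin(t))   # the imaginary part is sin(t)
print(abs(z))                # and the modulus is 1
```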

We have come full circle (there's a subtle pun in here). We knew that the solutions to the equation of the harmonic oscillator could be expressed in terms of trigonometric functions. On the other hand, we were able to solve this equation using a complex exponential. Now we see that this single exponential describes both the sine and the cosine oscillations in one compact package. It's the same physics, but the new notation is a lot easier, once you get used to it.

Remember the formula for the sine of a sum of angles?

sin (a + b) = sin (a) cos (b) + cos (a) sin (b)

We used it in our discussion of the harmonic oscillator. Frankly, I can never remember those trigonometric identities. But with complex numbers they are trivial to derive. For instance, you know that

e^{i(a + b)} = e^{ia}e^{ib}

Expanding both sides separately:

e^{i(a + b)} = cos (a+b) + i sin (a+b)

e^{ia}e^{ib} = (cos (a) + i sin (a)) (cos (b) + i sin (b))

= cos (a) cos (b) - sin (a) sin (b)

+ i (sin (a) cos (b) + cos (a) sin (b))

Comparing the two expansions, we can read off both identities at once:

cos (a + b) = cos (a) cos (b) - sin (a) sin (b)

sin (a + b) = sin (a) cos (b) + cos (a) sin (b)

since both the real parts and the imaginary parts have to be independently equal.

Finally, since they are described by pairs of real numbers, complex numbers can be viewed as 2-dimensional vectors. A complex number with modulus one is equivalent to a unit vector. In particular, a complex exponential, e^{it}, can be seen as a unit vector at an angle *t* from the x-axis, see Fig. This angle is called the *phase* of the complex number.
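The sum-angle derivation a few lines up can be replayed numerically: for arbitrary angles (picked here at random), e^{i(a+b)} equals e^{ia}e^{ib}, and splitting into real and imaginary parts reproduces the two identities:

```python
import cmath
import math

a, b = 0.8, 2.1   # arbitrary angles

lhs = cmath.exp(1j * (a + b))
rhs = cmath.exp(1j * a) * cmath.exp(1j * b)
print(abs(lhs - rhs))   # essentially zero

# Real parts: the cosine sum-angle identity.
print(math.cos(a + b))
print(math.cos(a) * math.cos(b) - math.sin(a) * math.sin(b))

# Imaginary parts: the sine sum-angle identity.
print(math.sin(a + b))
print(math.sin(a) * math.cos(b) + math.cos(a) * math.sin(b))
```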

In general, any complex number, z = x + i y can be represented as the modulus *r* times the complex exponential of the phase φ,

z = r e^{iφ}.
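The `cmath` module extracts r and φ directly (`abs` and `cmath.phase`), and multiplying them back together recovers the original number:

```python
import cmath

z = 3 + 4j

r = abs(z)              # modulus: sqrt(3^2 + 4^2) = 5
phi = cmath.phase(z)    # phase: the angle from the x-axis

print(r)
print(r * cmath.exp(1j * phi))   # recovers z = 3 + 4j
```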

Using this representation, the most general solution of the harmonic oscillator, A*e^{i(ωt + φ)}, can be viewed as a vector rotating around a circle of radius A with constant angular speed ω. The initial phase, φ, defines the starting point of the movement.
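To see the rotating vector in action, the sketch below (A, ω, and φ are made-up values) samples the general solution at quarter-period intervals: the modulus stays pinned at A while the phase sweeps around, and after one full period the vector is back where it started.

```python
import cmath
import math

A, omega, phi = 2.0, 3.0, 0.5   # amplitude, frequency, initial phase (made up)
period = 2 * math.pi / omega

# Sample the solution A*e^{i(omega t + phi)} at quarter-period steps.
points = [A * cmath.exp(1j * (omega * (k * period / 4) + phi))
          for k in range(5)]

for z in points:
    # The vector stays on a circle of radius A; only its angle changes.
    print(f"|z| = {abs(z):.3f}, phase = {cmath.phase(z):+.3f}")
```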
