The question of interest is: how can we find the values of \(x_{1}, x_{2}, \ldots, x_{n}\) if we know that \(q_{1}, q_{2}, \ldots, q_{n}\) are orthonormal vectors? The answer involves the dot product. The dot product of two distinct orthonormal vectors is 0, and the dot product of an orthonormal vector with itself is 1.
\[ q_{i}\cdot q_{j} = \begin{cases} 1 & \textrm{if \(i = j\)} \\ 0 & \textrm{if \(i\neq j\)} \end{cases} \]Based on this information, we can find \(x_{1}, x_{2}, \ldots, x_{n}\). For instance, to find \(x_{1}\), all we must do is take the dot product of all the terms with \(q_{1}\).
\[ q_{1} \cdot v = q_{1} \cdot (x_{1}q_{1} + x_{2}q_{2} + \cdots + x_{n}q_{n}) \]We can distribute the dot product and factor out the \(x\) terms since they are just constants.
\[ q_{1} \cdot v = x_{1}(q_{1}\cdot q_{1}) + x_{2}(q_{1}\cdot q_{2}) + \cdots + x_{n}(q_{1}\cdot q_{n}) \] \[ q_{1} \cdot v = x_{1} \]Using the same idea, we can solve for \(x_{2}, x_{3}, \ldots, x_{n}\).
Recall that the dot product can also be expressed in terms of matrix multiplication.
\[ a \cdot b = a^{T}b \]This allows us to rewrite what we had before as
\[ q_{1}^{T} v = x_{1} \]What this is really telling us is that
\[ \textbf{Q}^{T}v = x \]Let's confirm this. Our initial equation represented as a matrix is
\[ \textbf{Q}x = v \]And to solve for \(x\), we compute
\[ x = \textbf{Q}^{-1}v \]Remember that one of the key facts about matrices with orthonormal columns is that their inverse is equal to their transpose. \[ \textbf{Q}^{-1} = \textbf{Q}^{T} \]
Therefore,
\[ x = \textbf{Q}^{-1}v = \textbf{Q}^{T}v \] \[ x = \textbf{Q}^{T}v \]This lines up exactly with what we found before. The ideas explained here will play an important role in understanding Fourier series.
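To make this concrete, here is a minimal NumPy sketch (the matrix and coefficient vector below are made up for illustration): build a matrix \(Q\) with orthonormal columns, form \(v\) as a combination of those columns, and recover the coefficients with \(Q^{T}v\).

```python
import numpy as np

# Build a matrix Q whose columns are orthonormal (via a QR factorization of a
# random matrix), then expand a vector v in that basis and recover the
# coefficients with a single matrix-vector product Q^T v.
rng = np.random.default_rng(0)
Q, _ = np.linalg.qr(rng.standard_normal((4, 4)))  # columns of Q are orthonormal

x_true = np.array([2.0, -1.0, 0.5, 3.0])          # made-up coefficients
v = Q @ x_true                                    # v = x1*q1 + x2*q2 + ... + xn*qn

x_recovered = Q.T @ v                             # x = Q^T v, no inverse needed
print(np.allclose(x_recovered, x_true))           # True
```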
The next thing that we have to understand is how to take the inner product (sometimes called dot product) of functions.
Recall the dot product of two vectors \(v\) and \(w\) is
\[ v \cdot w = v_{1}w_{1} + v_{2}w_{2} + \cdots + v_{n}w_{n} \]The inner product of two functions is almost analogous to that of two vectors. The idea is that we multiply the two functions at every \(x\) value and then add up the results; since \(x\) varies continuously, this means integrating the product of the functions. Given two functions \(f(x)\) and \(g(x)\), the inner product, denoted \(\langle f,g \rangle\), is
\[ \langle f, g \rangle = \int_{\textrm{all } x} f(x)g(x)dx \]Taking the inner product of \(\cos (x)\) and \(\sin (x)\), we must compute the following:
\[ \langle \cos (x), \sin (x) \rangle = \int_{0}^{2\pi} \cos (x) \sin (x)dx = 0 \]Notice that \(\sin(x)\) and \(\cos(x)\) are periodic functions.
\[ \sin(x) = \sin(x + 2\pi) \qquad \cos(x) = \cos(x + 2\pi) \]This is why we would only need to integrate from 0 to \(2\pi\).
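As a quick sanity check, here is a rough numerical version of this inner product (a Riemann sum standing in for the integral; the grid size is arbitrary):

```python
import numpy as np

# Approximate <cos, sin> over one period [0, 2*pi] by "multiply at every x and
# add up": a Riemann sum that stands in for the integral.
x = np.linspace(0.0, 2.0 * np.pi, 100_000, endpoint=False)
dx = x[1] - x[0]

inner = np.sum(np.cos(x) * np.sin(x)) * dx
print(round(inner, 6))  # approximately 0: cos(x) and sin(x) are orthogonal on [0, 2*pi]
```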
So... What is a Fourier series? It is an expansion of a periodic function \(f(x)\) in terms of an infinite sum of trigonometric functions, typically sines and cosines.
\[ f(x) = a_{0} + a_{1}\cos x + b_{1}\sin x + a_{2}\cos 2x + b_{2} \sin 2x + \cdots \]Linear algebra is useful in understanding the idea of the series.
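To tie the series back to the inner-product idea, here is a small sketch (the square wave is just an assumed example, and the coefficient formulas \(a_{n} = \frac{1}{\pi}\langle f, \cos nx \rangle\), \(b_{n} = \frac{1}{\pi}\langle f, \sin nx \rangle\) are the standard ones on \([-\pi, \pi]\)):

```python
import numpy as np

# Compute the first few Fourier coefficients of a square wave on [-pi, pi]
# using the inner-product idea: project f onto cos(nx) and sin(nx).
x = np.linspace(-np.pi, np.pi, 200_000, endpoint=False)
dx = x[1] - x[0]
f = np.sign(np.sin(x))                       # square wave (an odd function)

a0 = np.sum(f) * dx / (2.0 * np.pi)          # average value, here 0
for n in range(1, 6):
    a_n = np.sum(f * np.cos(n * x)) * dx / np.pi
    b_n = np.sum(f * np.sin(n * x)) * dx / np.pi
    print(n, round(a_n, 4), round(b_n, 4))   # b_n = 4/(n*pi) for odd n, 0 otherwise
```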
Lemma
The product of an even function and an odd function is an odd function.
Proof:
Suppose \(f(x)\) is an even function and \(g(x)\) is an odd function.
Using the definition of even and odd functions,
\[ f(-x) = f(x) \] \[ g(-x) = -g(x) \]Let \(u(x) = f(x)g(x) \)
Then
\[ u(-x) = f(-x)g(-x) = f(x)\bigl(-g(x)\bigr) = -\bigl(f(x)g(x)\bigr) = -u(x) \]Therefore \(u(x)\) is an odd function by the definition of odd functions, and hence the product of an even function and an odd function is an odd function.
Q.E.D.
Other properties
These (for example, that the product of two even functions is even and the product of two odd functions is even) can be proved in a similar fashion.
Now that we have quickly proved this fact, we can use it to show that the following integral is always 0:
\[ \int_{-\pi}^{\pi} \cos(nx) \sin(mx)dx \]The lemma above tells us that the product of an even function and an odd function is an odd function. We know that this integrand is odd because cosine is an even function and sine is an odd function. That means we can use the property that the integral of an odd function over a symmetric interval \([-a, a]\) is 0.
\[ \int_{-\pi}^{\pi} \cos(nx) \sin(mx)dx = 0 \]One of the most important applications of linear algebra is the Fast Fourier Transform (FFT). It takes the Discrete Fourier Transform (DFT) and optimizes it to reduce the number of computations tremendously. The DFT has an algorithmic complexity of \(O(n^{2})\); the FFT, on the other hand, has an algorithmic complexity of \(O(n\log n)\).
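Here is an illustrative sketch of that gap (the naive \(O(n^{2})\) DFT written as a matrix-vector product versus NumPy's built-in FFT; the signal is arbitrary):

```python
import numpy as np

# A naive O(n^2) DFT: build the full n x n DFT matrix and multiply.
def naive_dft(x):
    n = len(x)
    k = np.arange(n)
    F = np.exp(-2j * np.pi * np.outer(k, k) / n)  # unnormalized DFT matrix
    return F @ x                                  # ~n^2 multiplications

rng = np.random.default_rng(1)
signal = rng.standard_normal(1024)

# np.fft.fft uses the O(n log n) fast algorithm; the results agree.
print(np.allclose(naive_dft(signal), np.fft.fft(signal)))  # True
```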
There are some great videos on the topic of Fourier Transforms published on YouTube. Here are two that I enjoyed:
Exploit symmetries
DRAFT
Fourier Transform -- Discrete Fourier Transform (DFT) -- Fast Fourier Transform (FFT)
Transforms a function of time, \(f(t)\), to a function of frequency, \(F(\omega)\).
The hertz (Hz) is the SI unit of frequency. It is defined as one cycle per second.
Example: 2Hz = 2 cycles per second
Looking at this from the perspective of the period can be easier at times. The period is the reciprocal of the frequency. It tells you how long it takes for one cycle to complete. Using the example above,
\[ 2 \ {\rm cycles/sec} \qquad \frac{1}{2} \ {\rm sec/cycle}\]When we look at a circle, we can interpret one full revolution as a cycle. Recall that a full revolution of a circle is \(2\pi\) radians.
Euler's formula
\[ e^{ix} = \cos x + i\sin x \]Useful for describing points on the unit circle, and hence for plotting two-dimensional figures in the complex plane.
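A quick numerical illustration (the sample points are arbitrary): points of the form \(e^{ix}\) have real part \(\cos x\), imaginary part \(\sin x\), and all lie on the unit circle.

```python
import numpy as np

# Points e^{ix} lie on the unit circle: real part cos(x), imaginary part sin(x).
x = np.linspace(0.0, 2.0 * np.pi, 8, endpoint=False)
z = np.exp(1j * x)

print(np.allclose(z.real, np.cos(x)))  # True
print(np.allclose(z.imag, np.sin(x)))  # True
print(np.allclose(np.abs(z), 1.0))     # True: every point is on the unit circle
```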
Fourier Transform
\[ F(\omega) = \int_{-\infty}^{\infty} f(t)e^{-2\pi i\omega t}dt \] where \(\omega\) is the frequency. The convention is clockwise rotation, hence the negative sign in the exponent of the exponential.
Finding the centroid
\[ \frac{1}{t_{2}-t_{1}}\int_{t_{1}}^{t_{2}}x(t)e^{-2\pi ift}dt \] where \(f\) is the frequency. The factor \(\frac{1}{t_{2}-t_{1}}\) is excluded from the Fourier transform itself, which means the transform's value is the centroid scaled by the length of the time interval being spanned. For instance, that time interval may be 3 seconds.
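Here is a rough sketch of the centroid idea (the 2 Hz cosine and the 3-second interval are made up for illustration): wind the signal around the circle at several trial frequencies and watch the centroid's magnitude spike when the trial frequency matches the signal's frequency.

```python
import numpy as np

# Wind a 2 Hz cosine around the circle at different frequencies f and compute
# the centroid of the wound-up curve; its magnitude is largest near f = 2.
t1, t2 = 0.0, 3.0                                  # a 3-second time interval
t = np.linspace(t1, t2, 30_000, endpoint=False)
dt = t[1] - t[0]
x = np.cos(2.0 * np.pi * 2.0 * t)                  # signal with frequency 2 Hz

for f in [0.5, 1.0, 2.0, 3.0]:
    centroid = np.sum(x * np.exp(-2j * np.pi * f * t)) * dt / (t2 - t1)
    print(f, round(abs(centroid), 4))              # peaks at f = 2 (about 0.5)
```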
The Fourier transform will include complex numbers. When solving it, you can use Euler's formula to split it into its real part and imaginary part.
\[ X(f) = \int_{-\infty}^{\infty} x(t)e^{-2\pi ift}dt = \int_{-\infty}^{\infty} x(t)(\cos(-2\pi ft) + i\sin(-2\pi ft))dt \] \[ = \int_{-\infty}^{\infty} x(t)\cos(2\pi ft)dt - i\int_{-\infty}^{\infty} x(t)\sin(2\pi ft)dt \]This means that you can choose to evaluate just the real part or just the imaginary part.
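As a small check of this split (the decaying 1 Hz signal below is just an assumed example, and the infinite integral is truncated to a finite grid):

```python
import numpy as np

# Compute X(f) two ways: directly with the complex exponential, and as the
# cosine integral minus i times the sine integral. The two agree.
t = np.linspace(-10.0, 10.0, 400_000)              # stand-in for (-inf, inf)
dt = t[1] - t[0]
x = np.cos(2.0 * np.pi * t) * np.exp(-t ** 2)      # decaying 1 Hz signal

f = 1.0
direct = np.sum(x * np.exp(-2j * np.pi * f * t)) * dt
split = (np.sum(x * np.cos(2.0 * np.pi * f * t)) * dt
         - 1j * np.sum(x * np.sin(2.0 * np.pi * f * t)) * dt)
print(np.allclose(direct, split))                  # True
```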
With the DFT, we are no longer using integrals but instead sums.
Analysis Equation, Synthesis Equation
The synthesis equation is nothing more than the Inverse Fourier Transform (IFT).
Analyze the frequencies of a function and then synthesize/create a function from those given frequencies.
Discrete Fourier Transform Matrices
\[ F_{n} = \frac{1}{\sqrt{n}}\begin{bmatrix} 1 & 1 & 1 & \cdots & 1 \\ 1 & \omega & \omega^{2} & \cdots & \omega^{(n-1)} \\ 1 & \omega^{2} & \omega^{4} & \cdots & \omega^{2(n-1)} \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ 1 & \omega^{(n-1)} & \omega^{2(n-1)} & \cdots & \omega^{(n-1)(n-1)} \end{bmatrix} \] where \(\omega = e^{-i\frac{2\pi}{n}}\). The matrix is divided by \(\sqrt{n}\) so that the columns are normalized. The reason the same constant works for every column is that \(|\omega| = 1\): every entry has absolute value 1, so every column has length \(\sqrt{n}\).
Example:
\[ F_{4} = \frac{1}{2} \begin{bmatrix} 1 & 1 & 1 & 1 \\ 1 & \omega & \omega^{2} & \omega^{3} \\ 1 & \omega^{2} & \omega^{4} & \omega^{6} \\ 1 & \omega^{3} & \omega^{6} & \omega^{9} \end{bmatrix} \] \[ \omega = e^{-i\frac{\pi}{2}} = -i \] \[ F_{4} = \frac{1}{2} \begin{bmatrix} 1 & 1 & 1 & 1 \\ 1 & -i & -1 & i \\ 1 & -1 & 1 & -1 \\ 1 & i & -1 & -i \end{bmatrix} \]Notice that DFT matrices are symmetric. However, they are not Hermitian. The special structure of these matrices is the key to the Fast Fourier Transform (FFT).
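A minimal sketch that builds \(F_{n}\) straight from the formula above and checks the \(n = 4\) case against the matrix written out in the example:

```python
import numpy as np

# Build the normalized n x n DFT matrix F_n with entries omega^(j*k) / sqrt(n).
def dft_matrix(n):
    omega = np.exp(-2j * np.pi / n)
    jk = np.outer(np.arange(n), np.arange(n))       # exponent j*k for each entry
    return omega ** jk / np.sqrt(n)

F4 = dft_matrix(4)
expected = 0.5 * np.array([[1,   1,  1,   1],
                           [1, -1j, -1,  1j],
                           [1,  -1,  1,  -1],
                           [1,  1j, -1, -1j]])
print(np.allclose(F4, expected))                    # True: matches the example
print(np.allclose(F4, F4.T))                        # True: symmetric
print(np.allclose(F4.conj().T @ F4, np.eye(4)))     # True: columns are orthonormal
```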
More information on the DFT matrix here.
\(P\) is a permutation matrix that reorders the input vector with its even-indexed entries first and then its odd-indexed entries.
\[ P = \begin{bmatrix} 1 & 0 & 0 & 0 & \cdots & 0 \\ 0 & 0 & 1 & 0 & \cdots & 0 \\ & & & & \vdots & \\ 0 & 1 & 0 & 0 & \cdots & 0 \\ 0 & 0 & 0 & 1 & \cdots & 0 \\ & & & & \vdots & \\ 0 & 0 & 0 & 0 & \cdots & 1 \end{bmatrix} \]As an example, the 6 by 6 permutation matrix would be as follows:
\[ \begin{bmatrix} 1 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 1 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 1 & 0 \\ 0 & 1 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 1 \end{bmatrix} \]Multiplying this by a vector has the effect of reordering its components.
\[ \begin{bmatrix} x_{0} \\ x_{2} \\ x_{4} \\ x_{1} \\ x_{3} \\ x_{5} \end{bmatrix} = \begin{bmatrix} 1 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 1 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 1 & 0 \\ 0 & 1 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 1 \end{bmatrix}\begin{bmatrix} x_{0} \\ x_{1} \\ x_{2} \\ x_{3} \\ x_{4} \\ x_{5} \end{bmatrix} \]
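And a small sketch of this even-odd reordering (building \(P\) for \(n = 6\) from the description above and applying it to a vector):

```python
import numpy as np

# Build the n x n permutation matrix that puts even-indexed entries first,
# then odd-indexed entries, and apply it to a sample vector.
def even_odd_permutation(n):
    order = list(range(0, n, 2)) + list(range(1, n, 2))  # [0, 2, 4, ..., 1, 3, 5, ...]
    P = np.zeros((n, n))
    P[np.arange(n), order] = 1.0                          # row i selects entry order[i]
    return P

P = even_odd_permutation(6)
x = np.array([0, 1, 2, 3, 4, 5])
print(P @ x)  # [0. 2. 4. 1. 3. 5.] -> even-indexed entries first, then odd-indexed
```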