## Fourier, Laplace transform questions


### Fourier, Laplace transform questions

I've heard these terms used multiple times around this forum, and also in the comics, particularly Fourier transforms. I don't really get either of them very well. As I understand it, a Fourier transform takes a function of time and gives you a function of frequency. Can anyone give an example with a bit of detail on how it works? I have no clue how Laplace transforms work.

I remember something about matrices representing different transformations of functions. Does there exist such a matrix A that can represent a Fourier Transform of a function? What about for a Laplace Transform? Or are they too complicated to represent via a matrix?
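In the discrete setting, at least, the answer to the matrix question is yes: the discrete Fourier transform of a length-N vector is exactly multiplication by an N-by-N matrix with entries [imath]e^{-2\pi i jk/N}[/imath]. A minimal numpy sketch (the size N = 8 is arbitrary):

```python
import numpy as np

# DFT matrix: A[j, k] = exp(-2*pi*i*j*k/N). Multiplying a vector by A
# is the discrete Fourier transform, so in finite dimensions the
# "transform as a matrix" picture is literal.
N = 8
j, k = np.meshgrid(np.arange(N), np.arange(N), indexing='ij')
A = np.exp(-2j * np.pi * j * k / N)

x = np.arange(N, dtype=float)
print(np.allclose(A @ x, np.fft.fft(x)))   # True
```

The continuous transforms are linear operators on infinite-dimensional function spaces, so they don't fit in a finite matrix, but this finite version is the same idea.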

Finally, I'm interested in Complex Analysis as a field. Will these transforms be turning up in it very much? Are there any transforms that I should also focus on?


- agelessdrifter
**Posts:**225**Joined:**Mon Oct 05, 2009 8:10 pm UTC

### Re: Fourier, Laplace transform questions

I dunno much about Fourier transforms, but the Laplace transform is a linear operator defined by the integral [math]\int_0^{\infty}e^{-st}f(t)\,dt=F(s)[/math]

and is used to solve differential equations of order n, given initial values of the solution function and its first n-1 derivatives.

Basically you take an equation that's difficult to solve, transform each of its parts into a different function, [imath]L\{f(t)\}[/imath], rearrange, and then undo the transformation to recover the solution.

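A minimal sketch of that definition in action, using sympy (purely illustrative, not part of the original post):

```python
import sympy as sp

t, s = sp.symbols('t s', positive=True)

# L{e^{-2t}} should be 1/(s + 2), straight from the defining integral
F = sp.laplace_transform(sp.exp(-2*t), t, s, noconds=True)
print(F)   # 1/(s + 2)

# The property that makes it useful for ODEs: L{f'} = s*F(s) - f(0).
# Check it for f(t) = cos(t), where f(0) = 1:
G = sp.laplace_transform(sp.diff(sp.cos(t), t), t, s, noconds=True)
H = s * sp.laplace_transform(sp.cos(t), t, s, noconds=True) - 1
print(sp.simplify(G - H))   # 0
```

That derivative rule is what turns a differential equation in t into an algebraic equation in s.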

- jestingrabbit
- Factoids are just Datas that haven't grown up yet
**Posts:**5967**Joined:**Tue Nov 28, 2006 9:50 pm UTC**Location:**Sydney

### Re: Fourier, Laplace transform questions

I'll focus on Fourier transforms; they're a little easier imo.

So, start with a periodic function, [imath]f:R \to R[/imath], and let's say that it has period 2π. If it doesn't have period 2π, you can fiddle it so that what I describe works for it too (just work with g(x) = f(2πx/T), where T is the period). A question that it's reasonable to ask is "What is a good way of approximating f, using periodic functions that we're already familiar with?". Well, the periodic functions that we're already familiar with are things of the form [imath]\cos(nt)[/imath] or [imath]\sin(nt)[/imath], and the approximations are of the form

[math]a_0 + \sum_{n=1}^N \left( a_n \cos(nt) + b_n \sin(nt) \right)[/math]

So, given that that's what you're doing, there are a few different ways to pick your [imath]a_n[/imath] and [imath]b_n[/imath]. But one way that means you don't have to change them for whichever N you pick, and that also makes an infinite sum of that form converge to the original f (some of the time), is to let

[math]a_0 = \frac{1}{2\pi}\int_0^{2\pi} f(t) dt, \quad a_n = \frac{1}{\pi} \int_0^{2\pi} f(t) \cos(nt)\ dt \quad \text{ and } \quad b_n = \frac{1}{\pi} \int_0^{2\pi} f(t) \sin(nt)\ dt[/math]

Another, more pure math, less applied, way to go has

[math]f_{\text{approx}} = \sum_{n=-\infty}^\infty c_n e^{int} \quad \text{ where } \quad c_n= \frac{1}{2\pi} \int_0^{2\pi} f(t) e^{-int}\ dt[/math]

and we say that [imath]\hat{f}[/imath] is the Fourier transform of f and [imath]\hat{f}(n) = c_n.[/imath]
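Here's a quick numerical check of those coefficients (a sketch; the test function is made up). For f(t) = cos(3t) + ½ sin(t), the coefficient integral should give |c_3| = 1/2, |c_1| = 1/4, and zero at other frequencies:

```python
import numpy as np

N = 4000
t = np.arange(N) * 2 * np.pi / N            # uniform grid on [0, 2*pi)
f = np.cos(3*t) + 0.5*np.sin(t)

def c(n):
    # Riemann-sum approximation of (1/2*pi) * integral of f(t) e^{-int} dt
    return np.sum(f * np.exp(-1j*n*t)) / N

print(round(abs(c(3)), 3))   # 0.5   (cos(3t) = (e^{3it} + e^{-3it})/2)
print(round(abs(c(1)), 3))   # 0.25  (sin(t) = (e^{it} - e^{-it})/2i)
print(round(abs(c(2)), 3))   # 0.0
```

On a uniform grid this sum is just the discrete Fourier transform, which is why it nails band-limited functions like this one.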

To really get what's going on, you'll need some linear algebra, and you need to understand what Hilbert space, aka [imath]L^2[/imath] (pronounced "ell two") space, is. [imath]L^2[0,2\pi)[/imath] is the set of all functions from [0,2π) to the complex numbers that satisfy the additional constraint that

[math]|f|_2 = \left(\int_0^{2\pi} |f(t)|^2\, dt \right)^{1/2} <\infty[/math]

and we say that [imath]|f|_2[/imath] is the [imath]L^2[/imath] norm of f (nb: we also need f to be measurable, but that's more technical, and Hilbert space is really equivalence classes of functions that differ on sets of measure 0, which is more technical still). What's more, we also have an extension of the idea of inner products aka dot products to this set of functions where

[math]\langle f, g \rangle = \int_0^{2\pi} f(t) \overline{g(t)}\ dt[/math]

is the inner product of f and g.

Now, think back to what you did with the Fourier transform. Each coefficient is a dot product of f with some particular function (with some constants hanging around), and in the approximation it multiplies that same function you took the dot product with. If you know some linear algebra, you will remember projections. Hilbert space is a vector space, and for [imath]f, g\in L^2[0,2\pi)[/imath],

[math]t\mapsto \frac{1}{(|g|_2)^2} \langle f, g \rangle g(t)[/math]

is the orthogonal projection of f onto g. The inner product gives a meaning to 'orthogonal' (two things are orthogonal if they have an inner product of 0) and it makes working projections out just a matter of taking inner products. That's an integral in the case of Hilbert space, but it's quite doable.

Our approximation, called a Fourier approximation (or a Fourier series if it's infinite) associated with f, is nothing more than a sum of orthogonal projections of f onto certain functions. You can check that the functions we project onto are mutually orthogonal, and they form an orthogonal basis for the Hilbert space in question (not quite a basis in the usual sense, but near enough for our purposes). They're not orthonormal, though, so you have to divide each inner product by the squared norm of the function you project onto, and those norms are mostly the same (π for each of the trig functions, except 2π for the constant).
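You can check that orthogonality claim numerically (a sketch; grid quadrature stands in for the integral over [0, 2π)):

```python
import numpy as np

N = 4096
t = np.arange(N) * 2 * np.pi / N
w = 2 * np.pi / N                       # quadrature weight for the integral

def inner(f, g):
    return np.sum(f * g) * w            # real inner product <f, g>

print(round(inner(np.cos(2*t), np.sin(2*t)), 6))   # 0.0: orthogonal
print(round(inner(np.cos(2*t), np.cos(3*t)), 6))   # 0.0: orthogonal
print(round(inner(np.cos(2*t), np.cos(2*t)), 6))   # 3.141593, i.e. squared norm pi, not 1
```

The last line is the "orthogonal but not orthonormal" point: the squared norm π is exactly the constant that shows up in front of the coefficient integrals.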

So, you can look at f as just a big vector, with [imath]a_n[/imath], [imath]b_n[/imath], [imath]c_n[/imath] and [imath]c_{-n}[/imath] being values of coordinates in two different bases, telling you how much of f is made up of stuff with period [imath]2\pi/n[/imath]. So, [imath]\hat{f}[/imath] tells you which frequencies are most active, at least wrt the trig functions, and how much of f is even or odd.

Just purely as an aside, starting with the trig functions is only one way to go, any orthonormal basis is fine.

ameretrifle wrote:Magic space feudalism is therefore a viable idea.

- Yakk
- Poster with most posts but no title.
**Posts:**11129**Joined:**Sat Jan 27, 2007 7:27 pm UTC**Location:**E pur si muove

### Re: Fourier, Laplace transform questions

Rabbit, seeing it work in a really simple system might help.

We have a vector v:=(2,1). We want to represent it in a different system -- say the vectors a := (1,1)/rt(2) and b := (1,-1)/rt(2). Note that these two vectors are at right angles to each other and are unit vectors.

So we take v dot a and v dot b. These are two numbers -- (3/rt(2), 1/rt(2)) -- that end up representing our original vector v in the basis defined by {a,b}. We can reconstitute v from this new representation.

Now, the Fourier transform is just that -- a bunch of dot products between our original element (a function in this case) and orthogonal "basis" elements (our trig functions).

There are a bunch of issues you have to work through to make sure it works, and technicalities to check, but... that's it.
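The arithmetic of that vector example, spelled out (numpy, purely illustrative):

```python
import numpy as np

v = np.array([2.0, 1.0])
a = np.array([1.0, 1.0]) / np.sqrt(2)   # unit vectors at right angles
b = np.array([1.0, -1.0]) / np.sqrt(2)

ca, cb = v @ a, v @ b                   # v.a = 3/sqrt(2), v.b = 1/sqrt(2)
v_back = ca * a + cb * b                # reconstitute v from the new coordinates
print(np.allclose(v_back, v))           # True
```

The Fourier coefficients are the function-space version of `ca` and `cb`, and summing the series is the function-space version of `v_back`.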

Just purely as an aside, starting with the trig functions is only one way to go, any orthonormal basis is fine.

I think, for convergence to work out "nicely", you need more than just that, don't you? Or was that for the more general, less linear algebra phase of that real analysis course? Been a long time!

One of the painful things about our time is that those who feel certainty are stupid, and those with any imagination and understanding are filled with doubt and indecision - BR

Last edited by JHVH on Fri Oct 23, 4004 BCE 6:17 pm, edited 6 times in total.


- jestingrabbit
- Factoids are just Datas that haven't grown up yet
**Posts:**5967**Joined:**Tue Nov 28, 2006 9:50 pm UTC**Location:**Sydney

### Re: Fourier, Laplace transform questions

Yakk wrote:Just purely as an aside, starting with the trig functions is only one way to go, any orthonormal basis is fine.

I think, for convergence to work out "nicely", you need more than just that, don't you? Or was that for the more general, less linear algebra phase of that real analysis course? Been a long time!

To get [imath]L^2[/imath] convergence, all you need is for the basis to be normalised, orthogonal, and such that its span is dense. For other kinds of convergence you might want other things, but for most practical purposes any one kind of convergence is as good as any other.
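A concrete instance of [imath]L^2[/imath] convergence (sketch; the square wave and its standard series 4/π times the sum of sin(kt)/k over odd k are textbook material): the L² error of the partial sums keeps shrinking, even though the pointwise overshoot at the jumps (the Gibbs phenomenon) never does.

```python
import numpy as np

N = 8192
t = np.arange(N) * 2 * np.pi / N
f = np.sign(np.sin(t))                          # square wave of period 2*pi

def partial_sum(K):
    s = np.zeros_like(t)
    for k in range(1, K + 1, 2):                # odd harmonics only
        s += (4 / np.pi) * np.sin(k * t) / k
    return s

def l2_err(K):
    # discrete approximation of the L2 norm of (f - partial sum)
    return np.sqrt(np.sum((f - partial_sum(K))**2) * 2 * np.pi / N)

print(l2_err(5) > l2_err(51) > l2_err(501))     # True: the error keeps shrinking
```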


### Re: Fourier, Laplace transform questions

jestingrabbit wrote:I'll focus on Fourier transforms, they're a little easier imo.

So, start with a periodic function, [imath]f:R \to R[/imath], and let's say that it has period 2π. If it doesn't have period 2π, you can fiddle it so that what I describe works for it too (just work with g(x) = f(2πx/T), where T is the period). A question that it's reasonable to ask is "What is a good way of approximating f, using periodic functions that we're already familiar with?". Well, the periodic functions that we're already familiar with are things of the form [imath]\cos(nt)[/imath] or [imath]\sin(nt)[/imath], and the approximations are of the form

[math]a_0 + \sum_{n=1}^N a_n \cos(nt) + b_n \sin(nt)[/math]

So Fourier transforms are only applicable to periodic functions?

- Yakk
- Poster with most posts but no title.
**Posts:**11129**Joined:**Sat Jan 27, 2007 7:27 pm UTC**Location:**E pur si muove

### Re: Fourier, Laplace transform questions

No, not really. Well, we cheat.

The first trick is to take a function that has 'compact support' -- one that goes to zero outside of some range. You can see how you can invent a periodic function that has all the information you need for that, right? (Just cut and paste copies.)

For functions that do not actually hit zero, one approach is windowing: you cut out a fixed bit of the function, then model that bit. As our convergence is in L^2 (and not uniform), the "cuts" at the ends can lead to "ringing" problems. To fix this, we use window functions to make the fall-off to zero smooth, then model each cut-off bit.

We then stitch them back together.

As noted, sin/cos functions are only one possible way you can do Fourier transforms (if the most popular) -- if your set of basis functions isn't periodic, the output isn't periodic either. Sin/Cos just has other nice properties (like, for example, the fact we are pretty good at hearing frequencies, and crappy at hearing amplitudes).
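A sketch of the windowing idea (the signal, sample rate, and choice of a Hann window here are all made up for illustration):

```python
import numpy as np

np.random.seed(0)
fs = 1000                                        # samples per second (made up)
n = np.arange(fs)                                # one second of signal
x = np.sin(2 * np.pi * 50 * n / fs) + 0.2 * np.random.randn(fs)

# Hann window: smooth fall-off to zero at both ends, to avoid "ringing"
hann = 0.5 - 0.5 * np.cos(2 * np.pi * n / fs)
X = np.fft.rfft(x * hann)

print(np.argmax(np.abs(X)))                      # 50: the 50 Hz tone dominates
```

With a one-second window, each bin of the transform is 1 Hz wide, so the buried tone shows up exactly at bin 50 despite the noise.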


- jestingrabbit
- Factoids are just Datas that haven't grown up yet
**Posts:**5967**Joined:**Tue Nov 28, 2006 9:50 pm UTC**Location:**Sydney

### Re: Fourier, Laplace transform questions

xepher wrote:So Fourier transforms are only applicable to periodic functions?

I would say that you at least need locally periodic behaviour for it to be worthwhile. The classic example I'm thinking of is music. It's generally not composed of a single note repeated over and over again, but locally it has notes, and doing Fourier analysis on small segments, each some fraction of a second long, can tell you what those notes are, and can be used to quantitatively flesh out the different timbres and textures that different instruments have.


- Yakk
- Poster with most posts but no title.
**Posts:**11129**Joined:**Sat Jan 27, 2007 7:27 pm UTC**Location:**E pur si muove

### Re: Fourier, Laplace transform questions

Jesting, I've used them in situations where there isn't local periodicity.

The Fourier transform has a number of neat properties. First, there is the convolution-pointwise multiplication duality trick, which is used to speed up large "multiplication-like" operations. Second, bandpass filters and other kinds of signal processing: while after doing the math the filter is often implemented as a tap-based one, the Fourier transform mathematics are pretty important in building those tap-based filters. And even if your signal doesn't contain any periodic components, the physical requirements of your wire may require keeping certain frequencies quiet. (The particular application is NTSC signals, where the standard is a hack on black-and-white TV, and high-frequency black-and-white data is interpreted as colour. Admittedly less of a problem with new digital standards!)

Oh, and the known fast quantum factoring algorithm uses a quantum Fourier transform. (This is related to the "first" point above).

Using it on music is also useful, but I'd call that a shallow application of something that's weirdly useful in more contexts than it looks like it "should" be.
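That convolution/pointwise-multiplication duality is exactly how FFT-based "fast multiplication" works. A small sketch with polynomial coefficients (made-up example): convolving the coefficient vectors is the same as multiplying their transforms pointwise.

```python
import numpy as np

p = np.array([1.0, 2.0, 3.0])      # 1 + 2x + 3x^2
q = np.array([4.0, 5.0])           # 4 + 5x

n = len(p) + len(q) - 1            # length of the product; pad to avoid wrap-around
r = np.fft.irfft(np.fft.rfft(p, n) * np.fft.rfft(q, n), n)

print(np.round(r).astype(int))     # [ 4 13 22 15], i.e. 4 + 13x + 22x^2 + 15x^3
```

For big inputs this is O(n log n) instead of the O(n²) of direct convolution, which is the "speed up large multiplication-like operations" trick.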



### Re: Fourier, Laplace transform questions

agelessdrifter wrote:I dunno much about Fourier transforms, but the Laplace transform is a linear operator defined by the integral [math]\int_0^{\infty}e^{-st}f(t)\,dt=F(s)[/math]

and is used to solve differential equations of order n, given initial values of the solution function and its first n-1 derivatives.

Basically you take an equation that's difficult to solve, transform each of its parts into a different function, [imath]L\{f(t)\}[/imath], rearrange, and then undo the transformation to recover the solution.

And that's pretty much it for Laplace Transforms?

### Re: Fourier, Laplace transform questions

xepher wrote:agelessdrifter wrote:I dunno much about Fourier transforms, but the Laplace transform is a linear operator defined by the integral [math]\int_0^{\infty}e^{-st}f(t)\,dt=F(s)[/math]

and is used to solve differential equations of order n, given initial values of the solution function and its first n-1 derivatives.

Basically you take an equation that's difficult to solve, transform each of its parts into a different function, [imath]L\{f(t)\}[/imath], rearrange, and then undo the transformation to recover the solution.

And that's pretty much it for Laplace Transforms?

Here's a useful chart:

http://www.vibrationdata.com/math/Lapla ... sforms.pdf

I wasted a total of twenty minutes trying to transform tan and ln.
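A table like that covers the "undo the transformations" half of the recipe; the whole loop on a small initial value problem, y'' + y = 0 with y(0) = 0 and y'(0) = 1, looks like this (sympy, purely illustrative):

```python
import sympy as sp

t, s = sp.symbols('t s', positive=True)

# Transform: L{y''} = s^2*Y(s) - s*y(0) - y'(0), so the ODE becomes
# (s^2 + 1)*Y(s) = 1.  Rearranging gives the algebraic solution:
Ys = 1 / (s**2 + 1)

# Invert (the table-lookup step) to get back to the time domain:
y = sp.inverse_laplace_transform(Ys, s, t)
print(y)   # sin(t)*Heaviside(t), i.e. y(t) = sin(t) for t >= 0
```

Differentiation became multiplication by s, the calculus problem became algebra, and the table took us back.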

- agelessdrifter
**Posts:**225**Joined:**Mon Oct 05, 2009 8:10 pm UTC

### Re: Fourier, Laplace transform questions

There are restrictions on the functions the Laplace transform can operate on, and technical things to take into consideration when using it, but what I said before is the general idea. To be honest, my ODE class didn't really get into the nitty-gritty technical stuff at all =/

I am not sure that you can transform ln.


### Re: Fourier, Laplace transform questions

agelessdrifter wrote:There are restrictions on the functions the Laplace transform can operate on, and technical things to take into consideration when using it, but what I said before is the general idea. To be honest, my ODE class didn't really get into the nitty-gritty technical stuff at all =/

I am not sure that you can transform ln.

You can't do it by naive integration by parts: you'll end up with -(e^-st)(ln t)/s from 0 to infinity plus some integral, and ln(0) is undefined (or negative infinity). (The transform itself does exist, though: the defining integral converges despite the singularity, and gives L{ln t} = -(ln s + γ)/s. It's only the by-parts route that dies at the boundary term.) Didn't stop me from wasting ten minutes on that problem, though.

EDIT: That is, trying to integrate by parts when the very first part was undefined.
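For what it's worth, the defining integral does converge (the log singularity at 0 is integrable). A quick numerical check at s = 1, where the known value is minus the Euler-Mascheroni constant, about -0.5772 (scipy assumed available; illustrative):

```python
import numpy as np
from scipy.integrate import quad

# L{ln t}(1) = integral from 0 to infinity of e^{-t} ln(t) dt,
# which converges even though ln(t) -> -infinity as t -> 0
val, err = quad(lambda t: np.exp(-t) * np.log(t), 0, np.inf)
print(round(val, 4))   # -0.5772
```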
