In this article I will show that the cyclic group of order n (that is, the set $\{0, 1, \dots, n-1\}$ under addition modulo n) motivates the discrete Fourier transform on a particular finite-dimensional complex inner product space, and gives many of its properties. In a subsequent article I will extend this to the general Fourier transform and its relation to the groups of integers and real numbers under addition.

To begin I want to consider linear representations of the cyclic group of order n: that is, I want to assign to each element of the group a linear operator on an inner product space in a way consistent with the group structure [or if you prefer, to find a homomorphism from the cyclic group to the group of automorphisms of an inner product space (a unitary group, since our spaces are complex)]. There are lots of ways to do this, for lots of different vector spaces – the simplest is to map every group element to the identity (the *trivial (linear) representation*).

It would be nice to have some sort of canonical linear representation. Given a set we can form a vector space by taking all formal linear combinations of its elements (that is, we consider the elements of the set to be linearly independent vectors, and the vector space is their span). If a group acts on that set we can extend the action to a linear representation on the induced vector space by extending it linearly; this is called the **permutation representation**.

For example if the set is $\{a, b, c\}$ the vector space is three dimensional and consists of all elements of the form $\alpha a + \beta b + \gamma c$. The group of all permutations on three elements acts on the set, and given such a permutation $\sigma$ it is represented by the linear mapping $\alpha a + \beta b + \gamma c \mapsto \alpha\,\sigma(a) + \beta\,\sigma(b) + \gamma\,\sigma(c)$.

Now the group G acts on the set G by left multiplication, and so we can construct a permutation representation. This is called the **regular representation** of G.

What does this look like for a cyclic group of order n? The vector space has a basis $e_0, e_1, \dots, e_{n-1}$, and the group element 1 is represented by the linear transformation S satisfying $S e_j = e_{j+1}$ (where addition is modulo n). The group element $k = 1 + 1 + \dots + 1$ is represented by $S^k$.

There is also a natural inner product $\langle e_j, e_k \rangle = \delta_{jk}$, and this is invariant under S (that is, S is unitary). As a matrix, $S_{jk} = \delta_{j,k+1}$ (indices modulo n).
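As a quick sanity check, here is a minimal NumPy sketch (variable names are mine, and $n = 4$ is an arbitrary choice) of the shift matrix S and its properties:

```python
import numpy as np

n = 4
# Shift matrix: S e_j = e_{j+1 mod n}, i.e. S[j, k] = 1 iff j == k+1 (mod n)
S = np.zeros((n, n))
for k in range(n):
    S[(k + 1) % n, k] = 1

# S shifts the basis vectors cyclically: S e_0 = e_1
e = np.eye(n)
assert np.allclose(S @ e[:, 0], e[:, 1])

# S is unitary (here real, so orthogonal): S^T S = I
assert np.allclose(S.T @ S, np.eye(n))

# S represents a group element of order n: S^n = I
assert np.allclose(np.linalg.matrix_power(S, n), np.eye(n))
```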

Now since S is unitary it is normal and hence, by the spectral theorem, unitarily diagonalizable. So let’s look for its eigenvectors and eigenvalues: since $S^n = I$ it’s clear its eigenvalues must be nth roots of unity, so denote $\omega = e^{-2\pi i/n}$ (the choice of sign, and to some extent root, is arbitrary). We can in fact easily see that $f_k = \frac{1}{\sqrt{n}} \sum_{j=0}^{n-1} \omega^{jk} e_j$ is a normalised eigenvector of S with eigenvalue $\omega^{-k}$ (go on, check it!). Actually the normalised eigenvectors are only determined up to an overall phase, so $e^{i\theta_k} f_k$ would work equally well, but I’ll stick to these phase conventions for convenience.

The diagonalising matrix, whose columns are the eigenvectors $f_k$, is then $F_{jk} = \frac{1}{\sqrt{n}} \omega^{jk}$.

So $F^{-1} S F = \operatorname{diag}(1, \omega^{-1}, \omega^{-2}, \dots, \omega^{-(n-1)})$. In fact F diagonalises every group element: $F^{-1} S^m F = \operatorname{diag}(1, \omega^{-m}, \omega^{-2m}, \dots, \omega^{-(n-1)m})$.
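With the phase conventions above, the diagonalisation can be checked numerically; this is a sketch (my own construction, with $n = 4$ chosen arbitrarily):

```python
import numpy as np

n = 4
omega = np.exp(-2j * np.pi / n)           # the phase convention from the text
S = np.roll(np.eye(n), 1, axis=0)         # S e_j = e_{j+1 mod n}
j, k = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
F = omega ** (j * k) / np.sqrt(n)         # columns are the eigenvectors f_k

# Each column f_k is an eigenvector of S with eigenvalue omega^{-k}
for m in range(n):
    assert np.allclose(S @ F[:, m], omega ** (-m) * F[:, m])

# F^{-1} S F is the diagonal matrix of eigenvalues
D = np.linalg.inv(F) @ S @ F
assert np.allclose(D, np.diag(omega ** (-np.arange(n))))
```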

F is precisely the discrete Fourier transform (up to a choice of normalisation): if $x = \sum_j x_j e_j$, then $(Fx)_k = \frac{1}{\sqrt{n}} \sum_{j=0}^{n-1} e^{-2\pi i jk/n} x_j$.
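Indeed, with this sign and normalisation the matrix F agrees with NumPy's unitary DFT (`norm="ortho"`); a small check, assuming the conventions above:

```python
import numpy as np

n = 8
j, k = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
F = np.exp(-2j * np.pi * j * k / n) / np.sqrt(n)

rng = np.random.default_rng(0)
x = rng.standard_normal(n) + 1j * rng.standard_normal(n)

# F applied to a vector is exactly the (unitary) discrete Fourier transform
assert np.allclose(F @ x, np.fft.fft(x, norm="ortho"))
```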

Many of the properties of the discrete Fourier transform follow immediately: we know it is unitary by the spectral theorem, and unitarity, $\langle Fx, Fy \rangle = \langle x, y \rangle$, is precisely the Plancherel theorem. In particular it is invertible, which gives completeness. One half of the shift theorem is also immediate: $FS = \Omega F$ where $\Omega = \operatorname{diag}(1, \omega, \dots, \omega^{n-1})$, i.e. $\widehat{(Sx)}_k = \omega^k \hat{x}_k$. One can see from the explicit form for F that $(F^2)_{jk} = \delta_{j+k,0}$ (indices modulo n), and so if we define the parity operator $(Px)_j = x_{-j}$ then $F^2 = P$ and $F^4 = I$ (though this would be different if we had chosen a different normalisation condition), so applying F to the half of the shift theorem above gives the other half (is there an easier way to see this?).
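These identities are all easy to verify numerically; a sketch under the same conventions as before (names and the choice $n = 6$ are mine):

```python
import numpy as np

n = 6
omega = np.exp(-2j * np.pi / n)
j, k = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
F = omega ** (j * k) / np.sqrt(n)
S = np.roll(np.eye(n), 1, axis=0)   # cyclic shift

# Plancherel: F is unitary
assert np.allclose(F.conj().T @ F, np.eye(n))

# One half of the shift theorem: F S = diag(omega^k) F
assert np.allclose(F @ S, np.diag(omega ** np.arange(n)) @ F)

# F^2 is the parity operator (Px)_j = x_{-j mod n}, hence F^4 = I
P = np.eye(n)[:, (-np.arange(n)) % n]
assert np.allclose(F @ F, P)
assert np.allclose(np.linalg.matrix_power(F, 4), np.eye(n))
```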

What about convolutions? Given that each basis vector corresponds to a group element, there is a natural algebraic structure on the vector space, namely $e_j * e_k = e_{j+k}$ (where as usual addition is modulo n). This is precisely a convolution. Exercise: by requiring $*$ to be distributive and expanding in components, prove $(x * y)_j = \sum_k x_k y_{j-k}$. What about the convolution theorem? Well, we don’t really have an idea of a multiplicative structure (yet) so it doesn’t really make sense.
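A direct transcription of this product into code (the function name is mine, not standard) shows the basis vectors multiplying exactly like group elements:

```python
import numpy as np

def cyclic_convolve(x, y):
    """Convolution induced by e_j * e_k = e_{j+k mod n}:
    (x * y)_j = sum_k x_k y_{(j-k) mod n}."""
    n = len(x)
    return np.array([sum(x[k] * y[(j - k) % n] for k in range(n))
                     for j in range(n)])

n = 5
e = np.eye(n)
# Basis vectors multiply like group elements: e_2 * e_4 = e_{6 mod 5} = e_1
assert np.allclose(cyclic_convolve(e[2], e[4]), e[1])
```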

What is the exact structure on V? There’s an inner product, but there’s also a **relative ordering** of the basis elements; it doesn’t matter where we start numbering the basis elements (except in the definition of convolutions), but S defines an order for them relative to each other. So to say the Fourier transform is defined by a complex inner product space alone is lying a little, because there is this extra structure. [Also, considering the Fourier transform is only defined up to a phase, it could be more natural to think of two vectors as equivalent if they differ only by a phase.] Actually there is a much more natural way to introduce this structure.

There is another way to think of a permutation representation. We form the vector space associated to a set as the vector space of all functions from the set to the complex numbers. The basis vector corresponding to the element s is the characteristic function $\chi_s$, which maps s to 1 and every other element to 0. (Exercise: show this is equivalent to the description given before, at least if the set is finite.) An arbitrary function can be decomposed into the basis of characteristic functions: $f = \sum_s f(s)\, \chi_s$. The action of a group element g is $(g \cdot f)(s) = f(g^{-1} s)$.

Now let’s look back at the regular representation of the cyclic group through this lens. We consider functions $f : \mathbb{Z}_n \to \mathbb{C}$, with the inner product $\langle f, g \rangle = \sum_j \overline{f(j)} g(j)$, and we have the shift operator given by $(Sf)(j) = f(j-1)$. The Discrete Fourier Transform is given by $\hat{f}(k) = \frac{1}{\sqrt{n}} \sum_j \omega^{jk} f(j)$. The diagonalisation property is that $F S F^{-1}$ is a multiplicative operator, equivalent to pointwise multiplication by the function $k \mapsto \omega^k$. (Indeed, Halmos notes that one way of viewing the spectral theorem is that any normal operator can be unitarily mapped to a multiplicative operator.)

A convolution is then $(f * g)(j) = \sum_k f(k) g(j-k)$. Now take the Fourier transform of a convolution of basis elements, $\chi_a * \chi_b = \chi_{a+b}$: using that $\hat{\chi}_{a+b}(k) = \frac{1}{\sqrt{n}} \omega^{(a+b)k}$ is, up to a factor of $\sqrt{n}$, the pointwise product $\hat{\chi}_a(k) \hat{\chi}_b(k)$ (no sum), we can rewrite it as $\widehat{\chi_a * \chi_b} = \sqrt{n}\, \hat{\chi}_a \hat{\chi}_b$. Applying linearity gives one half of the convolution theorem: $\widehat{f * g} = \sqrt{n}\, \hat{f} \hat{g}$. The other half, $\widehat{fg} = \frac{1}{\sqrt{n}}\, \hat{f} * \hat{g}$, is readily obtained using $F^{-1} = PF$. Thus the Fourier transform maps the additional ring structure given by pointwise multiplication to the convolution structure given by the regular representation.
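The $\sqrt{n}$ factor comes from the unitary normalisation; a numerical check of this half of the convolution theorem, under my conventions and with arbitrary test data:

```python
import numpy as np

def dft(x):
    """Unitary DFT matching F above: (1/sqrt(n)) * sum_j omega^{jk} x_j."""
    n = len(x)
    j, k = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
    return (np.exp(-2j * np.pi * j * k / n) / np.sqrt(n)) @ x

n = 8
rng = np.random.default_rng(1)
f = rng.standard_normal(n) + 1j * rng.standard_normal(n)
g = rng.standard_normal(n) + 1j * rng.standard_normal(n)

# Cyclic convolution (f * g)_j = sum_k f_k g_{(j-k) mod n}
conv = np.array([sum(f[k] * g[(j - k) % n] for k in range(n))
                 for j in range(n)])

# Convolution theorem, with the sqrt(n) from the unitary normalisation
assert np.allclose(dft(conv), np.sqrt(n) * dft(f) * dft(g))
```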

So what have we got? We started looking at the regular representation of the cyclic group, and to change to a basis in which the group operations were diagonal we invented the discrete Fourier transform.

The power of this idea is that there are many generalisations. We could look at more complicated groups, or even more general algebraic structures. The representation theory of cyclic groups is very simple since they are abelian; there’s a lot more involved in trying to diagonalize the representations of non-abelian groups. We could then have other notions of convolutions and Fourier-type transforms. We could also look at mapping to other vector spaces, or even to different geometric structures. If instead of constructing vector spaces over the complex numbers we constructed them over finite fields, we would get (for the right combination of dimension of the vector space and characteristic of the field) the finite Fourier transform, which is important in coding theory. One could also look at what happens to direct sums, tensor products and the like of the regular representations.

Did you come up with this yourself?

Well, I saw a note on Wikipedia that the Discrete Fourier Transform can be viewed through the representation theory of cyclic groups, and all the stuff on permutation and regular representations is straight out of Chapter 1 of Griffiths and Harris’s representation theory book, but I worked out the details myself.

Nice. You said the next one will use the integers and the reals under addition as the groups. Have you tried any of this analysis for other groups?

Not yet; that’s definitely a goal. In a non-abelian group the generators don’t commute, so they can’t be simultaneously diagonalized and the analysis will need to be modified; in particular I’m not sure whether there is an analogue of the Fourier transform.

But there’s definitely something to say: the representation theory of SO(3) dictates the decomposition of functions on a sphere into spherical harmonics.

I’m still trying to work out the right way to approach finite and infinite dimensional non-abelian groups.