Symmetry, Lie Algebras and Differential Equations Part 1

There is a deep relationship between being able to solve a differential equation and its symmetries. Much of the theory of second order linear differential equations is really infinite dimensional linear algebra; in particular Sturm-Liouville theory is the diagonalization of an infinite dimensional Hermitian operator. However there are deeper relationships, as Miller points out in “Lie theory and special functions”: identities satisfied by special functions, such as Rodrigues’ formulae, reflect the Lie algebra of symmetries of the system. Even better, in some cases the solutions can be found almost entirely algebraically. Some examples from physics come from the Simple Harmonic Oscillator, the theory of Angular Momentum and the Kepler Problem (using the Laplace-Runge-Lenz vector). The rest of this article will be devoted to exploring a special case of these relations: the Quantum Simple Harmonic Oscillator.

We begin by trying to solve the differential equation -\frac{1}{2m} f''(x) + \frac{k}{2} x^2 f(x) = \lambda f(x) for some real positive constants m and k and a constant \lambda, with the boundary condition that f vanishes at infinity. This is an eigenvalue equation; for fixed k and m it cannot be solved for arbitrary \lambda, but only for particular values of \lambda. By dilations (that is, rescaling units) we can assume without loss of generality that m=1 and k=1. It is useful to define the momentum operator p=-i \frac{\mathrm{d}}{\mathrm{d}x}; this makes everything more physics-like. If this isn't familiar to you, just substitute -i \frac{\mathrm{d}}{\mathrm{d}x} wherever you see a p.
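To see the rescaling explicitly, substitute x = s y with s = (mk)^{-1/4} and set g(y) = f(sy); the equation becomes \sqrt{k/m}\left(-\frac{1}{2} g''(y) + \frac{1}{2} y^2 g(y)\right) = \lambda g(y), which is the m=k=1 equation with eigenvalue \lambda\sqrt{m/k}.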

(The i is chosen to make the operator Hermitian with respect to the L^2 inner product: \int_{-\infty}^{\infty}{f(x)}^*\, p g(x)\,\mathrm{d}x = \int_{-\infty}^{\infty} \left(pf(x)\right)^* g(x)\,\mathrm{d}x, where * denotes complex conjugation and f and g vanish at infinity. This identity follows immediately from integration by parts.)

Introducing the Hamiltonian operator H=\frac{1}{2}\left(p^2+x^2\right), the differential equation becomes the eigenvalue equation H f(x) = \lambda f(x) (a form familiar to physicists). The Hamiltonian has an obvious symmetry: it is invariant under rotations in x-p space, that is, under transformations of the form:

\begin{bmatrix} x' \\ p' \end{bmatrix} = \begin{bmatrix} \cos \theta & -\sin \theta \\ \sin \theta & \cos \theta \end{bmatrix} \begin{bmatrix} x \\ p \end{bmatrix}.
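Indeed, even without using any commutation relations, x'^2 + p'^2 = \left(\cos\theta\, x - \sin\theta\, p\right)^2 + \left(\sin\theta\, x + \cos\theta\, p\right)^2 = x^2 + p^2, since the cross terms -\cos\theta\sin\theta\,(xp+px) and +\sin\theta\cos\theta\,(xp+px) cancel exactly.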

Following the ideas of Sophus Lie we look at the infinitesimal transformation generating this. Taking the derivative at the identity \theta=0 gives x \to -p, p \to x; in x-p space it is given by the matrix \begin{bmatrix} 0 & -1 \\ 1 & 0\end{bmatrix}.

This transformation is precisely the Fourier transform: f(x) \to \widehat{f}(k)=\frac{1}{\sqrt{2\pi}}\int_{-\infty}^{\infty} f(x) e^{-i k x}\,\mathrm{d}x. In particular, by differentiating under the integral and integrating by parts respectively, it follows that \widehat{xf(x)}=i\widehat{f}'(k) and \widehat{-if'(x)} = k \widehat{f}(k), so as an operator on functions the Fourier transform sends x \to -p and p \to x. (The matrix above is both the infinitesimal generator and the rotation by \theta=\pi/2; it is this quarter rotation that the Fourier transform implements.)
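Explicitly, differentiating under the integral gives \widehat{xf}(k)=\frac{1}{\sqrt{2\pi}}\int_{-\infty}^{\infty} x f(x) e^{-ikx}\,\mathrm{d}x = i\frac{\mathrm{d}}{\mathrm{d}k}\widehat{f}(k), while integrating by parts (the boundary terms vanish because f does at infinity) gives \widehat{-if'}(k)=\frac{-i}{\sqrt{2\pi}}\int_{-\infty}^{\infty} f'(x) e^{-ikx}\,\mathrm{d}x = \frac{-i}{\sqrt{2\pi}}\int_{-\infty}^{\infty} f(x)\,(ik) e^{-ikx}\,\mathrm{d}x = k\widehat{f}(k).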

Now, since the square of the Fourier transform on x-p space is minus the identity, it has eigenvalues +i and -i, with corresponding eigenvectors a = \frac{1}{\sqrt{2}}(x+ip) and a^{\dag} = \frac{1}{\sqrt{2}} (x- ip).
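Indeed, under x \to -p, p \to x we have a = \frac{1}{\sqrt{2}}(x+ip) \to \frac{1}{\sqrt{2}}(-p+ix) = ia and a^{\dag} = \frac{1}{\sqrt{2}}(x-ip) \to \frac{1}{\sqrt{2}}(-p-ix) = -ia^{\dag}.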

Now we introduce the commutator of operators [A,B]=AB-BA=-[B,A]; in particular [x,p]=i \mbox{Id} (since multiplication by x and differentiation don't commute). Consequently, by bilinearity, [a,a^{\dag}]=1.
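Explicitly, [x,p]f(x) = -ixf'(x) + i\frac{\mathrm{d}}{\mathrm{d}x}\left(xf(x)\right) = if(x), and [a,a^{\dag}] = \frac{1}{2}[x+ip,\,x-ip] = \frac{1}{2}\left(-i[x,p]+i[p,x]\right) = -i[x,p] = 1.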

Simple calculations show that a^{\dag}a = H -1/2, a a^{\dag} = H + 1/2, [H,a]=-a and [H,a^{\dag}]=a^{\dag}. These last two relations allow us to find the spectrum of H, that is, the values of \lambda for which the differential equation is solvable!
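For instance a^{\dag}a = \frac{1}{2}(x-ip)(x+ip) = \frac{1}{2}\left(x^2+p^2+i[x,p]\right) = H - \frac{1}{2}, and [H,a] = [a^{\dag}a,a] = [a^{\dag},a]a = -a; the other two relations follow in the same way.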

If the differential equation can be solved for some \lambda, say H f = \lambda f, then the commutation relations show that H(a f) = (\lambda-1) (af) and H(a^{\dag}f) = (\lambda +1) (a^{\dag}f). Thus a lowers the eigenvalue by 1 and is called a lowering operator, and a^{\dag} raises the eigenvalue by 1 and is called a raising operator.
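Indeed, H(af) = \left([H,a] + aH\right)f = \left(-a + \lambda a\right)f = (\lambda-1)(af), and similarly for a^{\dag}f.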

However we cannot lower indefinitely: H is positive semidefinite, since H=\frac{1}{2} (x^{\dag}x + p^{\dag}p) (where the dagger indicates Hermitian conjugation with respect to the L^2 inner product), so \lambda must be non-negative. Thus the lowering must eventually terminate: there is a non-zero function f_0(x) in the chain for which a f_0(x) = 0 (the zero function, of course, satisfies the differential equation trivially). On this state H f_0(x) = (a^{\dag} a + 1/2) f_0(x) = \frac{1}{2} f_0(x).
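To make this precise: \|af\|^2 = \langle af, af\rangle = \langle f, a^{\dag}a f\rangle = \left(\lambda - \frac{1}{2}\right)\|f\|^2, so every eigenvalue satisfies \lambda \ge \frac{1}{2}, and a kills an eigenfunction exactly when \lambda = \frac{1}{2}. Since each application of a lowers \lambda by 1, the chain f, af, a^2f, \ldots must reach zero, and the last non-zero function in the chain has eigenvalue exactly \frac{1}{2}.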

Moreover, since any solution can be brought to (a multiple of) f_0 by repeated lowering (applications of a), and lowering then raising gives back a non-zero multiple of the original function (a^{\dag}af = (\lambda - \frac{1}{2})f), every solution can be obtained by raising f_0. Thus the only possible eigenvalues are n+1/2 for n=0,1,2,….

What are the corresponding eigenvectors? Well a f_0 = 0 implies that x f_0(x) + f_0'(x) = 0, which has solutions f_0(x) = A e^{-\frac{x^2}{2}} for some constant A. Then the solution with \lambda = n + 1/2 is, up to a constant factor, (a^{\dag})^n f_0(x) \propto \left(x - \frac{\mathrm{d}}{\mathrm{d}x}\right)^n e^{-\frac{x^2}{2}} = H_n(x) e^{-\frac{x^2}{2}}, where the H_n(x) are the Hermite polynomials. Consequently we have found all solutions of the second order differential equation just by solving a first order differential equation! (They can also easily be normalized algebraically, that is, without doing any integrals, but I won't show that here.)
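As a sanity check, here is a short sympy computation verifying the first few cases (only an illustration; the names are mine):

```python
import sympy as sp

x = sp.symbols('x')
f0 = sp.exp(-x**2 / 2)          # ground state: a f0 = 0

def raise_once(g):
    """Apply sqrt(2) * a^dagger, i.e. x - d/dx."""
    return x * g - sp.diff(g, x)

f = f0
for n in range(6):
    # (x - d/dx)^n e^{-x^2/2} should equal H_n(x) e^{-x^2/2};
    # sympy's hermite uses the physicists' convention, matching H_n here
    assert sp.simplify(f - sp.hermite(n, x) * f0) == 0
    f = raise_once(f)

print("(x - d/dx)^n e^{-x^2/2} = H_n(x) e^{-x^2/2} for n = 0,...,5")
```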

It is interesting to note that all these solutions are eigenfunctions of the Fourier transform: each is mapped to a multiple of itself (in fact the n-th solution is mapped to (-i)^n times itself with the convention above). This is of course a consequence of the Hamiltonian being invariant under the Fourier transform F: if Hf = \lambda f then \lambda F f = F H f = (FHF^{-1}F)f = H(Ff), so Ff is a solution with the same eigenvalue; since each eigenvalue n+1/2 has a one-dimensional space of solutions, Ff must be a multiple of f.

From an abstract point of view, what have we done? We have taken an algebra of operators on some Hilbert space generated by self-adjoint operators x and p satisfying xp-px=i \text{Id} (notice that this implies the Hilbert space can't be finite dimensional: taking the trace of each side would give 0 on the left and i times the dimension on the right). Using this we have shown that the positive definite Hermitian operator H = \frac{1}{2} (x^2 + p^2) has eigenvalues n + 1/2 for n=0,1,2,….

We could choose an explicit representation: the Hilbert space is the space of square integrable functions, x is the multiplication operator and p = -i \frac{\mathrm{d}}{\mathrm{d}x}; in this representation the eigenvalue equation is the differential equation we started with. The solutions in this basis are the Hermite polynomials multiplied by a Gaussian; notice that these functions are orthogonal, being eigenfunctions of a Hermitian operator with distinct eigenvalues, and in fact they are complete in L^2. The formula for the eigenfunctions in terms of raising operators gives rise to a Rodrigues formula for the Hermite polynomials.
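Explicitly, since \left(x - \frac{\mathrm{d}}{\mathrm{d}x}\right)g(x) = -e^{x^2/2}\frac{\mathrm{d}}{\mathrm{d}x}\left(e^{-x^2/2}g(x)\right), iterating n times gives \left(x - \frac{\mathrm{d}}{\mathrm{d}x}\right)^n e^{-x^2/2} = (-1)^n e^{x^2/2}\frac{\mathrm{d}^n}{\mathrm{d}x^n}e^{-x^2}, and comparing with H_n(x)e^{-x^2/2} above yields the Rodrigues formula H_n(x) = (-1)^n e^{x^2}\frac{\mathrm{d}^n}{\mathrm{d}x^n}e^{-x^2}.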

However there is nothing canonical about this choice of representation; a different representation is given by the Fourier transform, which acts as a change of basis (in it p is the multiplication operator and x = i\frac{\mathrm{d}}{\mathrm{d}k}). That the Hamiltonian is invariant under the Fourier transform means FHF^{-1}=H, or [F,H]=0.

The nicest choice of basis is the one in which H is the (countably infinite dimensional) diagonal matrix with entries 1/2,3/2,5/2,…. Working with normalized eigenfunctions f_n (proportional to (a^{\dag})^n f_0), we have \|a f_n\|^2 = n\|f_n\|^2 and \|a^{\dag} f_n\|^2 = (n+1)\|f_n\|^2, so in this basis a is the matrix with \sqrt{1}, \sqrt{2}, \sqrt{3}, \ldots just above the diagonal and zeros everywhere else, a=\begin{bmatrix} 0 & \sqrt{1} & 0 & \ldots \\ 0 & 0 & \sqrt{2} & \ldots \\ 0 & 0 & 0 & \ldots \\ &&& \ddots \end{bmatrix} and a^{\dag} is its transpose. Representations for x and p can be obtained from x = \frac{1}{\sqrt{2}}(a + a^{\dag}) and p = \frac{1}{\sqrt{2}\, i} (a - a^{\dag}).
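As a quick numerical sanity check, we can truncate these infinite matrices to a finite size (the truncation introduces one spurious value at the corner, noted in the comments):

```python
import numpy as np

N = 12                                          # truncation size
a = np.diag(np.sqrt(np.arange(1, N)), k=1)      # lowering operator in the eigenbasis
ad = a.conj().T                                 # raising operator

x = (a + ad) / np.sqrt(2)
p = (a - ad) / (np.sqrt(2) * 1j)

# [x, p] = i Id holds exactly away from the truncated corner
assert np.allclose((x @ p - p @ x)[:-1, :-1], 1j * np.eye(N - 1))

# H = (x^2 + p^2)/2 has eigenvalues n + 1/2; the one duplicated value,
# (N-1)/2, is an artifact of truncating the matrices
print(np.round(np.linalg.eigvalsh(0.5 * (x @ x + p @ p)), 6))
```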

It is worth noting that in this derivation it wasn't enough to have a Lie algebra, that is, a Lie bracket; we also needed a multiplication for which the Lie bracket is the commutator, that is, a representation.


2 Responses to Symmetry, Lie Algebras and Differential Equations Part 1

  1. Stephen Wade says:

I get stuck at the point where you introduce ‘x-p’ space. I think by ‘x-p’ space you mean to take the Cartesian product of two sets, each set being some kind of (vector space) of operators (presumably of which ‘x’ and ‘p’ are members), right?

Then you define a linear transformation, say, which maps elements of this new two-dimensional vector space to itself, and applying this map to the operators ‘x’ and ‘p’ defined earlier gives us some new operators such that 1/2(x’^2 + p’^2) = 1/2(x^2 + p^2).

    Then you look at the infinitesimal transformations that generate the linear transformation, and apparently this gives us the Fourier transform, but at this point I can’t see how to relate the idea of a transformation of operators to a transformation such as the Fourier transform?

    • physjam says:

      Ok, I wasn’t super clear. x and p=-i \frac{d}{dx} are (self-adjoint) linear operators. The set of all (self-adjoint) linear operators forms a vector space; when I talk about x-p space I am talking about the 2 dimensional subspace generated by x and p, that is vectors of the form ax+bp. Consequently I’m ignoring all the product structure.

      Then, yes, I define clockwise rotations in the x-p plane, which the operator x^2+p^2 is invariant under.

Then I look at infinitesimal transformations that generate the linear transformation. On the positive x-axis a small clockwise rotation is a displacement along the -p axis (it lies tangent to the unit circle), and on the positive p-axis a vanishingly small clockwise rotation is a displacement along the x-axis. In fact the generators of the transformation map x to -p and map p to x. It's only when we bring back the algebraic structure of p=-i\frac{d}{dx} that it looks like a Fourier transform: multiplication by x is mapped to i\frac{d}{dx} and \frac{d}{dx} is mapped to ix, which is a defining property of the Fourier transform. [x and p as an algebra, that is adding compositions like xp, generate all the self-adjoint linear operators in this space; this is more of a definition of our linear operators and our vector space than a consequence.]

It sounds a bit artificial really; all I am really saying at the end is that x^2 - \frac{d^2}{dx^2} is invariant under a Fourier transform (or more generally a map x \to \cos(\theta) x + i \sin(\theta) \frac{d}{dx}, \frac{d}{dx} \to \cos(\theta) \frac{d}{dx} + i \sin(\theta) x, which can be given by \exp(\theta F) = \sum_{n=0}^{\infty} \theta^n F^n/n!, where F is the map the Fourier transform induces on the x-p plane and powers denote composition; to see why this works substitute F^2=-\mathrm{Id} into the power series.)

This symmetry then in some way carries over to the solutions; in particular a Fourier transform must map a solution to another solution. I find the eigenvectors in x-p space, and then play around with them to get the eigenvalues of x^2+p^2. There's something missing; I think it may boil down to a simple representation theory calculation, but I don't know what group I would try to represent.
