So consider the one-dimensional time-independent Schrödinger equation:

$$-\frac{\hbar^2}{2m}\frac{d^2\psi}{dx^2} + V(x)\psi = E\psi$$

In some ways it's not really an equation as such, because you have to plug in some function V(x) that describes the potential in the problem you're solving. When you first learn quantum mechanics you'll learn the big ones: the free particle V(x) = 0, the constant barrier V(x) = V0 > E, the harmonic oscillator V(x) = (1/2)kx^2, the Coulomb potential V(x) = -k/x, etc. One that's conspicuously absent is usually one of the simplest-looking potentials: V(x) = -Fx. That's just the potential of a constant force, say a particle in a uniform gravitational field. Rarely do textbook authors bother to work out a solution - they'll just say "This can be solved with the Airy function", and maybe give the solution as an accomplished fact.

Which isn't too bad - in most cases if you're the kind of person who knows what an Airy function is you can probably work it out yourself. But there's something to be said for a walkthrough for those who're just getting to the point of understanding how to solve differential equations in terms of special functions. So let's do it:

For the Schrödinger equation with a linear potential, we have to solve:

$$-\frac{\hbar^2}{2m}\frac{d^2\psi}{dx^2} - Fx\,\psi = E\psi$$

Rearrange and cancel the negative signs:

$$\frac{\hbar^2}{2m}\frac{d^2\psi}{dx^2} = -(Fx + E)\psi$$

This looks kinda sorta familiar. The Airy function - by definition - is the function solving the differential equation:

$$\frac{d^2y}{dx^2} - xy = 0$$

Our equation looks kinda sorta like that. What we'd like to do is to finagle our Schrödinger equation into that form, and then we'd know that the Airy function of some argument would be our solution. We might suspect the substitution u = Fx + E would be a good start. But that can't quite be it, because we'd still have that hbar^2/2m term floating out in front of the derivative. There may be some coefficient that can simultaneously get rid of the stuff in front of the derivative and turn Fx + E into just u. Let's call that coefficient (if it exists) A. We then have:

$$u = A(Fx + E)$$

By the chain rule,

$$\frac{d\psi}{dx} = \frac{du}{dx}\frac{d\psi}{du} = AF\frac{d\psi}{du}$$

If the u-x relationship weren't linear we'd have to be a little careful here, but it is and so it's clear that:

$$\frac{d^2\psi}{dx^2} = A^2F^2\frac{d^2\psi}{du^2}$$

Substitute these in, note that Fx + E = u/A, and we need the coefficient in front of the u to be -1, so go ahead and multiply through by -A:

$$-\frac{\hbar^2 A^3F^2}{2m}\frac{d^2\psi}{du^2} - u\psi = 0$$

Matching the Airy equation means the coefficient of the derivative term must equal 1, which fixes A:

$$A = -\left(\frac{2m}{\hbar^2F^2}\right)^{1/3}$$

Plug that sucker in and we have:

$$\frac{d^2\psi}{du^2} - u\psi = 0$$

The solution to which is, by definition:

$$\psi(x) = \mathrm{Ai}(u) = \mathrm{Ai}\!\left(-\left(\frac{2m}{\hbar^2F^2}\right)^{1/3}(Fx + E)\right)$$

Well, there's also the Airy function of the second kind Bi(x), but that diverges for positive arguments and usually isn't relevant. In occasional circumstances it is, and you should be aware of it.
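If you want numbers to go with that claim, here's a self-contained sketch (plain-stdlib Python; the Maclaurin-series approach and the name `airy_pair` are my own scaffolding - for real work you'd reach for something like scipy.special.airy) that shows Ai shrinking and Bi blowing up for positive arguments:

```python
import math

def airy_pair(x, tol=1e-16):
    """Evaluate Ai(x) and Bi(x) from their Maclaurin series.

    Both are combinations of the two series solutions f, g of y'' = x*y
    (coefficients obey a_{n+3} = a_n / ((n+3)(n+2))), weighted by the
    standard values Ai(0) = 3^(-2/3)/Gamma(2/3), Ai'(0) = -3^(-1/3)/Gamma(1/3).
    Fine for moderate |x|; not a substitute for a real special-function library.
    """
    c1 = 3.0 ** (-2.0 / 3.0) / math.gamma(2.0 / 3.0)   # Ai(0)
    c2 = 3.0 ** (-1.0 / 3.0) / math.gamma(1.0 / 3.0)   # -Ai'(0)
    f_term, g_term = 1.0, x          # current terms of the f and g series
    f_sum, g_sum = f_term, g_term
    n = 0
    while max(abs(f_term), abs(g_term)) > tol:
        f_term *= x ** 3 / ((n + 3) * (n + 2))
        g_term *= x ** 3 / ((n + 4) * (n + 3))
        f_sum += f_term
        g_sum += g_term
        n += 3
    ai = c1 * f_sum - c2 * g_sum
    bi = math.sqrt(3.0) * (c1 * f_sum + c2 * g_sum)
    return ai, bi

if __name__ == "__main__":
    for x in (0.0, 2.0, 4.0, 6.0):
        ai, bi = airy_pair(x)
        print(f"x = {x:3.0f}   Ai = {ai:12.4e}   Bi = {bi:12.4e}")
```

Running it, Ai decays toward zero as x grows while Bi grows roughly exponentially - which is exactly why Bi usually gets thrown away when the wavefunction has to stay normalizable in the forbidden region.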

I hereby declare us done. Now in practice you have to apply some boundary conditions. For instance, in solving the problem of a quantum ball bouncing on a hard surface we have the boundary condition psi(x = 0) = 0, and only particular values of E will satisfy that condition. Those are the eigenvalues that make up the energy spectrum. That's just details though.
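To make "apply some boundary conditions" concrete for the bouncing ball: psi(x = 0) = 0 forces the argument of Ai to sit at one of its zeros a_n (all negative), so the allowed energies come out as E_n = -a_n (hbar^2 F^2 / 2m)^(1/3). Here's a stdlib-only sketch of that step - the series-based `ai` and the bisection scan are my own stand-ins, not a library API:

```python
import math

def ai(x, tol=1e-16):
    """Ai(x) via its Maclaurin series (adequate for the moderate |x| used here)."""
    c1 = 3.0 ** (-2.0 / 3.0) / math.gamma(2.0 / 3.0)   # Ai(0)
    c2 = 3.0 ** (-1.0 / 3.0) / math.gamma(1.0 / 3.0)   # -Ai'(0)
    f_term, g_term = 1.0, x
    f_sum, g_sum = f_term, g_term
    n = 0
    while max(abs(f_term), abs(g_term)) > tol:
        f_term *= x ** 3 / ((n + 3) * (n + 2))
        g_term *= x ** 3 / ((n + 4) * (n + 3))
        f_sum += f_term
        g_sum += g_term
        n += 3
    return c1 * f_sum - c2 * g_sum

def ai_zeros(count, step=0.01):
    """Scan down the negative axis for sign changes of Ai, then bisect each bracket."""
    zeros, x = [], -step
    while len(zeros) < count:
        lo, hi = x - step, x
        if ai(lo) * ai(hi) < 0.0:           # bracketed a zero
            for _ in range(60):             # plain bisection
                mid = 0.5 * (lo + hi)
                if ai(lo) * ai(mid) <= 0.0:
                    hi = mid
                else:
                    lo = mid
            zeros.append(0.5 * (lo + hi))
        x -= step
    return zeros

# psi(0) = 0 forces Ai(A*E) = 0, so E_n = -a_n in natural energy
# units of (hbar^2 F^2 / 2m)^(1/3), where a_n < 0 are the zeros of Ai.
energies = [-a for a in ai_zeros(3)]
print(energies)   # roughly 2.338, 4.088, 5.521 in those units
```

Note the spectrum is discrete but not evenly spaced, unlike the harmonic oscillator - the levels crowd together as E grows.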

Details which you'll have to know if you're studying for exams - which are rapidly approaching if you're a student. Good luck! It's also interesting to compare this quantum result to the classical one, which we'll do shortly.


"...in solving the problem of a quantum ball bouncing on a hard surface..."

Ok, but that sounds like you're describing a time-dependent problem. So you'd need to solve an initial value problem for the time dependent S. equation instead. Then your initial condition would be some linear combination of all these different Airy function eigenfunctions with different energies and then you have to evolve them all and add them all up. Or is that your next post?

Please show your readers some graphs of the Airy function, and explain what those graphs tell us about the meanings of the solution that you found.

In your clear blog post, you did some mathematics. You have not done any physics.

If you would do some physics, you would also be explaining to csrster that finding the time-independent solution to Schrodinger's equation is not the same as finding a solution in which nothing is moving.

I learned something new today. A graph of the solution might help to visualize what's going on, or perhaps some illustration of the correspondence with the classical problem.

Nitpicky: shouldn't the partial derivatives be total derivatives, since \Psi is a function of x alone?

Left Coast Bernard:

If you knew some undergraduate-level quantum mechanics, you would also be explaining to csrster that the best method for finding the time-dependent solution to Schrodinger's equation for arbitrary initial conditions IS by finding the solutions to the Time-Independent Schrodinger Equation. (For a static V(x), which is the case here.) Which is just what this blog post does.

>> The solution to which is, by definition, Ai(x).

Ick. That's the same as saying the solution to y''=y is sin(x), just apply boundary conditions. To what? You haven't got any degrees of freedom.

The solution to y''=xy is a linear combination, (a superposition!) of Ai(x) and Bi(x).

a*Ai(x)+b*Bi(x), which gives you two degrees of freedom to assign boundary conditions with.

(You know all of this, and probably deliberately glossed over it for simplicity. But you're saying something outright wrong, in my eyes.)

I agree with Douglas and LCB. This is a botched job. You fail your professorship exam. :p

As a mathematician, I find the easiest way to introduce the Airy function is to use Fourier transforms. Kind of.

The Airy equation is

y''-uy=0

Let z be the Fourier transform of y; F(y) = z, where F stands for the Fourier transform and z = z(w) is the transformed function in the frequency domain. Then it can be shown that F(y'') = -w^2 z and F(uy) = -i z'. And so

-w^2 z + i z' = 0

Being careful about the i's and signs, this is a first order differential equation for z and is solved by

z(w)=K exp(-i*w^3/3)

And taking the inverse transform gives

y(u) = (1/(2*pi)) int exp(-i(uw + w^3/3)) dw, or, taking the real part, y(u) = (1/pi) int_0^inf cos(uw + w^3/3) dw

Which gives you the right phase term in your trigonometric integral, but comes up short on providing your linearly independent solution. Oh, the joys of Fourier analysis.
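For what it's worth, that representation can be checked numerically. The standard real form is the Airy integral Ai(u) = (1/pi) int_0^inf cos(uw + w^3/3) dw; here's a quick stdlib-only sketch comparing a truncated Simpson's-rule evaluation against tabulated Airy values (the cutoff and step count are arbitrary choices of mine, and the slowly decaying oscillatory tail limits you to about three digits):

```python
import math

# Tabulated reference values, e.g. Abramowitz & Stegun:
AI_REF = {0.0: 0.3550280539, 1.0: 0.1352924163}

def airy_integral(u, cutoff=20.0, n=200_000):
    """Approximate Ai(u) = (1/pi) * integral_0^inf cos(u*w + w^3/3) dw
    by Simpson's rule on [0, cutoff].

    The discarded tail oscillates with amplitude ~ 1/(u + cutoff^2),
    so expect roughly 1e-3 accuracy, not machine precision.
    """
    h = cutoff / n
    def f(w):
        return math.cos(u * w + w ** 3 / 3.0)
    total = f(0.0) + f(cutoff)
    for i in range(1, n):
        total += (4.0 if i % 2 else 2.0) * f(i * h)
    return (h / 3.0) * total / math.pi

for u, ref in AI_REF.items():
    print(f"u = {u}: integral = {airy_integral(u):.6f}, Ai(u) = {ref}")
```

As the comment says, this route only ever produces the one decaying solution; Bi never falls out of the naive Fourier inversion.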

Gotta disagree with Left Coast and Raskolnikov. Yes, "bouncing ball" doesn't intuitively sound like words that you'd pick to describe a stationary state, which this is. But QM ain't always intuitive. If you actually take a basketball and drop it, all you're doing is adding a bunch of these solutions together and letting their phases develop in time.

Now on the other hand, Douglas has got a point. I don't think it's actually wrong to gloss over the issue of linear combinations of Ai and Bi, but it is a simplification. On the other hand, anyone who knows enough for the simplification to make a difference already knows enough to see what the simplification is - viz., only certain Es and certain constant coefficients for each Ai_E will give you the linear combination you need for your particular problem.

And yes, I'm revisiting this and comparing to the classical case in a post later this week.