Schrödinger equation
From Wikipedia, the free encyclopedia
In physics, especially quantum mechanics, the Schrödinger equation is an equation that describes how the quantum state of a physical system changes in time. It is as central to quantum mechanics as Newton's laws are to classical mechanics.
In the standard interpretation of quantum mechanics, the quantum state, also called a wavefunction or state vector, is the most complete description that can be given to a physical system. Solutions to Schrödinger's equation describe atomic and subatomic systems, electrons and atoms, but also macroscopic systems, possibly even the whole universe. The equation is named after Erwin Schrödinger who discovered it in 1926.[1]
Schrödinger's equation can be mathematically transformed into the Heisenberg formalism, and into the Feynman path integral. The Schrödinger equation describes time in a way that is inconvenient for relativistic theories, a problem which is less severe in Heisenberg's formulation and completely absent in the path integral.
Historical background and development
Einstein interpreted Planck's quanta as photons, particles of light, and proposed that the energy of a photon is proportional to its frequency, a mysterious wave-particle duality. Since energy and momentum are related in the same way as frequency and wavenumber in relativity, it followed that the momentum of a photon is proportional to its wavenumber.
De Broglie hypothesized that this is true for all particles, for electrons as well as photons: the energy and momentum of an electron are the frequency and wavenumber of a wave. Assuming that the waves travel roughly along classical paths, he showed that they form standing waves only for certain discrete frequencies, discrete energy levels which reproduced the old quantum condition.
Following up on these ideas, Schrödinger decided to find a proper wave equation for the electron. He was guided by Hamilton's analogy between mechanics and optics, encoded in the observation that the zero-wavelength limit of optics resembles a mechanical system--- the trajectories of light rays become sharp tracks which obey a principle of least action. Hamilton believed that mechanics was the zero-wavelength limit of wave propagation, but did not formulate an equation for those waves. This is what Schrödinger did, and a modern version of his reasoning is contained in the box below.
Assumptions:
- The particle is described by a wave.
- The frequency of the wave is the energy E of the particle, while the momentum p is the wavenumber k (in fact, this is a consequence of special relativity and the first assumption):
    E = \hbar\omega , \qquad p = \hbar k ,
  with the reduced Planck constant ℏ serving as a unit conversion factor.
- The total energy is the same function of momentum and position as in classical mechanics:
    E = \frac{p^2}{2m} + V(x) ,
  where the first term is the kinetic energy and the second term is the potential energy.
Schrödinger required that a wave packet at position x with wavenumber k will move along the trajectory determined by Newton's laws in the limit that the wavelength is small.
Consider first the case without a potential, V=0.
Replacing energy/momentum by frequency/wavenumber, the energy relation becomes:
    \hbar\omega = \frac{\hbar^2 k^2}{2m} .
A plane wave is a wave with a wavenumber k, and it has the following form:
    \psi(x,t) = A\, e^{i(kx - \omega t)} .
The sign convention for the frequency is chosen to make the quantity in the exponential the relativistic dot-product.
Taking a time derivative multiplies by the frequency:
    \frac{\partial \psi}{\partial t} = -i\omega\,\psi .
Taking a spatial derivative multiplies by the wavenumber:
    \frac{\partial \psi}{\partial x} = ik\,\psi ,
so that a plane wave with the right energy/frequency relationship obeys the free Schrödinger equation:
    i\hbar\,\frac{\partial \psi}{\partial t} = \hbar\omega\,\psi = \frac{\hbar^2 k^2}{2m}\,\psi = -\frac{\hbar^2}{2m}\,\frac{\partial^2 \psi}{\partial x^2} .
Since every function is a linear combination of plane waves, this equation is obeyed by an arbitrary wave describing a free particle.
Since there is no potential, a wavepacket should travel in a straight line at the correct classical velocity. The velocity v of such a wavepacket is the group velocity:
    v = \frac{\partial \omega}{\partial k} = \frac{\hbar k}{m} ,
which is the momentum over the mass, as it should be. This reproduces one of Hamilton's equations from mechanics:
    \frac{dx}{dt} = \frac{\partial H}{\partial p} ,
after identifying the energy and momentum of a wavepacket as the frequency and wavenumber.
To include a potential energy, consider that as a particle moves the energy is conserved, so that for a wavepacket with approximate wavenumber k at approximate position x the quantity
    \frac{\hbar^2 k^2}{2m} + V(x)
must be constant. The frequency doesn't change as a wave moves, but the wavenumber does. So where there is a potential energy, it must add in the same way:
    i\hbar\,\frac{\partial \psi}{\partial t} = -\frac{\hbar^2}{2m}\,\frac{\partial^2 \psi}{\partial x^2} + V(x)\,\psi .
This is the time-dependent Schrödinger equation. It is the equation for the energy in classical mechanics, turned into a differential equation by substituting:
    E \rightarrow i\hbar\,\frac{\partial}{\partial t} , \qquad p \rightarrow -i\hbar\,\frac{\partial}{\partial x} .
Schrödinger studied the standing-wave solutions, since these were the energy levels. Standing waves have a complicated dependence on space, but vary in time in a simple way:
    \psi(x,t) = \psi(x)\, e^{-iEt/\hbar} .
Substituting, the time-dependent equation becomes the standing-wave equation:
    E\,\psi(x) = -\frac{\hbar^2}{2m}\,\frac{\partial^2 \psi}{\partial x^2} + V(x)\,\psi(x) ,
which is the original time-independent Schrödinger equation.
In a potential gradient, the k-vector of a short-wavelength wave must vary from point to point, to keep the total energy constant. Sheets perpendicular to the k-vector are the wavefronts, and they gradually change direction, because the wavelength is not everywhere the same. A wavepacket follows the shifting wavefronts with the classical velocity, with the acceleration equal to the force divided by the mass.
An easy modern way to verify that Newton's second law holds for wavepackets is to take the Fourier transform of the time-dependent Schrödinger equation. For an arbitrary polynomial potential this is called the Schrödinger equation in the momentum representation:
    i\hbar\,\frac{\partial \tilde\psi(p,t)}{\partial t} = \frac{p^2}{2m}\,\tilde\psi(p,t) + V\!\left(i\hbar\,\frac{\partial}{\partial p}\right)\tilde\psi(p,t) .
The group-velocity relation for the Fourier-transformed wavepacket gives the second of Hamilton's equations.
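The algebra in the box above can be checked symbolically. The following is a minimal sketch (not part of the original derivation), using the SymPy library; the symbol names and units are assumptions made for the illustration.

```python
# Sketch: verify that a plane wave with omega = hbar k^2 / 2m satisfies the
# free Schrodinger equation  i hbar dpsi/dt = -(hbar^2/2m) d^2 psi/dx^2.
import sympy as sp

x, t, k, m, hbar = sp.symbols('x t k m hbar', real=True, positive=True)
omega = hbar * k**2 / (2 * m)                  # energy/frequency relation
psi = sp.exp(sp.I * (k * x - omega * t))       # plane wave

lhs = sp.I * hbar * sp.diff(psi, t)
rhs = -hbar**2 / (2 * m) * sp.diff(psi, x, 2)
print(sp.simplify(lhs - rhs))                  # prints 0: the equation holds
```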
Assumptions:
- 1- The total energy E of a particle is
    E = \frac{p^2}{2m} + V(x,t) .
  This is the classical mechanics expression for a particle with mass m where the total energy E is the sum of the kinetic energy, p²/2m, and the potential energy V. The momentum of the particle is p, or mass times velocity. The potential energy is assumed to vary with position, and possibly time as well.
Note that the energy E and momentum p appear in the following two relations:
- 2- Einstein's light quanta hypothesis of 1905:
    E = hf ,
  where the frequency f of the quanta of radiation (photons) is related to the energy E by Planck's constant h.
- 3- The de Broglie hypothesis of 1924:
    p = \frac{h}{\lambda} ,
  where λ is the wavelength of the wave. This hypothesis also requires:
- 4- The association of a wave (with wavefunction ψ) with any particle.
Combining the above assumptions yields Schrödinger's equation:
Expressing the frequency f in terms of the angular frequency ω = 2πf and the wavelength λ in terms of the wavenumber k = 2π/λ, with ℏ = h/2π, we get:
    E = \hbar\omega
and
    \mathbf{p} = \hbar\mathbf{k} ,
where we have expressed p and k as vectors.
Schrödinger's great insight, late in 1925, was to express the phase of a plane wave as a complex phase factor:
    \psi(\mathbf{r},t) = A\, e^{i(\mathbf{k}\cdot\mathbf{r} - \omega t)}
and to realize that since
    \frac{\partial \psi}{\partial t} = -i\omega\,\psi ,
then
    E\,\psi = \hbar\omega\,\psi = i\hbar\,\frac{\partial \psi}{\partial t} ,
and similarly since
    \nabla\psi = i\mathbf{k}\,\psi ,
then
    \mathbf{p}\,\psi = \hbar\mathbf{k}\,\psi = -i\hbar\,\nabla\psi ,
and hence:
    \mathbf{p}^2\,\psi = -\hbar^2\,\nabla^2\psi ,
so that, again for a plane wave, he obtained the correspondences:
    E \rightarrow i\hbar\,\frac{\partial}{\partial t} , \qquad \mathbf{p} \rightarrow -i\hbar\,\nabla .
And by inserting these expressions for the energy and momentum into the classical mechanics formula we started with, we get Schrödinger's famed equation for a single particle in the 3-dimensional case in the presence of a potential V:
    i\hbar\,\frac{\partial \psi}{\partial t} = -\frac{\hbar^2}{2m}\,\nabla^2\psi + V(\mathbf{r})\,\psi .
Using this equation, Schrödinger computed the spectral lines for hydrogen by treating a hydrogen atom's single negatively charged electron as a wave, ψ, moving in a potential well, V, created by the positively charged proton. This computation reproduced the energy levels of the Bohr model.
But this was not enough, since Sommerfeld had already seemingly correctly reproduced relativistic corrections. Schrödinger used the relativistic energy-momentum relation to find what is now known as the Klein-Gordon equation in a Coulomb potential (in natural units):
    \left(E + \frac{e^2}{r}\right)^2\psi(x) = -\nabla^2\psi + m^2\,\psi .
He found the standing-waves of this relativistic equation, but the relativistic corrections disagreed with Sommerfeld's formula. Discouraged, he put away his calculations and secluded himself in an isolated mountain cabin with a lover.
While there, Schrödinger decided that the earlier nonrelativistic calculations were novel enough to publish, and decided to leave off the problem of relativistic corrections for the future. He put together his wave equation and the spectral analysis of hydrogen in a paper in 1926.[2] The paper was enthusiastically endorsed by Einstein, who saw the matter-waves as the visualizable antidote to what he considered to be the overly formal matrix mechanics.
The Schrödinger equation defines the behaviour of ψ, but does not interpret what ψ is. Schrödinger tried unsuccessfully, in his fourth paper, to interpret it as a charge density.[3] In 1926 Max Born, just a few days after Schrödinger's fourth and final paper was published, successfully interpreted ψ as a probability amplitude.[4] Schrödinger, though, always opposed a statistical or probabilistic approach, with its associated discontinuities; like Einstein, who believed that quantum mechanics was a statistical approximation to an underlying deterministic theory, Schrödinger was never reconciled to the Copenhagen interpretation.[5]
Mathematical forms
There are various closely related equations which go under Schrödinger's name:
Time-dependent Schrödinger equation
The time-dependent Schrödinger equation for a system with Hamiltonian (energy) operator Ĥ is
    i\hbar\,\frac{\partial}{\partial t}\psi = \hat H\,\psi ,
where ψ is the wavefunction, ℏ is the reduced Planck constant and i is the imaginary unit. The form of the Hamiltonian is different for different systems.
For a non-relativistic particle moving in a potential, the Hamiltonian operator is the sum of the kinetic and potential energies:
    \hat H = -\frac{\hbar^2}{2m}\,\nabla^2 + V(\mathbf{r},t) ,
And the Schrödinger equation is a partial differential equation:
    i\hbar\,\frac{\partial \psi}{\partial t} = -\frac{\hbar^2}{2m}\,\nabla^2\psi + V(\mathbf{r},t)\,\psi .
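As a concrete illustration of this partial differential equation, here is a minimal numerical sketch (not from the article) that integrates the one-dimensional equation with the Crank-Nicolson scheme; the grid, the harmonic potential, and the units ℏ = m = 1 are assumptions chosen for the example.

```python
# Sketch: integrate i dpsi/dt = H psi (hbar = m = 1) with Crank-Nicolson,
# which is unconditionally stable and preserves the norm of psi.
import numpy as np

N, L, dt = 400, 20.0, 0.01
x = np.linspace(-L / 2, L / 2, N)
dx = x[1] - x[0]
V = 0.5 * x**2                                   # assumed example: harmonic potential

# Finite-difference Hamiltonian  H = -(1/2) d^2/dx^2 + V
lap = (np.diag(np.ones(N - 1), -1) - 2 * np.eye(N) + np.diag(np.ones(N - 1), 1)) / dx**2
H = -0.5 * lap + np.diag(V)

# Crank-Nicolson: (1 + i dt H/2) psi_new = (1 - i dt H/2) psi_old
A = np.eye(N) + 0.5j * dt * H
B = np.eye(N) - 0.5j * dt * H
U = np.linalg.solve(A, B)                        # one-step propagation matrix

psi = np.exp(-0.5 * (x - 2.0)**2 + 1j * x)       # displaced Gaussian with momentum
psi /= np.sqrt(np.sum(np.abs(psi)**2) * dx)

for _ in range(500):                             # evolve to t = 5
    psi = U @ psi

print(np.sum(np.abs(psi)**2) * dx)               # norm stays ~1 (unitary evolution)
```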
Time-independent Schrödinger equation
When the Hamiltonian does not depend on time (which, in the case of a single particle, is when the potential energy does not change in time) there are special solutions of the time-dependent equation which form standing waves. These waves have a constant energy/frequency, oscillating in time without changing shape. They obey what is sometimes called the time-independent Schrödinger equation,
    \hat H\,\psi = E\,\psi ,
which means that
    i\hbar\,\frac{\partial \psi}{\partial t} = E\,\psi ,
so that ψ has constant frequency:
    \psi(t) = e^{-iEt/\hbar}\,\psi(0) ,
where ψ(0) is the value of ψ at t = 0. Such a solution describes a stationary state in quantum mechanics, a state with a definite value of the energy. In such a state, all the probabilities for the outcomes of any measurement do not depend on time.
For a particle in a one-dimensional potential, the standing-wave condition is:
    E\,\psi(x) = -\frac{\hbar^2}{2m}\,\frac{d^2\psi}{dx^2} + V(x)\,\psi(x) .
In more dimensions, the only difference is more space derivatives:
    E\,\psi(\mathbf{r}) = -\frac{\hbar^2}{2m}\,\nabla^2\psi(\mathbf{r}) + V(\mathbf{r})\,\psi(\mathbf{r}) ,
where ∇² is the Laplacian.
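Discretizing the standing-wave condition on a grid turns it into a matrix eigenvalue problem. A minimal sketch (not from the article), assuming ℏ = m = 1 and a harmonic potential as the example:

```python
# Sketch: solve  E psi = -(1/2) psi'' + V psi  by diagonalizing a finite-difference matrix.
import numpy as np

N, L = 1000, 20.0
x = np.linspace(-L / 2, L / 2, N)
dx = x[1] - x[0]
V = 0.5 * x**2                                   # assumed example: harmonic oscillator

# Tridiagonal Hamiltonian from the three-point Laplacian
H = (np.diag(1.0 / dx**2 + V)
     + np.diag(-0.5 / dx**2 * np.ones(N - 1), 1)
     + np.diag(-0.5 / dx**2 * np.ones(N - 1), -1))

E, psi = np.linalg.eigh(H)                       # eigenvalues in ascending order
print(E[:4])                                     # approximately 0.5, 1.5, 2.5, 3.5
```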
Bra-ket versions
In the mathematical formulation of quantum mechanics, a physical system is fully described by a vector in a complex Hilbert space, the collection of all possible normalizable wavefunctions. A normalizable wavefunction is just an alternate name for a vector in Hilbert space; the two terms are synonyms. This is true even though in general the vectors do not describe the probability amplitudes for a particle to be in a certain position, so they don't "wave" in any physical sense. The only wavefunction that is a wave in space and time is the wavefunction for a single particle in the position representation.
Two nonzero vectors which are multiples of each other, two wavefunctions which are the same up to rescaling, represent the same physical state. The Schrödinger equation determines the rate of change of the state vector.
In bra-ket notation:
    i\hbar\,\frac{d}{dt}\left|\psi\right\rangle = \hat H\left|\psi\right\rangle ,
where |ψ⟩ is a ket, an infinite-component complex vector, and Ĥ is the Hamiltonian, a linear map from kets to kets. The Hamiltonian should be a self-adjoint operator, so that its eigenvalues are real.
The nonzero elements of a Hilbert space are by definition normalizable, and it is convenient to represent a state by an element of the ray which has unit length. For every time-independent Hamiltonian operator Ĥ, there exists a set of quantum states |ψ_n⟩, known as energy eigenstates, and corresponding real numbers E_n satisfying the eigenvalue equation,
    \hat H\left|\psi_n\right\rangle = E_n\left|\psi_n\right\rangle .
Alternatively, |ψ_n⟩ is said to be an eigenstate (eigenket) of Ĥ with eigenvalue E_n. Such a state possesses a definite total energy, whose value E_n is the eigenvalue of the Hamiltonian. The corresponding eigenvector is normalizable to unity. This eigenvalue equation is referred to as the time-independent Schrödinger equation. We purposely left out the variable(s) on which the wavefunction depends.
Self-adjoint operators, such as the Hamiltonian, have the property that their eigenvalues are always real numbers, as we would expect, since the energy is a physically observable quantity. Sometimes more than one linearly independent state vector corresponds to the same energy E_n. If the maximum number of linearly independent eigenvectors corresponding to E_n equals k, we say that the energy level E_n is k-fold degenerate. When k = 1 the energy level is called non-degenerate.
On inserting a solution of the time-independent Schrödinger equation into the full Schrödinger equation, we get
    i\hbar\,\frac{\partial}{\partial t}\left|\psi_n(t)\right\rangle = E_n\left|\psi_n(t)\right\rangle .
It is relatively easy to solve this equation. One finds that the energy eigenstates (i.e., solutions of the time-independent Schrödinger equation) change as a function of time only trivially, namely, only by a complex phase:
    \left|\psi_n(t)\right\rangle = e^{-iE_n t/\hbar}\left|\psi_n(0)\right\rangle .
It immediately follows that the probability density,
    \left|\psi_n(\mathbf{r},t)\right|^2 = \left|\psi_n(\mathbf{r},0)\right|^2 ,
is time-independent. Because of a similar cancellation of phase factors in bra and ket, all average (expectation) values of time-independent observables (physical quantities) computed from |ψ_n(t)⟩ are time-independent.
Energy eigenstates are convenient to work with because they form a complete set of states. That is, the eigenvectors |n⟩ form a basis for the state space. We introduced here the short-hand notation |n⟩ = |ψ_n⟩. Then any state vector |ψ(t)⟩ that is a solution of the time-dependent Schrödinger equation (with a time-independent Ĥ) can be written as a linear superposition of energy eigenstates:
    \left|\psi(t)\right\rangle = \sum_n c_n(t)\left|n\right\rangle , \qquad \sum_n \left|c_n(t)\right|^2 = 1 .
(The last equation enforces the requirement that |ψ(t)⟩, like all state vectors, may be normalized to a unit vector.) Applying the Hamiltonian operator to each side of the first equation, using the time-dependent Schrödinger equation on the left-hand side, and using the fact that the energy basis vectors are by definition linearly independent, we readily obtain
    i\hbar\,\frac{\partial c_n(t)}{\partial t} = E_n\, c_n(t) .
Therefore, if we know the decomposition of |ψ(t)⟩ into the energy basis at time t = 0, its value at any subsequent time is given simply by
    \left|\psi(t)\right\rangle = \sum_n c_n(0)\, e^{-iE_n t/\hbar}\left|n\right\rangle .
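This recipe (expand in eigenstates, attach a phase exp(-iE_n t/ℏ) to each coefficient, and resum) translates directly into a few lines of linear algebra. A minimal sketch, not from the article, with ℏ = 1; the small random Hermitian matrix is an assumed stand-in for any Hamiltonian:

```python
# Sketch: |psi(t)> = sum_n c_n(0) exp(-i E_n t) |n>
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(0)
A = rng.normal(size=(6, 6)) + 1j * rng.normal(size=(6, 6))
H = (A + A.conj().T) / 2                         # Hermitian "Hamiltonian"

E, U = np.linalg.eigh(H)                         # columns of U are the eigenkets |n>
psi0 = np.zeros(6, dtype=complex)
psi0[0] = 1.0

def evolve(psi0, t):
    c0 = U.conj().T @ psi0                       # c_n(0) = <n|psi(0)>
    return U @ (np.exp(-1j * E * t) * c0)        # attach phases and resum

# Agrees with direct exponentiation of the Schrodinger equation
print(np.allclose(evolve(psi0, 3.0), expm(-3.0j * H) @ psi0))   # True
```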
Note that when some coefficients c_n are not equal to zero for differing energy values E_n, the left-hand side is not an eigenvector of the energy operator Ĥ. The left-hand side is an eigenvector when the only c_n-values not equal to zero belong to the same energy, so that the phase factor e^{-iE_n t/ℏ} can be factored out. In many real-world applications this is the case and the state vector (containing time only in its phase factor) is then a solution of the time-independent Schrödinger equation.
Let |a⟩ and |b⟩ be degenerate eigenstates of the time-independent Hamiltonian Ĥ:
    \hat H\left|a\right\rangle = E\left|a\right\rangle , \qquad \hat H\left|b\right\rangle = E\left|b\right\rangle .
Suppose a solution of the full (time-dependent) Schrödinger equation has the form at t = 0:
    \left|\psi(0)\right\rangle = c_a\left|a\right\rangle + c_b\left|b\right\rangle .
Hence, because of the discussion above, at t > 0:
    \left|\psi(t)\right\rangle = \left(c_a\left|a\right\rangle + c_b\left|b\right\rangle\right) e^{-iEt/\hbar} ,
which shows that |ψ(t)⟩ depends on time only in a trivial way (through its phase), also in the case of degeneracy.
Apply now Ĥ:
    \hat H\left|\psi(t)\right\rangle = \left(c_a E\left|a\right\rangle + c_b E\left|b\right\rangle\right) e^{-iEt/\hbar} = E\left|\psi(t)\right\rangle .
Conclusion: The wavefunction with the given initial condition (its form at t = 0), remains a solution of the time-independent Schrödinger equation for all times t > 0.
Properties
Linearity
The Schrödinger equation (in any form) is linear in the wavefunction, meaning that if ψ(x,t) and φ(x,t) are solutions, then so is aψ + bφ, where a and b are any complex numbers. This property of the Schrödinger equation has important consequences.
Assumptions:
- The Schrödinger equation:
    i\hbar\,\frac{\partial \psi}{\partial t} = \hat H\,\psi .
- ψ_1 and ψ_2 are solutions of the Schrödinger equation.
- Then, since the Hamiltonian and the time derivative are both linear,
    i\hbar\,\frac{\partial}{\partial t}\left(a\psi_1 + b\psi_2\right) = a\,\hat H\psi_1 + b\,\hat H\psi_2 = \hat H\left(a\psi_1 + b\psi_2\right) ,
  so aψ_1 + bψ_2 is also a solution (as the Hamiltonian is a linear operator).
Conservation of probability
In order to describe how probability density changes with time, we define a probability current or probability flux. The probability flux represents a flowing of probability across space.
For example, consider a Gaussian probability curve centered around x0 with x0 moving at speed v to the right. One may say that the probability is flowing towards the right, i.e., there is a probability flux directed to the right.
The probability flux is defined as:
    \mathbf{j} = \frac{\hbar}{2mi}\left(\psi^*\nabla\psi - \psi\nabla\psi^*\right) = \frac{\hbar}{m}\,\mathrm{Im}\left(\psi^*\nabla\psi\right) ,
and measured in units of (probability)/(area × time) = r^{-2} t^{-1}.
The probability flux satisfies the required continuity equation for a conserved quantity, i.e.:
    \frac{\partial \rho}{\partial t} + \nabla\cdot\mathbf{j} = 0 ,
where ρ = |ψ|² is the probability density, measured in units of (probability)/(volume) = r^{-3}. This equation is the mathematical equivalent of the probability conservation law.
A standard calculation shows that for a plane wave described by the wavefunction,
    \psi(x,t) = A\, e^{i(kx - \omega t)} ,
the probability flux is given by
    j(x,t) = \left|A\right|^2 \frac{\hbar k}{m} ,
showing that not only is the probability of finding the particle in a plane wave state the same everywhere at all times, but also that it is moving at constant speed everywhere.
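The same quantities are easy to evaluate for a wavefunction sampled on a grid. A minimal sketch (not from the article), assuming ℏ = m = 1 and a Gaussian-modulated plane wave as the example:

```python
# Sketch: probability density and probability current,
#   rho = |psi|^2 ,   j = Im( conj(psi) dpsi/dx )   (hbar = m = 1).
import numpy as np

x = np.linspace(-20, 20, 2001)
k = 1.5
psi = np.exp(-x**2 / 4 + 1j * k * x)             # Gaussian envelope times plane wave
psi /= np.sqrt(np.trapz(np.abs(psi)**2, x))

rho = np.abs(psi)**2
j = np.imag(np.conj(psi) * np.gradient(psi, x))

# Near the centre the flux is approximately rho * k: probability flows at the group velocity
mid = len(x) // 2
print(j[mid], rho[mid] * k)
```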
Correspondence principle
The Schrödinger equation satisfies the correspondence principle: in the limit of short wavelengths (large quantum numbers), the expectation values of position and momentum follow the classical equations of motion (Ehrenfest's theorem).
Solutions
Analytical solutions of the time-independent Schrödinger equation can be obtained for a variety of relatively simple conditions. These solutions provide insight into the nature of quantum phenomena and sometimes provide a reasonable approximation of the behavior of more complex systems (e.g., in statistical mechanics, molecular vibrations are often approximated as harmonic oscillators). Several of the more common analytical solutions can be found in the list of quantum mechanical systems with analytical solutions.
For many systems, however, there is no analytic solution to the Schrödinger equation. In these cases, one must resort to approximate solutions. Some of the common techniques are:
- Perturbation theory
- The variational principle underpins many approximate methods (like the popular Hartree-Fock method which is the basis of the post Hartree-Fock methods)
- Quantum Monte Carlo methods
- Density functional theory
- The WKB approximation
- Discrete delta-potential method
Free Schrödinger equation
When the potential is zero, the Schrödinger equation is linear with constant coefficients:
    i\,\frac{\partial \psi}{\partial t} = -\frac{1}{2m}\,\nabla^2\psi ,
where ℏ has been set to 1. The solution ψ_t(x) for any initial condition ψ_0(x) can be found by Fourier transforms. Because the coefficients are constant, an initial plane wave,
    \psi_0(x) = A\, e^{ikx} ,
stays a plane wave. Only the coefficient changes. Substituting ψ_t(x) = A_t e^{ikx} gives
    i\,\frac{dA_t}{dt} = \frac{k^2}{2m}\, A_t ,
so that A is also oscillating in time:
    A_t = A\, e^{-i\omega t} ,
and the solution is:
    \psi_t(x) = A\, e^{ikx - i\omega t} ,
where ω = k²/2m, a restatement of de Broglie's relations.
To find the general solution, write the initial condition as a sum of plane waves by taking its Fourier transform:
    \psi_0(x) = \int \psi_0(k)\, e^{ikx}\, \frac{dk}{2\pi} .
The equation is linear, so each plane wave evolves independently:
    \psi_t(x) = \int \psi_0(k)\, e^{ikx - i\omega t}\, \frac{dk}{2\pi} ,
which is the general solution. When complemented by an effective method for taking Fourier transforms, it becomes an efficient algorithm for finding the wavefunction at any future time--- Fourier transform the initial conditions, multiply by a phase, and transform back.
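A minimal numerical sketch of this algorithm (not from the article), using the fast Fourier transform, with ℏ = 1 and a moving Gaussian initial condition assumed for the example:

```python
# Sketch: Fourier transform the initial condition, multiply each mode by
# exp(-i k^2 t / 2m), and transform back.
import numpy as np

N, L, m, t = 2048, 100.0, 1.0, 5.0
x = np.linspace(-L / 2, L / 2, N, endpoint=False)
k = 2 * np.pi * np.fft.fftfreq(N, d=x[1] - x[0])

psi0 = np.exp(-(x + 10)**2 / 2 + 2j * x)         # moving Gaussian packet

psi_t = np.fft.ifft(np.fft.fft(psi0) * np.exp(-1j * k**2 * t / (2 * m)))

print(np.sum(np.abs(psi_t)**2), np.sum(np.abs(psi0)**2))   # the norm is preserved
```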
Gaussian Wavepacket
An easy and instructive example is the Gaussian wavepacket:
    \psi_0(x) = e^{-x^2/2a} ,
where a is a positive real number, the square of the width of the wavepacket. The total normalization of this wavefunction is:
    \int \bar\psi_0(x)\,\psi_0(x)\, dx = \sqrt{\pi a} .
The Fourier transform is a Gaussian again, in terms of the wavenumber k:
    \psi_0(k) = \sqrt{2\pi a}\; e^{-a k^2/2} ,
with the physics convention which puts the factors of 2π in Fourier transforms in the k-measure.
Each separate wave only phase-rotates in time, so that the time-dependent Fourier-transformed solution is:
    \psi_t(k) = \sqrt{2\pi a}\; e^{-a k^2/2}\, e^{-i k^2 t/2m} .
It is convenient to rescale time to absorb m, replacing t/m by t. The inverse Fourier transform is still a Gaussian, but the parameter a has become complex, and there is an overall normalization factor:
    \psi_t(x) = \sqrt{\frac{a}{a+it}}\; e^{-x^2/2(a+it)} .
The branch of the square root is determined by continuity in time--- it is the value which is nearest to the positive square root of a.
The integral of ψ over all space is invariant, because it is the inner product of ψ with the state of zero energy, which is a wave with infinite wavelength, a constant function of space. For any energy state, with wavefunction η(x), the inner product,
    \langle \eta | \psi \rangle = \int \bar\eta(x)\,\psi_t(x)\, dx ,
only changes in time in a simple way: its phase rotates with a frequency determined by the energy of η. When η has zero energy, like the infinite-wavelength wave, it doesn't change at all.
The sum of the absolute square of ψ is also invariant, which is a statement of the conservation of probability. Explicitly in one dimension:
    \left|\psi\right|^2 = \psi^*\psi = \sqrt{\frac{a^2}{a^2+t^2}}\; e^{-x^2 a/(a^2+t^2)} ,
which gives the norm:
    \int \left|\psi\right|^2\, dx = \sqrt{\pi a} ,
which has preserved its value, as it must.
The width of the Gaussian is the interesting quantity, and it can be read off from the form of |ψ|²:
    \sqrt{\frac{a^2 + t^2}{a}} .
The width eventually grows linearly in time, as t/√a. This is wave-packet spreading--- no matter how narrow the initial wavefunction, a Schrödinger wave eventually fills all of space. The linear growth is a reflection of the momentum uncertainty--- the wavepacket is confined to a narrow width √a and so has a momentum which is uncertain by the reciprocal amount 1/√a, a spread in velocity of 1/(m√a), and therefore in the future position by t/(m√a), where the factor of m has been restored by undoing the earlier rescaling of time.
Galilean Invariance
Galilean boosts are transformations which look at the system from the point of view of an observer moving with a steady velocity −v. A boost must change the physical properties of a wavepacket in the same way as in classical mechanics:
    p' = p + mv , \qquad x' = x + vt ,
so that the phase factor of a free Schrödinger plane wave,
    e^{i(px - Et)} \qquad (\hbar = 1) ,
is only different in the boosted coordinates by a phase which depends on x and t, but not on p.
An arbitrary superposition of plane-wave solutions with different values of p is the same superposition of boosted plane waves, up to an overall x,t-dependent phase factor. So any solution to the free Schrödinger equation, ψ_t(x), can be boosted into other solutions:
    \psi'_t(x) = \psi_t(x - vt)\, e^{imvx - imv^2 t/2} .
Boosting a constant wavefunction produces a plane wave. More generally, boosting a plane wave,
    e^{ipx - ip^2 t/2m} ,
produces a boosted wave:
    e^{i(p+mv)x - i(p+mv)^2 t/2m} .
Boosting the spreading Gaussian wavepacket,
    \psi_t(x) = \sqrt{\frac{a}{a+it}}\; e^{-x^2/2(a+it)} ,
produces the moving Gaussian:
    \psi_t(x) = \sqrt{\frac{a}{a+it}}\; e^{-(x-vt)^2/2(a+it)}\, e^{ivx - iv^2 t/2} ,
which spreads in the same way.
Free Propagator
The narrow-width limit of the Gaussian wavepacket solution is the propagator K. For other differential equations, this is sometimes called the Green's function, but in quantum mechanics it is traditional to reserve the name Green's function for the time Fourier transform of K. When a is the infinitesimal quantity ε, the Gaussian initial condition, rescaled so that its integral is one,
    \psi_0(x) = \frac{1}{\sqrt{2\pi\varepsilon}}\, e^{-x^2/2\varepsilon} ,
becomes a delta function, so that its time evolution,
    K_t(x) = \frac{1}{\sqrt{2\pi(it+\varepsilon)}}\, e^{-x^2/2(it+\varepsilon)} ,
gives the propagator.
Note that a very narrow initial wavepacket instantly becomes infinitely wide, with a phase which is more rapidly oscillatory at large values of x. This might seem strange--- the solution goes from being concentrated at one point to being everywhere at all later times, but it is a reflection of the momentum uncertainty of a localized particle. Also note that the norm of the wavefunction is infinite, but this is also correct since the square of a delta function is divergent in the same way.
The factor of ε is an infinitesimal quantity which is there to make sure that integrals over K are well defined. In the limit that ε becomes zero, K becomes purely oscillatory and integrals of K are not absolutely convergent. In the remainder of this section, it will be set to zero, but in order for all the integrations over intermediate states to be well defined, the limit ε → 0 is only to be taken after the final state is calculated.
The propagator is the amplitude for reaching point x at time t, when starting at the origin, x = 0. By translation invariance, the amplitude for reaching a point x when starting at point y is the same function, only translated:
    K_t(x,y) = K_t(x-y) = \frac{1}{\sqrt{2\pi i t}}\, e^{i(x-y)^2/2t} .
In the limit when t is small, the propagator converges to a delta function:
    \lim_{t\to 0} K_t(x-y) = \delta(x-y) ,
but only in the sense of distributions. The integral of this quantity multiplied by an arbitrary differentiable test function gives the value of the test function at zero. To see this, note that the integral over all space of K is equal to 1 at all times:
    \int K_t(x)\, dx = 1 ,
since this integral is the inner-product of K with the uniform wavefunction. But the phase factor in the exponent has a nonzero spatial derivative everywhere except at the origin, and so when the time is small there are fast phase cancellations at all but one point. This is rigorously true when the limit is taken after everything else.
So the propagation kernel is the future time evolution of a delta function, and it is continuous in a sense: it converges to the initial delta function at small times. If the initial wavefunction is an infinitely narrow spike at position x_0,
    \psi_0(x) = \delta(x - x_0) ,
it becomes the oscillatory wave:
    \psi_t(x) = K_t(x - x_0) = \frac{1}{\sqrt{2\pi i t}}\, e^{i(x-x_0)^2/2t} .
Since every function can be written as a sum of narrow spikes,
    \psi_0(x) = \int \psi_0(x_0)\,\delta(x - x_0)\, dx_0 ,
the time evolution of every function is determined by the propagation kernel:
    \psi_t(x) = \int \psi_0(x_0)\, K_t(x - x_0)\, dx_0 .
And this is an alternate way to express the general solution. The interpretation of this expression is that the amplitude for a particle to be found at point x at time t is the amplitude that it started at x_0, times the amplitude that it went from x_0 to x, summed over all the possible starting points. In other words, it is a convolution of the kernel K with the initial condition.
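The kernel picture can be checked against the Fourier method described earlier. A minimal sketch (not from the article; ℏ = m = 1, and the Gaussian initial state and grid parameters are assumptions of the example) that evolves the same state both ways and compares:

```python
# Sketch: psi_t(x) = Int K(x - x0, t) psi_0(x0) dx0, with
# K(x, t) = exp(i x^2 / 2t) / sqrt(2 pi i t), compared with the Fourier method.
import numpy as np

N, L, t = 4096, 80.0, 2.0
x = np.linspace(-L / 2, L / 2, N, endpoint=False)
dx = x[1] - x[0]
psi0 = np.exp(-x**2 / 2) / np.pi**0.25

def K(x, t):
    return np.exp(1j * x**2 / (2 * t)) / np.sqrt(2j * np.pi * t)

# Convolution with the kernel by direct quadrature (psi0 decays fast, so this converges)
psi_conv = np.array([np.sum(K(xi - x, t) * psi0) * dx for xi in x])

# Reference answer from the Fourier method
k = 2 * np.pi * np.fft.fftfreq(N, d=dx)
psi_fft = np.fft.ifft(np.fft.fft(psi0) * np.exp(-1j * k**2 * t / 2))

print(np.max(np.abs(psi_conv - psi_fft)))        # small: the two pictures agree
```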
Since the amplitude to travel from x to z after a time t + t' can be considered in two steps, the propagator obeys the composition identity:
    K_{t+t'}(x - z) = \int K_{t}(x - y)\, K_{t'}(y - z)\, dy ,
Which can be interpreted as follows: the amplitude to travel from x to z in time t+t' is the sum of the amplitude to travel from x to y in time t multiplied by the amplitude to travel from y to z in time t', summed over all possible intermediate states y. This is a property of an arbitrary quantum system, and by subdividing the time into many segments, it allows the time evolution to be expressed as a path integral.
Analytic Continuation to Diffusion
The spreading of wavepackets in quantum mechanics is directly related to the spreading of probability densities in diffusion. For a particle which is performing a random walk, the probability density function at any point satisfies the diffusion equation:
    \frac{\partial \rho}{\partial t} = \frac{1}{2}\,\frac{\partial^2 \rho}{\partial x^2} ,
where the factor of 2, which can be removed by rescaling either time or space, is only for convenience.
A solution of this equation is the spreading Gaussian:
    \rho_t(x) = \frac{1}{\sqrt{2\pi t}}\, e^{-x^2/2t} ,
and, since the integral of ρ_t is constant while the width is becoming narrow at small times, this function approaches a delta function at t = 0:
    \lim_{t\to 0} \rho_t(x) = \delta(x) ,
again only in the sense of distributions, so that
    \lim_{t\to 0} \int \rho_t(x)\, f(x)\, dx = f(0)
for any smooth test function f.
The spreading Gaussian is the propagation kernel K_t(x) = ρ_t(x) for the diffusion equation, and it obeys the convolution identity:
    K_{t+t'}(x) = \int K_{t}(x - y)\, K_{t'}(y)\, dy ,
which allows diffusion to be expressed as a path integral. The propagation is the exponential of an operator H:
    K_t = e^{-tH} , \qquad H = -\frac{1}{2}\,\frac{\partial^2}{\partial x^2} ,
which is the infinitesimal diffusion operator.
The exponential can be defined over a range of t's which include complex values, so long as integrals over the propagation kernel stay convergent:
    K_z(x) = \frac{1}{\sqrt{2\pi z}}\, e^{-x^2/2z} .
As long as the real part of z is positive, for large values of x K is exponentially decreasing and integrals over K are absolutely convergent.
The limit of this expression for z coming close to the pure imaginary axis is the Schrödinger propagator:
    K^{\mathrm{Schr}}_t(x) = K_{it+\varepsilon}(x) = \frac{1}{\sqrt{2\pi(it+\varepsilon)}}\, e^{-x^2/2(it+\varepsilon)} ,
and this gives a more conceptual explanation for the time evolution of Gaussians. The fundamental identity of exponentiation, or path integration,
    e^{-zH}\, e^{-z'H} = e^{-(z+z')H} ,
holds for all complex values of z and z' where the integrals are absolutely convergent, so that the operators are well defined.
So that quantum evolution starting from a Gaussian, which is the diffusion kernel K_a,
    \psi_0(x) = K_a(x) = \frac{1}{\sqrt{2\pi a}}\, e^{-x^2/2a} ,
gives the time-evolved state:
    \psi_t(x) = K_{a+it}(x) = \frac{1}{\sqrt{2\pi(a+it)}}\, e^{-x^2/2(a+it)} .
This explains the diffusive form of the Gaussian solutions:
    e^{-itH}\, K_a = K_{a+it} .
Variational Principle
The variational principle asserts that for any Hermitian matrix A, the lowest eigenvalue minimizes the quantity
    \langle v, A v\rangle = \sum_{ij} \bar v_i A_{ij} v_j
on the unit sphere ⟨v,v⟩ = 1. This follows by the method of Lagrange multipliers: at the minimum the gradient of the function is parallel to the gradient of the constraint,
    \frac{\partial}{\partial \bar v}\,\langle v, A v\rangle = \lambda\,\frac{\partial}{\partial \bar v}\,\langle v, v\rangle ,
which is the eigenvalue condition
    A v = \lambda v ,
so that the extreme values of a quadratic form A are the eigenvalues of A, and the value of the function at the extreme values is just the corresponding eigenvalue:
    \langle v, A v\rangle = \lambda .
When the hermitian matrix is the Hamiltonian, the minimum value is the lowest energy level.
In the space of all wavefunctions, the unit sphere is the space of all normalized wavefunctions ψ. The ground state minimizes
    \langle \psi, H \psi\rangle = \int \bar\psi(x)\left(-\frac{\hbar^2}{2m}\nabla^2 + V(x)\right)\psi(x)\, dx ,
or, after an integration by parts,
    \langle \psi, H \psi\rangle = \int \left(\frac{\hbar^2}{2m}\left|\nabla\psi\right|^2 + V(x)\left|\psi\right|^2\right) dx .
The stationary points can be chosen to be real, since the integrand is real for real ψ. In general, when a wavefunction ψ is an eigenstate of the Schrödinger equation in a potential, the real and imaginary parts of ψ are both separately eigenstates with the same eigenvalue.
The lowest energy state has a positive definite wavefunction, because given a ψ which minimizes the integral, |ψ|, the absolute value, is also a minimizer. But this minimizer has sharp corners at places where ψ changes sign, and these sharp corners can be rounded out to reduce the gradient contribution.
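In practice, the variational principle is used by minimizing the energy functional above over a restricted family of trial wavefunctions. A minimal sketch (not from the article), assuming ℏ = m = 1, a harmonic potential, and a one-parameter family of Gaussian trials (which for this example happens to contain the exact ground state):

```python
# Sketch: variational estimate of the ground state energy by minimizing
# Int ( psi'^2/2 + V psi^2 ) dx over normalized Gaussian trial states.
import numpy as np

x = np.linspace(-15, 15, 4001)
V = 0.5 * x**2

def energy(s):                                   # s is the squared width of the trial state
    psi = np.exp(-x**2 / (2 * s))
    psi /= np.sqrt(np.trapz(psi**2, x))          # normalize
    dpsi = np.gradient(psi, x)
    return np.trapz(0.5 * dpsi**2 + V * psi**2, x)

widths = np.linspace(0.3, 3.0, 200)
energies = [energy(s) for s in widths]
best = int(np.argmin(energies))
print(widths[best], energies[best])              # ~ (1.0, 0.5): the exact ground state
```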
Potential and Ground State
For a particle in a positive definite potential, the ground state wavefunction is real and positive, and has a dual interpretation as the probability density for a diffusion process. The analogy between diffusion and nonrelativistic quantum motion, originally discovered and exploited by Schrödinger, has led to many exact solutions.
A positive definite wavefunction,
    \psi(x) = e^{-W(x)/\hbar} ,
is a solution to the time-independent Schrödinger equation with m = 1 and potential
    V(x) = \frac{1}{2}\left|\nabla W\right|^2 - \frac{\hbar}{2}\,\nabla^2 W ,
with zero total energy. W, the negative logarithm of the ground state wavefunction (in units of ℏ), is the superpotential. The second-derivative term is higher order in ℏ, and ignoring it gives the semiclassical approximation.
The form of the ground state wavefunction is motivated by the observation that the ground state wavefunction is the Boltzmann probability for a different problem, the probability for finding a particle diffusing in space with the free energy at different points given by W. If the diffusion obeys detailed balance and the diffusion constant is everywhere the same, the Fokker-Planck equation for this diffusion is the Schrödinger equation when the time parameter is allowed to be imaginary. This analytic continuation gives the eigenstates a dual interpretation--- either as the energy levels of a quantum system, or as the relaxation times for a stochastic equation.
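The imaginary-time (diffusion) picture also gives a practical algorithm: propagating a trial state with exp(−τĤ) damps every excited component faster than the ground state, so repeated small steps relax onto the ground state wavefunction. A minimal split-step sketch (not from the article), assuming ℏ = m = 1 and a harmonic potential as the example:

```python
# Sketch: imaginary-time propagation, psi -> exp(-tau H) psi, relaxes to the ground state.
import numpy as np

N, L, dtau = 1024, 30.0, 0.005
x = np.linspace(-L / 2, L / 2, N, endpoint=False)
k = 2 * np.pi * np.fft.fftfreq(N, d=x[1] - x[0])
V = 0.5 * x**2

psi = np.exp(-np.abs(x))                         # arbitrary positive trial state
for _ in range(4000):
    psi = np.exp(-0.5 * dtau * V) * psi                                        # half potential step
    psi = np.real(np.fft.ifft(np.exp(-0.5 * dtau * k**2) * np.fft.fft(psi)))   # kinetic step
    psi = np.exp(-0.5 * dtau * V) * psi                                        # half potential step
    psi /= np.sqrt(np.trapz(psi**2, x))                                        # renormalize

E0 = np.trapz(0.5 * np.gradient(psi, x)**2 + V * psi**2, x)
print(E0)                                        # ~ 0.5, the oscillator zero-point energy
```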
Harmonic Oscillator
W should grow at infinity, so that the wavefunction has a finite integral. The simplest analytic form is:
    W(x) = \frac{\omega x^2}{2} ,
with an arbitrary constant ω, which gives the potential:
    V(x) = \frac{\omega^2 x^2}{2} - \frac{\hbar\omega}{2} .
This potential describes a harmonic oscillator, with the ground state wavefunction:
    \psi(x) = e^{-\omega x^2/2\hbar} .
The total energy is zero, but the potential is shifted by a constant. The ground state energy of the usual unshifted harmonic oscillator potential,
    V(x) = \frac{\omega^2 x^2}{2} ,
is then the additive constant:
    E_0 = \frac{\hbar\omega}{2} ,
which is the zero point energy of the oscillator.
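This zero-point energy can be checked directly by substituting the ground state wavefunction into the standing-wave equation above (with m = 1):
    \psi_0(x) = e^{-\omega x^2/2\hbar} , \qquad \psi_0'' = \left(\frac{\omega^2 x^2}{\hbar^2} - \frac{\omega}{\hbar}\right)\psi_0 ,
so that
    -\frac{\hbar^2}{2}\,\psi_0'' + \frac{1}{2}\,\omega^2 x^2\,\psi_0 = \frac{\hbar\omega}{2}\,\psi_0 .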
Coulomb Potential
Another simple but useful form is
    W(\mathbf{x}) = c\, r ,
where W is proportional to the radial coordinate r = |x|, with c a constant. This is the ground state for two different potentials, depending on the dimension. In one dimension, the corresponding potential is singular at the origin, where it has some nonzero density:
    V(x) = \frac{c^2}{2} - \hbar c\,\delta(x) ,
and, up to some rescaling of variables, this is the lowest energy state for a delta function potential, with the bound state energy added on:
    V(x) = -\hbar c\,\delta(x) ,
with the ground state energy:
    E_0 = -\frac{c^2}{2} ,
and the ground state wavefunction:
    \psi(x) = e^{-c\left|x\right|/\hbar} .
In higher dimensions, the same form gives the potential (in three dimensions):
    V(r) = \frac{c^2}{2} - \frac{\hbar c}{r} ,
which can be identified as the attractive Coulomb law, up to an additive constant which is the ground state energy. This is the superpotential that describes the lowest energy level of the hydrogen atom, once the mass is restored by dimensional analysis:
    W(r) = \frac{\hbar\, r}{r_0} ,
where r_0 is the Bohr radius, with energy
    E_0 = -\frac{\hbar^2}{2 m r_0^2} .
An ansatz in which W contains, in addition to the radial term, a term logarithmic in r modifies the Coulomb potential to include a quadratic term proportional to 1/r², which is useful for nonzero angular momentum.
Operator Formalism
Galilean Invariance
Galilean symmetry requires that H(p) is quadratic in p in both the classical and quantum Hamiltonian formalism. In order for Galilean boosts to produce a p-independent phase factor, px - Ht must have a very special form--- translations in p need to be compensated by a shift in H. This is only true when H is quadratic.
The infinitesimal generator of boosts in both the classical and quantum case is:
    \mathbf{B} = \sum_i m_i \mathbf{x}_i - t \sum_i \mathbf{p}_i ,
where the sum is over the different particles, and B,x,p are vectors.
The Poisson bracket/commutator of B·dv with x and p generates infinitesimal boosts, with dv the infinitesimal boost velocity vector:
    \mathbf{x}_i \rightarrow \mathbf{x}_i + t\, d\mathbf{v} , \qquad \mathbf{p}_i \rightarrow \mathbf{p}_i + m_i\, d\mathbf{v} .
Iterating these relations is simple, since they add a constant amount at each step. By iterating, the dv's incrementally sum up to the finite quantity V:
    \mathbf{x}_i \rightarrow \mathbf{x}_i + t\,\mathbf{V} , \qquad \mathbf{p}_i \rightarrow \mathbf{p}_i + m_i\,\mathbf{V} .
B divided by the total mass M = Σ_i m_i is the current center of mass position minus the time times the center of mass velocity:
    \frac{\mathbf{B}}{M} = \mathbf{X}_{\mathrm{cm}} - t\,\frac{\mathbf{P}}{M} ,
In other words, B/M is the current guess for the position that the center of mass had at time zero.
The statement that B doesn't change with time is the center of mass theorem. For a Galilean invariant system, the center of mass moves with a constant velocity, and the total kinetic energy is the sum of the center of mass kinetic energy and the kinetic energy measured relative to the center of mass.
Since B is explicitly time dependent, H does not commute with B; rather, the conservation of B,
    \frac{d\mathbf{B}}{dt} = \frac{\partial \mathbf{B}}{\partial t} + \{\mathbf{B}, H\} = 0 ,
requires
    \{\mathbf{B}, H\} = \sum_i \mathbf{p}_i = \mathbf{P} .
This gives the transformation law for H under infinitesimal boosts:
    H \rightarrow H + \mathbf{P}\cdot d\mathbf{v} ,
the interpretation of this formula is that the change in H under an infinitesimal boost is entirely given by the change of the center of mass kinetic energy, which is the dot product of the total momentum with the infinitesimal boost velocity.
The two quantities (H,P) form a representation of the Galilean group with central charge M, where only H and P are classical functions on phase-space or quantum mechanical operators, while M is a parameter. The transformation law for infinitesimal v,
    \mathbf{P} \rightarrow \mathbf{P} + M\, d\mathbf{v} , \qquad H \rightarrow H + \mathbf{P}\cdot d\mathbf{v} ,
can be iterated as before--- P goes from P to P+MV in infinitesimal increments of v, while H changes at each step by an amount proportional to P, which changes linearly. The final value of H is then changed by the value of P halfway between the starting value and the ending value:
    H \rightarrow H + \left(\mathbf{P} + \tfrac{1}{2}M\mathbf{V}\right)\cdot\mathbf{V} = H + \mathbf{P}\cdot\mathbf{V} + \tfrac{1}{2}M V^2 .
The factors proportional to the central charge M are the extra wavefunction phases.
Boosts give too much information in the single-particle case, since Galilean symmetry completely determines the motion of a single particle. Given a multi-particle time-dependent solution,
    \psi_t(\mathbf{x}_1, \mathbf{x}_2, \dots, \mathbf{x}_n) ,
with a potential that depends only on the relative positions of the particles, it can be used to generate the boosted solution:
    \psi'_t(\mathbf{x}_1, \dots, \mathbf{x}_n) = \psi_t(\mathbf{x}_1 - \mathbf{v}t, \dots, \mathbf{x}_n - \mathbf{v}t)\; e^{\,i\sum_i\left(m_i \mathbf{v}\cdot\mathbf{x}_i - \frac{1}{2} m_i v^2 t\right)/\hbar} .
For the standing wave problem, the motion of the center of mass just adds an overall phase. When solving for the energy levels of multiparticle systems, Galilean invariance allows the center of mass motion to be ignored.
Relativistic generalisations
The Schrödinger equation does not take into account relativistic effects, meaning that the Schrödinger equation is invariant under a Galilean transformation, but not under a Lorentz transformation.
Relativistically valid generalisations incorporating ideas from special relativity include the Klein-Gordon equation and the Dirac equation.
Applications
See also
- Basic quantum mechanics
- Dirac equation
- Klein-Gordon equation
- Pauli equation
- Quantum number
- Schrödinger's cat
- Schrödinger field
- Schrödinger picture
- Theoretical and experimental justification for the Schrödinger equation
References
- ^ Schrödinger, Erwin (December 1926). "An Undulatory Theory of the Mechanics of Atoms and Molecules" (PDF). Phys. Rev. 28 (6): 1049–1070.
- ^ Erwin Schrödinger, Annalen der Physik, (Leipzig) (1926), Main paper
- ^ Schrödinger: Life and Thought by Walter John Moore, Cambridge University Press 1992 ISBN 0-521-43767-9, page 219 (hardback version)
- ^ Schrödinger: Life and Thought by Walter John Moore, Cambridge University Press 1992 ISBN 0-521-43767-9, page 220
- ^ Schrödinger: Life and Thought by Walter John Moore, Cambridge University Press 1992 ISBN 0-521-43767-9, page 479 (hardback version) makes it clear that even in his last year of life, in a letter to Max Born, he never accepted the Copenhagen Interpretation. cf pg 220
Modern reviews
- David J. Griffiths (2004). Introduction to Quantum Mechanics (2nd ed.). Prentice Hall. ISBN 013805326X.
External links
- Quantum Physics - a textbook with a treatment of the time-independent Schrödinger equation
- Linear Schrödinger Equation at EqWorld: The World of Mathematical Equations.
- Nonlinear Schrödinger Equation at EqWorld: The World of Mathematical Equations.
- The Schrödinger Equation in One Dimension as well as the directory of the book.
- Mathematical aspects of the Schrödinger equation are discussed on the Dispersive PDE Wiki.