The Quantum Hall Effect
TIFR Infosys Lectures
Abstract:
There are surprisingly few dedicated books on the quantum Hall effect. Two prominent ones are

Prange and Girvin, “The Quantum Hall Effect”
This is a collection of articles by most of the main players circa 1990. The basics are described well but there’s nothing about Chern-Simons theories or the importance of the edge modes.

J. K. Jain, “Composite Fermions”
As the title suggests, this book focuses on the composite fermion approach as a lens through which to view all aspects of the quantum Hall effect. It has many good explanations but doesn’t cover the more field theoretic aspects of the subject.
There are also a number of good multipurpose condensed matter textbooks which contain extensive descriptions of the quantum Hall effect. Two, in particular, stand out:

Eduardo Fradkin, Field Theories of Condensed Matter Physics

Xiao-Gang Wen, Quantum Field Theory of Many-Body Systems: From the Origin of Sound to an Origin of Light and Electrons
Several excellent lecture notes covering the various topics discussed in these lectures are available on the web. Links can be found on the course webpage: http://www.damtp.cam.ac.uk/user/tong/qhe.html.
Acknowledgements
These lectures were given in TIFR, Mumbai. I’m grateful to the students, postdocs, faculty and director for their excellent questions and comments which helped me a lot in understanding what I was saying.
To first approximation, these lecture notes contain no references to original work. I’ve included some footnotes with pointers to review articles and a handful of key papers. More extensive references can be found in the review articles mentioned earlier, or in the book of reprints, “Quantum Hall Effect”, edited by Michael Stone.
My thanks to everyone in TIFR for their warm hospitality. Thanks also to Bart Andrews for comments and typo-spotting.
These lecture notes were written as preparation for research funded by the European Research Council under the European Union’s Seventh Framework Programme (FP7/2007-2013), ERC grant agreement STG 279943, “Strongly Coupled Systems”.
1 The Basics
1.1 Introduction
Take a bunch of electrons, restrict them to move in a two-dimensional plane and turn on a strong magnetic field. This simple setup provides the setting for some of the most wonderful and surprising results in physics. These phenomena are known collectively as the quantum Hall effect.
The name comes from the most experimentally visible of these surprises. The Hall conductivity (which we will define below) takes quantised values
$$\sigma_{xy} = \frac{e^2}{2\pi\hbar}\,\nu$$
Originally it was found that $\nu$ is, to extraordinary precision, integer valued. Of course, we’re very used to things being quantised at the microscopic, atomic level. But this is something different: it’s the quantisation of an emergent, macroscopic property in a dirty system involving many many particles and its explanation requires something new. It turns out that this something new is the role that topology can play in quantum many-body systems. Indeed, ideas of topology and geometry will be a constant theme throughout these lectures.
Subsequently, it was found that $\nu$ is not only restricted to take integer values, but can also take very specific rational values. The most prominent fractions experimentally are $\nu = 1/3$ and $\nu = 2/5$, but there are many dozens of different fractions that have been seen.
This needs yet another ingredient. This time, it is the interactions between electrons which result in a highly correlated quantum state that is now recognised as a new state of matter. It is here that the most remarkable things happen. The charged particles that roam around these systems carry a fraction of the charge of the electron, as if the electron has split itself into several pieces. Yet this occurs despite the fact that the electron is (and remains!) an indivisible constituent of matter.
In fact, it is not just the charge of the electron that fractionalises: this happens to the “statistics” of the electron as well. Recall that the electron is a fermion, which means that the distribution of many electrons is governed by the Fermi-Dirac distribution function. When the electron splits, so too does its fermionic nature. The individual constituents are no longer fermions, but neither are they bosons. Instead they are new entities known as anyons which, in the simplest cases, lie somewhere between bosons and fermions. In more complicated examples even this description breaks down: the resulting objects are called non-Abelian anyons and provide a physical embodiment of the kind of non-local entanglement famous in quantum mechanics.
Because of this kind of striking behaviour, the quantum Hall effect has been a constant source of new ideas, providing hints of where to look for interesting and novel phenomena, most of them related to the ways in which the mathematics of topology impinges on quantum physics. Important examples include the subject of topological insulators, topological order and topological quantum computing. All of them have their genesis in the quantum Hall effect.
Underlying all of these phenomena is an impressive theoretical edifice, which involves a tour through some of the most beautiful and important developments in theoretical and mathematical physics over the past decades. The first attack on the problem focussed on the microscopic details of the electron wavefunctions. Subsequent approaches looked at the system from a more coarse-grained, field-theoretic perspective where a subtle construction known as Chern-Simons theory plays the key role. Yet another perspective comes from the edge of the sample where certain excitations live that know more about what’s happening inside than you might think. The main purpose of these lectures is to describe these different approaches and the intricate and surprising links between them.
1.2 The Classical Hall Effect
The original, classical Hall effect was discovered in 1879 by Edwin Hall. It is a simple consequence of the motion of charged particles in a magnetic field. We’ll start these lectures by reviewing the underlying physics of the Hall effect. This will provide a useful background for our discussion of the quantum Hall effect.
Here’s the setup. We turn on a constant magnetic field, $B$, pointing in the $z$-direction. Meanwhile, the electrons are restricted to move only in the $(x,y)$-plane. A constant current $I$ is made to flow in the $x$-direction. The Hall effect is the statement that this induces a voltage $V_H$ ($H$ is for “Hall”) in the $y$-direction. This is shown in the figure to the right.
1.2.1 Classical Motion in a Magnetic Field
The Hall effect arises from the fact that a magnetic field causes charged particles to move in circles. Let’s recall the basics. The equation of motion for a particle of mass $m$ and charge $-e$ in a magnetic field $\mathbf{B}$ is
$$m\frac{d\mathbf{v}}{dt} = -e\,\mathbf{v}\times\mathbf{B}$$
When the magnetic field points in the $z$-direction, so that $\mathbf{B} = B\hat{z}$, and the particle moves only in the transverse plane, so $\mathbf{x} = (x, y, 0)$, the equations of motion become two, coupled differential equations
$$m\ddot{x} = -eB\dot{y} \quad\text{and}\quad m\ddot{y} = eB\dot{x} \tag{1.1}$$
The general solution is
$$x(t) = X - R\sin(\omega_B t + \varphi)\,, \qquad y(t) = Y + R\cos(\omega_B t + \varphi) \tag{1.2}$$
We see that the particle moves in a circle which, for $eB > 0$, is in an anti-clockwise direction. The centre of the circle, $(X, Y)$, the radius of the circle $R$ and the phase $\varphi$ are all arbitrary. These are the four integration constants from solving the two second order differential equations. However, the frequency with which the particle goes around the circle is fixed, and given by
$$\omega_B = \frac{eB}{m} \tag{1.3}$$
This is called the cyclotron frequency.
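As a quick numerical aside (not part of the original lectures), we can plug in the electron charge and mass to get a feel for the scale of the cyclotron frequency in a lab-sized field:

```python
# Sketch: evaluate the cyclotron frequency omega_B = e B / m, eq. (1.3),
# for an electron. Constants are the standard SI values.
e = 1.602176634e-19   # electron charge magnitude, in coulombs
m = 9.1093837015e-31  # electron mass, in kilograms

def cyclotron_frequency(B):
    """Angular frequency (rad/s) of the circular orbit in a field B (tesla)."""
    return e * B / m

omega = cyclotron_frequency(1.0)  # roughly 1.76e11 rad/s at B = 1 T
```

For a 1 tesla field this gives an angular frequency of order $10^{11}$ rad/s, far above typical scattering rates in clean samples.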
1.2.2 The Drude Model
Let’s now repeat this calculation with two further ingredients. The first is an electric field, $\mathbf{E}$. This will accelerate the charges and, in the absence of a magnetic field, would result in a current in the direction of $\mathbf{E}$. The second ingredient is a linear friction term, which is supposed to capture the effect of the electron bouncing off whatever impedes its progress, whether impurities, the underlying lattice or other electrons. The resulting equation of motion is
$$m\frac{d\mathbf{v}}{dt} = -e\mathbf{E} - e\,\mathbf{v}\times\mathbf{B} - \frac{m\mathbf{v}}{\tau} \tag{1.4}$$
The coefficient $\tau$ in the friction term is called the scattering time. It can be thought of as the average time between collisions.
The equation of motion (1.4) is the simplest model of charge transport, treating the mobile electrons as if they were classical billiard balls. It is called the Drude model and we met it already in the lectures on Electromagnetism.
We’re interested in equilibrium solutions of (1.4) which have $d\mathbf{v}/dt = 0$. The velocity of the particle must then solve
$$\mathbf{v} + \frac{e\tau}{m}\,\mathbf{v}\times\mathbf{B} = -\frac{e\tau}{m}\,\mathbf{E} \tag{1.5}$$
The current density $\mathbf{J}$ is related to the velocity by $\mathbf{J} = -ne\mathbf{v}$, where $n$ is the density of charge carriers. In matrix notation, (1.5) then becomes
$$\begin{pmatrix} 1 & \omega_B\tau \\ -\omega_B\tau & 1 \end{pmatrix}\mathbf{J} = \frac{e^2 n\tau}{m}\,\mathbf{E}$$
We can invert this matrix to get an equation of the form
$$\mathbf{J} = \sigma\mathbf{E}$$
This equation is known as Ohm’s law: it tells us how the current flows in response to an electric field. The proportionality constant $\sigma$ is the conductivity. The slight novelty is that, in the presence of a magnetic field, $\sigma$ is not a single number: it is a matrix. It is sometimes called the conductivity tensor. We write it as
$$\sigma = \begin{pmatrix} \sigma_{xx} & \sigma_{xy} \\ -\sigma_{xy} & \sigma_{xx} \end{pmatrix} \tag{1.6}$$
The structure of the matrix, with identical diagonal components, and equal but opposite off-diagonal components, follows from rotational invariance. From the Drude model, we get the explicit expression for the conductivity,
$$\sigma = \frac{\sigma_{DC}}{1+\omega_B^2\tau^2}\begin{pmatrix} 1 & -\omega_B\tau \\ \omega_B\tau & 1 \end{pmatrix} \qquad\text{with}\qquad \sigma_{DC} = \frac{ne^2\tau}{m}$$
Here $\sigma_{DC}$ is the DC conductivity in the absence of a magnetic field. (This is the same result that we derived in the Electromagnetism lectures.) The off-diagonal terms in the matrix are responsible for the Hall effect: in equilibrium, a current in the $x$-direction requires an electric field with a component in the $y$-direction.
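To see the Drude tensor in action, here is a small numerical sketch (not from the lectures; the density, field and scattering times below are assumed, illustrative values): it builds the conductivity matrix, inverts it, and confirms that the off-diagonal resistivity comes out independent of the scattering time.

```python
# Illustrative check of the Drude tensor: construct sigma as in eq. (1.6),
# invert it, and observe that rho_xy = B/(n e) regardless of tau.
e = 1.602176634e-19   # electron charge magnitude, C
m = 9.1093837015e-31  # electron mass, kg
n = 1e15              # carrier density per m^2 (an assumed, typical 2d value)
B = 1.0               # magnetic field, tesla

def drude_sigma(tau):
    """Conductivity matrix [[s_xx, s_xy], [-s_xy, s_xx]] from the Drude model."""
    sigma_dc = n * e**2 * tau / m
    wt = (e * B / m) * tau                  # omega_B * tau
    pref = sigma_dc / (1 + wt**2)
    return [[pref, -pref * wt], [pref * wt, pref]]

def invert2(M):
    """Inverse of a 2x2 matrix; used here as rho = sigma^{-1}."""
    (a, b), (c, d) = M
    det = a * d - b * c
    return [[d / det, -b / det], [-c / det, a / det]]

for tau in (1e-12, 1e-11):
    rho = invert2(drude_sigma(tau))
    print(tau, rho[0][1])   # rho_xy: the same for both scattering times
```

With these numbers $\rho_{xy} = B/ne \approx 6.2\ \mathrm{k}\Omega$, whichever $\tau$ we choose, foreshadowing the discussion of the resistivity tensor below.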
Although it’s not directly relevant for our story, it’s worth pausing to think about how we actually approach equilibrium in the Hall effect. We start by putting an electric field in the $x$-direction. This gives rise to a current density $J_x$, but this current is deflected due to the magnetic field and bends towards the $y$-direction. In a finite material, this results in a build up of charge along the edge and an associated electric field $E_y$. This continues until the electric field $E_y$ cancels the bending of the current due to the magnetic field, and the electrons then travel only in the $x$-direction. It’s this induced electric field $E_y$ which is responsible for the Hall voltage $V_H$.
Resistivity vs Resistance
The resistivity is defined as the inverse of the conductivity. This remains true when both are matrices,
$$\rho = \sigma^{-1} = \begin{pmatrix} \rho_{xx} & \rho_{xy} \\ -\rho_{xy} & \rho_{xx} \end{pmatrix} \tag{1.7}$$
From the Drude model, we have
$$\rho = \frac{1}{\sigma_{DC}}\begin{pmatrix} 1 & \omega_B\tau \\ -\omega_B\tau & 1 \end{pmatrix} \tag{1.8}$$
The off-diagonal components of the resistivity tensor, $\rho_{xy} = \omega_B\tau/\sigma_{DC} = B/ne$, have a couple of rather nice properties. First, they are independent of the scattering time $\tau$. This means that they capture something fundamental about the material itself, as opposed to the dirty messy stuff that’s responsible for scattering.
The second nice property is to do with what we measure. Usually we measure the resistance $R$, which differs from the resistivity $\rho$ by geometric factors. However, for $\rho_{xy}$, these two things coincide. To see this, consider a sample of material of length $L$ in the $y$-direction. We drop a voltage $V_H$ in the $y$-direction and measure the resulting current $I_x$ in the $x$-direction. The transverse resistance is
$$R_{xy} = \frac{V_H}{I_x} = \frac{L E_y}{L J_x} = \frac{E_y}{J_x} = \rho_{xy}$$
This has the happy consequence that what we calculate, $\rho_{xy}$, and what we measure, $R_{xy}$, are, in this case, the same. In contrast, if we measure the longitudinal resistance $R_{xx}$ then we’ll have to divide by the appropriate lengths to extract the resistivity $\rho_{xx}$. Of course, these lectures are about as theoretical as they come. We’re not actually going to measure anything. Just pretend.
While we’re throwing different definitions around, here’s one more. For a current $J_x$ flowing in the $x$-direction, and the associated electric field $E_y$ in the transverse direction, the Hall coefficient is defined by
$$R_H = \frac{E_y}{J_x B} = \frac{\rho_{xy}}{B}$$
So in the Drude model, we have
$$R_H = \frac{\omega_B\tau}{\sigma_{DC}\,B} = \frac{1}{ne}$$
As promised, we see that the Hall coefficient depends only on microscopic information about the material: the charge and density of the conducting particles. The Hall coefficient does not depend on the scattering time $\tau$; it is insensitive to whatever friction processes are at play in the material.
We now have all we need to make an experimental prediction! The two resistivities should be
$$\rho_{xx} = \frac{m}{ne^2\tau} \qquad\text{and}\qquad \rho_{xy} = \frac{B}{ne}$$
Note that only $\rho_{xx}$ depends on the scattering time $\tau$, and $\rho_{xx}\rightarrow 0$ as $\tau\rightarrow\infty$, i.e. as scattering processes become less important. If we plot the two resistivities as a function of the magnetic field, then our classical expectation is that they should look like the figure on the right: $\rho_{xx}$ is constant, while $\rho_{xy}$ grows linearly with $B$.
1.3 Quantum Hall Effects
Now we understand the classical expectation. And, of course, this expectation is borne out whenever we can trust classical mechanics. But the world is governed by quantum mechanics. This becomes important at low temperatures and strong magnetic fields where more interesting things can happen.
It’s useful to distinguish between two different quantum Hall effects which are associated to two related phenomena. These are called the integer and fractional quantum Hall effects. Both were first discovered experimentally and only subsequently understood theoretically. Here we summarise the basic facts about these effects. The goal of these lectures is to understand in more detail what’s going on.
1.3.1 Integer Quantum Hall Effect
The first experiments exploring the quantum regime of the Hall effect were performed in 1980 by von Klitzing, using samples prepared by Dorda and Pepper [K. v. Klitzing, G. Dorda and M. Pepper, “New Method for High-Accuracy Determination of the Fine-Structure Constant Based on Quantized Hall Resistance”, Phys. Rev. Lett. 45, 494 (1980)]. The resistivities look like this:
This is the integer quantum Hall effect. For this, von Klitzing was awarded the 1985 Nobel prize.
Both the Hall resistivity $\rho_{xy}$ and the longitudinal resistivity $\rho_{xx}$ exhibit interesting behaviour. Perhaps the most striking feature in the data is that the Hall resistivity sits on a plateau for a range of magnetic field, before jumping suddenly to the next plateau. On these plateaux, the resistivity takes the value
$$\rho_{xy} = \frac{2\pi\hbar}{e^2}\,\frac{1}{\nu}\,, \qquad \nu\in\mathbf{Z} \tag{1.9}$$
The value of $\nu$ is measured to be an integer to an extraordinary accuracy, something like one part in $10^9$. The quantity $2\pi\hbar/e^2$ is called the quantum of resistivity (with $-e$ the electron charge). It is now used as the standard for measuring resistivity. Moreover, the integer quantum Hall effect is now used as the basis for measuring the ratio of fundamental constants $2\pi\hbar/e^2$, sometimes referred to as the von Klitzing constant. This means that, by definition, the $\nu$ appearing in (1.9) is exactly an integer!
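As a quick numerical check (not in the lectures), the quantum of resistivity can be evaluated directly from the SI values of $h = 2\pi\hbar$ and $e$:

```python
# Sketch: evaluate the von Klitzing constant 2*pi*hbar/e^2 = h/e^2 and a
# plateau resistivity from eq. (1.9). SI values of h and e are used.
h = 6.62607015e-34    # Planck constant, J s
e = 1.602176634e-19   # elementary charge, C

R_K = h / e**2        # quantum of resistivity, ~25812.807 ohms
nu = 3
rho_xy = R_K / nu     # Hall resistivity on the nu = 3 plateau
```

The value $R_K \approx 25.8\ \mathrm{k}\Omega$ sets the scale of the plateaux in the data above.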
The centre of each of these plateaux occurs when the magnetic field takes the value
$$B = \frac{2\pi\hbar n}{\nu e} = \frac{n}{\nu}\,\Phi_0$$
where $n$ is the electron density and $\Phi_0 = 2\pi\hbar/e$ is known as the flux quantum. As we will review in Section 2, these are the values of the magnetic field at which the first $\nu$ Landau levels are filled. In fact, as we will see, it is very easy to argue that the Hall resistivity should take value (1.9) when $\nu$ Landau levels are filled. The surprise is that the plateau exists, with the quantisation persisting over a range of magnetic fields.
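To get a feel for the numbers (a sketch, not from the lectures; the electron density below is an assumed, typical value for these samples), we can compute where the first few plateaux should be centred:

```python
# Sketch: plateau centres B = n * Phi_0 / nu, with Phi_0 = h/e the flux quantum.
h = 6.62607015e-34    # Planck constant, J s
e = 1.602176634e-19   # elementary charge, C
Phi_0 = h / e         # flux quantum, ~4.14e-15 Wb

n = 1e15              # assumed electron density: 10^11 cm^-2 written in m^-2
for nu in (1, 2, 3):
    B = n * Phi_0 / nu    # centre of the nu-th plateau, in tesla
    print(nu, B)
```

For this density the $\nu = 1$ plateau sits near $B \approx 4$ T, comfortably within reach of laboratory magnets.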
There is a clue in the experimental data about the origin of the plateaux. Experimental systems are typically dirty, filled with impurities. The technical name for this is disorder. Usually one wants to remove this dirt to get at the underlying physics. Yet, in the quantum Hall effect, as you increase the amount of disorder (within reason) the plateaux become more prominent, not less. In fact, in the absence of disorder, the plateaux are expected to vanish completely. That sounds odd: how can the presence of dirt give rise to something as exact and pure as an integer? This is something we will explain in Section 2.
The longitudinal resistivity also exhibits a surprise. When $\rho_{xy}$ sits on a plateau, the longitudinal resistivity vanishes: $\rho_{xx} = 0$. It spikes only when $\rho_{xy}$ jumps to the next plateau.
Usually we would think of a system with $\rho_{xx} = 0$ as a perfect conductor. But there’s something a little counter-intuitive about vanishing resistivity in the presence of a magnetic field. To see this, we can return to the simple definition (1.7) which, in components, reads
$$\sigma_{xx} = \frac{\rho_{xx}}{\rho_{xx}^2 + \rho_{xy}^2} \qquad\text{and}\qquad \sigma_{xy} = \frac{-\rho_{xy}}{\rho_{xx}^2 + \rho_{xy}^2} \tag{1.10}$$
If $\rho_{xy} = 0$ then we get the familiar relation between conductivity and resistivity: $\sigma_{xx} = 1/\rho_{xx}$. But if $\rho_{xy} \neq 0$, then we have the more interesting relation above. In particular, we see
$$\rho_{xx} = 0 \;\;\Rightarrow\;\; \sigma_{xx} = 0 \qquad (\text{if }\rho_{xy}\neq 0)$$
While we would usually call a system with $\rho_{xx} = 0$ a perfect conductor, we would usually call a system with $\sigma_{xx} = 0$ a perfect insulator! What’s going on?
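The inversion in (1.10) is easy to check numerically; here is a minimal sketch (the plateau value below is just an illustrative choice) showing $\sigma_{xx}$ tracking $\rho_{xx}$ down to zero at fixed $\rho_{xy}$:

```python
# Sketch of eq. (1.10): invert the resistivity tensor component-wise and
# watch sigma_xx -> 0 as rho_xx -> 0 with rho_xy held fixed and nonzero.
def sigma_from_rho(rho_xx, rho_xy):
    """(sigma_xx, sigma_xy) for rho = [[rho_xx, rho_xy], [-rho_xy, rho_xx]]."""
    det = rho_xx**2 + rho_xy**2
    return rho_xx / det, -rho_xy / det

rho_xy = 25812.807 / 3                   # an illustrative nu = 3 plateau value, ohms
for rho_xx in (10.0, 0.1, 0.0):
    s_xx, s_xy = sigma_from_rho(rho_xx, rho_xy)
    print(rho_xx, s_xx)                  # s_xx shrinks to zero with rho_xx
```

Note that $\sigma_{xy} = -1/\rho_{xy}$ stays finite throughout, which is the whole point: the system conducts, but only transversally.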
This particular surprise has more to do with the words we use to describe the phenomena than the underlying physics. In particular, it has nothing to do with quantum mechanics: this behaviour occurs in the Drude model in the limit $\tau\rightarrow\infty$ where there is no scattering.
In this situation, the current is flowing perpendicular to the applied electric field, so $\mathbf{E}\cdot\mathbf{J} = 0$. But recall that $\mathbf{E}\cdot\mathbf{J}$ has the interpretation as the work done in accelerating charges. The fact that this vanishes means that we have a steady current flowing without doing any work and, correspondingly, without any dissipation. The fact that $\sigma_{xx} = 0$ is telling us that no current is flowing in the longitudinal direction (like an insulator) while the fact that $\rho_{xx} = 0$ is telling us that there is no dissipation of energy (like in a perfect conductor).
1.3.2 Fractional Quantum Hall Effect
As the disorder is decreased, the integer Hall plateaux become less prominent. But other plateaux emerge at fractional values of $\nu$. This was discovered in 1982 by Tsui and Störmer using samples prepared by Gossard [D. C. Tsui, H. L. Stormer and A. C. Gossard, “Two-Dimensional Magnetotransport in the Extreme Quantum Limit”, Phys. Rev. Lett. 48, 1559 (1982)]. The resistivities look like this:
This is the fractional quantum Hall effect. On the plateaux, the Hall resistivity again takes the simple form (1.9), but now with $\nu$ a rational number, $\nu\in\mathbf{Q}$.
Not all fractions appear. The most prominent plateaux sit at $\nu = 1/3$ (not shown above) and $\nu = 2/5$, but there are many more. The vast majority of these have denominators which are odd. But there are exceptions: in particular, a clear plateau has been observed at $\nu = 5/2$. As the disorder is decreased, more and more plateaux emerge. It seems plausible that in the limit of a perfectly clean sample, we would get an infinite number of plateaux, which brings us back to the classical picture of a straight line for $\rho_{xy}$!
The integer quantum Hall effect can be understood using free electrons. In contrast, to explain the fractional quantum Hall effect we need to take interactions between electrons into account. This makes the problem much harder and much richer. The basics of the theory were first laid down by Laughlin [R. B. Laughlin, “Anomalous Quantum Hall Effect: An Incompressible Quantum Fluid with Fractionally Charged Excitations”, Phys. Rev. Lett. 50, 1395 (1983)], but the subject has since expanded in a myriad of different directions. The 1998 Nobel prize was awarded to Tsui, Störmer and Laughlin. Sections 3 onwards will be devoted to aspects of the fractional quantum Hall effect.
Materials
These lectures are unabashedly theoretical. We’ll have nothing to say about how one actually constructs these phases of matter in the lab. Here I want to merely throw out a few technical words in an attempt to breed familiarity.
The integer quantum Hall effect was originally discovered in a MOSFET (this stands for “metal-oxide-semiconductor field-effect transistor”). This is a metal-insulator-semiconductor sandwich, with electrons trapped in the thin “inversion band” between the insulator and semiconductor.
Meanwhile, the fractional quantum Hall effect was discovered in a GaAs-GaAlAs heterostructure. A lot of the subsequent work was done on this system, and it usually goes by the name GaAs (Gallium Arsenide, if your chemistry is rusty). In both these systems, the density of electrons is around $10^{11} - 10^{12}\ \text{cm}^{-2}$.
More recently, both quantum Hall effects have been discovered in graphene, which is a two dimensional material with relativistic electrons. The physics here is similar in spirit, but differs in details.
1.4 Landau Levels
It won’t come as a surprise to learn that the physics of the quantum Hall effect involves quantum mechanics. In this section, we will review the quantum mechanics of free particles moving in a background magnetic field and the resulting phenomenon of Landau levels. We will look at these Landau levels in a number of different ways. Each is useful to highlight different aspects of the physics and they will all be important for describing the quantum Hall effects.
Throughout this discussion, we will neglect the spin of the electron. This is more or less appropriate for most physically realised quantum Hall systems. The reason is that in the presence of a magnetic field there is a Zeeman splitting between the energies of the up and down spins given by $\Delta = 2\mu_B B$ where $\mu_B = e\hbar/2m$ is the Bohr magneton. We will be interested in large magnetic fields where large energies are needed to flip the spin. This means that, if we restrict to low energies, the electrons act as if they are effectively spinless. (We will, however, add a caveat to this argument below.)
Before we get to the quantum theory, we first need to briefly review some of the structure of classical mechanics in the presence of a magnetic field. The Lagrangian for a particle of charge $-e$ and mass $m$ moving in a background magnetic field $\mathbf{B} = \nabla\times\mathbf{A}$ is
$$L = \frac{1}{2}m\dot{\mathbf{x}}^2 - e\,\dot{\mathbf{x}}\cdot\mathbf{A}$$
Under a gauge transformation, $\mathbf{A} \to \mathbf{A} + \nabla\alpha$, the Lagrangian changes by a total derivative: $\delta L = -e\,\dot\alpha$. This is enough to ensure that the equations of motion (1.1) remain unchanged under a gauge transformation.
The canonical momentum arising from this Lagrangian is
$$\mathbf{p} = \frac{\partial L}{\partial\dot{\mathbf{x}}} = m\dot{\mathbf{x}} - e\mathbf{A}$$
This differs from what we called momentum when we were in high school, namely $m\dot{\mathbf{x}}$. We will refer to $m\dot{\mathbf{x}} = \mathbf{p} + e\mathbf{A}$ as the mechanical momentum.
We can compute the Hamiltonian
$$H = \dot{\mathbf{x}}\cdot\mathbf{p} - L = \frac{1}{2m}\left(\mathbf{p} + e\mathbf{A}\right)^2$$
If we write the Hamiltonian in terms of the mechanical momentum then it looks the same as it would in the absence of a magnetic field: $H = \frac{1}{2}m\dot{\mathbf{x}}^2$. This is the statement that a magnetic field does no work and so doesn’t change the energy of the system. However, there’s more to the Hamiltonian framework than just the value of $H$. We need to remember which variables are canonical. This information is encoded in the Poisson bracket structure of the theory (or, in fancy language, the symplectic structure on phase space) and, in the quantum theory, is transferred onto commutation relations between operators. The fact that $\mathbf{x}$ and $\mathbf{p}$ are canonical means that
$$\{x_i, p_j\} = \delta_{ij}\quad\text{with}\quad \{x_i, x_j\} = \{p_i, p_j\} = 0 \tag{1.11}$$
Importantly, $\mathbf{p}$ is not gauge invariant. This means that the numerical value of $\mathbf{p}$ can’t have any physical meaning since it depends on our choice of gauge. In contrast, the mechanical momentum $m\dot{\mathbf{x}}$ is gauge invariant; it measures what you would physically call “momentum”. But it doesn’t have canonical Poisson structure. Specifically, the Poisson bracket of the mechanical momentum with itself is non-vanishing,
$$\{m\dot{x}_i, m\dot{x}_j\} = -e\,\epsilon_{ijk}B_k \tag{1.12}$$
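The non-canonical bracket (1.12) can be verified directly from the definitions. Here is a small symbolic sketch (my own check, not part of the lectures) computing the canonical Poisson bracket of the mechanical momentum $m\dot{\mathbf{x}} = \mathbf{p} + e\mathbf{A}$ for an arbitrary vector potential:

```python
import sympy as sp

q = sp.symbols('x y z')
p = sp.symbols('p_x p_y p_z')
e = sp.Symbol('e', positive=True)
A = [sp.Function(f'A_{s}')(*q) for s in ('x', 'y', 'z')]

def poisson(f, g):
    # canonical Poisson bracket {f, g} in the phase-space variables (q, p)
    return sum(sp.diff(f, q[i]) * sp.diff(g, p[i])
               - sp.diff(f, p[i]) * sp.diff(g, q[i]) for i in range(3))

# mechanical momentum m*xdot = p + e*A
mxdot = [p[i] + e * A[i] for i in range(3)]
# z-component of B = curl A
B_z = sp.diff(A[1], q[0]) - sp.diff(A[0], q[1])

# {m*xdot_x, m*xdot_y} + e*B_z should vanish identically
print(sp.simplify(poisson(mxdot[0], mxdot[1]) + e * B_z))
```

The same computation for the other pairs of components reproduces the full $\epsilon_{ijk}$ structure.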
Quantisation
Our task is to solve for the spectrum and wavefunctions of the quantum Hamiltonian,
$$H = \frac{1}{2m}\left(\mathbf{p} + e\mathbf{A}\right)^2 \tag{1.13}$$
Note that we’re not going to put hats on operators in this course; you’ll just have to remember that they’re quantum operators. Since the particle is restricted to lie in the plane, we write $\mathbf{x} = (x, y)$. Meanwhile, we take the magnetic field to be constant and perpendicular to this plane, $\mathbf{B} = (0, 0, B)$. The canonical commutation relations that follow from (1.11) are
$$[x_i, p_j] = i\hbar\,\delta_{ij}\quad\text{with}\quad [x_i, x_j] = [p_i, p_j] = 0$$
We will first derive the energy spectrum using a purely algebraic method. This is very similar to the algebraic solution of the harmonic oscillator and has the advantage that we don’t need to specify a choice of gauge potential . The disadvantage is that we don’t get to write down specific wavefunctions in terms of the positions of the electrons. We will rectify this in Sections 1.4.1 and 1.4.3.
To proceed, we work with the commutation relations for the mechanical momentum. We’ll give it a new name (because the time derivative in $m\dot{\mathbf{x}}$ suggests that we’re working in the Heisenberg picture, which is not necessarily true). We write
$$\boldsymbol{\pi} = \mathbf{p} + e\mathbf{A} \tag{1.14}$$
Then the commutation relations following from the Poisson bracket (1.12) are
$$[\pi_x, \pi_y] = -ie\hbar B \tag{1.15}$$
At this point we introduce new variables. These are raising and lowering operators, entirely analogous to those that we use in the harmonic oscillator. They are defined by
$$a = \frac{1}{\sqrt{2e\hbar B}}\left(\pi_x - i\pi_y\right)\quad\text{and}\quad a^\dagger = \frac{1}{\sqrt{2e\hbar B}}\left(\pi_x + i\pi_y\right)$$
The commutation relations (1.15) then tell us that $a$ and $a^\dagger$ obey
$$[a, a^\dagger] = 1$$
which are precisely the commutation relations obeyed by the raising and lowering operators of the harmonic oscillator. Written in terms of these operators, the Hamiltonian (1.13) even takes the same form as that of the harmonic oscillator
$$H = \frac{\boldsymbol{\pi}^2}{2m} = \hbar\omega_B\left(a^\dagger a + \frac{1}{2}\right)$$
where $\omega_B = eB/m$ is the cyclotron frequency that we met previously (1.3).
Now it’s simple to finish things off. We can construct the Hilbert space in the same way as the harmonic oscillator: we first introduce a ground state $|0\rangle$ obeying $a|0\rangle = 0$ and build the rest of the Hilbert space by acting with $a^\dagger$,
$$|n\rangle = \frac{(a^\dagger)^n}{\sqrt{n!}}\,|0\rangle$$
The state $|n\rangle$ has energy
$$E_n = \hbar\omega_B\left(n + \frac{1}{2}\right)\,,\qquad n\in\mathbb{N} \tag{1.16}$$
We learn that in the presence of a magnetic field, the energy levels of a particle become equally spaced, with the gap between each level proportional to the magnetic field $B$. The energy levels are called Landau levels. Notice that this is not a small change: the spectrum looks very very different from that of a free particle in the absence of a magnetic field.
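To get a feel for the scales involved, here is a numerical sketch (my own illustrative numbers, with standard SI constants taken from scipy) of the spectrum $E_n = \hbar\omega_B(n + \tfrac{1}{2})$ for a free electron at $B = 10$ Tesla:

```python
from scipy.constants import hbar, e, m_e

B = 10.0                               # magnetic field in Tesla (illustrative)
omega_B = e * B / m_e                  # cyclotron frequency, omega_B = eB/m
levels = [hbar * omega_B * (n + 0.5) for n in range(4)]

for n, E_n in enumerate(levels):
    print(f"E_{n} = {E_n / e * 1e3:.3f} meV")   # equally spaced Landau levels
```

The level spacing $\hbar\omega_B$ comes out at roughly a milli-electronvolt, which is why large fields and low temperatures are needed to resolve individual Landau levels.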
There’s something a little disconcerting about the above calculation. We started with a particle moving in a plane. This has two degrees of freedom. But we ended up writing this in terms of the harmonic oscillator which has just a single degree of freedom. It seems like we lost something along the way! And, in fact, we did. The energy levels (1.16) are the correct spectrum of the theory but, unlike for the harmonic oscillator, it turns out that each energy level does not have a unique state associated to it. Instead there is a degeneracy of states. A wild degeneracy. We will return to the algebraic approach in Section 1.4.3 and demonstrate this degeneracy. But it’s simplest to first turn to a specific choice of the gauge potential , which we do shortly.
A Quick Aside: The role of spin
The splitting between Landau levels is $\Delta = \hbar\omega_B = e\hbar B/m$. But, for free electrons, this precisely coincides with the Zeeman splitting $\Delta = g\mu_B B$ between spins, where $\mu_B = e\hbar/2m$ is the Bohr magneton and, famously, $g = 2$. It looks as if the spin up particles in Landau level $n$ have exactly the same energy as the spin down particles in level $n+1$. In fact, in real materials, this does not happen. The reason is twofold. First, the true value of the cyclotron frequency is $\omega_B = eB/m_{\rm eff}$, where $m_{\rm eff}$ is the effective mass of the electron moving in its environment. Second, the $g$ factor can also vary due to effects of band structure. For GaAs, the result is that the Zeeman energy is typically about 70 times smaller than the cyclotron energy. This means that first the spin-up Landau level fills, then the spin-down, then the spin-up and so on. For other materials (such as the interface between ZnO and MgZnO) the relative size of the energies can be flipped and you can fill levels in a different order. This results in different fractional quantum Hall states. In these notes, we will mostly ignore these issues to do with spin. (One exception is Section 3.3.4 where we discuss wavefunctions for particles with spin.)
1.4.1 Landau Gauge
To find wavefunctions corresponding to the energy eigenstates, we first need to specify a gauge potential $\mathbf{A}$ such that
$$\nabla\times\mathbf{A} = B\,\hat{\mathbf{z}}$$
There is, of course, not a unique choice. In this section and the next, we will describe two different choices of $\mathbf{A}$.
In this section, we work with the choice
$$\mathbf{A} = xB\,\hat{\mathbf{y}} \tag{1.17}$$
This is called Landau gauge. Note that the magnetic field $\mathbf{B} = (0,0,B)$ is invariant under both translational symmetry and rotational symmetry in the $(x,y)$-plane. However, the choice of $\mathbf{A}$ is not; it breaks translational symmetry in the $x$ direction (but not in the $y$ direction) and rotational symmetry. This means that, while the physics will be invariant under all symmetries, the intermediate calculations will not be manifestly invariant. This kind of compromise is typical when dealing with magnetic fields.
The Hamiltonian (1.13) becomes
$$H = \frac{1}{2m}\left(p_x^2 + (p_y + eBx)^2\right)$$
Because we have manifest translational invariance in the $y$ direction, we can look for energy eigenstates which are also eigenstates of $p_y$. These, of course, are just plane waves in the $y$ direction. This motivates an ansatz using the separation of variables,
$$\psi_k(x,y) = e^{iky} f_k(x) \tag{1.18}$$
Acting on this wavefunction with the Hamiltonian, we see that the operator $p_y$ just gets replaced by its eigenvalue $\hbar k$,
$$H\psi_k = \frac{1}{2m}\left(p_x^2 + (\hbar k + eBx)^2\right)\psi_k$$
But this is now something very familiar: it’s the Hamiltonian for a harmonic oscillator in the $x$ direction, with the centre displaced from the origin,
$$H_k = \frac{1}{2m}p_x^2 + \frac{m\omega_B^2}{2}\left(x + kl_B^2\right)^2 \tag{1.19}$$
The frequency of the harmonic oscillator is again the cyclotron frequency $\omega_B = eB/m$, and we’ve also introduced a length scale
$$l_B = \sqrt{\frac{\hbar}{eB}}$$
This is a characteristic length scale which governs any quantum phenomena in a magnetic field. It is called the magnetic length.
To give you some sense for this, in a magnetic field of $B = 1$ Tesla, the magnetic length for an electron is $l_B \approx 2.5\times 10^{-8}\ \text{m}$.
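This number is easy to check with the standard SI values of $\hbar$ and $e$ (taken here from scipy):

```python
from math import sqrt
from scipy.constants import hbar, e

B = 1.0                       # magnetic field in Tesla
l_B = sqrt(hbar / (e * B))    # magnetic length l_B = sqrt(hbar/eB), in metres
print(f"l_B = {l_B:.2e} m")   # roughly 2.5e-8 m, i.e. about 25 nm
```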
Something rather strange has happened in the Hamiltonian (1.19): the momentum in the $y$ direction, $\hbar k$, has turned into the position of the harmonic oscillator in the $x$ direction, which is now centred at $x = -kl_B^2$.
Just as in the algebraic approach above, we’ve reduced the problem to that of the harmonic oscillator. The energy eigenvalues are
$$E_n = \hbar\omega_B\left(n + \frac{1}{2}\right)$$
But now we can also write down the explicit wavefunctions. They depend on two quantum numbers, $n\in\mathbb{N}$ and $k\in\mathbb{R}$,
$$\psi_{n,k}(x,y) \sim e^{iky}\,H_n(x + kl_B^2)\,e^{-(x + kl_B^2)^2/2l_B^2} \tag{1.20}$$
with $H_n$ the usual Hermite polynomial wavefunctions of the harmonic oscillator. The $\sim$ reflects the fact that we have made no attempt to normalise these wavefunctions.
The wavefunctions look like strips, extended in the $y$ direction but exponentially localised around $x = -kl_B^2$ in the $x$ direction.
However, the large degeneracy means that by taking linear combinations of these states, we can cook up wavefunctions that have pretty much any shape you like. Indeed, in the next section we will choose a different and see very different profiles for the wavefunctions.
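To make the strip picture concrete, here is a small numerical sketch (my own, in dimensionless units with $l_B = 1$, an illustrative choice) of the $x$-profile of the $n = 0$ wavefunction for a particular $k$, confirming that it is centred at $x = -kl_B^2$:

```python
import numpy as np
from numpy.polynomial.hermite import hermval

def f_nk(x, n, k):
    """Unnormalised oscillator profile H_n(x + k) * exp(-(x + k)**2 / 2)."""
    c = np.zeros(n + 1)
    c[n] = 1.0                  # select the n-th (physicists') Hermite polynomial
    return hermval(x + k, c) * np.exp(-(x + k)**2 / 2)

x = np.linspace(-20.0, 20.0, 4001)
prob = f_nk(x, n=0, k=5.0)**2                 # probability density in x
centre = np.sum(x * prob) / np.sum(prob)      # mean position of the strip
print(f"profile centred near x = {centre:.2f}")   # close to -5 = -k*l_B**2
```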
Degeneracy
One advantage of this approach is that we can immediately see the degeneracy in each Landau level. The wavefunction (1.20) depends on two quantum numbers, $n$ and $k$, but the energy levels depend only on $n$. Let’s now see how large this degeneracy is.
To do this, we need to restrict ourselves to a finite region of the plane. We pick a rectangle with sides of lengths and . We want to know how many states fit inside this rectangle.
Having a finite size $L_y$ is like putting the system in a box in the $y$ direction. We know that the effect of this is to quantise the momentum $k$ in units of $2\pi/L_y$.
Having a finite size $L_x$ is somewhat more subtle. The reason is that, as we mentioned above, the gauge choice (1.17) does not have manifest translational invariance in the $x$ direction. This means that our argument will be a little heuristic. Because the wavefunctions (1.20) are exponentially localised around $x = -kl_B^2$, for a finite sample restricted to $0\leq x\leq L_x$ we would expect the allowed $k$ values to range between $-L_x/l_B^2 \leq k \leq 0$. The end result is that the number of states is
$$N = \frac{L_y}{2\pi}\int_{-L_x/l_B^2}^{0} dk = \frac{L_x L_y}{2\pi l_B^2} = \frac{eBA}{2\pi\hbar} \tag{1.21}$$
where $A = L_x L_y$ is the area of the sample. Despite the slight approximation used above, this turns out to be the exact answer for the number of states on a torus. (One can do better by taking the wavefunctions on a torus to be elliptic theta functions.)
The degeneracy (1.21) is very very large. There are a macroscopic number of states in each Landau level. The resulting spectrum looks like the figure on the right, with $n$ labelling the Landau levels and the energy independent of $k$. This degeneracy will be responsible for much of the interesting physics of the fractional quantum Hall effect that we will meet in Section 3.
It is common to introduce some new notation to describe the degeneracy (1.21). We write
$$N = \frac{AB}{\Phi_0}\quad\text{where}\quad \Phi_0 = \frac{2\pi\hbar}{e} \tag{1.22}$$
$\Phi_0$ is called the quantum of flux. It can be thought of as the magnetic flux contained within the area $2\pi l_B^2$. It plays an important role in a number of quantum phenomena in the presence of magnetic fields.
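Putting in numbers shows just how macroscopic this degeneracy is. A sketch with illustrative sample parameters (SI constants via scipy):

```python
from math import pi
from scipy.constants import hbar, e

B = 1.0                       # magnetic field in Tesla
A = 1e-6                      # sample area: 1 mm^2, in m^2 (illustrative)
Phi_0 = 2 * pi * hbar / e     # flux quantum, about 4.14e-15 Wb
N = A * B / Phi_0             # number of states in each Landau level
print(f"Phi_0 = {Phi_0:.3e} Wb, N = {N:.2e} states")
```

Even a modest field threading a millimetre-sized sample gives hundreds of millions of states per Landau level.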
1.4.2 Turning on an Electric Field
The Landau gauge is useful for working in rectangular geometries. One of the things that is particularly easy in this gauge is the addition of an electric field $E$ in the $x$ direction. We can implement this by the addition of an electric potential $\phi = -Ex$. The resulting Hamiltonian is
$$H = \frac{1}{2m}\left(p_x^2 + (p_y + eBx)^2\right) + eEx \tag{1.23}$$
We can again use the ansatz (1.18). We simply have to complete the square to again write the Hamiltonian as that of a displaced harmonic oscillator. The states are related to those that we had previously, but with a shifted argument,
$$\psi(x,y) = \psi_{n,k}\!\left(x + \frac{mE}{eB^2},\, y\right) \tag{1.24}$$
and the energies are now given by
$$E_{n,k} = \hbar\omega_B\left(n + \frac{1}{2}\right) - eE\left(kl_B^2 + \frac{eE}{m\omega_B^2}\right) + \frac{m}{2}\frac{E^2}{B^2} \tag{1.25}$$
This is interesting. The degeneracy in each Landau level has now been lifted. The energy in each level now depends linearly on the quantum number $k$, as shown in the figure.
Because the energy now depends on the momentum, it means that states now drift in the $y$ direction. The group velocity is
$$v_y = \frac{1}{\hbar}\frac{\partial E_{n,k}}{\partial k} = -\frac{E}{B} \tag{1.26}$$
This result is one of the surprising joys of classical physics: if you put an electric field perpendicular to a magnetic field then the cyclotron orbits of the electron drift. But they don’t drift in the direction of the electric field! Instead they drift in the direction $\mathbf{E}\times\mathbf{B}$. Here we see the quantum version of this statement.
The fact that the particles are now moving also provides a natural interpretation of the energy (1.25). A wavepacket with momentum $\hbar k$ is now localised at position $x = -kl_B^2 - eE/m\omega_B^2$; the middle term above can be thought of as the potential energy $eEx$ of this wavepacket. The final term can be thought of as the kinetic energy for the particle: $\frac{1}{2}mv_y^2 = \frac{m}{2}\frac{E^2}{B^2}$.
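As a symbolic cross-check of the drift velocity, we can differentiate the energy $E_{n,k} = \hbar\omega_B(n+\tfrac{1}{2}) - eE(kl_B^2 + eE/m\omega_B^2) + \tfrac{m}{2}E^2/B^2$ with respect to $k$ (a sketch; the symbol names below simply mirror those of the text):

```python
import sympy as sp

hbar, e, E, B, k, m, n = sp.symbols('hbar e E B k m n', positive=True)
omega_B = e * B / m            # cyclotron frequency
l_B2 = hbar / (e * B)          # magnetic length squared

# the Landau-level energy in crossed electric and magnetic fields
E_nk = hbar * omega_B * (n + sp.Rational(1, 2)) \
       - e * E * (k * l_B2 + e * E / (m * omega_B**2)) \
       + m * E**2 / (2 * B**2)

v_y = sp.diff(E_nk, k) / hbar          # group velocity (1/hbar) dE/dk
print(sp.simplify(v_y))                # -> -E/B
```

Only the term linear in $k$ survives the derivative, and the factors of $e$ and $\hbar$ cancel to leave the gauge-invariant drift velocity $-E/B$.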
1.4.3 Symmetric Gauge
Having understood the basics of Landau levels, we’re now going to do it all again. This time we’ll work in symmetric gauge, with
$$\mathbf{A} = -\frac{1}{2}\mathbf{x}\times\mathbf{B} = \left(-\frac{yB}{2},\ \frac{xB}{2},\ 0\right) \tag{1.27}$$
This choice of gauge breaks translational symmetry in both the and the directions. However, it does preserve rotational symmetry about the origin. This means that angular momentum is a good quantum number.
The main reason for studying Landau levels in symmetric gauge is that this is the most convenient language for describing the fractional quantum Hall effect. We shall look at this in Section 3. However, as we now see, there are also a number of pretty things that happen in symmetric gauge.
The Algebraic Approach Revisited
At the beginning of this section, we provided a simple algebraic derivation of the energy spectrum (1.16) of a particle in a magnetic field. But we didn’t provide an algebraic derivation of the degeneracies of these Landau levels. Here we rectify this. As we will see, this derivation only really works in the symmetric gauge.
Recall that the algebraic approach uses the mechanical momentum $\boldsymbol{\pi} = \mathbf{p} + e\mathbf{A}$. This is gauge invariant, but non-canonical. We can use this to build ladder operators $a$ and $a^\dagger$ which obey $[a, a^\dagger] = 1$. In terms of these creation operators, the Hamiltonian takes the harmonic oscillator form,
$$H = \frac{\boldsymbol{\pi}^2}{2m} = \hbar\omega_B\left(a^\dagger a + \frac{1}{2}\right)$$
To see the degeneracy in this language, we start by introducing yet another kind of “momentum”,
$$\tilde{\boldsymbol{\pi}} = \mathbf{p} - e\mathbf{A} \tag{1.28}$$
This differs from the mechanical momentum (1.14) by the minus sign. This means that, in contrast to $\boldsymbol{\pi}$, this new momentum is not gauge invariant. We should be careful when interpreting the value of $\tilde{\boldsymbol{\pi}}$ since it can change depending on the choice of gauge potential $\mathbf{A}$.
The commutators of this new momentum differ from (1.15) only by a minus sign,
$$[\tilde\pi_x, \tilde\pi_y] = ie\hbar B \tag{1.29}$$
However, the lack of gauge invariance shows up when we take the commutators of $\boldsymbol{\pi}$ and $\tilde{\boldsymbol{\pi}}$. We find
$$[\pi_x, \tilde\pi_x] = 2ie\hbar\,\partial_x A_x\ ,\quad [\pi_y, \tilde\pi_y] = 2ie\hbar\,\partial_y A_y\ ,\quad [\pi_x, \tilde\pi_y] = [\pi_y, \tilde\pi_x] = ie\hbar\left(\partial_x A_y + \partial_y A_x\right)$$
This is unfortunate. It means that we cannot, in general, simultaneously diagonalise $\tilde{\boldsymbol{\pi}}$ and the Hamiltonian which, in turn, means that we can’t use $\tilde{\boldsymbol{\pi}}$ to tell us about other quantum numbers in the problem.
There is, however, a happy exception to this. In symmetric gauge (1.27), all these commutators vanish and we have
$$[\pi_x, \tilde\pi_x] = [\pi_y, \tilde\pi_y] = [\pi_x, \tilde\pi_y] = [\pi_y, \tilde\pi_x] = 0$$
We can now define a second pair of raising and lowering operators,
$$b = \frac{1}{\sqrt{2e\hbar B}}\left(\tilde\pi_x + i\tilde\pi_y\right)\quad\text{and}\quad b^\dagger = \frac{1}{\sqrt{2e\hbar B}}\left(\tilde\pi_x - i\tilde\pi_y\right)$$
These too obey
$$[b, b^\dagger] = 1$$
It is this second pair of creation operators that provide the degeneracy of the Landau levels. We define the ground state $|0,0\rangle$ to be annihilated by both lowering operators, so that $a|0,0\rangle = b|0,0\rangle = 0$. Then the general state in the Hilbert space is defined by
$$|n, m\rangle = \frac{(a^\dagger)^n (b^\dagger)^m}{\sqrt{n!\,m!}}\,|0,0\rangle$$
The energy of this state is given by the usual Landau level expression (1.16); it depends on $n$ but not on $m$.
The Lowest Landau Level
Let’s now construct the wavefunctions in the symmetric gauge. We’re going to focus attention on the lowest Landau level, $n = 0$, since this will be of primary interest when we come to discuss the fractional quantum Hall effect. The states in the lowest Landau level are annihilated by the lowering operator, $a|\psi\rangle = 0$. The trick is to convert this into a differential equation. The lowering operator is
$$a = \frac{1}{\sqrt{2e\hbar B}}\left(\pi_x - i\pi_y\right) = \frac{1}{\sqrt{2e\hbar B}}\Big(p_x - ip_y + e(A_x - iA_y)\Big) = \frac{1}{\sqrt{2e\hbar B}}\left(-i\hbar\left(\partial_x - i\partial_y\right) - \frac{ieB}{2}\left(x - iy\right)\right)$$
At this stage, it’s useful to work in complex coordinates on the plane. We introduce
$$z = x - iy\quad\text{and}\quad \bar{z} = x + iy$$
Note that this is the opposite to how we would normally define these variables! It’s annoying, but it’s because we want the wavefunctions below to be holomorphic rather than anti-holomorphic. (An alternative would be to work with magnetic fields $B < 0$, in which case we get to use the usual definition of holomorphic. However, we’ll stick with our choice above throughout these lectures.) We also introduce the corresponding holomorphic and anti-holomorphic derivatives
$$\partial = \frac{1}{2}\left(\partial_x + i\partial_y\right)\quad\text{and}\quad \bar\partial = \frac{1}{2}\left(\partial_x - i\partial_y\right)$$
which obey $\partial z = \bar\partial\bar{z} = 1$ and $\partial\bar{z} = \bar\partial z = 0$. In terms of these holomorphic coordinates, $a$ takes the simple form
$$a = -i\sqrt{2}\left(l_B\bar\partial + \frac{z}{4l_B}\right)$$
and, correspondingly,
$$a^\dagger = -i\sqrt{2}\left(l_B\partial - \frac{\bar{z}}{4l_B}\right)$$
which we’ve chosen to write in terms of the magnetic length $l_B = \sqrt{\hbar/eB}$. The lowest Landau level wavefunctions are then those which are annihilated by this differential operator. But this is easily solved: they are
$$\psi_{LLL}(z, \bar{z}) = f(z)\,e^{-|z|^2/4l_B^2}$$
for any holomorphic function $f(z)$.
We can construct the specific states in the lowest Landau level by similarly writing $b$ and $b^\dagger$ as differential operators. We find
$$b = -i\sqrt{2}\left(l_B\partial + \frac{\bar{z}}{4l_B}\right)\quad\text{and}\quad b^\dagger = -i\sqrt{2}\left(l_B\bar\partial - \frac{z}{4l_B}\right)$$
The lowest state $\psi_{LLL,m=0}$ is annihilated by both $a$ and $b$. There is a unique such state given by
$$\psi_{LLL,m=0} \sim e^{-|z|^2/4l_B^2}$$
We can now construct the higher states by acting with $b^\dagger$. Each time we do this, we pull down a factor of $z/2l_B$. This gives us a basis of lowest Landau level wavefunctions in terms of holomorphic monomials
$$\psi_{LLL,m} \sim \left(\frac{z}{l_B}\right)^m e^{-|z|^2/4l_B^2} \tag{1.30}$$
This particular basis of states has another advantage: these are eigenstates of angular momentum. To see this, we define the angular momentum operator,
$$J = i\hbar\left(x\frac{\partial}{\partial y} - y\frac{\partial}{\partial x}\right) = \hbar\left(z\partial - \bar{z}\bar\partial\right) \tag{1.31}$$
Then, acting on these lowest Landau level states, we have
$$J\psi_{LLL,m} = \hbar m\,\psi_{LLL,m}$$
The wavefunctions (1.30) provide a basis for the lowest Landau level. But it is a simple matter to extend this to write down wavefunctions for all higher Landau levels; we simply need to act with the raising operator $a^\dagger$. However, we won’t have any need for the explicit forms of these higher Landau level wavefunctions in what follows.
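As a symbolic sanity check (my own, treating $z$ and $\bar{z}$ as independent variables, as usual), we can verify that the state (1.30) with a particular $m$ is an eigenstate of $J = \hbar(z\partial - \bar{z}\bar\partial)$ with eigenvalue $m\hbar$:

```python
import sympy as sp

z, zb, lB, hbar = sp.symbols('z zbar l_B hbar', positive=True)
m = 3                           # an illustrative angular momentum quantum number

# lowest Landau level state (1.30)
psi = (z / lB)**m * sp.exp(-z * zb / (4 * lB**2))

# J = hbar*(z*d/dz - zbar*d/dzbar) acting on psi
J_psi = hbar * (z * sp.diff(psi, z) - zb * sp.diff(psi, zb))

print(sp.simplify(J_psi / psi))   # should give m*hbar, here 3*hbar
```

The exponential factor drops out of the difference $z\partial - \bar{z}\bar\partial$, leaving only the holomorphic monomial to contribute the eigenvalue.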
Degeneracy Revisited
In symmetric gauge, the profiles of the wavefunctions (1.30) form concentric rings around the origin. The higher the angular momentum $m$, the further out the ring. This, of course, is very different from the strip-like wavefunctions that we saw in Landau gauge (1.20). You shouldn’t read too much into this other than the fact that the profile of the wavefunctions is not telling us anything physical as it is not gauge invariant.
However, it's worth seeing how the degeneracy of states arises in symmetric gauge. The wavefunction with angular momentum $m$ is peaked on a ring of radius $r = \sqrt{2m}\,l_B$. This means that in a disc-shaped region of area $A = \pi R^2$, the number of states is roughly (the integer part of)
$$\mathcal{N} = \frac{R^2}{2l_B^2} = \frac{A}{2\pi l_B^2} = \frac{AB}{\Phi_0}$$
which agrees with our earlier result (1.21).
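For a sense of scale, the counting $\mathcal{N} = AB/\Phi_0$ is easy to evaluate numerically. In the sketch below, the values $B = 1\,$T and $A = 1\,$cm$^2$ are illustrative choices, not numbers taken from the lectures.

```python
import math

# Number of lowest Landau level states in a region of area A: N = A*B/Phi_0,
# where Phi_0 = 2*pi*hbar/e is the flux quantum. Illustrative numbers
# (B = 1 T, A = 1 cm^2) chosen only to show the scale of the degeneracy.
hbar = 1.054571817e-34  # J s
e = 1.602176634e-19     # C

Phi_0 = 2 * math.pi * hbar / e      # flux quantum, ~4.14e-15 Wb
B = 1.0                             # tesla
A = 1e-4                            # 1 cm^2 in m^2
l_B = math.sqrt(hbar / (e * B))     # magnetic length, ~26 nm at 1 T
N = A * B / Phi_0                   # equals A / (2*pi*l_B**2)

print(Phi_0, l_B, N)
```

Even at a modest 1 tesla, a square centimetre holds of order $10^{10}$ states in a single Landau level.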
There is yet another way of seeing this degeneracy that makes contact with the classical physics. In Section 1.2, we reviewed the classical motion of particles in a magnetic field: they go in circles. The most general solution to the classical equations of motion is given by (1.2),
$$x(t) = X - R\sin(\omega_B t + \phi)\ ,\quad y(t) = Y + R\cos(\omega_B t + \phi) \qquad (1.32)$$
Let's try to tally this with our understanding of the exact quantum states in terms of Landau levels. To do this, we'll think of the coordinates $(X, Y)$ labelling the centre of the orbit as quantum operators. We can rearrange (1.32) to give
$$X = x - \frac{\pi_y}{m\omega_B}\ ,\quad Y = y + \frac{\pi_x}{m\omega_B} \qquad (1.33)$$
This feels like something of a sleight of hand, but the end result is what we wanted: we have the centre-of-orbit coordinates in terms of familiar quantum operators. Indeed, one can check that under time evolution we have
$$i\hbar\dot{X} = [X, H] = 0\ ,\quad i\hbar\dot{Y} = [Y, H] = 0 \qquad (1.34)$$
confirming the fact that these are constants of motion.
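The statement that $X$ and $Y$ are conserved can also be checked numerically at the classical level. The sketch below works in arbitrary units with $m = e = B = 1$ (so $\omega_B = 1$), integrates the classical equations of motion, and confirms that the guiding-centre coordinates stay fixed while the particle circles.

```python
# Numerical check (illustrative, units with m = e = B = 1 so omega_B = 1):
# integrate the classical cyclotron equations of motion and verify that the
# guiding-centre coordinates X = x - v_y/omega_B, Y = y + v_x/omega_B
# stay constant while (x, y) traces out a circle.
m, e, B = 1.0, 1.0, 1.0
omega_B = e * B / m

def deriv(state):
    x, y, vx, vy = state
    return (vx, vy, -omega_B * vy, omega_B * vx)

def rk4_step(state, dt):
    k1 = deriv(state)
    k2 = deriv(tuple(s + 0.5 * dt * k for s, k in zip(state, k1)))
    k3 = deriv(tuple(s + 0.5 * dt * k for s, k in zip(state, k2)))
    k4 = deriv(tuple(s + dt * k for s, k in zip(state, k3)))
    return tuple(s + dt * (a + 2*b + 2*c + d) / 6
                 for s, a, b, c, d in zip(state, k1, k2, k3, k4))

def guiding_centre(state):
    x, y, vx, vy = state
    return (x - vy / omega_B, y + vx / omega_B)

state = (1.0, 0.0, 0.0, 2.0)        # arbitrary initial condition
X0, Y0 = guiding_centre(state)
for _ in range(10000):              # evolve for several cyclotron periods
    state = rk4_step(state, 0.001)
X1, Y1 = guiding_centre(state)
print(X0, Y0, X1, Y1)               # the two pairs agree to high accuracy
```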
The definition of the centre of the orbit given above holds in any gauge. If we now return to symmetric gauge, we can replace the $x$ and $y$ coordinates appearing here with the gauge potential (1.27). We end up with
$$X = -\frac{\tilde{\pi}_y}{m\omega_B}\ ,\quad Y = \frac{\tilde{\pi}_x}{m\omega_B}$$
where, finally, we've used the expression (1.28) for the "alternative momentum" $\tilde{\pi}$. We see that, in symmetric gauge, the alternative momentum has the nice interpretation of the centre of the orbit! The commutation relation (1.29) then tells us that the positions of the orbit in the $(X, Y)$ plane fail to commute with each other,
$$[X, Y] = i l_B^2 \qquad (1.35)$$
The lack of commutativity is precisely the magnetic length $l_B^2 = \hbar/eB$. The Heisenberg uncertainty principle now means that we can't localise states in both the $X$ coordinate and the $Y$ coordinate: we have to find a compromise. In general, the uncertainty is given by
$$\Delta X\,\Delta Y = 2\pi l_B^2$$
A naive semi-classical count of the states then comes from taking the plane and parcelling it up into regions of area $2\pi l_B^2$. The number of states in an area $A$ is then
$$\mathcal{N} = \frac{A}{\Delta X\,\Delta Y} = \frac{A}{2\pi l_B^2} = \frac{AB}{\Phi_0}$$
which is the counting that we've already seen above.
1.5 Berry Phase
There is one last topic that we need to review before we can start the story of the quantum Hall effect. This is the subject of the Berry phase or, more precisely, the Berry holonomy^4. This is not a topic which is relevant just in quantum Hall physics: it has applications in many areas of quantum mechanics and will arise over and over again in different guises in these lectures. Moreover, it is a topic which perhaps captures the spirit of the quantum Hall effect better than any other, for the Berry phase is the simplest demonstration of how geometry and topology can emerge from quantum mechanics. As we will see in these lectures, this is the heart of the quantum Hall effect.

^4 An excellent review of this subject can be found in the book Geometric Phases in Physics by Wilczek and Shapere.
1.5.1 Abelian Berry Phase and Berry Connection
We'll describe the Berry phase arising for a general Hamiltonian which we write as
$$H(x_i;\,\lambda_a)$$
As we've illustrated, the Hamiltonian depends on two different kinds of variables. The $x_i$ are the degrees of freedom of the system. These are the things that evolve dynamically, the things that we want to solve for in any problem. They are typically things like the positions or spins of particles.
In contrast, the other variables $\lambda_a$ are the parameters of the Hamiltonian. They are fixed, with their values determined by some external apparatus that probably involves knobs and dials and flashing lights and things as shown above. We don't usually exhibit the dependence of $H$ on these variables^5.

^5 One exception is the classical subject of adiabatic invariants, where we also think about how $H$ depends on parameters $\lambda$. See Section 4.6 of the lecture notes on Classical Dynamics.
Here's the game. We pick some values for the parameters $\lambda_a$ and place the system in a specific energy eigenstate which, for simplicity, we will take to be the ground state. We assume this ground state is unique (an assumption which we will later relax in Section 1.5.4). Now we very slowly vary the parameters $\lambda_a$. The Hamiltonian changes so, of course, the ground state also changes; it is $|{\rm ground\ state}(\lambda_a)\rangle$.
There is a theorem in quantum mechanics called the adiabatic theorem. This states that if we place a system in a nondegenerate energy eigenstate and vary parameters sufficiently slowly, then the system will cling to that energy eigenstate. It won’t be excited to any higher or lower states.
There is one caveat to the adiabatic theorem: how slowly you have to change the parameters depends on the energy gap between the state you're in and the nearest other state. This means that if you get level crossing, where another state becomes degenerate with the one you're in, then all bets are off. When the states separate again, there's no simple way to tell which linear combination of the states you now sit in. However, level crossings are rare in quantum mechanics. In general, you have to tune three parameters to specific values in order to get two states to have the same energy. This follows by thinking about a general $2\times 2$ Hermitian matrix, which can be viewed as the Hamiltonian for the two states of interest. The general $2\times 2$ Hermitian matrix depends on 4 parameters, but its eigenvalues only coincide if it is proportional to the identity matrix. This means that three of those parameters have to be set to zero.
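The parameter-counting argument is simple to see explicitly. Writing the $2\times 2$ Hermitian matrix as $H = d_0\,{\bf 1} + \vec{d}\cdot\vec{\sigma}$, the eigenvalue gap is $2|\vec{d}|$, so all three components of $\vec{d}$ must be tuned to zero for a crossing. A small numerical illustration:

```python
import numpy as np

# Level-crossing argument: a general 2x2 Hermitian matrix
# H = d0*I + dx*sx + dy*sy + dz*sz has eigenvalue gap 2*sqrt(dx^2+dy^2+dz^2),
# so the two levels only cross when all three of dx, dy, dz vanish
# (d0 just shifts both levels together).
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2, dtype=complex)

def gap(d0, dx, dy, dz):
    H = d0 * I2 + dx * sx + dy * sy + dz * sz
    evals = np.linalg.eigvalsh(H)          # real eigenvalues, ascending
    return evals[1] - evals[0]

print(gap(0.7, 0.1, -0.2, 0.3))   # equals 2*sqrt(0.01 + 0.04 + 0.09)
print(gap(0.7, 0.0, 0.0, 0.0))    # 0: H is proportional to the identity
```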
The idea of the Berry phase arises in the following situation: we vary the parameters $\lambda_a$ but, ultimately, we put them back to their starting values. This means that we trace out a closed path in the space of parameters. We will assume that this path did not go through a point with level crossing. The question is: what state are we now in?
The adiabatic theorem tells us most of the answer. If we started in the ground state, we also end up in the ground state. The only thing left uncertain is the phase of this new state,
$$|\psi\rangle \ \to\ e^{i\gamma}\,|\psi\rangle$$
We often think of the overall phase of a wavefunction as being unphysical. But that's not the case here because this is a phase difference. For example, we could have started with two states and taken only one of them on this journey while leaving the other unchanged. We could then interfere these two states and the phase $\gamma$ would have physical consequence.
So what is the phase $\gamma$? There are two contributions. The first is simply the dynamical phase that is there for any energy eigenstate, even if the parameters don't change. But there is also another, less obvious contribution to the phase. This is the Berry phase.
Computing the Berry Phase
The wavefunction of the system evolves through the time-dependent Schrödinger equation
$$i\hbar\,\frac{\partial|\psi\rangle}{\partial t} = H(\lambda(t))\,|\psi\rangle \qquad (1.36)$$
For every choice of the parameters $\lambda$, we introduce a ground state with some fixed choice of phase. We call these reference states $|n(\lambda)\rangle$. There is no canonical way to do this; we just make an arbitrary choice. We'll soon see how this choice affects the final answer. The adiabatic theorem means that the ground state obeying (1.36) can be written as
$$|\psi(t)\rangle = e^{i\gamma(t)}\,|n(\lambda(t))\rangle \qquad (1.37)$$
where $\gamma(t)$ is some time-dependent phase. If we pick $|n(\lambda(t=0))\rangle = |\psi(t=0)\rangle$ then we have $\gamma(t=0) = 0$. Our task is then to determine $\gamma(t)$ after we've taken $\lambda$ around the closed path and back to where we started.
There's always the dynamical contribution to the phase, given by $e^{-iE_0 t/\hbar}$ where $E_0$ is the ground state energy. This is not what's interesting here and we will ignore it simply by setting $E_0 = 0$. However, there is an extra contribution. This arises by plugging the adiabatic ansatz (1.37) into (1.36) and taking the overlap with $\langle n|$. We have
$$i\dot{\gamma} + \langle n|\dot{n}\rangle = \frac{1}{i\hbar}\,\langle n|H|n\rangle = 0$$
where we've used the fact that, instantaneously, $H(\lambda)|n(\lambda)\rangle = 0$ to get zero on the right-hand side. (Note: this calculation is actually a little more subtle than it looks. To do a better job we would have to look more closely at corrections to the adiabatic evolution (1.37)). This gives us an expression for the time dependence of the phase $\gamma$,
$$\dot{\gamma} = i\,\langle n|\frac{\partial}{\partial\lambda^a}|n\rangle\,\dot{\lambda}^a \qquad (1.38)$$
It is useful to define the Berry connection
$$\mathcal{A}_a(\lambda) = -i\,\langle n|\frac{\partial}{\partial\lambda^a}|n\rangle \qquad (1.39)$$
so that (1.38) reads
$$\dot{\gamma} = -\mathcal{A}_a(\lambda)\,\dot{\lambda}^a$$
But this is easily solved. We have
$$\gamma(t) = -\int_0^t \mathcal{A}_a(\lambda)\,\dot{\lambda}^a\,dt'$$
Our goal is to compute the phase $\gamma$ after we've taken a closed path $C$ in parameter space. This is simply
$$e^{i\gamma} = \exp\left(-i\oint_C \mathcal{A}_a(\lambda)\,d\lambda^a\right) \qquad (1.40)$$
This is the Berry phase. Note that it doesn't depend on the time taken to change the parameters. It does, however, depend on the path taken through parameter space.
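The definition (1.39) is easy to play with numerically. In the sketch below, the one-parameter family of states is a toy example invented purely for illustration (it is not drawn from the lectures); its Berry connection works out analytically to $\mathcal{A}(\lambda) = \sin^2\lambda$, and the code recovers this by finite differences.

```python
import numpy as np

# Toy illustration of the Berry connection A(lam) = -i <n| d/dlam |n>.
# The one-parameter family |n(lam)> = (cos lam, e^{i lam} sin lam)^T is an
# invented example; analytically its connection is A(lam) = sin(lam)**2.
def n(lam):
    return np.array([np.cos(lam), np.exp(1j*lam)*np.sin(lam)])

def berry_connection(lam, eps=1e-6):
    dn = (n(lam + eps) - n(lam - eps)) / (2*eps)   # central difference d|n>/dlam
    return (-1j * np.vdot(n(lam), dn)).real        # vdot conjugates the first slot

lam = 0.7
print(berry_connection(lam), np.sin(lam)**2)       # the two agree
```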
The Berry Connection
Above we introduced the idea of the Berry connection (1.39). This is an example of a kind of object that you’ve seen before: it is like the gauge potential in electromagnetism! Let’s explore this analogy a little further.
In the relativistic form of electromagnetism, we have a gauge potential $A_\mu(x)$, where $\mu = 0,1,2,3$ and $x$ are coordinates over Minkowski spacetime. There is a redundancy in the description of the gauge potential: all physics remains invariant under the gauge transformation
$$A_\mu \ \to\ A_\mu + \partial_\mu\omega \qquad (1.41)$$
for any function $\omega(x)$. In our course on electromagnetism, we were taught that if we want to extract the physical information contained in $A_\mu$, we should compute the field strength
$$F_{\mu\nu} = \frac{\partial A_\nu}{\partial x^\mu} - \frac{\partial A_\mu}{\partial x^\nu}$$
This contains the electric and magnetic fields. It is invariant under gauge transformations.
Now let's compare this to the Berry connection $\mathcal{A}_a(\lambda)$. Of course, this no longer depends on the coordinates of Minkowski space; instead it depends on the parameters $\lambda^a$. The number of these parameters is arbitrary; let's suppose that we have $d$ of them, so that $a = 1, \ldots, d$. In the language of differential geometry, $\mathcal{A}_a(\lambda)$ is said to be a one-form over the space of parameters, while $A_\mu(x)$ is said to be a one-form over Minkowski space.
There is also a redundancy in the information contained in the Berry connection $\mathcal{A}_a(\lambda)$. This follows from the arbitrary choice we made in fixing the phase of the reference states $|n(\lambda)\rangle$. We could just as happily have chosen a different set of reference states which differ by a phase. Moreover, we could pick a different phase for every choice of parameters $\lambda$,
$$|n'(\lambda)\rangle = e^{i\omega(\lambda)}\,|n(\lambda)\rangle$$
for any function $\omega(\lambda)$. If we compute the Berry connection arising from this new choice, we have
$$\mathcal{A}'_a = -i\,\langle n'|\frac{\partial}{\partial\lambda^a}|n'\rangle = \mathcal{A}_a + \frac{\partial\omega}{\partial\lambda^a} \qquad (1.42)$$
This takes the same form as the gauge transformation (1.41).
Following the analogy with electromagnetism, we might expect that the physical information in the Berry connection can be found in the gauge invariant field strength which, mathematically, is known as the curvature of the connection,
$$\mathcal{F}_{ab}(\lambda) = \frac{\partial\mathcal{A}_b}{\partial\lambda^a} - \frac{\partial\mathcal{A}_a}{\partial\lambda^b}$$
It's certainly true that $\mathcal{F}_{ab}$ contains some physical information about our quantum system and we'll have use for this in later sections. But it's not the only gauge invariant quantity of interest. In the present context, the most natural thing to compute is the Berry phase (1.40). Importantly, this too is independent of the arbitrariness arising from the gauge transformation (1.42). This is because $\oint_C \partial_a\omega\,d\lambda^a = 0$ for any single-valued function $\omega$. In fact, it's possible to write the Berry phase in terms of the field strength using the higher-dimensional version of Stokes' theorem,
$$e^{i\gamma} = \exp\left(-i\oint_C \mathcal{A}_a(\lambda)\,d\lambda^a\right) = \exp\left(-i\int_S \mathcal{F}_{ab}\,dS^{ab}\right) \qquad (1.43)$$
where $S$ is a two-dimensional surface in the parameter space bounded by the path $C$.
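The gauge invariance of the Berry phase can also be seen numerically: discretising a closed loop, the holonomy is unchanged when every reference state is multiplied by a smooth, single-valued phase $e^{i\omega(\lambda)}$. The family of states and the gauge function in this sketch are invented toy choices; for this family the exact holonomy is $e^{i\gamma} = e^{-i\pi} = -1$.

```python
import numpy as np

# Gauge invariance of the Berry phase: the discretised holonomy of a closed
# loop is unchanged when every reference state gets a smooth single-valued
# phase e^{i w(lam)}. Toy family |n(lam)> = (cos lam, e^{i lam} sin lam)^T,
# whose connection sin(lam)^2 integrates to pi over the loop, so holonomy = -1.
def n(lam):
    return np.array([np.cos(lam), np.exp(1j*lam)*np.sin(lam)])

def holonomy(states):
    h = 1.0 + 0j
    for k in range(len(states) - 1):
        h *= np.vdot(states[k + 1], states[k])   # <n_{k+1}|n_k>
    return h / abs(h)                            # keep only the phase

lams = np.linspace(0.0, 2*np.pi, 40001)          # closed loop: n(2*pi) = n(0)
plain = [n(l) for l in lams]
w = lambda l: 0.3*np.sin(l) + 1.1*np.cos(2*l)    # arbitrary single-valued gauge
gauged = [np.exp(1j*w(l)) * n(l) for l in lams]

print(holonomy(plain), holonomy(gauged))         # both close to -1
```

The agreement works because the phase factors telescope around the loop, leaving only $e^{i(\omega(0)-\omega(2\pi))} = 1$.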
1.5.2 An Example: A Spin in a Magnetic Field
The standard example of the Berry phase is very simple. It is a spin, with a Hilbert space consisting of just two states. The spin is placed in a magnetic field $\vec{B}$, with Hamiltonian which we take to be
$$H = -\vec{B}\cdot\vec{\sigma} + B$$
with $\vec{\sigma}$ the triplet of Pauli matrices and $B = |\vec{B}|$. The offset ensures that the ground state always has vanishing energy. Indeed, this Hamiltonian has two eigenvalues: $0$ and $+2B$. We denote the ground state as $|\!\downarrow\,\rangle$ and the excited state as $|\!\uparrow\,\rangle$,
$$H\,|\!\downarrow\,\rangle = 0 \quad {\rm and} \quad H\,|\!\uparrow\,\rangle = 2B\,|\!\uparrow\,\rangle$$
Note that these two states are non-degenerate as long as $\vec{B} \neq 0$.
We are going to treat the magnetic field $\vec{B}$ as the parameters, so that $\lambda^a \equiv B^a$ with $a = 1,2,3$ in this example. Be warned: this means that things are about to get confusing because we'll be talking about Berry connections $\mathcal{A}_a(\vec{B})$ and curvatures $\mathcal{F}_{ab}(\vec{B})$ over the space of magnetic fields (as opposed to electromagnetism, where we talk about magnetic fields over actual space).
The specific form of $|\!\downarrow\,\rangle$ and $|\!\uparrow\,\rangle$ will depend on the orientation of $\vec{B}$. To provide more explicit forms for these states, we write the magnetic field in spherical polar coordinates
$$\vec{B} = B\begin{pmatrix}\sin\theta\cos\phi\\ \sin\theta\sin\phi\\ \cos\theta\end{pmatrix}$$
with $0 \le \theta \le \pi$ and $0 \le \phi < 2\pi$.
The Hamiltonian then reads
$$H = -B\begin{pmatrix}\cos\theta - 1 & e^{-i\phi}\sin\theta\\ e^{i\phi}\sin\theta & -\cos\theta - 1\end{pmatrix}$$
In these coordinates, two normalised eigenstates are given by
$$|\!\downarrow\,\rangle = \begin{pmatrix}\cos(\theta/2)\\ e^{i\phi}\sin(\theta/2)\end{pmatrix} \quad {\rm and} \quad |\!\uparrow\,\rangle = \begin{pmatrix}e^{-i\phi}\sin(\theta/2)\\ -\cos(\theta/2)\end{pmatrix}$$
These states play the role of our reference states $|n(\lambda)\rangle$ in the general derivation. Note, however, that they are not well defined for all values of $\vec{B}$: when $\theta = \pi$, the angular coordinate $\phi$ is not well defined. This means that $|\!\downarrow\,\rangle$ and $|\!\uparrow\,\rangle$ don't have well defined phases there. This kind of behaviour is typical of systems with non-trivial Berry phase.
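These statements are easy to check numerically. The sketch below assumes the form $H = -\vec{B}\cdot\vec{\sigma} + B$ and one standard choice of eigenstate phase conventions (an assumption of this sketch); the parameter values are arbitrary.

```python
import numpy as np

# Numerical check that the two spinors below are eigenstates of
# H = -B.sigma + B*I with energies 0 and 2B (illustrative; the phase
# conventions of the eigenstates are an assumption of this sketch).
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def H(Bmag, theta, phi):
    Bvec = Bmag * np.array([np.sin(theta)*np.cos(phi),
                            np.sin(theta)*np.sin(phi),
                            np.cos(theta)])
    return -(Bvec[0]*sx + Bvec[1]*sy + Bvec[2]*sz) + Bmag*np.eye(2)

def ground(theta, phi):       # |down>, energy 0
    return np.array([np.cos(theta/2), np.exp(1j*phi)*np.sin(theta/2)])

def excited(theta, phi):      # |up>, energy 2B
    return np.array([np.exp(-1j*phi)*np.sin(theta/2), -np.cos(theta/2)])

Bmag, th, ph = 1.3, 0.8, 2.1   # arbitrary field strength and orientation
print(np.linalg.norm(H(Bmag, th, ph) @ ground(th, ph)))                             # ~0
print(np.linalg.norm(H(Bmag, th, ph) @ excited(th, ph) - 2*Bmag*excited(th, ph)))   # ~0
```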
We can easily compute the Berry phase arising from these states (staying away from $\theta = \pi$ to be on the safe side). We have
$$\mathcal{A}_\theta = -i\,\langle\,\downarrow\!|\frac{\partial}{\partial\theta}|\!\downarrow\,\rangle = 0 \quad {\rm and} \quad \mathcal{A}_\phi = -i\,\langle\,\downarrow\!|\frac{\partial}{\partial\phi}|\!\downarrow\,\rangle = \sin^2(\theta/2)$$
The resulting Berry curvature in polar coordinates is
$$\mathcal{F}_{\theta\phi} = \frac{\partial\mathcal{A}_\phi}{\partial\theta} - \frac{\partial\mathcal{A}_\theta}{\partial\phi} = \frac{1}{2}\sin\theta$$
This is simpler if we translate it back to cartesian coordinates, where the rotational symmetry is more manifest. It becomes
$$\mathcal{F}_{ab}(\vec{B}) = \epsilon_{abc}\,\frac{B^c}{2|\vec{B}|^3}$$
But this is interesting. It is a magnetic monopole! Of course, it's not a real magnetic monopole of electromagnetism: those are forbidden by the Maxwell equation $\nabla\cdot\vec{B} = 0$. Instead it is, rather confusingly, a magnetic monopole in the space of magnetic fields.
Note that the magnetic monopole sits at the point where the two energy levels coincide. Here, the field strength is singular. This is the point where we can no longer trust the Berry phase computation. Nonetheless, it is the presence of this level crossing and the resulting singularity which is dominating the physics of the Berry phase.
The magnetic monopole has charge $g = \frac{1}{2}$, meaning that the integral of the Berry curvature over any two-sphere $S^2$ which surrounds the origin is
$$\int_{S^2} \mathcal{F}_{ab}\,dS^{ab} = 4\pi g = 2\pi \qquad (1.44)$$
Using this, we can easily compute the Berry phase for any path $C$ that we choose to take in the space of magnetic fields $\vec{B}$. We only insist that the path avoids the origin. Suppose that the surface $S$, bounded by $C$, makes a solid angle $\Omega$. Then, using the form (1.43) of the Berry phase, we have
$$e^{i\gamma} = \exp\left(-i\int_S \mathcal{F}_{ab}\,dS^{ab}\right) = \exp\left(-\frac{i\Omega}{2}\right) \qquad (1.45)$$
Note, however, that there is an ambiguity in this computation. We could choose to form $S$ as shown in the left-hand figure. But we could equally well choose the surface $S'$ to go around the back of the sphere, as shown in the right-hand figure. In this case, the solid angle formed by $S'$ is $\Omega' = 4\pi - \Omega$. Computing the Berry phase using $S'$ gives
$$e^{i\gamma'} = \exp\left(+\frac{i(4\pi - \Omega)}{2}\right) = \exp\left(-\frac{i\Omega}{2}\right)$$
where the sign flip arises because $S'$ has the opposite orientation, and we've used $e^{2\pi i} = 1$. The two computations agree.
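The solid-angle result can be checked numerically by transporting the ground state around a latitude circle at fixed $\theta_0$ and accumulating the discretised holonomy. The sketch below assumes one standard phase convention for the ground state (an assumption of this sketch); with it, the accumulated phase should match $e^{-i\Omega/2}$, where $\Omega = 2\pi(1 - \cos\theta_0)$ is the solid angle enclosed by the latitude circle.

```python
import numpy as np

# Discretised Berry-phase (holonomy) check for the spin in a magnetic field:
# transport the ground state |g(theta0, phi)> around a latitude circle and
# compare the accumulated phase with exp(-i*Omega/2), where
# Omega = 2*pi*(1 - cos(theta0)) is the enclosed solid angle.
def ground(theta, phi):
    return np.array([np.cos(theta/2), np.exp(1j*phi)*np.sin(theta/2)])

def berry_phase(theta0, n=20000):
    phis = np.linspace(0.0, 2*np.pi, n + 1)          # closed loop in phi
    states = [ground(theta0, p) for p in phis]
    holonomy = 1.0 + 0j
    for k in range(n):
        holonomy *= np.vdot(states[k + 1], states[k])   # <g_{k+1}|g_k>
    return holonomy / abs(holonomy)                     # pure phase e^{i*gamma}

theta0 = 1.0                                         # arbitrary latitude
Omega = 2*np.pi*(1 - np.cos(theta0))
print(berry_phase(theta0), np.exp(-1j*Omega/2))      # the two agree
```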