Lecture Notes for PHY 256:
Introduction to Quantum Physics
https://baraksh.com/
baraksh@gmail.com
Table of Contents
 1 Introduction
 2 Non-Technical Overview
 3 Mathematical Background
  3.1 Complex Numbers
  3.2 Linear Algebra
   3.2.1 Complex Vector Spaces
   3.2.2 Dual Vectors, Inner Products, Norms, and Hilbert Spaces
   3.2.3 Orthonormal Bases
   3.2.4 Matrices and the Adjoint
   3.2.5 The Outer Product
   3.2.6 The Completeness Relation
   3.2.7 Representing Vectors in Different Bases
   3.2.8 Change of Basis
   3.2.9 Multiplication and Inverse of Matrices
   3.2.10 Matrices Inside Inner Products
   3.2.11 Eigenvalues and Eigenvectors
   3.2.12 Hermitian Matrices
   3.2.13 Unitary Matrices
   3.2.14 Normal Matrices
   3.2.15 Representing Matrices in Different Bases
   3.2.16 Diagonalizable Matrices
   3.2.17 The Cauchy-Schwarz Inequality
  3.3 Probability Theory
 4 The Foundations of Quantum Theory
  4.1 Axiomatic Definition
  4.2 Two-State Systems, Spin $1/2$, and Qubits
  4.3 Composite Systems and Quantum Entanglement
  4.4 Non-Commuting Observables and the Uncertainty Principle
  4.5 Dynamics, Transformations, and Measurements
   4.5.1 Unitary Transformations and Evolution
   4.5.2 Quantum Logic Gates
   4.5.3 The Measurement Axiom (Projective)
   4.5.4 Applications of the Measurement Axiom
   4.5.5 The Measurement Axiom (Simplified)
   4.5.6 Interpretations of Quantum Mechanics and the Measurement Problem
   4.5.7 Superposition Once Again: Schrödinger’s Cat
  4.6 The No-Cloning Theorem and Quantum Teleportation
  4.7 The Foundations of Quantum Theory: Summary
 5 Continuous Quantum Systems
1 Introduction
1.1 Course Outline
This course will serve as a comprehensive introduction to the foundations of quantum mechanics, from the modern point of view of 21st century theoretical physics. It will be somewhat different from a traditional first course in quantum mechanics, in that we will develop the theory from scratch in an axiomatic and mathematically rigorous(ish) way. There will be less emphasis on doing calculations, and more on a deep conceptual understanding of the theory.
First, a short non-technical overview of quantum mechanics will be provided. We will discuss the failures of classical mechanics that prompted the development of the quantum theory, and list the major differences between classical and quantum mechanics.
Next, we will learn the necessary mathematical background, including complex numbers, linear algebra, and probability. Even if you took courses on these subjects before, you should still pay careful attention, since we will learn the material from the quantum point of view and introduce important notation that is unique to quantum mechanics.
Once we have a firm grasp of the mathematical background, we will use it to define quantum mechanics axiomatically. We will learn about fundamental concepts such as Hilbert spaces, states, operators, observables, superposition, probability amplitudes, and expectation values.
Then, we will begin studying simple discrete quantum systems known as qubits, which are the quantum analogue of bits, and are used in quantum computers. We will learn about Schrödinger’s cat, quantum entanglement, Bell’s theorem, the uncertainty principle, unitary evolution, quantum measurements, and quantum teleportation.
In the remainder of the course we will study continuous quantum systems and related concepts, including Hamiltonians, the Schrödinger equation, canonical quantization, the quantum harmonic oscillator, wavefunctions, quantum interference, and solutions to the time-independent Schrödinger equation, including scattering and tunneling in one dimension.
By the end of the course, the students should expect to have a fairly good understanding of quantum mechanics, and to develop an intuition for this very strange and unintuitive theory. They will also be adequately prepared to dive deeper into the subject, whether by taking more advanced courses or by doing research.
1.2 Exercises and Problems
Throughout these notes, you will find many exercises and problems.

• Exercises are usually just calculations. They are meant to verify that you understand how to calculate things, and they are usually simple and straightforward.
• Problems are usually proof-based. They are meant to verify that you understand the more abstract relations between the concepts we will introduce, and they often require some thought.
2 Non-Technical Overview
In this chapter, I will provide a non-technical overview of quantum physics, and how it compares to classical physics. I won’t go into exactly who discovered what and in which year, because this is not a history course; this is a course about how the universe works. However, if you are interested in the history of quantum mechanics, there are many excellent websites and textbooks on the subject, and you are encouraged to look them up.
Instead, I will focus on two main goals in this chapter:

1. Introducing some of the fundamental experiments which illustrate why classical mechanics needs to be replaced with a more fundamental theory. This should also convince you that your classical intuition must be replaced with quantum intuition, which is what we will try to develop in this course.
2. Summarizing the fundamental properties of quantum mechanics and the differences between it and classical mechanics in non-technical terms, without going into the math. This should give you some idea of what we will study throughout this course in much more detail and with the full, uncensored mathematical framework.
2.1 The Failures of Classical Physics
2.1.1 Black-Body Radiation and the Ultraviolet Catastrophe
A black body is an object that absorbs all incoming light at all frequencies. It absorbs it and does not reflect it – therefore, it is black. More generally, it absorbs not just light, but all electromagnetic radiation. Black bodies also emit radiation, due to their heat. Electromagnetic radiation has a spectrum of wavelengths of different lengths. We are interested in predicting the amount of radiation emitted by the black body at each wavelength, which we will refer to as the black body’s spectrum.
One can try to use classical physics to calculate this spectrum. It turns out that the amount of radiation is inversely proportional to the wavelength (more precisely, the power emitted per unit area per unit solid angle per unit wavelength is proportional to $1/\lambda^{4}$, where $\lambda$ is the wavelength – but fortunately, we don’t need to be very precise here). This means that as the wavelength approaches zero, the amount of radiation approaches infinity! This is illustrated by the black curve in Figure 2.1. This result is called the ultraviolet catastrophe, since ultraviolet light has shorter wavelengths than visible light. Obviously, this does not fit well with experimental data, since when we measure the total radiation emitted from a black body, we most definitely do not measure it to be infinity!
To solve this problem, we must use quantum physics. If we assume that radiation can only be emitted in discrete “packets” of energy called quanta, we get the correct spectrum of radiation, which is compatible with experiment. The law describing the amount of radiation at each wavelength is called Planck’s law. In Figure 2.1 , we can see three different curves, calculated using Planck’s law, giving the radiation spectrum at different temperatures (in Kelvin). You can see that the total amount of radiation is no longer infinite. The quanta of electromagnetic radiation are called photons.
2.1.2 The Photoelectric Effect
When light hits a material, it causes the material to emit electrons. This phenomenon is called the photoelectric effect. Using classical physics, and the assumption that light is a wave, we can make the following predictions:

• Brighter light should have more energy, so it should cause the emitted electrons to have more kinetic energy, and thus move faster.
• Light with higher frequency should hit the material more often, so it should cause a higher rate of electron emission, resulting in a larger electric current.
• Assuming there is a certain minimum energy needed to dislodge an electron from the material, sufficiently bright light of any frequency should cause electron emission.
However, what actually happens is the exact opposite:

• The kinetic energy of the emitted electrons increases with frequency, not brightness.
• The electric current increases with brightness, not frequency.
• Electrons are emitted only when the frequency of the light exceeds a certain threshold, regardless of how bright it is.
This is illustrated in Figure 2.2 , where the red light does not cause any electrons to be emitted, but the green and blue lights do, since they have higher frequency. Furthermore, since the blue light has higher frequency than the green light, the kinetic energy of the emitted electrons is larger.
To explain this, we must again use quantum physics. Einstein proposed to use the same model that Planck suggested to solve the ultraviolet catastrophe, where light is made of discrete photons. Each photon has energy proportional to the frequency of the light, and brighter light of the same frequency simply has more photons, each photon still with the same amount of energy. This model fits the predictions perfectly.
So in Figure 2.2 , making the red light brighter will increase the number of photons, but no matter how bright it is, the individual photons it’s made of still do not have enough energy to dislodge an electron on their own. On the other hand, each individual photon of the green and blue lights has, on its own, enough energy to dislodge an electron, and even if the light is very dim, the electrons will still be emitted.
2.1.3 The Double-Slit Experiment
The previous two experiments may have convinced you that light is not a wave, but a particle. But is that really the case? The double-slit experiment shows that things are actually more complicated. In this experiment, a light beam hits a plate with two parallel slits. Most of the light is blocked by the plate, but some of it passes through the slits and hits a screen, creating a pattern of bright and dark bands.
This can be most naturally explained by assuming that light is not a particle, but a wave. Each of the slits becomes the origin of a new wave, as illustrated in Figure 2.3 . Each of the two waves has crests and troughs. When a crest of one wave is at the same place as a crest of the other wave, they add up to create a crest with double the magnitude. This is called constructive interference. On the other hand, if a crest of one wave is at the same place as a trough of the other wave, they cancel each other. This is called destructive interference. See Figure 2.4 for an illustration. The pattern on the screen, as seen in Figure 2.3 , is a consequence of this interference.
So the double-slit experiment seems to prove that light is a wave, in contradiction with black-body radiation and the photoelectric effect, which seem to prove that light is a particle. It turns out that, in fact, both are correct; the quantum nature of light has the consequence that it sometimes behaves like a classical wave, and other times like a classical particle. This is called wave-particle duality. Contrary to common misconception, this doesn’t mean that light is “both a wave and a particle”; it simply demonstrates that the classical concepts of “wave” and “particle” are not the proper way to describe reality.
Okay, so light exhibits waveparticle duality. Maybe this makes sense. But matter, which is a tangible thing you can touch, is definitely made of particles, right? To check that, we can replace the beam of light with a beam of electrons. Since we think electrons are particles, not waves, we expect to find on the screen not an interference pattern, but just individual dots corresponding to the individual electron particles. And this is indeed what happens, except… If we run the experiment for some time, and let the electrons build up, then after a while we see that an interference pattern emerges nonetheless! This is shown in Figure 2.5 .
What does this mean? It means that, in quantum physics, both light and matter exhibit wave-particle duality. In classical physics, the measurement of the position of the electron on the screen is deterministic; if we know the initial position and velocity of the electron, then we can predict exactly where the electron lands. In quantum physics, we instead have a probability distribution, which gives us the probability for the electron to be measured at each particular point on the screen. This probability distribution turns out to propagate in space like a wave, and interfere with itself constructively and destructively on the way as a wave does, which is what causes the interference pattern on the screen – it is actually a pattern of probabilities! In the end, the probability will be enlarged on some points of the screen and reduced on other points.
To clarify how the measurement of the positions of the electrons on the screen yields a probability distribution, consider instead a 6-sided die. If you roll the die just once or twice, you won’t have much information about the probabilities to roll each number on the die. This is analogous to sending just a couple of electrons through the slits. What you need to do is to roll the die a large number of times, let’s say 6,000 times. Then you count how many times the die rolled on each number. For example, if it rolled around 1,000 times on each number, then you know the die is fair; but if it rolled around 2,000 times on 6 and around 800 times on every other number, then you know the die is loaded. Similarly, we need to send a large number of electrons through the slits in order to determine the probability distribution for their positions on the screen. It turns out that the position of the electron is “loaded”!
As an aside, in 21st century terms, the precise answer to the question “is light a wave or a particle?” turns out to be that both of them are different aspects of the same fundamental entity called the quantum electromagnetic field. This field propagates from place to place like a wave, but on the other hand, if you put enough energy into it, you can cause a quantum excitation in the field. It is this excitation that behaves like a particle.
Moreover, it turns out that all elementary particles are quantum fields, and thus all of them exhibit these two aspects. This is called quantum field theory. It neatly unites quantum mechanics with special relativity, and explains elementary particle physics with amazing accuracy – it is actually the most accurate theory in all of science! In this course we will focus on non-relativistic quantum mechanics, which is to quantum field theory as Newtonian physics is to special relativity. Quantum field theory is much more complicated, and is usually only taught at the graduate-school level.
2.1.4 The Stern-Gerlach Experiment
In the Stern-Gerlach experiment, electrically neutral particles, such as silver atoms, are sent through an inhomogeneous magnetic field and into a screen. For reasons we won’t go into (since they require some knowledge of electrodynamics), the magnetic field will deflect the particle up or down by an amount proportional to its angular momentum. According to classical physics, this angular momentum can have any value, and so we would expect to see the particles hit every possible point along a continuous line on the screen. This is item (4) in Figure 2.6 .
However, what actually happens when we perform the experiment is that the particles are deflected either up or down by the exact same amount each time, and hit only two specific discrete points on the screen. This is item (5) in Figure 2.6 . To explain this, we must again use quantum physics. Quantum particles are not seen as classically spinning objects; instead they are said to have an intrinsic form of angular momentum called spin. For particles like electrons or silver atoms, a measurement of spin can only yield one of two options: “spin up” or “spin down”.
The previous experiments we discussed showed us that something that is classically continuous – light, or more generally, electromagnetic radiation – is quantized in the quantum theory into discrete packets or quanta of energy called photons. Similarly, the Stern-Gerlach experiment tells us that another classically continuous thing, angular momentum, is also quantized in the quantum theory – into discrete spin. This seems to be a general property of most, but not all, quantum systems: something that in classical physics was continuous turns out to actually be discrete in quantum physics.
Finally, let me just mention that one can use spin to create qubits, or “quantum bits”, where “spin up” represents a value of 0 and “spin down” represents a value of 1. Because spin is a quantum quantity, it satisfies all of the weird properties of quantum mechanics that we will discuss later. By taking advantage of these quantum properties, we can potentially do calculations faster with a quantum computer that uses qubits compared to a classical computer that uses classical bits.
2.2 Quantum vs. Classical Mechanics
Let us now summarize, in a nontechnical way, the most important features of quantum mechanics and how they differ from their classicalmechanical counterparts.

1. Quantum mechanics is, as far as we know, the exact and fundamental theory of reality. Classical mechanics turns out to be just an approximation to this theory. This means that, in general, all modern theories of physics must be quantum theories if they intend to be fundamental. One important exception to that rule is general relativity, which we do not yet know how to describe as a quantum theory; if we did, we would call that theory quantum gravity. However, this is usually not a problem, since general relativity is mostly needed only when describing huge things like planets, stars, galaxies, and so on, in which case we do not need quantum mechanics since we are within the realm of validity of the classical approximation. In fact, this leads us to the next property:
2. Quantum mechanics is the theory of the smallest things. This includes elementary particles, atoms, and molecules. Since all big things are made of small things, quantum mechanics also describes humans, planets, galaxies, and the whole universe. However, this is exactly where the classical limit comes in; when many small quantum systems make up one big system, classical mechanics generally turns out to be a good enough description for all practical purposes. This is similar to how relativity is always the correct way to describe physics, but at low velocities, much smaller than the speed of light, Newtonian physics is a good enough approximation.
3. Quantum mechanics usually involves discrete things. This is in contrast with classical mechanics, which usually involves continuous things. In fact, continuous classical things generally turn out to be made of discrete quantum things. We saw an example of this when we discussed how light – a continuous electromagnetic field – is actually made of discrete photons. Similarly, we saw that angular momentum, which is continuous in the classical theory, is replaced by discrete spin in the quantum theory.
4. Quantum mechanics is a probabilistic theory. Classical mechanics, on the other hand, is a deterministic theory. For example, in classical mechanics, given a particle’s exact position and momentum at any one time, we can (in principle) predict its position and momentum at any other time – with absolute certainty. However, in quantum mechanics, the most we can ever hope to know is the probability distribution to find the particle at a certain position or with a certain momentum. This is illustrated in Figure 2.7 .
5. Quantum mechanics allows for superposition of states. In classical mechanics, the state of a particle is simply given by the exact values of its position and momentum. In contrast, in quantum mechanics the particle can – in fact, usually must – be in a superposition of possible positions and momenta. Each one of the possibilities in the superposition has a probability assigned to it, and this is where the probability distribution in Figure 2.7 comes from.
6. Quantum mechanics features uncertainty in measurements. This is called the uncertainty principle. In classical mechanics, at least theoretically, we can precisely know both the position and momentum of the particle. However, in quantum mechanics, the more we know about the position, the less we know about the momentum – and vice versa. If the position probability distribution is narrow and concentrated at a certain region, meaning that there is low uncertainty in the position, then one can prove that the momentum probability distribution must be wide, meaning that there is high uncertainty in the momentum. The opposite is also true. This is again illustrated in Figure 2.7 .
7. Quantum mechanics has a stronger type of correlation called entanglement. Classical mechanics also allows for correlation. For example, let’s say I have two sealed envelopes with notes inside them, one with the number 0 and the other with the number 1. I give one to Alice and one to Bob. If Alice opens her envelope and sees the number 0, she can be sure that Bob has the envelope with the number 1, and vice versa. The results are clearly correlated. However, if we replace the notes with qubits – quantum bits which are in a superposition of 0 and 1 – then the envelopes are now correlated more strongly via quantum entanglement. We will discuss later in exactly what way quantum entanglement is stronger than classical correlation, but right now we will note that this fact is what gives quantum computers their power.
3 Mathematical Background
Quantum theory is the theoretical framework believed to describe all aspects of our universe at the most fundamental level. Mathematically, as we will see, it is relatively simple, although much more abstract than classical physics. However, conceptually, it is very hard to understand using the classical intuition we have from our daily lives. In these lectures we will learn to develop quantum intuition.
In this chapter we shall learn some basic mathematical concepts, focusing on complex numbers, linear algebra, and probability theory, which will be used extensively throughout the course. Even if the student is already familiar with these concepts, it is still a good idea to go over this chapter, since the unique notation commonly used in quantum mechanics is different than the notation used elsewhere in mathematics and physics.
3.1 Complex Numbers
Complex numbers are at the very core of the mathematical formulation of quantum theory. In this section we will give a review of complex numbers and present some definitions and results that will be used throughout the course.
3.1.1 Motivation
In real life, we only encounter real numbers. These numbers form a field, that is, a set of elements with well-defined operations of addition, subtraction, multiplication, and division. This field is denoted $\mathbb{R}$. Geometrically, we can imagine $\mathbb{R}$ as a 1-dimensional line, stretching from $-\infty$ to $+\infty$.
Unfortunately, it turns out that the field of real numbers has a serious flaw. One can write down completely reasonable-looking quadratic equations, with only real coefficients, which nonetheless have no solutions in $\mathbb{R}$. Consider the most general quadratic equation:
$$ax^{2}+bx+c=0,\qquad a,b,c\in\mathbb{R}.$$
One can easily prove (by completing the square) that there are two potential solutions, given by
$$x_{\pm}\equiv\frac{-b\pm\sqrt{b^{2}-4ac}}{2a}.$$
Here, one solution corresponds to the choice $+$ and the other one to $-$. However, the square root $\sqrt{b^{2}-4ac}$ poses a problem, because the square of a real number is always non-negative (here, $\forall$ means “for all”):
$$x^{2}\ge 0,\qquad\forall x\in\mathbb{R}.$$
The number (and existence) of real solutions is thus determined by the sign of the expression inside the square root, called the discriminant $\Delta\equiv b^{2}-4ac$:
$$\begin{cases}\Delta>0: & \text{two distinct real solutions,}\\ \Delta=0: & \text{one real solution,}\\ \Delta<0: & \text{no real solutions.}\end{cases}$$
It would be very convenient (not to mention more elegant) to have a field of numbers that is algebraically closed, meaning that every nonconstant polynomial (and in particular, a quadratic polynomial) with coefficients in the field has a root in the field.
Since the problem stems from the fact that no real number can square to a negative number, let us simply extend our field with just one number, the imaginary unit, denoted $\mathrm{i}$ (we use non-italic font exclusively for $\mathrm{i}$ in order to distinguish it from $i$, which will be used for labels and variables; of course, it is usually a wise idea not to have both $\mathrm{i}$ and $i$ in the same equation in the first place, but sometimes that is unavoidable), whose sole purpose is to square to a negative number. The most natural choice is for $\mathrm{i}$ to square to $-1$:
$$\mathrm{i}^{2}\equiv -1.$$
The new field created by extending $\mathbb{R}$ with $\mathrm{i}$ is the field of complex numbers, denoted $\mathbb{C}$. A general complex number is written
$$z=a+\mathrm{i}b,\qquad z\in\mathbb{C},\qquad a,b\in\mathbb{R},$$
where $a$ is called the real part and $b$ is called the imaginary part, both real numbers.
Now, in the quadratic equation, having $\sqrt{\Delta}$ with a negative $\Delta$ is no longer a problem, since the number $\mathrm{i}\sqrt{-\Delta}$ squares to $\Delta$:
$$\left(\mathrm{i}\sqrt{-\Delta}\right)^{2}=\mathrm{i}^{2}\left(-\Delta\right)=\left(-1\right)\left(-\Delta\right)=\Delta.$$
Therefore, we conclude that every quadratic equation has a solution in the field of complex numbers (note that real numbers are a special case of complex numbers, so two real roots are also two complex roots):

• $\Delta>0$: Two real roots
$$x_{\pm}=\frac{-b\pm\sqrt{\Delta}}{2a},$$
• $\Delta=0$: One real root
$$x=-\frac{b}{2a},$$
• $\Delta<0$: Two complex roots
$$x_{\pm}=-\frac{b}{2a}\pm\mathrm{i}\,\frac{\sqrt{-\Delta}}{2a}.$$
As a matter of fact, this is a special case of the fundamental theorem of algebra: any polynomial of degree $n$ with complex coefficients (again, real numbers are a special case of complex numbers, so the coefficients can all be real) has at least one, and at most $n$, unique complex roots. Equivalently, it has exactly $n$ not necessarily unique complex roots, accounting for possible degeneracy/multiplicity; for example, for $\Delta=0$ the quadratic equation has two degenerate roots, or one root of multiplicity 2. The quadratic equation corresponds to the case $n=2$.
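To make the three cases concrete, here is a minimal Python sketch of the quadratic formula (the function name `quadratic_roots` is just an illustrative choice, not part of the notes). Using `cmath.sqrt`, which returns a complex result even for a negative argument, all three signs of the discriminant are handled uniformly:

```python
import cmath

def quadratic_roots(a, b, c):
    # Quadratic formula x = (-b +/- sqrt(b**2 - 4ac)) / (2a).
    # cmath.sqrt returns a complex number even when the discriminant
    # Delta is negative, so the cases Delta > 0, = 0, < 0 need no
    # special handling.
    delta = b**2 - 4*a*c
    sqrt_delta = cmath.sqrt(delta)
    return (-b + sqrt_delta) / (2*a), (-b - sqrt_delta) / (2*a)

# Delta < 0: x**2 + 2x + 5 = 0 has the complex-conjugate roots -1 +/- 2i.
print(quadratic_roots(1, 2, 5))  # prints ((-1+2j), (-1-2j))
```

Note that for $\Delta<0$ the two returned roots are complex conjugates of each other, exactly as in the third bullet above.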
Exercise 3.1.
A. Solve the quadratic equation
$$x^{2}-6x+25=0.$$
B. Find the quadratic equation whose solutions are $z=7\pm 2\mathrm{i}$ .
Problem 3.2.
Above we saw that the equation $a{x}^{2}+bx+c=0$ with $a,b,c\in \mathbb{R}$ can either have two real solutions, one real solution, or two complex solutions that are conjugates of each other.
A. Imaginary numbers (sometimes also called purely imaginary numbers) are numbers of the form $\mathrm{i}b$ for $b\in\mathbb{R}$. What kind of equation has two imaginary solutions that are complex conjugates of each other?
B. What kind of equation has two imaginary solutions that are in general not complex conjugates of each other?
C. What kind of equation has two arbitrary complex solutions that are in general not complex conjugates of each other?
Note: In all of the above, don’t just find a specific equation that has this property – find a family of equations with arbitrary parameters of certain types.
3.1.2 Operations on Complex Numbers
Complex numbers can be added and multiplied with other complex numbers. There is really nothing special about these operations, except that it is customary to group the imaginary parts (i.e. anything that is a multiple of $\mathrm{i}$) together and turn $\mathrm{i}^{2}$ into $-1$ in the final result:
$$\left(a+\mathrm{i}b\right)+\left(c+\mathrm{i}d\right)=\left(a+c\right)+\mathrm{i}\left(b+d\right),$$ 
$$\left(a+\mathrm{i}b\right)\left(c+\mathrm{i}d\right)=\left(ac-bd\right)+\mathrm{i}\left(ad+bc\right).$$
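These rules are exactly how Python’s built-in `complex` type behaves, so they can be verified directly; a quick sanity check (not part of the derivation):

```python
# Check the addition and multiplication rules for z = a + ib, w = c + id
# against Python's built-in complex type.
a, b = 1.0, 2.0
c, d = 3.0, -4.0
z, w = complex(a, b), complex(c, d)

assert z + w == complex(a + c, b + d)          # (a + c) + i(b + d)
assert z * w == complex(a*c - b*d, a*d + b*c)  # (ac - bd) + i(ad + bc)
print(z * w)  # prints (11+2j)
```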
Next, note that the two solutions to a quadratic equation with $\Delta<0$ are the same, up to the sign of $\mathrm{i}$. That is, if we replace $\mathrm{i}$ with $-\mathrm{i}$ in one of the solutions, we get the other solution. Such numbers are called complex conjugates, and the process of replacing $\mathrm{i}$ with $-\mathrm{i}$ is called complex conjugation. The complex conjugate of $z$ is denoted $z^{*}$:
$$z=a+\mathrm{i}b\implies z^{*}=a-\mathrm{i}b.$$
Of course, the conjugate of the conjugate is the original number:
$${\left({z}^{*}\right)}^{*}=z.$$ 
This means that the complex conjugation operation is an involution, that is, its own inverse.
Complex conjugation allows us to write a general formula for the real or imaginary parts of a complex number, denoted $\mathrm{Re}z$ and $\mathrm{Im}z$ respectively:
$$\operatorname{Re}z\equiv\frac{z+z^{*}}{2},\qquad\operatorname{Im}z\equiv\frac{z-z^{*}}{2\mathrm{i}}.$$  (3.1)
You can check that if $z=a+\mathrm{i}b$ then we get $\mathrm{Re}z=a$ and $\mathrm{Im}z=b$ , as expected.
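In Python, `z.conjugate()` computes $z^{*}$, so both the involution property and equation (3.1) can be checked numerically; a small sketch:

```python
z = 3 - 5j

# Conjugation flips the sign of the imaginary part, and applying it
# twice returns the original number (it is an involution).
assert z.conjugate() == 3 + 5j
assert z.conjugate().conjugate() == z

# Equation (3.1): Re z = (z + z*)/2 and Im z = (z - z*)/(2i).
re = (z + z.conjugate()) / 2
im = (z - z.conjugate()) / 2j
assert re == z.real == 3.0
assert im == z.imag == -5.0
```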
Exercise 3.3.
What are the real and imaginary parts of $4-7\mathrm{i}$? What is its complex conjugate?
Problem 3.4.
If a number is the complex conjugate of itself, can you say anything interesting about that number? What about if a number is minus the complex conjugate of itself?
3.1.3 The Complex Plane and Real 2-Vectors
Recall that the field of real numbers $\mathbb{R}$ is geometrically a line. The space $\mathbb{R}^{n}$ is an $n$-dimensional space which is home to real $n$-vectors, that is, ordered lists of $n$ real numbers of the form $(v_{1},\dots,v_{n})$. In particular, $\mathbb{R}^{2}$ is geometrically a plane, with vectors of the form $(x,y)$.
The complex plane $\mathbb{C}$ is similar to $\mathbb{R}^{2}$, except that instead of the $x$ and $y$ axes we have the real and imaginary axes respectively. The real unit $1$, which squares to $+1$, defines the positive direction of the real axis, while the imaginary unit $\mathrm{i}$, which squares to $-1$, defines the positive direction of the imaginary axis. This is illustrated in Figure 3.1 .
Since $\mathbb{C}$ is a plane, we can define vectors on it, just like on $\mathbb{R}^{2}$. A real 2-vector $(a,b)$ is an arrow in $\mathbb{R}^{2}$ which points from the origin $(0,0)$ to the point that is $a$ steps in the direction of the $x$ axis and $b$ steps in the direction of the $y$ axis. A complex number $z=a+\mathrm{i}b$ is similarly an arrow in $\mathbb{C}$ which points from the origin $0$ to the point that is $a$ steps along the real axis and $b$ steps along the imaginary axis.
The complex conjugate $z^{*}=a-\mathrm{i}b$ is obtained by replacing $\mathrm{i}$ with $-\mathrm{i}$. Since $\mathrm{i}$ defines the direction of the imaginary axis, this is equivalent to flipping the imaginary axis. In other words, $z^{*}$ is the reflection of $z$ along the real axis, as shown in Figure 3.1 .
From the Pythagorean theorem, we know that the magnitude (or length) of the real 2-vector $(a,b)$ is $\sqrt{a^{2}+b^{2}}$. The magnitude or absolute value $|z|$ of the complex number $z=a+\mathrm{i}b$ is also $\sqrt{a^{2}+b^{2}}$. (Inspect Figure 3.1 to see how the Pythagorean theorem fits in.) Furthermore, since $z^{*}$ is just a reflection of $z$, they both have the same magnitude. A convenient way to calculate the magnitude of either $z$ or $z^{*}$ is to multiply them with each other:
$$|z|^{2}=|z^{*}|^{2}\equiv z^{*}z=\left(a+\mathrm{i}b\right)\left(a-\mathrm{i}b\right)=a^{2}-\mathrm{i}^{2}b^{2}=a^{2}+b^{2},$$
so
$$\leftz\right=\left{z}^{*}\right=\sqrt{{a}^{2}+{b}^{2}}.$$ 
For an abstract complex number (where we don’t necessarily know the explicit values of the real and imaginary parts) one can also write
$$\left|z\right|=\left|{z}^{*}\right|=\sqrt{{\left(\mathrm{Re}\,z\right)}^{2}+{\left(\mathrm{Im}\,z\right)}^{2}}.$$  (3.2) 
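This identity is easy to verify numerically. Here is a minimal sketch using Python's built-in complex type (our own illustration; the variable names are not from the notes):

```python
# A small numerical check of |z|^2 = z* z for a sample complex number.
z = 5 + 6j

mag_sq = (z.conjugate() * z).real   # z* z = a^2 + b^2, which is purely real
assert abs(mag_sq - (5**2 + 6**2)) < 1e-12

# abs() returns the magnitude |z| = sqrt(a^2 + b^2).
assert abs(abs(z) - mag_sq**0.5) < 1e-12
# A number and its conjugate have the same magnitude: |z| = |z*|.
assert abs(abs(z) - abs(z.conjugate())) < 1e-12
```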
We note that there is an isomorphism between complex numbers and real 2-vectors. An isomorphism between two spaces is a mapping between the spaces that can be taken in either direction (i.e. is invertible), and preserves the structure of each space. The isomorphism between $\u2102$ and ${\mathbb{R}}^{2}$ is given by:
$$a+\mathrm{i}b\u27f7(a,b).$$ 
We have already seen that the norm operation is preserved. Similarly, addition of complex numbers
$$\left(a+\mathrm{i}b\right)+\left(c+\mathrm{i}d\right)=\left(a+c\right)+\mathrm{i}\left(b+d\right).$$ 
maps into addition of 2vectors
$$(a,b)+(c,d)=(a+c,b+d).$$ 
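The structure-preserving property of this isomorphism can be checked numerically. Here is a minimal sketch in Python (an illustration of ours, not part of the notes), mapping each complex number to its 2-vector $(\mathrm{Re}\,z, \mathrm{Im}\,z)$:

```python
# The isomorphism a + ib <-> (a, b): addition of complex numbers maps to
# component-wise addition of real 2-vectors, and the norm is preserved.
z = 5 + 6j
w = 7 + 8j

vz = (z.real, z.imag)   # the 2-vector isomorphic to z
vw = (w.real, w.imag)   # the 2-vector isomorphic to w

# Adding the complex numbers, then mapping the sum to a 2-vector...
s = z + w
vs_from_complex = (s.real, s.imag)
# ...gives the same result as adding the 2-vectors directly.
vs_direct = (vz[0] + vw[0], vz[1] + vw[1])
assert vs_from_complex == vs_direct

# The norm is preserved: |z| equals the Euclidean length of (a, b).
assert abs(abs(z) - (vz[0]**2 + vz[1]**2)**0.5) < 1e-12
```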
Exercise 3.5.
Let $z=5+6\mathrm{i}$ and $w=7+8\mathrm{i}$ .
A. Calculate ${z}^{*}$, ${w}^{*}$, $\left|z\right|$, $\left|w\right|$, $z+w$, $z-w$, $\left|z+w\right|$, $\left|z-w\right|$, and $zw$.
B. Find the 2vectors isomorphic to $z$ and $w$.
Problem 3.6.
Show that multiplication of a vector by a real number and reflection of a vector with respect to the $x$ and $y$ axes map to equivalent operations on the corresponding complex numbers.
3.1.4 Polar Coordinates and Complex Phases
A vector in ${\mathbb{R}}^{2}$ can be converted from Cartesian coordinates $(x,y)$ to polar coordinates $(r,\varphi )$ . The $r$ coordinate is the magnitude of the vector, and the $\varphi $ coordinate is the angle that the vector makes with respect to the $x$ axis. The relation between the coordinate systems is given by
$$x=r\mathrm{cos}\varphi ,y=r\mathrm{sin}\varphi ,$$  (3.3) 
$$r=\sqrt{{x}^{2}+{y}^{2}},\varphi =\mathrm{arctan}\frac{y}{x}.$$ 
This simply follows from the definitions of $\mathrm{cos}\varphi $ and $\mathrm{sin}\varphi $ , since the vector creates a right triangle with the $x$ axis (see Figure 3.1 ). For example, the vector $(x,y)=(1,\sqrt{3})$ in Cartesian coordinates corresponds to $r=2$ and $\varphi =\pi /3$ .
$x$ and $y$ can be any real numbers, but $r$ must be nonnegative and $\varphi$ must be in the range $(-\pi,\pi]$ (in radians), where $\varphi =0$ corresponds to the $x$ axis. However, there is a subtlety here: the range of the $\mathrm{arctan}$ function is $(-\pi/2,\pi/2)$, so $\varphi$ needs to be further adjusted according to the quadrant. One can instead use a more complicated definition that automatically takes the quadrant into account:
$$\varphi =\begin{cases}\arctan \left(y/x\right)&\text{if }x>0,\\ \arctan \left(y/x\right)+\pi &\text{if }x<0\text{ and }y\geq 0,\\ \arctan \left(y/x\right)-\pi &\text{if }x<0\text{ and }y<0,\\ +\pi /2&\text{if }x=0\text{ and }y>0,\\ -\pi /2&\text{if }x=0\text{ and }y<0.\end{cases}$$
This function is sometimes called $\mathrm{atan2}\left(y,x\right)$, and it is implemented in most programming languages. Note that $\varphi$ is undefined at the origin, since a vector of length zero does not point in any direction.
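For example, Python's standard library provides this function as `math.atan2`; the following minimal sketch (ours, not from the notes) uses the example vector from above:

```python
import math

# Converting (x, y) to polar (r, phi) with math.atan2, which handles
# all four quadrants (unlike a bare arctan of y/x).
x, y = 1.0, math.sqrt(3.0)
r = math.hypot(x, y)           # sqrt(x**2 + y**2)
phi = math.atan2(y, x)         # note the argument order: y first, then x

assert abs(r - 2.0) < 1e-12
assert abs(phi - math.pi / 3) < 1e-12

# In the second quadrant a naive arctan(y/x) gives the wrong angle:
x2, y2 = -1.0, 1.0
assert abs(math.atan2(y2, x2) - 3 * math.pi / 4) < 1e-12      # correct
assert abs(math.atan(y2 / x2) - (-math.pi / 4)) < 1e-12       # wrong quadrant
```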
Given that complex numbers are isomorphic to real 2vectors, we should be able to write complex numbers in polar coordinates as well. Looking at ( 3.3 ), and replacing $x$ and $y$ with $a$ and $b$, we see that
$$z=a+\mathrm{i}b=r\left(\mathrm{cos}\varphi +\mathrm{i}\mathrm{sin}\varphi \right).$$ 
We can write this more compactly using Euler’s formula:
$${\mathrm{e}}^{\mathrm{i}\varphi}=\mathrm{cos}\varphi +\mathrm{i}\mathrm{sin}\varphi \mathit{}\u27f9\mathit{}z=r{\mathrm{e}}^{\mathrm{i}\varphi}.$$ 
This is illustrated in Figure 3.1 . In this context, the angle $\varphi $ is called the complex phase. It is of extreme importance in quantum mechanics, as we shall see.
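Conversion between the Cartesian form $a+\mathrm{i}b$ and the polar form $r\,\mathrm{e}^{\mathrm{i}\varphi}$ can be sketched with Python's standard `cmath` module (an illustration of ours):

```python
import cmath
import math

# Cartesian -> polar with cmath.polar, using the example with r = 2, phi = pi/3.
z = 1 + math.sqrt(3) * 1j

r, phi = cmath.polar(z)        # r = |z|, phi = the complex phase in (-pi, pi]
assert abs(r - 2.0) < 1e-12
assert abs(phi - math.pi / 3) < 1e-12

# Euler's formula: r * e^{i phi} reconstructs the original number.
assert abs(r * cmath.exp(1j * phi) - z) < 1e-12
# The magnitude of a pure phase is always 1: |e^{i phi}| = 1.
assert abs(abs(cmath.exp(1j * phi)) - 1) < 1e-12
```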
Exercise 3.7.
Write $2\mathrm{i}-3$ in polar coordinates.
Problem 3.8.
Prove, using Euler’s formula, that $\left|{\mathrm{e}}^{\mathrm{i}\varphi}\right|=1$, that is, the magnitude of the complex number ${\mathrm{e}}^{\mathrm{i}\varphi}$ is $1$. If $z=r{\mathrm{e}}^{\mathrm{i}\varphi}$, what is $\left|z\right|$?
Problem 3.9.
Prove Euler’s formula. (You may need to use some calculus.)
3.2 Linear Algebra
The most important and fundamental mathematical structure in quantum theory is the Hilbert space, a type of complex vector space. In this section we will define Hilbert spaces and learn about many important concepts and results from linear algebra that apply to them.
3.2.1 Complex Vector Spaces
A real $n$-vector is an ordered list of $n$ real numbers. Analogously, a complex $n$-vector is an ordered list of $n$ complex numbers. For example, a complex 2-vector with two complex components ${\mathrm{\Psi}}_{1}$ and ${\mathrm{\Psi}}_{2}$ is written as:
$$\mathrm{\Psi}\u27e9\equiv \left(\begin{array}{c}\hfill {\mathrm{\Psi}}_{1}\hfill \\ \hfill {\mathrm{\Psi}}_{2}\hfill \end{array}\right).$$ 
The notation $|\mathrm{\Psi}\u27e9$ is unique to quantum mechanics, and it is called bra-ket notation or sometimes Dirac notation. In this notation, we write a straight line $|$ and an angle bracket $\u27e9$, and between them, a label. We will usually denote a general vector with the label $\mathrm{\Psi}$; this label, and its lowercase counterpart $\psi$, are very commonly used in quantum mechanics. However, we can use whatever label we want to describe our vector – including letters, numbers, symbols, or even whole words and sentences, for example: $|A\u27e9$, $|\beta \u27e9$, $|3\u27e9$, $|\mathrm{\u2663}\u27e9$, $|\text{Bob}\u27e9$, $|\text{Schrödinger’s Cat Is Alive}\u27e9$, and so on.
This is a great advantage of the braket notation, as it allows us to be very descriptive in the labels we choose for our vectors – which we can’t do with the notation $\mathbf{v}$ or $\overrightarrow{v}$ commonly used for vectors in mathematics and physics.
A vector space $\mathcal{V}$ over a field $\mathbb{F}$ is a set of vectors equipped with two operations: addition of vectors and multiplication of a vector by a scalar, where a scalar is any number from the field $\mathbb{F}$. (The field is usually taken to be $\mathbb{R}$ or $\u2102$; naturally, for a complex vector space, it will be $\u2102$.) Vector addition must satisfy the following conditions:

1.
Closed – the sum of two vectors is another vector in the same space:
$$\forall \mathrm{\Psi}\u27e9,\mathrm{\Phi}\u27e9\in \mathcal{V}:$$ $$\mathrm{\Psi}\u27e9+\mathrm{\Phi}\u27e9\in \mathcal{V}.$$ 
2.
Commutative – the order of vectors doesn’t matter:
$$\forall \mathrm{\Psi}\u27e9,\mathrm{\Phi}\u27e9\in \mathcal{V}:$$ $$\mathrm{\Psi}\u27e9+\mathrm{\Phi}\u27e9=\mathrm{\Phi}\u27e9+\mathrm{\Psi}\u27e9.$$ 
3.
Associative – if three vectors are added, it doesn’t matter which two are added first:
$$\forall \mathrm{\Psi}\u27e9,\mathrm{\Phi}\u27e9,\mathrm{\Theta}\u27e9\in \mathcal{V}:$$ $$\left(\mathrm{\Psi}\u27e9+\mathrm{\Phi}\u27e9\right)+\mathrm{\Theta}\u27e9=\mathrm{\Psi}\u27e9+\left(\mathrm{\Phi}\u27e9+\mathrm{\Theta}\u27e9\right).$$ 
4.
Identity vector or zero vector – there is a (unique) vector $0$ which, when added to any vector, does not change it (note that here we are using a slight abuse of notation by denoting the zero vector as the number $0$, instead of using bra-ket notation; the reason is that $0\u27e9$ already has a special common meaning in quantum mechanics, as we will see later, and in the context of that special meaning, $0\u27e9$ is not the zero vector):
$$\exists 0\in \mathcal{V}:\mathit{}\forall \mathrm{\Psi}\u27e9\in \mathcal{V}:$$ $$\mathrm{\Psi}\u27e9+0=\mathrm{\Psi}\u27e9.$$ 
5.
Inverse vector – for every vector there exists another (unique) vector such that the two vectors sum to the zero vector:
$$\forall \mathrm{\Psi}\u27e9\in \mathcal{V}:\mathit{}\exists \left(-\mathrm{\Psi}\u27e9\right)\in \mathcal{V}:$$ $$\mathrm{\Psi}\u27e9+\left(-\mathrm{\Psi}\u27e9\right)=0.$$
Furthermore, multiplication by a scalar must satisfy the following conditions:

1.
Closed – the product of a vector and a scalar is a vector in the same space:
$$\forall \alpha \in \mathbb{F},\forall \mathrm{\Psi}\u27e9\in \mathcal{V}:$$ $$\alpha \mathrm{\Psi}\u27e9\in \mathcal{V}.$$ 
2.
Associative – if two scalars are multiplied by a vector, it doesn’t matter whether we first multiply the two scalars or we first multiply one of the scalars with the vector:
$$\forall \alpha ,\beta \in \mathbb{F},\forall \mathrm{\Psi}\u27e9\in \mathcal{V}:$$ $$\left(\alpha \beta \right)\mathrm{\Psi}\u27e9=\alpha \left(\beta \mathrm{\Psi}\u27e9\right).$$ 
3.
Distributive over addition of scalars:
$$\forall \alpha ,\beta \in \mathbb{F},\forall \mathrm{\Psi}\u27e9\in \mathcal{V}:$$ $$\left(\alpha +\beta \right)\mathrm{\Psi}\u27e9=\alpha \mathrm{\Psi}\u27e9+\beta \mathrm{\Psi}\u27e9.$$ 
4.
Distributive over addition of vectors:
$$\forall \alpha \in \mathbb{F},\forall \mathrm{\Psi}\u27e9,\mathrm{\Phi}\u27e9\in \mathcal{V}:$$ $$\alpha \left(\mathrm{\Psi}\u27e9+\mathrm{\Phi}\u27e9\right)=\alpha \mathrm{\Psi}\u27e9+\alpha \mathrm{\Phi}\u27e9.$$ 
5.
Identity scalar or unit scalar – there is a (unique) scalar $1$ which, when multiplied by any vector, does not change it:
$$\exists 1\in \mathbb{F}:\mathit{}\forall \mathrm{\Psi}\u27e9\in \mathcal{V}:$$ $$1\mathrm{\Psi}\u27e9=\mathrm{\Psi}\u27e9.$$
We now define a $2$dimensional complex vector space, which we denote ${\u2102}^{2}$ , as the space of complex $2$vectors over $\u2102$, with addition of vectors given by
$$\mathrm{\Psi}\u27e9\equiv \left(\begin{array}{c}\hfill {\mathrm{\Psi}}_{1}\hfill \\ \hfill {\mathrm{\Psi}}_{2}\hfill \end{array}\right)\in {\u2102}^{2},\mathrm{\Phi}\u27e9\equiv \left(\begin{array}{c}\hfill {\mathrm{\Phi}}_{1}\hfill \\ \hfill {\mathrm{\Phi}}_{2}\hfill \end{array}\right)\in {\u2102}^{2}$$  
$$\Rightarrow \mathrm{\Psi}\u27e9+\mathrm{\Phi}\u27e9=\left(\begin{array}{c}\hfill {\mathrm{\Psi}}_{1}+{\mathrm{\Phi}}_{1}\hfill \\ \hfill {\mathrm{\Psi}}_{2}+{\mathrm{\Phi}}_{2}\hfill \end{array}\right),$$ 
and multiplication of vector by scalar given by
$$\mathrm{\Psi}\u27e9\equiv \left(\begin{array}{c}\hfill {\mathrm{\Psi}}_{1}\hfill \\ \hfill {\mathrm{\Psi}}_{2}\hfill \end{array}\right)\in {\u2102}^{2},\lambda \in \u2102$$  
$$\Rightarrow \lambda \mathrm{\Psi}\u27e9=\left(\begin{array}{c}\hfill \lambda {\mathrm{\Psi}}_{1}\hfill \\ \hfill \lambda {\mathrm{\Psi}}_{2}\hfill \end{array}\right).$$ 
The $n$-dimensional complex vector space ${\u2102}^{n}$ is defined analogously. In this course, we will mostly focus on ${\u2102}^{2}$ for simplicity, in particular when giving explicit examples.
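These two operations are easy to experiment with numerically. Here is a minimal sketch using NumPy complex arrays as stand-ins for kets (our own illustration, not part of the notes):

```python
import numpy as np

# C^2 with NumPy: kets as complex arrays, with component-wise addition
# and scalar multiplication.
psi = np.array([1 + 1j, 2 - 3j])   # a ket in C^2
phi = np.array([4j, 5 + 0j])       # another ket in C^2
lam = 2 - 1j                       # a scalar in C

# Vector addition is component-wise: (Psi_1 + Phi_1, Psi_2 + Phi_2).
assert np.allclose(psi + phi, [1 + 5j, 7 - 3j])
# Scalar multiplication multiplies each component by the scalar.
assert np.allclose(lam * psi, [lam * (1 + 1j), lam * (2 - 3j)])

# The zero vector and the inverse vector behave as the axioms require.
zero = np.zeros(2, dtype=complex)
assert np.allclose(psi + zero, psi)
assert np.allclose(psi + (-psi), zero)
```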
Exercise 3.10.
Let
$$\mathrm{\Psi}\u27e9\equiv \left(\begin{array}{c}\hfill 3+\mathrm{i}\hfill \\ \hfill 9\hfill \end{array}\right),\mathrm{\Phi}\u27e9\equiv \left(\begin{array}{c}\hfill \mathrm{i}-1\hfill \\ \hfill 10\mathrm{i}\hfill \end{array}\right),$$ 
$$\alpha =7\mathrm{i}-2,\beta =4-8\mathrm{i}.$$ 
Calculate $\alpha \mathrm{\Psi}\u27e9+\beta \mathrm{\Phi}\u27e9$ .
Problem 3.11.
Check that the addition and multiplication as defined above indeed satisfy all of the required conditions for a vector space. You can do this just for ${\u2102}^{2}$ , for simplicity.
3.2.2 Dual Vectors, Inner Products, Norms, and Hilbert Spaces
A dual vector is defined by writing the vector as a row instead of a column, and replacing each component with its complex conjugate. We denote the dual vector of $\mathrm{\Psi}\u27e9$ as follows:
$$\u27e8\mathrm{\Psi}=\left(\begin{array}{cc}\hfill {\mathrm{\Psi}}_{1}^{*}\hfill & \hfill {\mathrm{\Psi}}_{2}^{*}\hfill \end{array}\right).$$ 
In terms of notation, there is now an opposite angle bracket $\u27e8$ on the left of the label, and the straight line $|$ is on the right. Addition and multiplication by a scalar are defined as for vectors, simply replacing columns with rows. However, you may not add vectors and dual vectors together – adding a row to a column is undefined!
If we are given a dual vector, we can take its dual to get a “normal” (column) vector. In this case, the operation of taking the dual involves writing the vector as a column instead of a row and taking the complex conjugates of the components. This means that the operation of taking the dual is an involution – taking the dual of a vector twice gives back the same vector, since ${\left({z}^{*}\right)}^{*}=z$ .
Using dual vectors, we may define the inner product. This product allows us to take a vector and a dual vector and produce a (complex) number out of them, similarly to the dot product of real vectors. (The dot product of the real vectors $\mathbf{v}\equiv ({v}_{1},{v}_{2})$ and $\mathbf{w}\equiv ({w}_{1},{w}_{2})$ in ${\mathbb{R}}^{2}$ is defined as $\mathbf{v}\cdot \mathbf{w}\equiv {v}_{1}{w}_{1}+{v}_{2}{w}_{2}$. In principle, this definition does secretly involve a dual (row) vector and a (column) vector, but since we do not need to take the complex conjugate, we don’t really need to worry about dual vectors. However, it is important to note that in real vector spaces with curvature, such as those used in general relativity, the dot product must be replaced with a more complicated inner product which involves the metric, and it again becomes crucial to distinguish vectors from dual vectors – which in this context are also called contravariant and covariant vectors respectively.) Importantly, the inner product only works for one vector and one dual vector, not for two vectors or two dual vectors. To calculate it, we multiply the components of both vectors one by one and add them up:
$$\u27e8\mathrm{\Psi}\mathrm{\Phi}\u27e9=\left(\begin{array}{cc}\hfill {\mathrm{\Psi}}_{1}^{*}\hfill & \hfill {\mathrm{\Psi}}_{2}^{*}\hfill \end{array}\right)\left(\begin{array}{c}\hfill {\mathrm{\Phi}}_{1}\hfill \\ \hfill {\mathrm{\Phi}}_{2}\hfill \end{array}\right)={\mathrm{\Psi}}_{1}^{*}{\mathrm{\Phi}}_{1}+{\mathrm{\Psi}}_{2}^{*}{\mathrm{\Phi}}_{2}.$$ 
In braket notation, vectors $\mathrm{\Psi}\u27e9$ are called “kets” and dual vectors $\u27e8\mathrm{\Psi}$ are called “bras”. Then the notation for $\u27e8\mathrm{\Psi}\mathrm{\Phi}\u27e9$ is called a “bra(c)ket”.
We define the norm-squared of a vector by taking its inner product with its dual (“squaring” it):
$${\parallel \mathrm{\Psi}\parallel}^{2}\equiv \u27e8\mathrm{\Psi}\mathrm{\Psi}\u27e9=\left(\begin{array}{cc}\hfill {\mathrm{\Psi}}_{1}^{*}\hfill & \hfill {\mathrm{\Psi}}_{2}^{*}\hfill \end{array}\right)\left(\begin{array}{c}\hfill {\mathrm{\Psi}}_{1}\hfill \\ \hfill {\mathrm{\Psi}}_{2}\hfill \end{array}\right)={\left|{\mathrm{\Psi}}_{1}\right|}^{2}+{\left|{\mathrm{\Psi}}_{2}\right|}^{2},$$
where the magnitude-squared of a complex number $z$ was defined in Section 3.1.3 as ${\left|z\right|}^{2}\equiv {z}^{*}z$. Then we can define the norm as the square root of the norm-squared:
$$\parallel \mathrm{\Psi}\parallel \equiv \sqrt{{\parallel \mathrm{\Psi}\parallel}^{2}}=\sqrt{\u27e8\mathrm{\Psi}\mathrm{\Psi}\u27e9}.$$ 
Observe how taking the dual of a vector generalizes taking the complex conjugate of a number, and taking the norm of a vector generalizes taking the magnitude of a number; indeed, for 1dimensional vectors, these operations are the same!
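A minimal numerical sketch of the inner product and norm (ours, not from the notes): NumPy's `vdot` conjugates the components of its first argument, which is exactly the bra-ket inner product.

```python
import numpy as np

# The inner product <Psi|Phi> conjugates the first vector's components.
psi = np.array([1 + 2j, 3 - 1j])
phi = np.array([2j, 1 - 1j])

inner = np.vdot(psi, phi)                      # Psi_1* Phi_1 + Psi_2* Phi_2
by_hand = np.conj(psi[0]) * phi[0] + np.conj(psi[1]) * phi[1]
assert np.isclose(inner, by_hand)

# The norm-squared <Psi|Psi> is real and non-negative.
norm_sq = np.vdot(psi, psi)
assert np.isclose(norm_sq.imag, 0)
assert np.isclose(norm_sq.real, abs(psi[0])**2 + abs(psi[1])**2)

# Conjugate symmetry: <Phi|Psi> = <Psi|Phi>*.
assert np.isclose(np.vdot(phi, psi), np.conj(inner))
```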
A vector space with an inner product is called a Hilbert space, provided that it is also a complete metric space, and that the inner product satisfies the same properties (which you will derive in problems 3.13, 3.14, and 3.15) as the standard inner product on ${\u2102}^{n}$. (A vector space is a complete metric space if, whenever an infinite series of vectors ${\mathrm{\Psi}}_{i}\u27e9$ converges absolutely, that is, the series of the norms of the vectors converges,
$$\sum _{i=0}^{\mathrm{\infty}}\parallel {\mathrm{\Psi}}_{i}\parallel <\mathrm{\infty},$$
then the series of the vectors themselves converges as well, to some vector $\mathrm{\Psi}\u27e9$ in the Hilbert space:
$$\sum _{i=0}^{\mathrm{\infty}}{\mathrm{\Psi}}_{i}\u27e9=\mathrm{\Psi}\u27e9.)$$
In particular, ${\u2102}^{n}$ itself is a Hilbert space, but there are many other Hilbert spaces, some of them much more abstract. The usual notation for a general Hilbert space is $\mathscr{H}$.
Exercise 3.12.
Let
$$\mathrm{\Psi}\u27e9\equiv \left(\begin{array}{c}\hfill 7+7\mathrm{i}\hfill \\ \hfill 7-2\mathrm{i}\hfill \end{array}\right),\mathrm{\Phi}\u27e9\equiv \left(\begin{array}{c}\hfill 2-7\mathrm{i}\hfill \\ \hfill \mathrm{i}\hfill \end{array}\right).$$ 
Calculate $\u27e8\mathrm{\Psi}$ , $\u27e8\mathrm{\Phi}$ , $\parallel \mathrm{\Psi}\parallel $ , $\parallel \mathrm{\Phi}\parallel $ , $\u27e8\mathrm{\Psi}\mathrm{\Phi}\u27e9$ , and $\u27e8\mathrm{\Phi}\mathrm{\Psi}\u27e9$ .
Problem 3.13.
Prove that the normsquared ${\parallel \mathrm{\Psi}\parallel}^{2}$ is always nonnegative, and it is zero if and only if $\mathrm{\Psi}\u27e9$ is the zero vector, that is, the vector whose components are all zero. In other words, the inner product is positivedefinite. As a corollary, explain why we must take the complex conjugate of the components when we convert a vector to a dual vector. (What would have happened if we didn’t?)
Problem 3.14.
Prove that $\u27e8\mathrm{\Phi}\mathrm{\Psi}\u27e9={\u27e8\mathrm{\Psi}\mathrm{\Phi}\u27e9}^{*}$, that is, if we swap the order of vectors in the inner product we get the complex conjugate of the original product. Thus, unlike the dot product, the inner product on ${\u2102}^{n}$ is not symmetric. However, it is conjugate-symmetric, and in particular, the magnitude of the inner product remains the same, since $\left|z\right|=\left|{z}^{*}\right|$.
Problem 3.15.
Prove that if $\alpha ,\beta \in \u2102$ and $\mathrm{\Psi}\u27e9,\mathrm{\Phi}\u27e9,\mathrm{\Theta}\u27e9\in {\u2102}^{n}$ then
$$\u27e8\mathrm{\Psi}\left(\alpha \mathrm{\Phi}\u27e9+\beta \mathrm{\Theta}\u27e9\right)=\alpha \u27e8\mathrm{\Psi}\mathrm{\Phi}\u27e9+\beta \u27e8\mathrm{\Psi}\mathrm{\Theta}\u27e9,$$ 
that is, the inner product is linear in its second argument.
3.2.3 Orthonormal Bases
An orthonormal basis of ${\u2102}^{n}$ is a set of $n$ nonzero vectors $\{{B}_{1}\u27e9,\mathrm{\dots},{B}_{n}\u27e9\}$ – which we will usually denote ${B}_{i}\u27e9$ for short, with the implication that $i\in \{1,\mathrm{\dots},n\}$ – such that:

1.
They span ${\u2102}^{n}$ , which means that any vector $\mathrm{\Psi}\u27e9\in {\u2102}^{n}$ can be written uniquely as a linear combination of the basis vectors, that is, a sum of the vectors ${B}_{i}\u27e9$ multiplied by some complex numbers ${\lambda}_{i}\in \u2102$ :
$$\mathrm{\Psi}\u27e9=\sum _{i=1}^{n}{\lambda}_{i}{B}_{i}\u27e9.$$ This property ensures that the basis can be used to define any single vector in the space ${\u2102}^{n}$ , not just part of that space.
As a simple example, in ${\mathbb{R}}^{3}$ the vector $\widehat{\mathbf{x}}\equiv (1,0,0)$ pointing along the $x$ axis and the vector $\widehat{\mathbf{y}}\equiv (0,1,0)$ pointing along the $y$ axis span the $xy$ plane, but not all of ${\mathbb{R}}^{3}$ . To get a basis for all of ${\mathbb{R}}^{3}$ , we must add an appropriate third vector, such as the vector $\widehat{\mathbf{z}}\equiv (0,0,1)$ pointing along the $z$ axis. (But other vectors, such as $(1,2,3)$ , would work as well.) 
2.
They are linearly independent, in that if the zero vector is a linear combination of the basis vectors, then the coefficients in the linear combination must all be zero:
$$\sum _{i=1}^{n}{\lambda}_{i}{B}_{i}\u27e9=0\mathit{}\u27f9\mathit{}{\lambda}_{i}=0,\forall i.$$ Linear independence means (as you will show in Problem 3.17 ) that no vector in the set can be written as a linear combination of the other vectors in the set. If we could have done so, then that vector would have been redundant, and we would have needed to remove it in order to obtain a basis.
As a simple example, the set composed of $\widehat{\mathbf{x}}$ , $\widehat{\mathbf{y}}$ , and $(1,2,0)$ is linearly dependent, since $(1,2,0)=\widehat{\mathbf{x}}+2\widehat{\mathbf{y}}$ , but the set $\{\widehat{\mathbf{x}},\widehat{\mathbf{y}},\widehat{\mathbf{z}}\}$ is linearly independent. 
3.
They are all orthogonal to each other, that is, the inner product of any two different vectors evaluates to zero:
$$\u27e8{B}_{i}{B}_{j}\u27e9=0,\forall i\ne j.$$ 
4.
They are all unit vectors, that is, they have a norm (and normsquared) of 1:
$${\parallel {B}_{i}\parallel}^{2}=\u27e8{B}_{i}{B}_{i}\u27e9=1,\forall i.$$
In fact, properties 3 and 4 may be expressed more compactly as:
$$\u27e8{B}_{i}{B}_{j}\u27e9={\delta}_{ij}=\{\begin{array}{cc}0\hfill & \text{if}i\ne j,\hfill \\ 1\hfill & \text{if}i=j,\hfill \end{array}$$  (3.4) 
where ${\delta}_{ij}$ is called the Kronecker delta. If this combined property is satisfied, we say that the vectors are orthonormal. (Actually, bases don’t have to be orthonormal in general, but in quantum mechanics they always are, for reasons that will become clear later.)
These requirements become much simpler in $n=2$ dimensions. An orthonormal basis for ${\u2102}^{2}$ is a set of 2 nonzero vectors ${B}_{1}\u27e9,{B}_{2}\u27e9$ such that:

1.
They span ${\u2102}^{2}$ , which means that any vector $\mathrm{\Psi}\u27e9\in {\u2102}^{2}$ can be written as a linear combination of the basis vectors:
$$\mathrm{\Psi}\u27e9={\lambda}_{1}{B}_{1}\u27e9+{\lambda}_{2}{B}_{2}\u27e9,$$ for a unique choice of ${\lambda}_{1},{\lambda}_{2}\in \u2102$ .

2.
They are linearly independent, which means that we cannot write one in terms of a scalar times the other, i.e.:
$${B}_{1}\u27e9\ne \lambda {B}_{2}\u27e9,\lambda \in \u2102.$$ 
3.
They are orthonormal to each other, that is, the inner product between them evaluates to zero and both of them have unit norm:
$$\u27e8{B}_{1}{B}_{2}\u27e9=0,$$ $${\parallel {B}_{1}\parallel}^{2}=\u27e8{B}_{1}{B}_{1}\u27e9=1,{\parallel {B}_{2}\parallel}^{2}=\u27e8{B}_{2}{B}_{2}\u27e9=1.$$
A very important basis, the standard basis of ${\u2102}^{2}$ , is defined as:
$${1}_{1}\u27e9\equiv \left(\begin{array}{c}\hfill 1\hfill \\ \hfill 0\hfill \end{array}\right),{1}_{2}\u27e9\equiv \left(\begin{array}{c}\hfill 0\hfill \\ \hfill 1\hfill \end{array}\right).$$ 
We similarly define the standard basis of ${\u2102}^{n}$ for any $n$ in the obvious way.
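The orthonormality condition $\u27e8{B}_{i}{B}_{j}\u27e9={\delta}_{ij}$ can be checked by brute force. Here is a minimal sketch in Python/NumPy (our illustration), using the standard basis of ${\u2102}^{2}$:

```python
import numpy as np

# Checking <B_i|B_j> = delta_ij for the standard basis of C^2.
basis = [np.array([1, 0], dtype=complex),
         np.array([0, 1], dtype=complex)]

for i, Bi in enumerate(basis):
    for j, Bj in enumerate(basis):
        delta = 1.0 if i == j else 0.0      # the Kronecker delta
        # np.vdot conjugates its first argument: the bra <B_i|.
        assert np.isclose(np.vdot(Bi, Bj), delta)
```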
Problem 3.16.
Show that the standard basis vectors satisfy the properties above.
Problem 3.17.
Show that linear independence means that no vector in the basis can be written as a linear combination of the other vectors in the basis.
Problem 3.18.
Any basis which is orthogonal but not orthonormal, that is, does not satisfy property 4, can be made orthonormal by normalizing each basis vector, that is, dividing it by its norm:
$${B}_{i}\u27e9\mapsto \frac{{B}_{i}\u27e9}{\parallel {B}_{i}\parallel}.$$ 
Show that if an orthogonal but not orthonormal basis satisfies properties 1–3, then it still satisfies them after normalizing it in this way.
Exercise 3.19.
Consider the complex vector
$$\mathrm{\Psi}\u27e9\equiv \left(\begin{array}{c}\hfill 1+\mathrm{i}\hfill \\ \hfill 2+2\mathrm{i}\hfill \end{array}\right).$$ 
Normalize $\mathrm{\Psi}\u27e9$ and find another complex vector $\mathrm{\Phi}\u27e9$ such that the set $\{\mathrm{\Psi}\u27e9,\mathrm{\Phi}\u27e9\}$ is a basis of ${\u2102}^{2}$ (i.e. satisfies all of the properties above).
Problem 3.20.
Find an orthonormal basis of ${\u2102}^{3}$ which is not the standard basis or a scalar multiple of the standard basis. Show that it is indeed an orthonormal basis.
3.2.4 Matrices and the Adjoint
A matrix in $n$ dimensions is an $n\times n$ array of (complex) numbers. (In fact, matrices don’t have to be square; they can have a different number of rows and columns, that is, $n\times m$ with $n\ne m$, but non-square matrices are generally not of much interest in quantum mechanics.) In $n=2$ dimensions we have
$$A=\left(\begin{array}{cc}\hfill {A}_{11}\hfill & \hfill {A}_{12}\hfill \\ \hfill {A}_{21}\hfill & \hfill {A}_{22}\hfill \end{array}\right),$$  
$${A}_{11},{A}_{12},{A}_{21},{A}_{22}\in \u2102.$$ 
A matrix can act on a vector to produce another vector. If it’s a ket (a vertical/column vector), the result is another ket. If it’s a bra (a horizontal/row dual vector), the result is another bra.
If the matrix acts on a ket, then it must act from the left, and the element at row $i$ of the resulting ket is obtained by taking the inner product of row $i$ of the matrix with the ket:
$A\mathrm{\Psi}\u27e9$  $=\left(\begin{array}{cc}\hfill {A}_{11}\hfill & \hfill {A}_{12}\hfill \\ \hfill {A}_{21}\hfill & \hfill {A}_{22}\hfill \end{array}\right)\left(\begin{array}{c}\hfill {\mathrm{\Psi}}_{1}\hfill \\ \hfill {\mathrm{\Psi}}_{2}\hfill \end{array}\right)$  (3.5)  
$=\left(\begin{array}{c}\hfill {A}_{11}{\mathrm{\Psi}}_{1}+{A}_{12}{\mathrm{\Psi}}_{2}\hfill \\ \hfill {A}_{21}{\mathrm{\Psi}}_{1}+{A}_{22}{\mathrm{\Psi}}_{2}\hfill \end{array}\right).$  (3.6) 
If the matrix acts on a bra, then it must act from the right, and the element at column $i$ of the resulting bra is obtained by taking the inner product of column $i$ of the matrix with the bra:
$\u27e8\mathrm{\Psi}A$  $=\left(\begin{array}{cc}\hfill {\mathrm{\Psi}}_{1}^{*}\hfill & \hfill {\mathrm{\Psi}}_{2}^{*}\hfill \end{array}\right)\left(\begin{array}{cc}\hfill {A}_{11}\hfill & \hfill {A}_{12}\hfill \\ \hfill {A}_{21}\hfill & \hfill {A}_{22}\hfill \end{array}\right)$  
$=\left(\begin{array}{cc}\hfill {\mathrm{\Psi}}_{1}^{*}{A}_{11}+{\mathrm{\Psi}}_{2}^{*}{A}_{21}\hfill & \hfill {\mathrm{\Psi}}_{1}^{*}{A}_{12}+{\mathrm{\Psi}}_{2}^{*}{A}_{22}\hfill \end{array}\right).$ 
Note that the dual vector $\u27e8\mathrm{\Psi}A$ is not the dual of the vector $A\mathrm{\Psi}\u27e9$ , as you can see by taking the dual of ( 3.5 ). However, we can define the adjoint of a matrix by transposing rows into columns and then taking the complex conjugate of all the components:
$${A}^{\u2020}=\left(\begin{array}{cc}\hfill {A}_{11}^{*}\hfill & \hfill {A}_{21}^{*}\hfill \\ \hfill {A}_{12}^{*}\hfill & \hfill {A}_{22}^{*}\hfill \end{array}\right),$$ 
where the notation $\u2020$ for the adjoint is called dagger. Then the vector dual to $A\mathrm{\Psi}\u27e9$ is $\u27e8\mathrm{\Psi}{A}^{\u2020}$ , as you will check in Problem 3.22 . Actually, taking the adjoint of a matrix is exactly the same operation as taking the dual of a vector! The only difference is that for a matrix we have $n$ columns to transpose into rows, while for a vector we only have one. Therefore, we have
$${\mathrm{\Psi}\u27e9}^{\u2020}=\u27e8\mathrm{\Psi},{\u27e8\mathrm{\Psi}}^{\u2020}=\mathrm{\Psi}\u27e9,$$ 
and we get the following nice relation:
$${\left(A\mathrm{\Psi}\u27e9\right)}^{\u2020}=\u27e8\mathrm{\Psi}{A}^{\u2020}.$$ 
The identity matrix, which we will write simply as 1, is:
$$1=\left(\begin{array}{cc}\hfill 1\hfill & \hfill 0\hfill \\ \hfill 0\hfill & \hfill 1\hfill \end{array}\right).$$ 
Acting with it on any vector or dual vector does not change it: $1\mathrm{\Psi}\u27e9=\mathrm{\Psi}\u27e9$ .
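The relation between matrix action, the adjoint, and the dual can be verified numerically. A minimal sketch with NumPy (our illustration; the sample matrix and ket are arbitrary):

```python
import numpy as np

# The adjoint A^dagger is the conjugate transpose, and the dual of
# A|Psi> is <Psi|A^dagger.
A = np.array([[1 + 1j, 2 + 0j],
              [3j, 4 - 2j]])
psi = np.array([1j, 2 - 1j])     # a ket (column vector)

ket = A @ psi                    # A acting on a ket from the left
A_dag = A.conj().T               # the adjoint (dagger) of A
bra = psi.conj() @ A_dag         # the bra <Psi| acting on A^dagger from the left

# Taking the dual of A|Psi> (conjugating each component) gives <Psi|A^dagger.
assert np.allclose(ket.conj(), bra)
# The adjoint is an involution: (A^dagger)^dagger = A.
assert np.allclose(A_dag.conj().T, A)
# The identity matrix changes nothing: 1|Psi> = |Psi>.
assert np.allclose(np.eye(2) @ psi, psi)
```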
Problem 3.21.
To rotate (real) vectors in ${\mathbb{R}}^{2}$ by an angle $\theta $, we take their product with the (real) rotation matrix:
$$R\left(\theta \right)\equiv \left(\begin{array}{cc}\hfill \mathrm{cos}\theta \hfill & \hfill -\mathrm{sin}\theta \hfill \\ \hfill \mathrm{sin}\theta \hfill & \hfill \mathrm{cos}\theta \hfill \end{array}\right).$$ 
A. Calculate the matrix $R\left(\pi /3\right)$ .
B. Write down the vector resulting from rotating $(5,9)$ by $\pi /3$ radians, in both Cartesian and polar coordinates.
C. Repeat (B) for rotating a general 2vector $(x,y)$ by a general angle $\theta $.
D. Find the mapping between rotations of 2vectors in ${\mathbb{R}}^{2}$ and rotations of complex numbers in $\u2102$, and explain what is the analogue of the rotation matrix in terms of complex numbers.
Problem 3.22.
Show that the vector dual to $A\mathrm{\Psi}\u27e9$ is indeed $\u27e8\mathrm{\Psi}{A}^{\u2020}$ .
Exercise 3.23.
Let
$$A\equiv \left(\begin{array}{cc}\hfill 1+5\mathrm{i}\hfill & \hfill 2\hfill \\ \hfill 3-7\mathrm{i}\hfill & \hfill 4+8\mathrm{i}\hfill \end{array}\right),\u27e8\mathrm{\Psi}\equiv \left(\begin{array}{cc}\hfill \mathrm{i}-2\hfill & \hfill \mathrm{i}-3\hfill \end{array}\right).$$ 
Calculate $A\mathrm{\Psi}\u27e9$ and $\u27e8\mathrm{\Psi}{A}^{\u2020}$ separately, and then check that they are the dual of each other.
Problem 3.24.
Show that ${\left({A}^{\u2020}\right)}^{\u2020}=A$ . This means that the adjoint operation is an involution, exactly like complex conjugation and taking the dual of a vector. In fact, all three are the exact same operation. By choosing an appropriate matrix, explain how taking the complex conjugate of a number is a special case of taking the adjoint of a matrix.
Problem 3.25.
Show that the action of a matrix on a vector is linear, that is,
$$A\left(\alpha \mathrm{\Psi}\u27e9+\beta \mathrm{\Phi}\u27e9\right)=\alpha A\mathrm{\Psi}\u27e9+\beta A\mathrm{\Phi}\u27e9.$$ 
3.2.5 The Outer Product
We have seen that vectors and dual vectors may be combined to generate a complex number using the inner product. We can similarly combine a vector and a dual vector to generate a matrix, using the outer product. Given
$$\u27e8\mathrm{\Psi}\equiv \left(\begin{array}{cc}\hfill {\mathrm{\Psi}}_{1}^{*}\hfill & \hfill {\mathrm{\Psi}}_{2}^{*}\hfill \end{array}\right),\mathrm{\Phi}\u27e9\equiv \left(\begin{array}{c}\hfill {\mathrm{\Phi}}_{1}\hfill \\ \hfill {\mathrm{\Phi}}_{2}\hfill \end{array}\right),$$ 
we define the outer product as the matrix whose component at row $i$, column $j$ is given by multiplying the component at row $i$ of $\mathrm{\Phi}\u27e9$ with the component at column $j$ of $\u27e8\mathrm{\Psi}$ :
$\mathrm{\Phi}\u27e9\u27e8\mathrm{\Psi}$  $=\left(\begin{array}{c}\hfill {\mathrm{\Phi}}_{1}\hfill \\ \hfill {\mathrm{\Phi}}_{2}\hfill \end{array}\right)\left(\begin{array}{cc}\hfill {\mathrm{\Psi}}_{1}^{*}\hfill & \hfill {\mathrm{\Psi}}_{2}^{*}\hfill \end{array}\right).$  
$=\left(\begin{array}{cc}\hfill {\mathrm{\Psi}}_{1}^{*}{\mathrm{\Phi}}_{1}\hfill & \hfill {\mathrm{\Psi}}_{2}^{*}{\mathrm{\Phi}}_{1}\hfill \\ \hfill {\mathrm{\Psi}}_{1}^{*}{\mathrm{\Phi}}_{2}\hfill & \hfill {\mathrm{\Psi}}_{2}^{*}{\mathrm{\Phi}}_{2}\hfill \end{array}\right)$ 
Note how, when taking an inner product, the straight lines $|$ face each other: $\u27e8\mathrm{\Psi}\mathrm{\Phi}\u27e9$, while when taking an outer product, the angle brackets $\u27e9\u27e8$ face each other. This shows some of the elegance of the Dirac notation! A bra-ket is an inner product, while a ket-bra is an outer product.
We can assign a rank to scalars, vectors, and matrices:

•
Scalars have rank 0 since they have ${n}^{0}=1$ component,

•
Vectors have rank 1 since they have ${n}^{1}=n$ components,

•
Matrices have rank 2 since they have ${n}^{2}$ components.
Then the inner product reduces the rank of the vectors from 1 to 0, while the outer product increases the rank from 1 to 2.
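A minimal numerical sketch of the outer product (our illustration): `np.outer` builds the matrix, provided we conjugate the bra's components ourselves.

```python
import numpy as np

# The outer product |Phi><Psi| as a 2x2 matrix.
psi = np.array([1 + 0j, 2 + 1j])    # |Psi>
phi = np.array([3j, 4 - 1j])        # |Phi>

M = np.outer(phi, psi.conj())       # component (i, j) = Phi_i * Psi_j*
assert M.shape == (2, 2)            # rank 2: n^2 components

# Acting on any ket |Theta>, the matrix gives |Phi> times the number <Psi|Theta>:
theta = np.array([1 - 1j, 2j])
assert np.allclose(M @ theta, phi * np.vdot(psi, theta))
```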
Exercise 3.26.
Calculate the outer product $\mathrm{\Psi}\u27e9\u27e8\mathrm{\Phi}$ for
$$\mathrm{\Psi}\u27e9=\left(\begin{array}{c}\hfill 1\hfill \\ \hfill 2+\mathrm{i}\hfill \end{array}\right),\mathrm{\Phi}\u27e9=\left(\begin{array}{c}\hfill 3\mathrm{i}\hfill \\ \hfill 4\mathrm{i}\hfill \end{array}\right).$$ 
Remember that when writing the dual vector, the components are complex conjugated!
3.2.6 The Completeness Relation
Let us write the vector $\mathrm{\Psi}\u27e9$ as a linear combination of basis vectors:
$$\mathrm{\Psi}\u27e9=\sum _{i=1}^{n}{\lambda}_{i}{B}_{i}\u27e9.$$  (3.7) 
Taking the inner product of the above equation with $\u27e8{B}_{j}$ and using the fact that the basis vectors are orthonormal,
$$\u27e8{B}_{i}{B}_{j}\u27e9={\delta}_{ij}=\{\begin{array}{cc}0\hfill & \text{if}i\ne j,\hfill \\ 1\hfill & \text{if}i=j,\hfill \end{array}$$ 
we get:
$$\u27e8{B}_{j}\mathrm{\Psi}\u27e9=\sum _{i=1}^{n}{\lambda}_{i}\u27e8{B}_{j}{B}_{i}\u27e9=\sum _{i=1}^{n}{\lambda}_{i}{\delta}_{ij}={\lambda}_{j},$$ 
since all of the terms in the sum vanish except the one with $i=j$ . Therefore, the coefficients ${\lambda}_{i}$ in ( 3.7 ) are given, for any vector $\mathrm{\Psi}\u27e9$ and for any basis ${B}_{i}\u27e9$ , by
$${\lambda}_{i}=\u27e8{B}_{i}\mathrm{\Psi}\u27e9.$$  (3.8) 
Now, since ${\lambda}_{i}$ is a scalar, and multiplication by a scalar is commutative (unlike the inner and outer products!), we can move it to the right in ( 3.7 ):
$$\mathrm{\Psi}\u27e9=\sum _{i=1}^{n}{B}_{i}\u27e9{\lambda}_{i}.$$ 
We haven’t actually done anything here; where to write the scalar, on the left or the right of the vector, is completely arbitrary – it’s just conventional to write it on the left. Then, replacing ${\lambda}_{i}$ with $\langle {B}_{i}|\Psi\rangle$ as per (3.8), we get
$$|\Psi\rangle=\sum _{i=1}^{n}|{B}_{i}\rangle\langle {B}_{i}|\Psi\rangle.$$
To make this even more suggestive, let us add parentheses:
$$|\Psi\rangle=\left(\sum _{i=1}^{n}|{B}_{i}\rangle\langle {B}_{i}|\right)|\Psi\rangle.$$ (3.9)
Note that what we did here is go from a vector $|{B}_{i}\rangle$ times a complex number $\langle {B}_{i}|\Psi\rangle$ to a matrix $|{B}_{i}\rangle\langle {B}_{i}|$ times a vector $|\Psi\rangle$, for each $i$. The fact that these two different products are actually equal to one another (as you will prove in Problem 3.28) is not at all trivial, but it is one of the main reasons we like to use bra-ket notation! The notation now suggests (see Problem 3.29) that
$$\sum _{i=1}^{n}|{B}_{i}\rangle\langle {B}_{i}|=1,$$ (3.10)
where $|{B}_{i}\rangle\langle {B}_{i}|$ is the outer product defined above, and the 1 on the right-hand side is the identity matrix. This extremely useful result is called the completeness relation.
In ${\mathbb{C}}^{2}$, we simply have
$$|{B}_{1}\rangle\langle {B}_{1}|+|{B}_{2}\rangle\langle {B}_{2}|=1.$$ (3.11)
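As a quick numerical sanity check (an illustration, not a proof), one can verify (3.11) for a concrete orthonormal basis of ${\mathbb{C}}^{2}$; the Hadamard-type basis $(1,1)/\sqrt{2}$, $(1,-1)/\sqrt{2}$ used below is just an example:

```python
import numpy as np

# An example orthonormal basis of C^2
B1 = np.array([1, 1]) / np.sqrt(2)
B2 = np.array([1, -1]) / np.sqrt(2)

# Sum of the outer products |B_1><B_1| + |B_2><B_2|
S = np.outer(B1, np.conj(B1)) + np.outer(B2, np.conj(B2))

print(np.allclose(S, np.eye(2)))  # True: the sum is the identity matrix
```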
Exercise 3.27.
Given the basis
$$|{B}_{1}\rangle=\frac{1}{\sqrt{2}}\begin{pmatrix}1\\ 1\end{pmatrix},\qquad |{B}_{2}\rangle=\frac{1}{\sqrt{2}}\begin{pmatrix}1\\ -1\end{pmatrix},$$
first show that it is indeed an orthonormal basis, and then show that it satisfies the completeness relation given by ( 3.11 ).
Problem 3.28.
Provide a rigorous proof that
$$\sum _{i=1}^{n}|{B}_{i}\rangle\langle {B}_{i}|\Psi\rangle=\left(\sum _{i=1}^{n}|{B}_{i}\rangle\langle {B}_{i}|\right)|\Psi\rangle.$$
This means that the product is associative.
Problem 3.29.
Importantly, we didn’t “divide (3.9) by $|\Psi\rangle$” to get (3.10)! You can’t do that with matrices and vectors. Instead, (3.10) follows from the fact that any matrix $A$ which satisfies $|\Psi\rangle=A|\Psi\rangle$ for every vector $|\Psi\rangle$ must necessarily be the identity matrix. Prove this.
3.2.7 Representing Vectors in Different Bases
Let us consider a complex $n$-vector defined as follows:
$$|\Psi\rangle\equiv \begin{pmatrix}{\Psi}_{1}\\ \vdots \\ {\Psi}_{n}\end{pmatrix},\qquad {\Psi}_{i}\in \mathbb{C}.$$
Given an orthonormal basis $|{B}_{i}\rangle$, we have seen that we can write $|\Psi\rangle$ as a linear combination of the basis vectors:
$$|\Psi\rangle=\sum _{i=1}^{n}{\lambda}_{i}|{B}_{i}\rangle.$$
The coefficients ${\lambda}_{i}\in \mathbb{C}$ depend on $|\Psi\rangle$ and on the basis vectors, as we showed in (3.8):
$${\lambda}_{i}\equiv \langle {B}_{i}|\Psi\rangle\quad\Longrightarrow\quad |\Psi\rangle=\sum _{i=1}^{n}|{B}_{i}\rangle\langle {B}_{i}|\Psi\rangle.$$
With these coefficients, we can represent the vector $|\Psi\rangle$ in the basis $|{B}_{i}\rangle$. This representation will be a vector of the same dimension $n$, with the components being the coefficients ${\lambda}_{i}=\langle {B}_{i}|\Psi\rangle$, and will be denoted as follows:
$$|\Psi\rangle_{B}\equiv \begin{pmatrix}\langle {B}_{1}|\Psi\rangle\\ \vdots \\ \langle {B}_{n}|\Psi\rangle\end{pmatrix}=\begin{pmatrix}{\lambda}_{1}\\ \vdots \\ {\lambda}_{n}\end{pmatrix}.$$
We say that ${\lambda}_{i}$ are the coordinates of $|\Psi\rangle$ with respect to the basis $|{B}_{i}\rangle$.
The correct way to understand the meaning of a vector is as an abstract entity, like an arrow in space, which does not depend on any particular basis – it is just there. However, if we want to do concrete calculations with a vector, we must somehow represent it numerically. This is done by choosing a basis and writing down the coordinates of the vector in that basis.
Therefore, whenever we define a vector using its components – as we have been doing throughout this chapter – there is always a specific basis in which the vector is represented, with the components being the coordinates in this basis. If no particular basis is explicitly specified, it is implied that it is the standard basis. But no representation is better than any other; we usually choose whatever basis is most convenient to work with. In quantum mechanics, we often choose a basis defined by some physical observable, as we will see below.
Exercise 3.30.
Let a vector $|\Psi\rangle$ be represented in the standard basis as
$$|\Psi\rangle\equiv \begin{pmatrix}1-9\mathrm{i}\\ 7\mathrm{i}-2\end{pmatrix}.$$
Find its representation $|\Psi\rangle_{B}$ in terms of the orthonormal basis
$$|{B}_{1}\rangle=\frac{1}{\sqrt{2}}\begin{pmatrix}1\\ 1\end{pmatrix},\qquad |{B}_{2}\rangle=\frac{1}{\sqrt{2}}\begin{pmatrix}1\\ -1\end{pmatrix}.$$
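Computations of this kind can also be checked numerically. The NumPy sketch below uses an arbitrary example vector (deliberately not the one from Exercise 3.30, which you should do by hand) and an example orthonormal basis, extracting the coordinates ${\lambda}_{i}=\langle {B}_{i}|\Psi\rangle$ and confirming that they reconstruct the vector:

```python
import numpy as np

# Arbitrary example vector and orthonormal basis (not from Exercise 3.30)
psi = np.array([1 + 2j, 3 - 1j])
B1 = np.array([1, 1]) / np.sqrt(2)
B2 = np.array([1, -1]) / np.sqrt(2)

# np.vdot conjugates its first argument, so vdot(B, psi) computes <B|Psi>.
coords = np.array([np.vdot(B1, psi), np.vdot(B2, psi)])

# Sanity check: the coordinates reconstruct |Psi> via Psi = sum_i lambda_i B_i.
reconstructed = coords[0] * B1 + coords[1] * B2
print(np.allclose(reconstructed, psi))  # True
```

The key detail is `np.vdot`, which conjugates the first argument; a plain `np.dot` would omit the complex conjugation required for the dual vector.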
Problem 3.31.
Prove that the inner product (and thus also the norm) is independent of the choice of basis. That is, for any two vectors $|\Psi\rangle$ and $|\Phi\rangle$ and any two bases $|{B}_{i}\rangle$ and $|{C}_{i}\rangle$,
$$\langle \Psi|\Phi\rangle_{B}=\langle \Psi|\Phi\rangle_{C}.$$
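A numerical illustration of this basis independence (an illustration only, not the requested proof): build a second orthonormal basis from the columns of a unitary matrix and compare the inner product computed from standard-basis components with the one computed from the new coordinates. The specific vectors and matrix below are arbitrary choices:

```python
import numpy as np

# Arbitrary example vectors in the standard basis
psi = np.array([1 + 1j, 2 - 3j])
phi = np.array([4j, 1 + 1j])

# An orthonormal basis C: the QR decomposition of a full-rank complex
# matrix yields a matrix Q with orthonormal columns.
Q, _ = np.linalg.qr(np.array([[1 + 1j, 2], [3j, 1 - 1j]]))
C1, C2 = Q[:, 0], Q[:, 1]

# Coordinates in basis C: lambda_i = <C_i|v> (np.vdot conjugates C_i)
psi_C = np.array([np.vdot(C1, psi), np.vdot(C2, psi)])
phi_C = np.array([np.vdot(C1, phi), np.vdot(C2, phi)])

# <Psi|Phi> from standard-basis components vs. from C-basis coordinates
print(np.isclose(np.vdot(psi, phi), np.vdot(psi_C, phi_C)))  # True
```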
3.2.8 Change of Basis
Let the representation of a vector $|\Psi\rangle$ in the basis $|{B}_{i}\rangle$ be
$$|\Psi\rangle_{B}=\begin{pmatrix}\langle {B}_{1}|\Psi\rangle\\ \vdots \\ \langle {B}_{n}|\Psi\rangle\end{pmatrix}=\sum _{i=1}^{n}|{B}_{i}\rangle\langle {B}_{i}|\Psi\rangle.$$
Given a different basis $|{C}_{i}\rangle$, we have a different representation