Linear Algebra and Quantum Mechanics Crash Course Conceptual Solutions
- Quantum Mechanics (9 pts.) In your own words, define the following:
- Wave-particle duality (3pts.) Wave-particle duality means that light and small particles exhibit both particle-like and wave-like properties. This can be demonstrated experimentally: both light and matter behave as waves in certain experiments and as particles in others.
- Superposition (3pts.) Superposition means that a quantum mechanical system can exist in a combination of different states at the same time, and only settles into a single state upon measurement.
- Entanglement (3pts.) When quantum mechanical systems become entangled, they no longer behave independently; the state of each system depends on the states of the others.
- Statistics (13 pts.) Consider a group of 10 rooms in an old building. Three of the rooms have no spiders, six of the rooms have one spider, and one room has 5 spiders.
- (1 pt.) Is the number of spiders in a room a discrete or continuous variable? Explain. The number of spiders is a discrete variable, since it can only take whole-number (integer) values.
- (2 pts.) What is the probability that you randomly select a room with 1 spider? Show your work. Number of rooms with one spider: 6 = \(N_1\) Total number of rooms: 3 + 6 + 1 = 10 = N Probability of selecting a room with one spider: \(P_1 = \frac{N_1}{N} = \frac{6}{10} = 0.6\)
- (4 pts.) Show that the total probability of this data set is 1. \[P_1 = 0.6 \quad \text{(from above)}\] \[P_0 = \frac{3}{10} = 0.3\] \[P_5 = \frac{1}{10} = 0.1\] \[P_0 + P_1 + P_5 = 0.3 + 0.6 + 0.1 = 1\]
- (2 pts.) What is the average number of spiders in a room? \[\langle s \rangle = \sum_s sP(s) = 0P_0 + 1P_1 + 5P_5 = 0(0.3) + 1(0.6) + 5(0.1) = 1.1\ \text{spiders}\]
- (4 pts.) If s is the number of spiders in a room, then the number of flies in the room, f, can be described by the equation \(f(s) = 10-2s\). What is the expectation value for the number of flies in the building? \[\langle f \rangle = \sum_s f(s)P(s) = f(0)P_0 + f(1)P_1 + f(5)P_5 = (10-2(0))(0.3) + (10-2(1))(0.6) + (10-2(5))(0.1) \] \[= 10(0.3) + 8(0.6) + 0(0.1) = 7.8\ \text{flies}\]
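The statistics above are easy to double-check numerically; below is a minimal sketch in plain Python using the room counts and \(f(s) = 10-2s\) from the problem statement:

```python
# Room counts from the problem: 3 rooms with 0 spiders, 6 with 1, 1 with 5.
counts = {0: 3, 1: 6, 5: 1}
N = sum(counts.values())                         # total rooms = 10

# Probabilities P(s) = N_s / N
P = {s: n / N for s, n in counts.items()}

total = sum(P.values())                          # total probability, should be 1
avg_spiders = sum(s * P[s] for s in P)           # <s> = 1.1
avg_flies = sum((10 - 2 * s) * P[s] for s in P)  # <f> = 7.8

print(total, avg_spiders, avg_flies)
```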
- Wavefunctions (4pts.). Consider a quantum mechanical state that can be in one of three states, \(|1\rangle\), \(|2\rangle\), and \(|3\rangle\). State \(|1\rangle\) has an energy of a, state \(|2\rangle\) has an energy of b, and state \(|3\rangle\) has an energy of c. The quantum mechanical state being studied is described by the following wavefunction. \[|\psi\rangle = \frac{2i}{\sqrt{30}}|1\rangle -\frac{5}{\sqrt{30}}|2\rangle +\frac{1}{\sqrt{30}}|3\rangle\] If you measure the energy of state \(|\psi\rangle\), what is the most likely result? Explain your answer. The most likely energy to be measured is b, since the wavefunction has the highest probability of collapsing into state \(|2\rangle\): \[P(|2\rangle) = \left|-\frac{5}{\sqrt{30}}\right|^2 = \frac{25}{30}\] compared with \(P(|1\rangle) = \frac{4}{30}\) and \(P(|3\rangle) = \frac{1}{30}\).
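As a quick numerical check, the collapse probabilities can be computed from the amplitudes of \(|\psi\rangle\) with NumPy (a sketch; the array ordering corresponds to the \(\{|1\rangle, |2\rangle, |3\rangle\}\) basis):

```python
import numpy as np

# Amplitudes of |psi> in the {|1>, |2>, |3>} basis
amps = np.array([2j, -5, 1]) / np.sqrt(30)

probs = np.abs(amps) ** 2      # Born rule: P_k = |c_k|^2
print(probs)                   # [4/30, 25/30, 1/30]
print(probs.sum())             # normalized: sums to 1
print(int(np.argmax(probs)))   # index 1 -> state |2>, energy b
```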
- **Linear Algebra for Quantum Mechanics (24 pts.)**
- (6 pts.) Consider the two vectors \(|x\rangle\) and \(|y\rangle\) below. Show that \(\langle x | y \rangle ^* = \langle y | x \rangle\), where the \(*\) denotes the complex conjugate. Note that this is a general rule that we will be using many times throughout the course. \[|x\rangle = \begin{bmatrix}-1 \\ 2i \\ 1 \end{bmatrix}\] \[|y\rangle = \begin{bmatrix} 1 \\ 0 \\ i \end{bmatrix}\]
First find the bra states that correspond to the above kets \[\langle x | = \begin{bmatrix}-1 & -2i & 1 \end{bmatrix}\] \[\langle y | = \begin{bmatrix} 1 & 0 & -i \end{bmatrix}\]
Now to compute the dot products \[\langle x | y \rangle = \begin{bmatrix}-1 & -2i & 1 \end{bmatrix}\begin{bmatrix} 1 \\ 0 \\ i \end{bmatrix}\] \[ = -1(1) + -2i(0) + 1(i) = i-1\]
\[\langle y | x \rangle = \begin{bmatrix} 1 & 0 & -i \end{bmatrix}\begin{bmatrix}-1 \\ 2i \\ 1 \end{bmatrix}\] \[= 1(-1) + 0(2i) + -i(1) = -i-1\]
Note that \(\langle x | y \rangle^* = (i-1)^* = -i-1 = \langle y | x \rangle\), as required.
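This identity is simple to verify numerically; here is a sketch with NumPy, where `np.vdot` conjugates its first argument and therefore computes exactly the bra-ket inner product \(\langle x | y \rangle\):

```python
import numpy as np

x = np.array([-1, 2j, 1])   # |x>
y = np.array([1, 0, 1j])    # |y>

xy = np.vdot(x, y)          # <x|y> = i - 1
yx = np.vdot(y, x)          # <y|x> = -i - 1

print(xy, yx)
print(np.isclose(np.conj(xy), yx))  # <x|y>* == <y|x>
```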
- (6 pts.) Given below are the three Pauli matrices which we will be using many times in this course. Show that \(\sigma_x\sigma_x = \sigma_y\sigma_y = \sigma_z\sigma_z = \textbf{I}\) (the identity matrix) and that \([\sigma_x, \sigma_y] = 2i\sigma_z\). \[\sigma_x = \begin{bmatrix} 0 & 1 \\ 1 & 0 \end{bmatrix}\] \[\sigma_y = \begin{bmatrix} 0 & -i \\ i & 0 \end{bmatrix}\] \[\sigma_z = \begin{bmatrix} 1 & 0 \\ 0 & -1 \end{bmatrix}\]
The matrix multiplications below were performed using Wolfram Alpha: \[\sigma_x\sigma_x = \begin{bmatrix} 0 & 1 \\ 1 & 0 \end{bmatrix}\begin{bmatrix} 0 & 1 \\ 1 & 0 \end{bmatrix} = \begin{bmatrix}1&0\\0&1\end{bmatrix} = \textbf{I}\] \[\sigma_y\sigma_y = \begin{bmatrix} 0 & -i \\ i & 0 \end{bmatrix}\begin{bmatrix} 0 & -i \\ i & 0 \end{bmatrix} = \begin{bmatrix}1&0\\0&1\end{bmatrix} = \textbf{I}\] \[\sigma_z\sigma_z = \begin{bmatrix} 1 & 0 \\ 0 & -1 \end{bmatrix}\begin{bmatrix} 1 & 0 \\ 0 & -1 \end{bmatrix} = \begin{bmatrix}1&0\\0&1\end{bmatrix} = \textbf{I}\]
\[\sigma_x\sigma_y = \begin{bmatrix} 0 & 1 \\ 1 & 0 \end{bmatrix}\begin{bmatrix} 0 & -i \\ i & 0 \end{bmatrix} = \begin{bmatrix}i&0\\0&-i\end{bmatrix}\] \[\sigma_y\sigma_x = \begin{bmatrix} 0 & -i \\ i & 0 \end{bmatrix}\begin{bmatrix} 0 & 1 \\ 1 & 0 \end{bmatrix} = \begin{bmatrix}-i&0\\0&i\end{bmatrix}\]
\[[\sigma_x, \sigma_y] = \sigma_x\sigma_y - \sigma_y\sigma_x\] \[= \begin{bmatrix}i&0\\0&-i\end{bmatrix}-\begin{bmatrix}-i&0\\0&i\end{bmatrix} = \begin{bmatrix} 2i&0\\0&-2i\\\end{bmatrix} = 2i\begin{bmatrix} 1&0\\0&-1\\\end{bmatrix} = 2i\sigma_z\]
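The same Pauli identities can be verified with NumPy instead of Wolfram Alpha; a minimal sketch:

```python
import numpy as np

# Pauli matrices as given in the problem
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)
I = np.eye(2)

# Each Pauli matrix squares to the identity
for s in (sx, sy, sz):
    assert np.allclose(s @ s, I)

# Commutator [sx, sy] = sx sy - sy sx = 2i sz
comm = sx @ sy - sy @ sx
assert np.allclose(comm, 2j * sz)
print("all identities hold")
```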
- (6 pts.) Given N vectors of length N, these vectors are called a basis if any vector of length N can be constructed as a weighted linear sum of the original vectors (i.e., the original vectors multiplied by scalars and added or subtracted from each other). Show that \(|0\rangle = \begin{bmatrix} 1 \\ 0 \end{bmatrix}\) and \(|1\rangle = \begin{bmatrix} 0 \\ 1 \end{bmatrix}\) form a basis for all vectors of length 2.
Consider the generic vector of length 2 \(\begin{bmatrix}\alpha\\\beta\end{bmatrix}\):
\[\begin{bmatrix}\alpha\\\beta\end{bmatrix} = \begin{bmatrix}\alpha\\0\end{bmatrix} + \begin{bmatrix}0\\\beta\end{bmatrix}\] \[= \alpha\begin{bmatrix}1\\0\end{bmatrix} + \beta\begin{bmatrix}0\\1\end{bmatrix} = \alpha|0\rangle + \beta|1\rangle\]
Any vector of length 2 can therefore be written as a linear combination of \(|0\rangle\) and \(|1\rangle\) \(\longrightarrow\) \(|0\rangle\) and \(|1\rangle\) form a basis for vectors of length 2.
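The decomposition can be checked numerically for any particular length-2 vector; the test vector below is just an arbitrary illustration:

```python
import numpy as np

ket0 = np.array([1, 0], dtype=complex)  # |0>
ket1 = np.array([0, 1], dtype=complex)  # |1>

v = np.array([3 - 2j, 0.5 + 1j])        # an arbitrary length-2 vector
alpha, beta = v[0], v[1]                # coefficients read off directly

# v is reconstructed exactly as alpha|0> + beta|1>
assert np.allclose(alpha * ket0 + beta * ket1, v)
print("decomposition holds")
```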
- (6 pts.) Given \(|0\rangle\) and \(|1\rangle\) defined in the previous problem, show that \(\langle 0 | 0 \rangle = \langle 1 | 1 \rangle = 1\) and that \(\langle 0 | 1 \rangle = \langle 1 | 0 \rangle = 0\). Any basis which has these properties (\(\langle i | j \rangle = \delta_{ij}\), where \(\delta\) is the Kronecker delta) is called an orthonormal basis (it is both normalized and orthogonal).
First, define the bra states: \[\langle 0 | = \begin{bmatrix}1&0\end{bmatrix}\] \[\langle 1 | = \begin{bmatrix}0&1\end{bmatrix}\]
Next, compute the dot product of each vector with itself:
\[\langle 0 | 0 \rangle = \begin{bmatrix}1&0\end{bmatrix}\begin{bmatrix} 1 \\ 0 \end{bmatrix} = 1(1) + 0(0) = 1\] \[\langle 1 | 1 \rangle = \begin{bmatrix}0&1\end{bmatrix}\begin{bmatrix} 0 \\ 1 \end{bmatrix} = 0(0) + 1(1) = 1\]
Now compute the dot products between the two different states:
\[\langle 0 | 1 \rangle = \begin{bmatrix}1&0\end{bmatrix}\begin{bmatrix} 0 \\ 1 \end{bmatrix} = 1(0) + 0(1) = 0\] \[\langle 1 | 0 \rangle = \begin{bmatrix}0&1\end{bmatrix}\begin{bmatrix} 1 \\ 0 \end{bmatrix} = 0(1) + 1(0) = 0\]
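All four orthonormality relations can be checked in one pass with NumPy; a small sketch comparing \(\langle i | j \rangle\) against the Kronecker delta:

```python
import numpy as np

kets = [np.array([1, 0], dtype=complex),   # |0>
        np.array([0, 1], dtype=complex)]   # |1>

# <i|j> should equal the Kronecker delta: 1 if i == j, else 0
for i, bra in enumerate(kets):
    for j, ket in enumerate(kets):
        expected = 1.0 if i == j else 0.0
        assert np.isclose(np.vdot(bra, ket), expected)
print("basis is orthonormal")
```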