High Dimension Rotation Matrices

Mathematics
Author

Nick Appleton

Published

March 30, 2024

Over the last couple of months, I’ve been learning about rotation matrices in some of my downtime. This came about as I was working on a problem which required me to understand the conditions under which rotation matrices commute - and I was working with matrices of dimension higher than three. I went down a bit of a rabbit hole and feel like I learned some interesting things which I thought would be fun to share, as well as serving as a bit of a reference if I ever need it again.

Planar rotations - the building blocks of all rotations

In two or more dimensions, a plane can be defined using two orthogonal vectors \(\mathbf{u}\) and \(\mathbf{v}\) (let’s assume they are unit vectors). Given some column-vector defining a point \(\mathbf{p}\), a planar rotation matrix can be defined as follows, where \(\mathbf{R}_2\left(\theta\right)\) is a two-dimensional orthonormal matrix with a determinant of one:

\[ \begin{array}{rccc} \mathbf{R}\left(\mathbf{u}, \mathbf{v}, \theta\right) \mathbf{p} = & \left( \mathbf{I} - \mathbf{uu}^T - \mathbf{vv}^T \right)\mathbf{p} & + \left(\begin{bmatrix}\mathbf{u} & \mathbf{v}\end{bmatrix}\mathbf{R}_2\left(\theta\right) \begin{bmatrix}\mathbf{u}^T \\ \mathbf{v}^T\end{bmatrix}\right)\mathbf{p} \\ & \text{1} & \text{2} \end{array} \]

The numbered sections of the above do the following:

  1. Zeroes the coordinates of \(\mathbf{p}\) pointing in the \(\mathbf{u}\) and \(\mathbf{v}\) directions while preserving all others.
  2. Mixes the components of \(\mathbf{p}\) pointing in the \(\mathbf{u}\) and \(\mathbf{v}\) directions using \(\mathbf{R}_2\left(\theta\right)\).

Again, \(\mathbf{R}_2\left(\theta\right)\) can be any two-dimensional orthonormal matrix with determinant \(1\), but let’s adopt the typical definition:

\[ \mathbf{R}_2\left(\theta\right) = \begin{bmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{bmatrix} \]

Some of its properties are:

  • Setting \(\theta\) to zero (or any integer multiple of \(2\pi\)) gives the identity matrix. \[\mathbf{R}_2\left(0\right)=\mathbf{I}\]
  • Summing the arguments of rotations which are being multiplied gives the argument of the resulting rotation: \[\mathbf{R}_2\left(\theta\right)\mathbf{R}_2\left(\sigma\right)=\mathbf{R}_2\left(\theta+\sigma\right)\]
  • The above implies that all two-dimensional rotations commute: \[\mathbf{R}_2\left(\theta\right)\mathbf{R}_2\left(\sigma\right)=\mathbf{R}_2\left(\theta+\sigma\right)=\mathbf{R}_2\left(\sigma\right)\mathbf{R}_2\left(\theta\right)\]
  • It also implies that the inverse of a rotation matrix is the rotation matrix defined with the argument negated: \[\mathbf{R}_2\left(\theta\right)\mathbf{R}_2\left(-\theta\right)=\mathbf{I}\]
  • Because the matrix is orthonormal, the transpose is the inverse: \[\mathbf{R}_2^{-1}\left(\theta\right)=\mathbf{R}_2^T\left(\theta\right)=\mathbf{R}_2\left(-\theta\right)\]

\(\mathbf{R}\left(\mathbf{u}, \mathbf{v}, \theta\right)\) also has a bucketload of interesting properties when \(\mathbf{R}_2\left(\theta\right)\) is defined as above. All of the following are readily derived from the definition:

  • If \(\mathbf{x}\) is orthogonal to \(\mathbf{u}\) and \(\mathbf{v}\) then for all values of \(\theta\): \[\mathbf{R}\left(\mathbf{u}, \mathbf{v}, \theta\right)\mathbf{x}=\mathbf{x}\]
  • If \(\theta=0\) then: \[\mathbf{R}\left(\mathbf{u}, \mathbf{v}, 0\right)=\mathbf{I}\]
  • Swapping \(\mathbf{u}\) and \(\mathbf{v}\) is the same as negating \(\theta\): \[\mathbf{R}\left(\mathbf{v}, \mathbf{u}, \theta\right) = \mathbf{R}\left(\mathbf{u}, \mathbf{v}, -\theta\right)\]
  • Negating one of \(\mathbf{u}\) or \(\mathbf{v}\) is the same as negating \(\theta\): \[\mathbf{R}\left(\mathbf{u}, -\mathbf{v}, \theta\right) = \mathbf{R}\left(\mathbf{u}, \mathbf{v}, -\theta\right)\]
  • From the previous, negating both \(\mathbf{u}\) and \(\mathbf{v}\) changes the sign twice (i.e. does nothing): \[\mathbf{R}\left(-\mathbf{u}, -\mathbf{v}, \theta\right) = \mathbf{R}\left(\mathbf{u}, \mathbf{v}, \theta\right)\]
  • If \(\mathbf{u}\), \(\mathbf{v}\), \(\mathbf{w}\) and \(\mathbf{x}\) are all orthogonal unit-vectors then: \[\mathbf{R}\left(\mathbf{u}, \mathbf{v}, \theta\right) \mathbf{R}\left(\mathbf{w}, \mathbf{x}, \Theta\right) = \mathbf{R}\left(\mathbf{w}, \mathbf{x}, \Theta\right) \mathbf{R}\left(\mathbf{u}, \mathbf{v}, \theta\right)\] i.e. when planar rotations operate in orthogonal planes, they commute!
  • Two planar rotations in the same plane also commute: \[\mathbf{R}\left(\mathbf{u}, \mathbf{v}, \theta\right) \mathbf{R}\left(\mathbf{u}, \mathbf{v}, \Theta\right) = \mathbf{R}\left(\mathbf{u}, \mathbf{v}, \Theta\right) \mathbf{R}\left(\mathbf{u}, \mathbf{v}, \theta\right)\]
  • Two planar rotations in the same plane can be represented by one planar rotation in that plane: \[\mathbf{R}\left(\mathbf{u}, \mathbf{v}, \theta\right) \mathbf{R}\left(\mathbf{u}, \mathbf{v}, \Theta\right) = \mathbf{R}\left(\mathbf{u}, \mathbf{v}, \theta+\Theta\right)\]
  • From the previous property, two planar rotations in the same plane with negated angles cancel: \[\mathbf{R}\left(\mathbf{u}, \mathbf{v}, \theta\right) \mathbf{R}\left(\mathbf{u}, \mathbf{v}, -\theta\right) = \mathbf{I}\]
  • If \(\mathbf{x}\) is any vector, it can be decomposed into \(\mathbf{k}+\alpha\mathbf{u}+\beta\mathbf{v}\) where \(\mathbf{k}\) is orthogonal to both \(\mathbf{u}\) and \(\mathbf{v}\). The following table gives some interesting equalities for various \(\theta\): \[ \begin{array}{rl} \mathbf{R}\left(\mathbf{u}, \mathbf{v}, 0\right)\left(\mathbf{k}+\alpha\mathbf{u}+\beta\mathbf{v}\right) &= \mathbf{k}+\alpha\mathbf{u}+\beta\mathbf{v} \\ \mathbf{R}\left(\mathbf{u}, \mathbf{v}, \frac{\pi}{2}\right)\left(\mathbf{k}+\alpha\mathbf{u}+\beta\mathbf{v}\right) &= \mathbf{k}-\beta\mathbf{u}+\alpha\mathbf{v} \\ \mathbf{R}\left(\mathbf{u}, \mathbf{v}, \pi\right)\left(\mathbf{k}+\alpha\mathbf{u}+\beta\mathbf{v}\right) &= \mathbf{k}-\alpha\mathbf{u}-\beta\mathbf{v} \\ \mathbf{R}\left(\mathbf{u}, \mathbf{v}, -\frac{\pi}{2}\right)\left(\mathbf{k}+\alpha\mathbf{u}+\beta\mathbf{v}\right) &= \mathbf{k}+\beta\mathbf{u}-\alpha\mathbf{v} \end{array} \]

All rotation matrices (not only planar rotations) of any dimension can be created by multiplying together a series of planar rotations operating in mutually orthogonal planes. This implies that for a rotation matrix of dimension \(N\), at most \(\left\lfloor \frac{N}{2} \right\rfloor\) planar rotations are needed to form any possible rotation matrix. This also means that every two- or three-dimensional rotation can be defined using a single planar rotation.
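As a concrete sketch (the names rot2 and planar_rotation are my own and this is illustrative numpy rather than optimised code), the definition above and a couple of the listed properties can be checked numerically:

```python
import numpy as np

def rot2(theta):
    """The usual 2x2 rotation matrix R_2(theta)."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s], [s, c]])

def planar_rotation(u, v, theta):
    """Planar rotation R(u, v, theta) in the plane spanned by the
    orthonormal vectors u and v (any dimension >= 2)."""
    u = np.asarray(u, dtype=float).reshape(-1, 1)
    v = np.asarray(v, dtype=float).reshape(-1, 1)
    n = u.shape[0]
    P = np.hstack([u, v])                              # n x 2 basis of the plane
    return np.eye(n) - u @ u.T - v @ v.T + P @ rot2(theta) @ P.T

# Quick checks in five dimensions.
rng = np.random.default_rng(0)
q, _ = np.linalg.qr(rng.standard_normal((5, 5)))       # random orthonormal basis
u, v, w, x = q[:, 0], q[:, 1], q[:, 2], q[:, 3]

A = planar_rotation(u, v, 0.7)
B = planar_rotation(w, x, -1.3)
print(np.allclose(A @ A.T, np.eye(5)))                 # orthonormal
print(np.allclose(A @ B, B @ A))                       # orthogonal planes commute
print(np.allclose(planar_rotation(u, v, 0.2) @ planar_rotation(u, v, 0.5),
                  planar_rotation(u, v, 0.7)))         # angles add in the same plane
```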

In two dimensions

In two dimensions, there is only one plane. So we should expect that every set of orthogonal two-dimensional unit-vectors \(\mathbf{u}\) and \(\mathbf{v}\) will produce either \(\mathbf{R}_2\left(\theta\right)\) or \(\mathbf{R}_2\left(-\theta\right)\). It can be seen by substitution that:

\[\mathbf{R}\left(\begin{bmatrix}1&0\end{bmatrix}^T, \begin{bmatrix}0&1\end{bmatrix}^T, \theta\right)=\mathbf{R}_2\left(\theta\right)\]

But it can also be shown for any two-dimensional unit-vector \(\mathbf{u}\) that:

\[\mathbf{R}\left(\mathbf{u}, \mathbf{R}_2\left(\frac{\pi}{2}\right)\mathbf{u}, \theta\right)=\mathbf{R}_2\left(\theta\right)\]

This can be proved from the definition with a little bit more work. Grouping the \(- \mathbf{uu}^T - \mathbf{vv}^T\) terms and making the substitution \(\mathbf{v}=\mathbf{R}_2\left(\frac{\pi}{2}\right)\mathbf{u}\):

\[ \begin{array}{rl} \mathbf{R}\left(\mathbf{u}, \mathbf{R}_2\left(\frac{\pi}{2}\right)\mathbf{u}, \theta\right) &= \mathbf{I} + \begin{bmatrix}\mathbf{u} & \mathbf{R}_2\left(\frac{\pi}{2}\right)\mathbf{u}\end{bmatrix}\left(\mathbf{R}_2\left(\theta\right) - \mathbf{I} \right) \begin{bmatrix}\mathbf{u} & \mathbf{R}_2\left(\frac{\pi}{2}\right)\mathbf{u}\end{bmatrix}^T \\ &= \mathbf{I} - \begin{bmatrix}\mathbf{u} & \mathbf{R}_2\left(\frac{\pi}{2}\right)\mathbf{u}\end{bmatrix} \begin{bmatrix}\mathbf{u} & \mathbf{R}_2\left(\frac{\pi}{2}\right)\mathbf{u}\end{bmatrix}^T + \begin{bmatrix}\mathbf{u} & \mathbf{R}_2\left(\frac{\pi}{2}\right)\mathbf{u}\end{bmatrix} \mathbf{R}_2\left(\theta\right) \begin{bmatrix}\mathbf{u} & \mathbf{R}_2\left(\frac{\pi}{2}\right)\mathbf{u}\end{bmatrix}^T \\ &= \begin{bmatrix}\mathbf{u} & \mathbf{R}_2\left(\frac{\pi}{2}\right)\mathbf{u}\end{bmatrix} \mathbf{R}_2\left(\theta\right) \begin{bmatrix}\mathbf{u} & \mathbf{R}_2\left(\frac{\pi}{2}\right)\mathbf{u}\end{bmatrix}^T \\ &= \mathbf{R}_2\left(\theta\right) \end{array} \]

Each step in the above is valid because the \(\begin{bmatrix}\mathbf{u} & \mathbf{R}_2\left(\frac{\pi}{2}\right)\mathbf{u}\end{bmatrix}\) term is itself a two-dimensional rotation matrix, i.e. it is equal to \(\mathbf{R}_2\left(\sigma\right)\) for some \(\sigma\).

What if we were instead to compute \(\mathbf{R}\left(\mathbf{u}, \mathbf{R}_2\left(-\frac{\pi}{2}\right)\mathbf{u}, \theta\right)\)? Given that \(\mathbf{R}_2\left(-\frac{\pi}{2}\right)=-\mathbf{R}_2\left(\frac{\pi}{2}\right)\), we can use the “one parameter negated rule” and deduce that: \[ \mathbf{R}\left(\mathbf{u}, \mathbf{R}_2\left(-\frac{\pi}{2}\right)\mathbf{u}, \theta\right) = \mathbf{R}_2\left(-\theta\right) \]
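A quick numerical confirmation of both identities (this reuses the rot2 and planar_rotation helpers from the sketch earlier):

```python
import numpy as np

u = np.array([0.6, 0.8])                           # an arbitrary 2D unit vector
theta = 1.234
print(np.allclose(planar_rotation(u, rot2(np.pi / 2) @ u, theta), rot2(theta)))    # True
print(np.allclose(planar_rotation(u, rot2(-np.pi / 2) @ u, theta), rot2(-theta)))  # True
```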

In three dimensions

Three dimensions is an interesting special case because all rotations can still be defined using a single planar rotation. As it is such an incredibly common space to want to rotate vectors in (computer graphics), it’s going to get some attention now.

Typically, we do not want to define the plane using \(\mathbf{u}\) and \(\mathbf{v}\) as described earlier. Instead, we want to rotate about an axis \(\mathbf{a}\) which is perpendicular to the \(\mathbf{uv}\) plane. It’s worth noting that defining the rotation in this way only makes sense in three dimensions: in two dimensions, there is no unit-vector orthogonal to the plane, and in four dimensions or higher, a single vector does not provide enough information to define a unique plane.

We’re now going to switch over to talking about implementations for a bit. Rotations about a vector \(\mathbf{a}\) perpendicular to the plane of rotation in three dimensions can be dealt with using the Rodrigues rotation formula:

\[ \hat{\mathbf{v}} = \mathbf{v}\cos\theta + \left(\mathbf{a}\times\mathbf{v}\right)\sin\theta + \mathbf{a}\left(\mathbf{a}\cdot\mathbf{v}\right)\left(1-\cos\theta\right) \]

We’ve stated that all rotations in three dimensions can be represented using a single planar rotation. So let’s prove that it can be written as a planar rotation \(\mathbf{R}\left(\mathbf{u},\mathbf{v},\theta\right)\). First, let’s restate the formula using matrix-like operations as that is what has been done everywhere else in this blog:

\[ \begin{array}{rl}\hat{\mathbf{v}} &= \mathbf{v}\cos\theta + \left[\mathbf{a}\right]_\times \mathbf{v}\sin\theta + \mathbf{a}\mathbf{a}^T\mathbf{v}\left(1-\cos\theta\right) \\ &= \left(\mathbf{I}\cos\theta + \left[\mathbf{a}\right]_\times \sin\theta + \mathbf{a}\mathbf{a}^T\left(1-\cos\theta\right) \right) \mathbf{v} \end{array} \]

\(\left[\mathbf{a}\right]_\times\) is the cross-product matrix. But how do we find \(\mathbf{u}\) and \(\mathbf{v}\) from \(\mathbf{a}\)? The short answer is: we don’t. Let’s start by expanding out \(\mathbf{R}\left(\mathbf{u},\mathbf{v},\theta\right)\):

\[ \begin{array}{rl} \mathbf{R}\left(\mathbf{u}, \mathbf{v}, \theta\right) &= \mathbf{I} - \mathbf{uu}^T - \mathbf{vv}^T + \begin{bmatrix}\mathbf{u} & \mathbf{v}\end{bmatrix}\mathbf{R}_2\left(\theta\right) \begin{bmatrix}\mathbf{u}^T \\ \mathbf{v}^T\end{bmatrix} \\ &= \mathbf{I} - \mathbf{uu}^T - \mathbf{vv}^T + \mathbf{u} \left( \mathbf{u}^T \cos\theta - \mathbf{v}^T \sin\theta \right) + \mathbf{v} \left( \mathbf{u}^T \sin\theta + \mathbf{v}^T \cos\theta \right) \\ &= \mathbf{I} - \mathbf{uu}^T - \mathbf{vv}^T + \left( \mathbf{u}\mathbf{u}^T + \mathbf{v}\mathbf{v}^T \right) \cos\theta + \left( \mathbf{v} \mathbf{u}^T - \mathbf{u} \mathbf{v}^T \right) \sin\theta \\ &= \mathbf{I} + \left( \mathbf{u}\mathbf{u}^T + \mathbf{v}\mathbf{v}^T \right) \left(\cos\theta - 1\right) + \left( \mathbf{v} \mathbf{u}^T - \mathbf{u} \mathbf{v}^T \right) \sin\theta \end{array} \]

First we note that if three orthonormal vectors \(\mathbf{u}\), \(\mathbf{v}\) and \(\mathbf{a}\) are known in a three-dimensional space, we can say:

\[ \begin{array}{rl} \mathbf{I} &= \mathbf{u}\mathbf{u}^T + \mathbf{v}\mathbf{v}^T + \mathbf{a}\mathbf{a}^T \\ \mathbf{I} - \mathbf{a}\mathbf{a}^T &= \mathbf{u}\mathbf{u}^T + \mathbf{v}\mathbf{v}^T \end{array} \]

This defines the \(\mathbf{u}\mathbf{u}^T + \mathbf{v}\mathbf{v}^T\) term. What about the \(\mathbf{v}\mathbf{u}^T - \mathbf{u}\mathbf{v}^T\) term? It turns out that this is exactly the cross-product matrix of \(\mathbf{a}=\mathbf{u}\times\mathbf{v}\):

\[ \left[\mathbf{a}\right]_\times = \mathbf{v}\mathbf{u}^T - \mathbf{u}\mathbf{v}^T \]

\(\left[\mathbf{a}\right]_\times \mathbf{x}\) has the following properties:

  • The component of \(\mathbf{x}\) pointing in the \(\mathbf{u}\) direction will point in the \(\mathbf{v}\) direction.
  • The component of \(\mathbf{x}\) pointing in the \(\mathbf{v}\) direction will point in the \(-\mathbf{u}\) direction.
  • The component of \(\mathbf{x}\) pointing in the \(\mathbf{a}\) direction will become zero.

Using these properties, it can also be shown that:

\[\left[\mathbf{a}\right]_\times^4 \left(\alpha\mathbf{u}+\beta\mathbf{v}\right) = \left(\alpha\mathbf{u}+\beta\mathbf{v}\right)\]

If we substitute these terms back into the rotation formula, we get our matrix-ified Rodrigues equation in terms of the rotation axis \(\mathbf{a}\): \[ \begin{array}{rl} \mathbf{R}_3\left(\mathbf{a}, \theta\right) &= \mathbf{I} + \left( \mathbf{I} - \mathbf{a}\mathbf{a}^T \right) \left(\cos\theta - 1\right) + \left[\mathbf{a}\right]_\times \sin\theta \\ &= \mathbf{I} \cos\theta + \mathbf{a}\mathbf{a}^T \left(1 - \cos\theta \right) + \left[\mathbf{a}\right]_\times \sin\theta \end{array} \]
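Here is how that axis-angle form might look in numpy (continuing the earlier sketch; cross_matrix and axis_rotation are my own names), along with a check that it agrees with the planar form when \(\mathbf{a}=\mathbf{u}\times\mathbf{v}\):

```python
import numpy as np

def cross_matrix(a):
    """Cross-product matrix [a]_x, so cross_matrix(a) @ x == np.cross(a, x)."""
    ax, ay, az = a
    return np.array([[0.0, -az,  ay],
                     [ az, 0.0, -ax],
                     [-ay,  ax, 0.0]])

def axis_rotation(a, theta):
    """Rodrigues form R_3(a, theta) for a unit-length axis a."""
    a = np.asarray(a, dtype=float)
    return (np.eye(3) * np.cos(theta)
            + np.outer(a, a) * (1.0 - np.cos(theta))
            + cross_matrix(a) * np.sin(theta))

# Agreement with the planar form when a = u x v.
q, _ = np.linalg.qr(np.random.default_rng(1).standard_normal((3, 3)))
u, v = q[:, 0], q[:, 1]
a = np.cross(u, v)
theta = 0.9
print(np.allclose(axis_rotation(a, theta), planar_rotation(u, v, theta)))  # True
```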

The three elemental rotations can be derived as:

\[ \begin{array}{rl} \mathbf{R}_x\left(\begin{bmatrix} 0&1&0 \end{bmatrix}^T, \begin{bmatrix} 0&0&1 \end{bmatrix}^T, \theta\right) &= \begin{bmatrix} 1&0&0 \\ 0&\cos\theta&-\sin\theta \\ 0&\sin\theta&\cos\theta \end{bmatrix} \\ \mathbf{R}_y\left(\begin{bmatrix} 0&0&1 \end{bmatrix}^T, \begin{bmatrix} 1&0&0 \end{bmatrix}^T, \theta\right) &= \begin{bmatrix} \cos\theta&0&\sin\theta \\ 0&1&0 \\ -\sin\theta&0&\cos\theta \end{bmatrix} \\ \mathbf{R}_z\left(\begin{bmatrix} 1&0&0 \end{bmatrix}^T, \begin{bmatrix} 0&1&0 \end{bmatrix}^T, \theta\right) &= \begin{bmatrix} \cos\theta&-\sin\theta&0 \\ \sin\theta&\cos\theta&0 \\ 0&0&1 \end{bmatrix} \end{array} \]
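These also fall straight out of the numpy sketch (using the planar_rotation and axis_rotation helpers from above):

```python
import numpy as np

e = np.eye(3)
theta = 0.4
print(np.allclose(planar_rotation(e[:, 1], e[:, 2], theta), axis_rotation(e[:, 0], theta)))  # R_x
print(np.allclose(planar_rotation(e[:, 2], e[:, 0], theta), axis_rotation(e[:, 1], theta)))  # R_y
print(np.allclose(planar_rotation(e[:, 0], e[:, 1], theta), axis_rotation(e[:, 2], theta)))  # R_z
```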

Again, every rotation matrix in three dimensions can be described using a single \(\mathbf{R}\left(\mathbf{u}, \mathbf{v}, \theta\right)\) matrix. This implies that, in general, the only three-dimensional rotation matrices which commute are the ones which operate in the same plane i.e. share the axis of rotation or negated axis of rotation - the odd exception being half-turn rotations (\(\theta=\pi\)) about perpendicular axes, which also commute.

In higher dimensions

Once we hit four dimensions, we can have two completely independent planes and define commuting planar rotations which operate in them independently. Again, the number of independent planar rotations which we can have for a given dimension \(N\) is up to \(\left\lfloor \frac{N}{2} \right\rfloor\).
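For example (a small sketch reusing planar_rotation), in four dimensions the two coordinate planes spanned by the first/second and third/fourth basis vectors give commuting rotations, and their product - a "double rotation" - leaves no non-zero vector unchanged when both angles are non-zero:

```python
import numpy as np

e = np.eye(4)
A = planar_rotation(e[:, 0], e[:, 1], 0.5)
B = planar_rotation(e[:, 2], e[:, 3], 1.1)
R = A @ B
print(np.allclose(A @ B, B @ A))                                            # True: independent planes commute
print(np.allclose(R @ R.T, np.eye(4)), np.isclose(np.linalg.det(R), 1.0))   # still a rotation
print(np.linalg.matrix_rank(R - np.eye(4)))                                 # 4: no non-zero vector is left fixed
```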

Rotating one vector to another

Sometimes we want to find an orthogonal matrix which will map one vector to another (which must have the same Euclidean norm). This is useful in algorithms like the QR decomposition where we are trying to triangularise a matrix and want to force vectors to be all zeros apart from a single component. This can be done effectively using a reflection, but we can also do it with a rotation. Please don’t do this - it is purely academic and will have a worse implementation than using a reflector.

Given two known vectors (which I will assume are unit vectors) \(\mathbf{x}\) and \(\mathbf{y}\), how can we find a rotation that satisfies the following?

\[ \mathbf{y} = \mathbf{R}\left(\mathbf{u}, \mathbf{v}, \theta\right)\mathbf{x} \]

This turns out to not be that complex:

\[ \begin{array}{rl} \mathbf{y} &= \mathbf{x} - \mathbf{uu}^T\mathbf{x} - \mathbf{vv}^T\mathbf{x} + \begin{bmatrix}\mathbf{u} & \mathbf{v}\end{bmatrix}\mathbf{R}_2\left(\theta\right) \begin{bmatrix}\mathbf{u}^T \\ \mathbf{v}^T\end{bmatrix}\mathbf{x} \\ \mathbf{y} - \mathbf{x} &= \begin{bmatrix}\mathbf{u} & \mathbf{v}\end{bmatrix}\left(\mathbf{R}_2\left(\theta\right)-\mathbf{I}\right) \begin{bmatrix}\mathbf{u}^T \\ \mathbf{v}^T\end{bmatrix}\mathbf{x} \\ \begin{bmatrix}\mathbf{u}^T \\ \mathbf{v}^T\end{bmatrix} \left(\mathbf{y} - \mathbf{x}\right) &= \left(\mathbf{R}_2\left(\theta\right)-\mathbf{I}\right) \begin{bmatrix}\mathbf{u}^T \\ \mathbf{v}^T\end{bmatrix}\mathbf{x} \\ \begin{bmatrix}\mathbf{u}^T \\ \mathbf{v}^T\end{bmatrix} \mathbf{y} &= \mathbf{R}_2\left(\theta\right) \begin{bmatrix}\mathbf{u}^T \\ \mathbf{v}^T\end{bmatrix}\mathbf{x} \end{array} \]

Let \(\mathbf{u}=\mathbf{x}\). Then \(\mathbf{v}\) needs to be a unit vector orthogonal to \(\mathbf{x}\) that can be linearly combined with \(\mathbf{x}\) to form \(\mathbf{y}\) i.e.

\[ \mathbf{v} = \frac{\mathbf{y} - \mathbf{x}\mathbf{x}^T\mathbf{y}}{\left|\mathbf{y} - \mathbf{x}\mathbf{x}^T\mathbf{y}\right|} \]

Then:

\[ \begin{bmatrix} \mathbf{u}^T\mathbf{y} \\ \mathbf{v}^T\mathbf{y} \end{bmatrix} = \mathbf{R}_2\left(\theta\right) \begin{bmatrix} 1 \\ 0 \end{bmatrix} \]

The \(\mathbf{R}_2\left(\theta\right)\) matrix can be found using the following:

\[ \begin{bmatrix} \mathbf{u}^T\mathbf{y} \\ \mathbf{v}^T\mathbf{y} \end{bmatrix} = \begin{bmatrix} \cos\theta \\ \sin\theta \end{bmatrix} \]

If \(\theta\) is required, it can be found as:

\[\theta = \text{atan2}\left(\mathbf{v}^T\mathbf{y}, \mathbf{u}^T\mathbf{y} \right) \]

This works as long as \(\mathbf{y}\neq\pm\mathbf{x}\) (when \(\mathbf{y}=-\mathbf{x}\), the formula for \(\mathbf{v}\) breaks down).
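Purely for illustration (and again, you should use a reflector instead), a sketch of this construction using the earlier planar_rotation helper might look like:

```python
import numpy as np

def rotation_taking_x_to_y(x, y):
    """Planar rotation R with R @ x == y, for unit vectors x != +/-y."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    u = x
    v = y - x * (x @ y)                      # component of y orthogonal to x
    v = v / np.linalg.norm(v)
    theta = np.arctan2(v @ y, u @ y)
    return planar_rotation(u, v, theta)

x = np.array([1.0, 0.0, 0.0, 0.0])
y = np.array([0.0, 0.6, 0.0, 0.8])
print(np.allclose(rotation_taking_x_to_y(x, y) @ x, y))  # True
```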

Again, don’t do it. :-)

Eigenvalues and eigenvectors

Finding the eigenvalues and eigenvectors amounts to finding values for \(\lambda\) and \(\mathbf{w}\) which satisfy:

\[ \mathbf{R}\left(\mathbf{u}, \mathbf{v}, \theta\right) \mathbf{w} = \lambda \mathbf{w} \]

We know that all vectors which are orthogonal to \(\mathbf{u}\) and \(\mathbf{v}\) are not modified by the rotation. This implies that in an \(N\) dimensional space, \(N-2\) of the eigenvalues must be equal to \(1\). The other two eigenvalues must satisfy:

\[ \begin{array}{rl} \left( \mathbf{I} - \mathbf{uu}^T - \mathbf{vv}^T + \begin{bmatrix}\mathbf{u} & \mathbf{v}\end{bmatrix}\mathbf{R}_2\left(\theta\right) \begin{bmatrix}\mathbf{u}^T \\ \mathbf{v}^T\end{bmatrix}\right)\left(\mathbf{u}\alpha + \mathbf{v}\beta\right) &= \lambda \left(\mathbf{u}\alpha + \mathbf{v}\beta\right) \\ \begin{bmatrix}\mathbf{u} & \mathbf{v}\end{bmatrix}\mathbf{R}_2\left(\theta\right) \begin{bmatrix}\mathbf{u}^T \\ \mathbf{v}^T\end{bmatrix}\left(\mathbf{u}\alpha + \mathbf{v}\beta\right) &= \lambda \left(\mathbf{u}\alpha + \mathbf{v}\beta\right) \\ \begin{bmatrix}\mathbf{u} & \mathbf{v}\end{bmatrix}\mathbf{R}_2\left(\theta\right) \begin{bmatrix} \alpha \\ \beta \end{bmatrix} &= \lambda \left(\mathbf{u}\alpha + \mathbf{v}\beta\right) \\ \begin{bmatrix}\mathbf{u}^T \\ \mathbf{v}^T\end{bmatrix}\begin{bmatrix}\mathbf{u} & \mathbf{v}\end{bmatrix}\mathbf{R}_2\left(\theta\right) \begin{bmatrix} \alpha \\ \beta \end{bmatrix} &= \begin{bmatrix}\mathbf{u}^T \\ \mathbf{v}^T\end{bmatrix} \lambda \left(\mathbf{u}\alpha + \mathbf{v}\beta\right) \\ \mathbf{R}_2\left(\theta\right) \begin{bmatrix} \alpha \\ \beta \end{bmatrix} &= \lambda \begin{bmatrix} \alpha \\ \beta \end{bmatrix} \end{array} \]

This shows the other two eigenvalues are exactly the eigenvalues of the \(\mathbf{R}_2\left(\theta\right)\) matrix. These are known to be a complex-conjugate pair (\(e^{\pm i\theta}\)). Once we know the two-dimensional eigenvalues and their corresponding eigenvectors (which are of the form \(\begin{bmatrix} \alpha & \beta \end{bmatrix}^T\) and will be complex valued), we can compute the eigenvectors of the planar rotation as \(\mathbf{u}\alpha + \mathbf{v}\beta\). The remaining eigenvectors are irrelevant as long as they are orthogonal to the two that have been computed (their corresponding eigenvalues are all \(1\)).

As we add more planes, as somewhat expected, we introduce more eigenvalue pairs and force more of the eigenvectors to take on particular values.

We can use an eigenvalue decomposition to deconstruct a rotation matrix into its rotation angles and planes!
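As a sketch of that idea (my own code, not a robust implementation - it assumes the input is a single planar rotation with \(\theta\) away from \(0\) and \(\pi\)), a plane and angle can be recovered from numpy's eigendecomposition:

```python
import numpy as np

def plane_and_angle(R):
    """Recover (u, v, theta) from a single planar rotation matrix R.
    Assumes exactly one complex-conjugate eigenvalue pair (theta not 0 or pi)."""
    vals, vecs = np.linalg.eig(R)
    i = np.argmax(vals.imag)                 # pick the e^{+i*theta} eigenvalue
    theta = np.angle(vals[i])
    w = vecs[:, i]                           # of the form u*alpha + v*beta (complex)
    u = w.real / np.linalg.norm(w.real)      # real and imaginary parts span the plane
    v = -w.imag / np.linalg.norm(w.imag)
    return u, v, theta

R = planar_rotation(np.eye(5)[:, 0], np.eye(5)[:, 1], 0.8)
u, v, theta = plane_and_angle(R)
print(np.round(theta, 6))                             # 0.8
print(np.allclose(planar_rotation(u, v, theta), R))   # True
```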

Finally

\(\mathbf{R}\left(\mathbf{u}, \mathbf{v}, \theta\right)\) can also be defined using the matrix exponential; the equivalence can be shown by expanding out the Taylor series of the exponential:

\[ \mathbf{R}\left(\mathbf{u}, \mathbf{v}, \theta\right) = e^{\theta\left(\mathbf{v}\mathbf{u}^T - \mathbf{u}\mathbf{v}^T\right)} \]

This is a particularly beautiful result, but I’m not going to go into details in this post. It would just seem wrong not to mention it.
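If you want to see it numerically (this uses scipy.linalg.expm and the planar_rotation helper from earlier):

```python
import numpy as np
from scipy.linalg import expm

q, _ = np.linalg.qr(np.random.default_rng(2).standard_normal((4, 4)))
u, v = q[:, 0], q[:, 1]
theta = 0.6
K = np.outer(v, u) - np.outer(u, v)            # v u^T - u v^T
print(np.allclose(expm(theta * K), planar_rotation(u, v, theta)))  # True
```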