Over the last couple of months, I’ve been learning about rotation matrices in some of my downtime. This came about as I was working on a problem which required me to understand when rotation matrices commute - and I was working with matrices of dimension higher than three. I went down a bit of a rabbit hole and learned some interesting things which I thought would be fun to share, and which might serve as a reference if I ever need them again.
Planar rotations - the building blocks of all rotations
In any dimension of two or higher, a plane can be defined using two orthogonal unit-vectors $u$ and $v$. A rotation by an angle $\theta$ within the plane spanned by $u$ and $v$ - leaving everything orthogonal to that plane untouched - applied to a vector $x$ can be written as:

$$R_{u,v}(\theta)\,x = \underbrace{\left(I - uu^T - vv^T\right)}_{1}x + \underbrace{\left(\cos\theta\,(uu^T + vv^T) + \sin\theta\,(vu^T - uv^T)\right)}_{2}x$$
The numbered sections of the above do the following:
- Zeroes the coordinates of $x$ pointing in the $u$ and $v$ directions while preserving all others.
- Mixes the components of $x$ pointing in the $u$ and $v$ directions using $\theta$.
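To make the definition concrete, here is a minimal NumPy sketch of the construction (the `planar_rotation` name and the choice of NumPy are mine, purely for illustration):

```python
import numpy as np

def planar_rotation(u, v, theta):
    """Rotation by theta in the plane spanned by the orthogonal unit vectors u and v.

    Implements R = I + (cos(theta) - 1) (u u^T + v v^T) + sin(theta) (v u^T - u v^T),
    which is the expression above with the identity terms gathered together.
    """
    u = np.asarray(u, dtype=float)
    v = np.asarray(v, dtype=float)
    uu, vv = np.outer(u, u), np.outer(v, v)
    return (np.eye(len(u)) + (np.cos(theta) - 1.0) * (uu + vv)
            + np.sin(theta) * (np.outer(v, u) - np.outer(u, v)))
```

Nothing in the construction cares about the dimension - the same function builds planar rotations in two, three or any higher number of dimensions, provided $u$ and $v$ are orthogonal unit-vectors.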
Again, I’ll refer to this planar rotation as $R_{u,v}(\theta)$. Some of its properties are:
- Setting $\theta$ to zero (or any integer multiple of $2\pi$) gives the identity matrix.
- Summing the arguments of rotations which are being multiplied gives the argument of the resulting rotation: $R_{u,v}(\theta_1)\,R_{u,v}(\theta_2) = R_{u,v}(\theta_1 + \theta_2)$
- The above implies that all two-dimensional rotations commute: $R_{u,v}(\theta_1)\,R_{u,v}(\theta_2) = R_{u,v}(\theta_2)\,R_{u,v}(\theta_1)$
- It also implies that the inverse of a rotation matrix is the rotation matrix defined with the argument negated: $R_{u,v}(\theta)^{-1} = R_{u,v}(-\theta)$
- Because the matrix is orthonormal, the transpose is the inverse: $R_{u,v}(\theta)^T = R_{u,v}(\theta)^{-1} = R_{u,v}(-\theta)$
- If $x$ is orthogonal to $u$ and $v$ then for all values of $\theta$: $R_{u,v}(\theta)\,x = x$
- If $x = \alpha u + \beta v$ then: $R_{u,v}(\theta)\,x = (\alpha\cos\theta - \beta\sin\theta)\,u + (\alpha\sin\theta + \beta\cos\theta)\,v$
- Swapping $u$ and $v$ is the same as negating $\theta$: $R_{v,u}(\theta) = R_{u,v}(-\theta)$
- Negating one of $u$ or $v$ is the same as negating $\theta$: $R_{-u,v}(\theta) = R_{u,-v}(\theta) = R_{u,v}(-\theta)$
- From the previous, negating both $u$ and $v$ changes the sign twice (i.e. does nothing): $R_{-u,-v}(\theta) = R_{u,v}(\theta)$
- If $u$, $v$, $w$ and $z$ are all orthogonal unit-vectors then: $R_{u,v}(\theta_1)\,R_{w,z}(\theta_2) = R_{w,z}(\theta_2)\,R_{u,v}(\theta_1)$ i.e. when planar rotations operate in orthogonal planes, they commute!
- Two planar rotations in the same plane also commute: $R_{u,v}(\theta_1)\,R_{u,v}(\theta_2) = R_{u,v}(\theta_2)\,R_{u,v}(\theta_1)$ (and this still holds if the two rotations are defined using different orthogonal unit-vectors spanning the same plane).
- Two planar rotations in the same plane can be represented by one planar rotation in that plane: $R_{u,v}(\theta_1)\,R_{u,v}(\theta_2) = R_{u,v}(\theta_1 + \theta_2)$
- From the previous property, two planar rotations in the same plane with negated angles cancel: $R_{u,v}(\theta)\,R_{u,v}(-\theta) = I$
- If $x$ is any vector, it can be decomposed into $x = \alpha u + \beta v + w$ where $w$ is orthogonal to both $u$ and $v$. The following table gives some interesting equalities for various $\theta$:

| $\theta$ | $R_{u,v}(\theta)\,x$ |
|---|---|
| $0$ | $\alpha u + \beta v + w$ |
| $\pi/2$ | $-\beta u + \alpha v + w$ |
| $\pi$ | $-\alpha u - \beta v + w$ |
| $3\pi/2$ | $\beta u - \alpha v + w$ |
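A few of these properties are easy to sanity-check numerically. The script below is my own illustration (it assumes the hypothetical `planar_rotation` helper sketched earlier, repeated here so it runs on its own):

```python
import numpy as np

def planar_rotation(u, v, theta):
    # Same construction as the earlier sketch.
    uu, vv = np.outer(u, u), np.outer(v, v)
    return (np.eye(len(u)) + (np.cos(theta) - 1.0) * (uu + vv)
            + np.sin(theta) * (np.outer(v, u) - np.outer(u, v)))

# An arbitrary orthogonal pair of unit vectors in R^3.
u = np.array([1.0, 0.0, 0.0])
v = np.array([0.0, 0.8, 0.6])

# Summing arguments: R(a) R(b) == R(a + b).
assert np.allclose(planar_rotation(u, v, 0.3) @ planar_rotation(u, v, 0.5),
                   planar_rotation(u, v, 0.8))

# The transpose is the inverse, and equals the rotation with the angle negated.
R = planar_rotation(u, v, 0.3)
assert np.allclose(R.T, np.linalg.inv(R))
assert np.allclose(R.T, planar_rotation(u, v, -0.3))

# Swapping u and v is the same as negating theta.
assert np.allclose(planar_rotation(v, u, 0.3), planar_rotation(u, v, -0.3))
```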
All rotation matrices (not only planar rotations) of any dimension can be created by multiplying together a series of planar rotations operating in mutually orthogonal planes. This implies that a rotation matrix of dimension $N$ never needs more than $\lfloor N/2 \rfloor$ planar rotations to describe it.
In two dimensions
In two dimensions, there is only one plane. So we should expect that every set of orthogonal two-dimensional unit-vectors $u$ and $v$ produces the same rotation matrix for a given $\theta$ (up to the direction in which the angle is measured). Picking $u = (1, 0)$ and $v = (0, 1)$ gives the familiar:

$$R(\theta) = \begin{pmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{pmatrix}$$

But it can also be shown for any two-dimensional unit-vector $u = (a, b)$, with $v = (-b, a)$ chosen as its quarter-turn partner, that the definition collapses to exactly the same matrix.

This can be proved from the definition with a little bit more work. Grouping the outer-product terms:

$$uu^T + vv^T = \begin{pmatrix} a^2 + b^2 & 0 \\ 0 & a^2 + b^2 \end{pmatrix} = I \qquad vu^T - uv^T = \begin{pmatrix} 0 & -(a^2 + b^2) \\ a^2 + b^2 & 0 \end{pmatrix} = \begin{pmatrix} 0 & -1 \\ 1 & 0 \end{pmatrix}$$

Each step in the above is valid because the components of $u$ and $v$ satisfy $a^2 + b^2 = 1$ - they are unit-vectors. Substituting back into the definition gives $R_{u,v}(\theta) = \cos\theta\,I + \sin\theta \begin{pmatrix} 0 & -1 \\ 1 & 0 \end{pmatrix}$, which is the matrix above no matter which unit-vector $u$ we started from.

What if we were instead to compute $R_{v,u}(\theta)$, swapping the roles of $u$ and $v$? From the properties listed earlier, this is $R_{u,v}(-\theta)$ - the same rotation taken in the opposite direction, i.e. the transpose of the matrix above.
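Here is a small numerical check of that claim - again my own sketch, not part of the original argument - building the planar rotation from an arbitrary 2D unit vector $u$ and its quarter-turn partner $v$:

```python
import numpy as np

rng = np.random.default_rng(1)
theta = 0.4
expected = np.array([[np.cos(theta), -np.sin(theta)],
                     [np.sin(theta),  np.cos(theta)]])

for _ in range(5):
    phi = rng.uniform(0.0, 2.0 * np.pi)
    u = np.array([np.cos(phi), np.sin(phi)])    # any 2D unit vector
    v = np.array([-np.sin(phi), np.cos(phi)])   # u rotated a quarter turn anticlockwise
    uu, vv = np.outer(u, u), np.outer(v, v)
    R = (np.eye(2) + (np.cos(theta) - 1.0) * (uu + vv)
         + np.sin(theta) * (np.outer(v, u) - np.outer(u, v)))
    # Regardless of phi, the familiar two-dimensional rotation matrix pops out.
    assert np.allclose(R, expected)
```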
In three dimensions
Three dimensions is an interesting special case because every rotation can still be defined using a single planar rotation. As it is such a common space to rotate vectors in (computer graphics, for example), it’s going to get some attention now.
Typically, we do not want to define the plane using two vectors $u$ and $v$ lying within it. In three dimensions it is more natural to use the single unit-vector $r$ which is orthogonal to the plane - the axis of rotation - together with the angle $\theta$.
We’re now going to switch over to talking about implementations for a bit. Rotations of a vector $x$ about a unit vector $r$ by an angle $\theta$ are commonly computed using the Rodrigues rotation formula:

$$x' = x\cos\theta + (r \times x)\sin\theta + r\,(r \cdot x)\,(1 - \cos\theta)$$
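As a concrete (and entirely illustrative) NumPy sketch of that formula - the `rodrigues_rotate` name is made up here:

```python
import numpy as np

def rodrigues_rotate(x, r, theta):
    """Rotate x about the unit axis r by angle theta using the Rodrigues formula:
    x' = x cos(theta) + (r x x) sin(theta) + r (r . x) (1 - cos(theta))."""
    x = np.asarray(x, dtype=float)
    r = np.asarray(r, dtype=float)
    return (x * np.cos(theta)
            + np.cross(r, x) * np.sin(theta)
            + r * (r @ x) * (1.0 - np.cos(theta)))

# Example: rotating the x-axis a quarter turn about the z-axis gives the y-axis.
print(rodrigues_rotate([1.0, 0.0, 0.0], [0.0, 0.0, 1.0], np.pi / 2))
```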
We’ve stated that all rotations in three dimensions can be represented using a single planar rotation. So let’s prove that it can be defined using only the axis $r$ and the angle $\theta$.
First we note that if three orthogonal unit-vectors $u$, $v$ and $r$ (forming a right-handed set) are known in a three-dimensional space, we can say:

$$u \times v = r \qquad v \times r = u \qquad r \times u = v$$

This defines the behaviour of the cross product $r \times x$ for any vector $x$:
- The component of $x$ pointing in the $u$ direction will point in the $v$ direction.
- The component of $x$ pointing in the $v$ direction will point in the $-u$ direction.
- The component of $x$ pointing in the $r$ direction will become zero.
Using these properties, it can also be shown that:

$$uu^T + vv^T = I - rr^T \qquad vu^T - uv^T = [r]_\times$$

where $[r]_\times$ is the skew-symmetric matrix which performs the cross product $r \times x$ when applied to a vector $x$.
If we substitute these terms back into the rotation formula, we get our matrix-ified Rodrigues equation in terms of the rotation axis $r$ and angle $\theta$:

$$R_r(\theta) = \cos\theta\,I + (1 - \cos\theta)\,rr^T + \sin\theta\,[r]_\times$$
The three elemental rotations can be derived by setting $r$ to each of the coordinate axes in turn:

$$R_x(\theta) = \begin{pmatrix} 1 & 0 & 0 \\ 0 & \cos\theta & -\sin\theta \\ 0 & \sin\theta & \cos\theta \end{pmatrix} \quad R_y(\theta) = \begin{pmatrix} \cos\theta & 0 & \sin\theta \\ 0 & 1 & 0 \\ -\sin\theta & 0 & \cos\theta \end{pmatrix} \quad R_z(\theta) = \begin{pmatrix} \cos\theta & -\sin\theta & 0 \\ \sin\theta & \cos\theta & 0 \\ 0 & 0 & 1 \end{pmatrix}$$
Again, every rotation matrix in three dimensions can be described using a single planar rotation - equivalently, by a single axis $r$ and angle $\theta$.
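A NumPy sketch of the matrix form, checked against one of the elemental rotations (my own illustration; `axis_angle_matrix` is a made-up name):

```python
import numpy as np

def axis_angle_matrix(r, theta):
    """R = cos(theta) I + (1 - cos(theta)) r r^T + sin(theta) [r]_x for a unit axis r."""
    r = np.asarray(r, dtype=float)
    K = np.array([[0.0, -r[2], r[1]],
                  [r[2], 0.0, -r[0]],
                  [-r[1], r[0], 0.0]])  # cross-product (skew-symmetric) matrix [r]_x
    return (np.cos(theta) * np.eye(3)
            + (1.0 - np.cos(theta)) * np.outer(r, r)
            + np.sin(theta) * K)

theta = 0.3
Rz = np.array([[np.cos(theta), -np.sin(theta), 0.0],
               [np.sin(theta),  np.cos(theta), 0.0],
               [0.0, 0.0, 1.0]])
# The elemental rotation about the z-axis matches the axis-angle construction with r = e_z.
assert np.allclose(axis_angle_matrix([0.0, 0.0, 1.0], theta), Rz)
```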
In higher dimensions
Once we hit four dimensions, we can have two completely independent planes and define commuting planar rotations which operate in them independently. Again, the number of independent planar rotations which we can define for a given dimension $N$ is up to $\lfloor N/2 \rfloor$.
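For example (my own sketch, reusing the planar-rotation construction from earlier), two rotations operating in the two independent coordinate planes of four-dimensional space commute:

```python
import numpy as np

def planar_rotation(u, v, theta):
    # Same construction as the earlier sketch.
    uu, vv = np.outer(u, u), np.outer(v, v)
    return (np.eye(len(u)) + (np.cos(theta) - 1.0) * (uu + vv)
            + np.sin(theta) * (np.outer(v, u) - np.outer(u, v)))

e1, e2, e3, e4 = np.eye(4)
A = planar_rotation(e1, e2, 0.5)   # rotates only within the (e1, e2) plane
B = planar_rotation(e3, e4, 1.1)   # rotates only within the (e3, e4) plane

# The planes are orthogonal, so the order of application does not matter.
assert np.allclose(A @ B, B @ A)
```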
Rotating one vector to another
Sometimes we want to find an orthogonal matrix which will map one vector to another (which must have the same Euclidean norm). This is useful in algorithms like the QR decomposition, where we are trying to triangularise a matrix and want to force vectors to be all zeros apart from a single component. This can be done effectively using a reflection, but we can also do it with a rotation. Please don’t do this - it is purely academic and will result in a worse implementation than using a reflector.
Given two known vectors $a$ and $b$ (which I will assume are unit vectors), we want to construct a rotation $R$ such that $R\,a = b$.
This turns out to not be that complex:
Let:

$$\cos\theta = a^T b \qquad \sin\theta = \left\| b - (a^T b)\,a \right\| \qquad u = a \qquad v = \frac{b - (a^T b)\,a}{\left\| b - (a^T b)\,a \right\|}$$

Then:

$$R = I + (\cos\theta - 1)\,(uu^T + vv^T) + \sin\theta\,(vu^T - uv^T)$$
The angle $\theta$ itself never actually needs to be computed: $\cos\theta$ and $\sin\theta$ come directly from the dot product and the norm above.
If theta is required, it can be found as: $\theta = \operatorname{atan2}\!\left(\left\| b - (a^T b)\,a \right\|,\; a^T b\right)$
This works as long as $a \neq \pm b$: if $b = a$ the identity will do, and if $b = -a$ the plane of rotation is not uniquely defined.
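Here is what the construction looks like as a NumPy sketch (my own illustration; the `rotation_from_a_to_b` name is made up):

```python
import numpy as np

def rotation_from_a_to_b(a, b):
    """Planar rotation mapping unit vector a onto unit vector b (fails if a = +/- b)."""
    a = np.asarray(a, dtype=float)
    b = np.asarray(b, dtype=float)
    cos_t = a @ b
    w = b - cos_t * a            # component of b orthogonal to a
    sin_t = np.linalg.norm(w)
    u, v = a, w / sin_t          # orthonormal basis for the plane containing a and b
    uu, vv = np.outer(u, u), np.outer(v, v)
    return (np.eye(len(a)) + (cos_t - 1.0) * (uu + vv)
            + sin_t * (np.outer(v, u) - np.outer(u, v)))

a = np.array([1.0, 0.0, 0.0])
b = np.array([0.0, 0.6, 0.8])
R = rotation_from_a_to_b(a, b)
assert np.allclose(R @ a, b)             # maps a onto b
assert np.allclose(R.T @ R, np.eye(3))   # and is orthonormal
```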
Again, don’t do it. :-)
Eigenvalues and eigenvectors
Finding the eigenvalues and eigenvectors amounts to finding the values $\lambda$ and vectors $x$ which satisfy $R_{u,v}(\theta)\,x = \lambda x$.
We know that all vectors which are orthogonal to $u$ and $v$ are unchanged by the rotation, so they are all eigenvectors with an eigenvalue of $1$. Within the plane, writing $x = \alpha u + \beta v$ and requiring $R_{u,v}(\theta)\,x = \lambda x$ gives:

$$\begin{pmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{pmatrix} \begin{pmatrix} \alpha \\ \beta \end{pmatrix} = \lambda \begin{pmatrix} \alpha \\ \beta \end{pmatrix}$$

This shows the other two eigenvalues are exactly the eigenvalues of the two-dimensional rotation matrix, i.e. $\lambda = \cos\theta \pm i\sin\theta = e^{\pm i\theta}$, with (complex) eigenvectors lying in the plane of the rotation.
As we add more planes, as somewhat expected, we introduce more eigenvalue pairs and force more of the eigenvectors to take on particular values.
We can use an eigenvalue decomposition to deconstruct a rotation matrix into its rotation angles and planes!
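As a small illustration of that last point (my own, not from the original post), NumPy can recover the angle of a planar rotation in three dimensions straight from its eigenvalues:

```python
import numpy as np

theta = 0.8
# A planar rotation in the (x, y) plane of R^3.
R = np.array([[np.cos(theta), -np.sin(theta), 0.0],
              [np.sin(theta),  np.cos(theta), 0.0],
              [0.0, 0.0, 1.0]])
vals, vecs = np.linalg.eig(R)

# One eigenvalue is 1 (its eigenvector spans everything orthogonal to the plane);
# the other two form the complex-conjugate pair cos(theta) +/- i sin(theta).
print(vals)
recovered_angle = np.abs(np.angle(vals[np.argmax(np.abs(vals.imag))]))
assert np.isclose(recovered_angle, theta)   # rotation angle recovered from the eigenvalues
```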
Finally
This is a particularly beautiful result, but I’m not going to go into details in this post. It would just seem wrong not to mention it.