**Authors**: Chad Glinsky

**Description**: Mathematics primer for drivetrain engineering rotational mechanics in three-dimensional Euclidean space.

This notebook provides an introduction to the concepts and mathematics commonly encountered in rotational mechanics, an important subject matter in drivetrain engineering. This includes review of vectors, periodic signals, linear systems, rotational matrices, and more. Each section includes at least one example.

This notebook has two main objectives:

- Educate the reader about key mathematics required in the study of rotational systems, such as a drivetrain.
- Reduce the content and avoid repeated content in the other notebooks of the Drivetrain Hub | Notebook Series.

Symbol | Description |
---|---|
$X$ | Cartesian global x-coordinate |
$Y$ | Cartesian global y-coordinate |
$Z$ | Cartesian global z-coordinate |
$x$ | Cartesian x-coordinate |
$y$ | Cartesian y-coordinate |
$z$ | Cartesian z-coordinate |
$\hat{x}$ | Cartesian x-axis unit vector |
$\hat{y}$ | Cartesian y-axis unit vector |
$\hat{z}$ | Cartesian z-axis unit vector |
$r$ | Radius |
$\theta$ | Angle of rotation |
$v$ | Linear velocity |
$\omega$ | Angular velocity |
$M$ | Moment |
$\tau$ | Torque |
$i$ | Imaginary unit |
$a_{ij}$ | Linear equation coefficient |
$x$ | Linear equation variable |
$b$ | Linear equation constant |
$A$ | Coefficients matrix |
$R$ | Rotation matrix |
$\alpha$ | Euler angle about $z$ axis |
$\beta$ | Euler angle about $x'$ axis |
$\gamma$ | Euler angle about $z''$ axis |

Any symbol with an overhead arrow, $\vec{}$, indicates a *vector*, a term with magnitude and direction. Any symbol with an overhead hat, $\hat{}$, indicates a *unit vector*. For example, $\vec{v}$ is a linear velocity vector and $\hat{v}$ is the unit vector of velocity. Any symbol in **bold** indicates a one-dimensional vector array. For example, $\pmb{x} = [x_1, x_2, x_3]$ is an array of variables.

A point is the simplest form of spatial data, being an infinitesimal entity defined by a set of coordinates in three-dimensional space. We will use the Cartesian coordinate system, but others, such as cylindrical and spherical coordinates, are also common in engineering.

Point positioned in three-dimensional space with Cartesian coordinates

A point does not have an orientation or direction. It is fully defined by its positional coordinates.

The distance between two points in three-dimensional space is computed by first computing the distance of each axis as:

$$l_x = x_2 - x_1$$

$$l_y = y_2 - y_1$$

$$l_z = z_2 - z_1$$

where $(x_1, y_1, z_1)$ and $(x_2, y_2, z_2)$ are the coordinates of points $\text{P}_1$ and $\text{P}_2$, respectively. Then apply the Pythagorean theorem in three dimensions to get the total distance between points:

$$l = \sqrt{l_x^2 + l_y^2 + l_z^2}$$

Notice the components of distance can be negative, but the total distance is always a positive value.
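As a quick numerical check, the distance formula can be evaluated with NumPy (the two points below are arbitrary examples):

```python
import numpy as np

# Two example points in three-dimensional space
p1 = np.array([0.0, 0.0, 0.0])
p2 = np.array([1.0, 2.0, 2.0])

# Component distances along each axis
l = p2 - p1

# Pythagorean theorem extended to three dimensions
distance = np.sqrt(np.sum(l**2))

print(distance)  # 3.0
```

Note that `np.linalg.norm(p2 - p1)` computes the same quantity in one call.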

Vectors are used extensively in the mathematical modeling of physical systems. A vector is an entity with a *magnitude* and *direction*. This can be illustrated with an arrow (direction) of a specified length (magnitude). Magnitude is also referred to as *amplitude*.

Vector of magnitude and direction

The magnitude of a time-invariant vector can be defined with a single scalar value. The direction of a time-invariant vector can be defined in different ways, depending on the domain. Since rotational mechanics pertains to the study of physical systems in three-dimensional space, the vector direction is commonly defined with components of Cartesian coordinates.

$$\vec{a} = a_x\hat{x} + a_y\hat{y} + a_z\hat{z}$$

where $\hat{x}, \hat{y}, \hat{z}$ are unit vectors corresponding to the Cartesian axes. A vector in three-dimensions can be visualized relative to Cartesian axes as:

Vector in a Cartesian coordinate system

A *unit vector* is a vector of unit (1.0) magnitude, expressed for $\vec{a}$ as:

$$\hat{a} = \frac{\vec{a}}{\|\vec{a}\|}$$

where $\|\vec{a}\|$, also notated as $a$, is the magnitude of $\vec{a}$ and calculated as:

$$\|\vec{a}\| = \sqrt{a_x^2 + a_y^2 + a_z^2}$$

For computations, a vector is often represented in array form:

$$\vec{a} = \left[ a_x, a_y, a_z \right]$$

where $a_x, a_y, a_z$ are called *elements* of the array.

Vectors can be added and subtracted by applying the math operation *element-wise*, meaning the operation is performed on the respective elements of each vector. This becomes apparent when considering vector addition of $\vec{a}$ and $\vec{b}$:

$$\vec{a} + \vec{b} = (a_x + b_x)\hat{x} + (a_y + b_y)\hat{y} + (a_z + b_z)\hat{z}$$

Vector subtraction is performed similarly:

$$\vec{a} - \vec{b} = (a_x - b_x)\hat{x} + (a_y - b_y)\hat{y} + (a_z - b_z)\hat{z}$$

The addition and subtraction of vectors is visualized as:

Adding and subtracting vectors
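Because the operations are element-wise, NumPy arrays add and subtract exactly as the component equations above describe (example values assumed):

```python
import numpy as np

a = np.array([1.0, 2.0, 3.0])
b = np.array([4.0, -1.0, 2.0])

# NumPy applies + and - element-wise, matching the component equations
vec_sum = a + b   # [5., 1., 5.]
vec_diff = a - b  # [-3., 3., 1.]
```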

The dot product is a scalar equal to the product of the magnitudes of two vectors and the cosine of the angle, $\theta$, between them. It is also referred to as the *scalar product* or *inner product*. Its geometric definition results in the equation:

$$\vec{a} \cdot \vec{b} = \|\vec{a}\| \|\vec{b}\| \cos\theta$$

where $\cdot$ denotes the dot product. The geometric relationship of a dot product is illustrated below, noting that any two vectors in three-dimensional space lie within an arbitrary plane intersecting both vectors.

Dot product geometric relationship

In the case of a dot product with a unit vector, the component of a vector in a given direction, per the unit vector, can be calculated.

$$\vec{a} \cdot \hat{b} = a \cos\theta$$

This *vector projection* calculation is made clear in the figure below.

Vector projection from a dot product

The dot product can also be calculated algebraically in Cartesian component form as the sum of element-wise products:

$$\vec{a} \cdot \vec{b} = a_x b_x + a_y b_y + a_z b_z$$

Common applications of the dot product include:

- Calculate the component of geometry or velocity in a certain direction.
- Calculate the angle between two vectors.
- Calculate if two spatial objects are parallel or perpendicular.

Two useful properties of the dot product:

- The dot product of two parallel vectors is equal to the scalar product of the two vector magnitudes.
- The dot product of two orthogonal vectors is always zero.
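A short NumPy sketch of these applications, using example vectors chosen for illustration:

```python
import numpy as np

a = np.array([1.0, 0.0, 0.0])
b = np.array([1.0, 1.0, 0.0])

# Algebraic dot product: sum of element-wise products
dot = np.dot(a, b)

# Angle between the vectors, from the geometric definition of the dot product
theta = np.arccos(dot / (np.linalg.norm(a) * np.linalg.norm(b)))

# Orthogonal vectors have a zero dot product
is_perpendicular = np.isclose(np.dot(a, np.array([0.0, 1.0, 0.0])), 0.0)
```

Here `theta` evaluates to $\pi/4$, the 45-degree angle between the example vectors.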

The cross product of two vectors is a third vector, orthogonal to the plane containing them. It is also referred to as the *vector product*.

Cross product of two vectors

The cross product is written as:

$$\vec{c} = \vec{a} \times \vec{b}$$

where $\vec{c}$ is the cross product vector. Its direction is determined by the right-hand rule, and its magnitude is calculated as:

$$\|\vec{c}\| = \|\vec{a}\| \|\vec{b}\| \sin\theta$$

The cross product can also be calculated algebraically in Cartesian component form as:

$$\vec{c} = \vec{a} \times \vec{b} = (a_y b_z - a_z b_y)\hat{x} - (a_x b_z - a_z b_x)\hat{y} + (a_x b_y - a_y b_x)\hat{z}$$

Common applications of the cross product include:

- Calculate linear velocity of a point in 3-dimensional space on a rotating body: $\vec{v} = \vec{\omega} \times \vec{r}$
- Calculate moment about a point in 3-dimensional space due to a force at a distance: $\vec{M} = \vec{r} \times \vec{F}$

Two useful properties of the cross product:

- The cross product of two parallel vectors is always zero.
- The cross product of two orthogonal vectors has a magnitude equal to the scalar product of the two vector magnitudes.
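The velocity application above can be sketched with NumPy (the angular velocity and point position are arbitrary example values):

```python
import numpy as np

# Angular velocity of a body spinning about the z-axis, rad/s
omega = np.array([0.0, 0.0, 10.0])

# Position of a point on the body relative to the rotation axis, m
r = np.array([0.5, 0.0, 0.0])

# Linear velocity of the point: v = omega x r
v = np.cross(omega, r)   # [0., 5., 0.]

# Cross product of parallel vectors is the zero vector
parallel = np.cross(omega, 2.0 * omega)
```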

1) Addition of vectors.

2) Subtraction of vectors.

3) Dot product of vectors.

4) Cross product of vectors.

5) Angle between vectors. Are the vectors parallel, perpendicular, or neither?

In rotational mechanics, periodic signals are frequently encountered since rotation is a periodic motion, repeating every $2\pi$ radians or $360^\circ$. This is illustrated below for a shaft rotating at a constant angular velocity.

Periodic motion of a rotating shaft

From the figure, it is clear that a *period* is one rotation of the shaft, $2\pi$ radians. As a function of time, the period depends on angular velocity:

$$T = \frac{2\pi}{\omega}$$

where $T$ is the period in seconds. Therefore, the frequency in cycles per second (Hz) is calculated as the inverse of the period:

$$f = \frac{1}{T} = \frac{\omega}{2\pi}$$

Later, it is shown how certain periodic signals can be described from a combination of other periodic signals.
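These relationships are one-liners in Python; the shaft speed below is an arbitrary example:

```python
import numpy as np

# Shaft spinning at 1500 rpm (example speed)
rpm = 1500.0
omega = rpm * 2 * np.pi / 60.0   # angular velocity, rad/s

T = 2 * np.pi / omega            # period of one rotation, s
f = 1.0 / T                      # rotational frequency, Hz
```

A 1500 rpm shaft therefore completes a rotation every 0.04 s, i.e. 25 Hz.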

A common method to describe and illustrate periodic signals is by a *waveform*, sometimes called a *time series waveform*. Waveforms represent signals as a function of time or some other independent variable that relates to time, such as rotation angle in the case of a rotating object.

A sinusoid is a periodic signal with a continuous waveform, as illustrated below.

Sinusoidal waveform

Mathematically, the waveform above can be defined by a sine function:

$$y(x) = a \sin(\omega x + \phi)$$

where the independent variable, $x$, is usually time. The cosine function can also be used since it is equal to the sine function with a phase shift:

$$\cos(x) = \sin(x + \pi / 2)$$

Graphically, sine and cosine functions are compared below for $\omega=1$.

Comparison of sine and cosine functions

Notice the phase shift results in sine *lagging* cosine by $\pi/2$, or cosine *leading* sine by $\pi/2$.

Later, it is shown how the Fourier transform is used with sinusoids to approximate any periodic signal.

Euler's formula is useful for simplifying the mathematical representation and interpretation of complex functions. In its simplest form, it is expressed as:

$$e^{i\phi} = \cos\phi + i\sin\phi$$

where $e^{i\phi}$ is a unit complex number. It is easiest to understand the formula by illustrating it as a vector in the complex plane with a unit circle:

Euler formula in complex plane

where it can be seen how a complex number relates to Euler's formula as:

$$x + iy = r \left( \cos\phi + i\sin\phi \right) = r e^{i\phi}$$

where $r=1$ in the figure above, calculated as $r = \sqrt{x^2 + y^2}$. A more useful form of Euler's formula for our purpose expresses it as a function of time:

$$r e^{i (\omega t + \phi)} = r \left( \cos(\omega t + \phi) + i\sin(\omega t + \phi) \right)$$

where $r$ is the complex vector *magnitude*, $\omega$ is *angular frequency*, and $\phi$ is *phase angle*. This formula allows a *rotating vector* to be expressed with simple mathematical notation. This means Euler's formula can be used to represent periodic signals. The relationship between a rotating complex vector and a time-series waveform is illustrated below. Note the complex plane axes are rotated for illustration purposes.

Rotating complex vector and its waveform signal

where the real part of the rotating complex vector equals the cosine function and hence represents the time-series waveform of a physical signal.

Euler's formula is particularly useful for performing mathematical operations of periodic signals. For example, $N$ periodic signals of the same frequency can be represented in time-independent form to easily calculate their summation as:

$$r e^{i\phi} = \sum_{n=1}^N r_n e^{i \phi_n}$$

where the magnitude and phase of the summed signal are easily calculated from the resulting complex number, $x + iy$:

$$r = \sqrt{x^2 + y^2}$$

$$\phi = \tan^{-1}\left(\frac{y}{x}\right)$$

Similarly, two periodic signals in Euler form can be multiplied to produce a modulated signal:

$$r_1 e^{i (\omega_1 t + \phi_1)} r_2 e^{i (\omega_2 t + \phi_2)} = r_1 r_2 e^{i ((\omega_1 + \omega_2)t + \phi_1 + \phi_2)}$$

where the resulting signal amplitude is $r_1 r_2$, angular frequency is $\omega_1 + \omega_2$, and phase angle is $\phi_1 + \phi_2$.
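Both operations are easy to verify with Python's native complex numbers; the amplitudes and phases below are arbitrary examples:

```python
import numpy as np

# Two signals of the same frequency, represented as phasors r * e^(i*phi)
s1 = 1.0 * np.exp(1j * 0.0)         # amplitude 1, phase 0
s2 = 1.0 * np.exp(1j * np.pi / 2)   # amplitude 1, phase 90 degrees

# Summation: add the complex numbers, then recover magnitude and phase
total = s1 + s2
r = np.abs(total)       # sqrt(2)
phi = np.angle(total)   # pi/4

# Modulation: amplitudes multiply and phases add
product = (2.0 * np.exp(1j * 0.3)) * (3.0 * np.exp(1j * 0.5))
```

The summed phasor has magnitude $\sqrt{2}$ at phase $\pi/4$, and the product has amplitude $2 \times 3 = 6$ at phase $0.3 + 0.5 = 0.8$ rad, as the equations predict.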

Harmonic analysis is a vast topic, applicable to a broad range of mathematics and engineering. The narrow scope reviewed here is:

- Define periodic signals as a sequence of sinusoidal functions.
- Determine the frequency content of a time-series signal.

A *harmonic* is a positive integer multiple of a fundamental frequency. In physical terms, the fundamental frequency is the first natural frequency of vibration for an elastic object. Therefore, the first harmonic is the fundamental frequency, the second harmonic is twice the fundamental frequency, and so on.

Harmonics refer to sinusoidal signals with an amplitude, frequency, and phase angle. The frequency of the $n^\text{th}$ harmonic is calculated as:

$$f_n = n f_0$$

where $f_0$ is the fundamental frequency.

Fourier series can be used to construct a function, $s(x)$, from a series of weighted harmonics. It is mathematically defined as:

$$s_N(x) = \frac{a_0}{2} + \sum_{n=1}^{N} \left( a_n \cos\left(\frac{2 \pi n x}{P}\right) + b_n \sin\left(\frac{2\pi n x}{P}\right) \right)$$

where $N$ is the number of *harmonics*, $P$ is the period of the Fourier series, and the coefficients are calculated as:

$$a_n = \frac{2}{P} \int_P s(x) \cos\left(\frac{2 \pi n x}{P}\right) \, dx$$

$$b_n = \frac{2}{P} \int_P s(x) \sin\left(\frac{2 \pi n x}{P}\right) \, dx$$

where $s(x)$ must be integrable over $P$. For $n = 0$, the cosine term is unity and $a_0$ evaluates as:

$$a_0 = \frac{2}{P} \int_P s(x) \, dx$$

Harmonic and period are related in that the $n^\text{th}$ harmonic has $n$ cycles over the period $P$.

As an example of how the Fourier series works, consider the square wave. A square wave function is written as:

$$s(x) = \begin{cases} A & \text{for } 0 \leq x \lt d P\\ 0 & \text{for } d P \leq x \lt P \end{cases}$$

where $A$ is amplitude, $P$ is the period, and $d$ is the fraction of the period spent at the amplitude. The Fourier series coefficients of a square wave evaluate to:

$$a_0 = 2 A d$$

$$a_n = \frac{A \sin(2 \pi n d)}{\pi n}$$

$$b_n = \frac{2 A \sin^2(\pi n d)}{\pi n}$$

Below is a plot comparing the square wave Fourier series with different numbers of harmonics. Notice that, for a 50% duty cycle ($d = 0.5$), the even numbered harmonics ($n = 2, 4, 6, \dots$) do not contribute to the Fourier series of a square wave. That is why the plots of $N=3$ and $N=4$ are identical.
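The coefficient formulas above can be evaluated directly; a minimal sketch (the function names are mine):

```python
import numpy as np

def square_coeffs(n, A=1.0, d=0.5):
    """Fourier series coefficients a_n, b_n of the square wave."""
    a_n = A * np.sin(2 * np.pi * n * d) / (np.pi * n)
    b_n = 2 * A * np.sin(np.pi * n * d)**2 / (np.pi * n)
    return a_n, b_n

def square_series(x, N, A=1.0, d=0.5, P=1.0):
    """Partial Fourier series of the square wave with N harmonics."""
    total = np.full_like(np.asarray(x, dtype=float), A * d)   # a0/2 term
    for n in range(1, N + 1):
        a_n, b_n = square_coeffs(n, A, d)
        total += a_n * np.cos(2 * np.pi * n * x / P) \
               + b_n * np.sin(2 * np.pi * n * x / P)
    return total
```

With $d = 0.5$, the even coefficients vanish, and the partial sum approaches the amplitude $A$ in the middle of the "on" portion as $N$ grows.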

To identify the harmonics of a *time-domain* signal, the Fourier transform is used, defined as:

$$\hat{s}(\omega) = \int_{-\infty}^{\infty} s(t) \, e^{-i \omega t} \, dt$$

where $\omega$ is angular frequency. This equation expresses the signal in the *frequency-domain*, allowing for identification of its frequency content. When signals that vary with time are measured, Fourier analysis is useful to interpret their waveforms and quantify the frequencies of their variations.

Since physical measurements and computers handle data as discrete samples, the discrete Fourier transform (DFT), and its fast implementation, the FFT, are particularly useful for computational analysis.

Below is a plot of the FFT computed for the Fourier series of a square wave to identify its harmonics.
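The same idea can be sketched with NumPy's FFT, using an assumed 10 Hz square wave sampled at 1 kHz:

```python
import numpy as np

fs = 1000.0                        # sampling rate, Hz (assumed)
f0 = 10.0                          # square wave fundamental, Hz (assumed)
t = np.arange(0, 1.0, 1.0 / fs)    # one second of samples
signal = np.where((t * f0) % 1.0 < 0.5, 1.0, 0.0)   # 50% duty cycle

# One-sided amplitude spectrum
spectrum = np.abs(np.fft.rfft(signal)) / len(signal)
freqs = np.fft.rfftfreq(len(signal), 1.0 / fs)

# The strongest component (ignoring DC) is the fundamental
peak_hz = freqs[1 + np.argmax(spectrum[1:])]
```

The spectrum shows peaks at the odd harmonics (10 Hz, 30 Hz, 50 Hz, ...) while the even harmonics are essentially absent, consistent with the Fourier series coefficients of a 50% duty square wave.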

For completeness, the *order* of a periodic signal is explained. Order does not have anything in particular to do with Fourier analysis, but it is a term frequently used when analyzing periodic signals. An order defines the frequency of a signal *relative* to the frequency of another signal.

Rotating shaft with four cogs

In the illustration above, the shaft rotates at a certain speed and the four (4) cogs attached to it rotate at the same speed. However, if we consider frequency of the cogs from an observer watching the shaft rotate, they will see four cogs pass for each rotation of the shaft. Thus, the cog pass-frequency is 4x the shaft frequency. If the shaft is defined as the reference, then it has order 1.0 and the cogs have order 4.0. This is true regardless of rotational speed.

Mathematically, we can define a signal's order as its frequency relative to a *reference frequency*:

$$o = \frac{f}{f_\text{ref}}$$

where $f_\text{ref}$ is the reference frequency. Despite looking similar to the definition of harmonics, orders are not required to be integer multiples, and any frequency can be used as the reference frequency.

Frequency, angular frequency, harmonic, and order are easily confused. Here are definitions for each to help distinguish them:

- Frequency is the inverse of the signal period, with units of cycles per second $\left(\rm{Hz}\right)$.
- Angular frequency is the frequency of a rotation, $2\pi$ multiple of frequency with units of radians per second $\left(\rm{rad}/\rm{s}\right)$.
- Harmonic is an integer multiple of a frequency relative to a fundamental frequency.
- Order is a multiple or fraction of a frequency relative to a chosen reference frequency.

2) Sum of signals with the same frequency.

3) Product of low frequency signal and summed signal in (2).

4) Plot waveforms of all signals, including results (2) and (3).

5) Plot FFT of results (2) and (3).

Linear algebra is its own field of study in mathematics, and therefore only a narrow scope is considered here. The application of linear algebra in rotational mechanics mostly pertains to solving systems of linear equations, and to model the orientation and rotation of physical objects in three-dimensional space.

As shown in the kinematics notebook, a system of linear equations results from the formulation of angular velocities for rotationally connected bodies.

The general form of a single linear equation with $n$-variables is:

$$a_1 x_1 + a_2 x_2 + \dots + a_n x_n = b$$

where $x_1, x_2, \dots, x_n$ are the variables, $a_1, a_2, \dots, a_n$ are the coefficients, and $b$ is a constant. A well known linear equation is that of a line in two-dimensional space, $y = m x + b$. Geometrically, this can be interpreted as:

Linear equation visualized as a line in two dimensions

where the coefficients are $a_1 = -m$ and $a_2 = 1$, and constant $b$ is the $x_2$-intercept, usually notated as the $y$-intercept in slope-intercept form. From the figure, it is easy to visualize the linear relationship between the two variables, $x_1$ and $x_2$. For a linear equation of three variables, the geometric interpretation is a plane instead of a line. Linear equations with more variables are difficult to visualize, but the same concepts apply mathematically.

If multiple linear equations are considered together, it is called a *system of linear equations* and is of the form:

$$\begin{aligned} a_{11} x_1 + a_{12} x_2 + \dots + a_{1n} x_n &= b_1 \\ a_{21} x_1 + a_{22} x_2 + \dots + a_{2n} x_n &= b_2 \\ &\;\;\vdots \\ a_{m1} x_1 + a_{m2} x_2 + \dots + a_{mn} x_n &= b_m \end{aligned}$$

with $m$-equations and $n$-variables. By introducing another line in the previous figure, a system of two linear equations with two variables can be visualized as:

System of linear equations visualized as lines in two dimensions

where it is easily observed that the *solution*, i.e. the value of each variable that satisfies all equations, is the intersection point of the two lines. Linear system solutions are discussed more in a later section.

In matrix form, a linear system of $m$-equations and $n$-variables is written as:

$$ \begin{bmatrix} a_{11} & a_{12} & \dots & a_{1n} \\ a_{21} & a_{22} & \dots & a_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ a_{m1} & a_{m2} & \dots & a_{mn} \end{bmatrix} \begin{bmatrix} x_1 \\ x_2 \\ \vdots \\ x_n \end{bmatrix} = \begin{bmatrix} b_1 \\ b_2 \\ \vdots \\ b_m \end{bmatrix} $$

and written more concisely as:

$$A \pmb{x} = \pmb{b}$$

where $A$ is the coefficients matrix, $\pmb{x}$ is the variables vector, and $\pmb{b}$ is the constants vector.

Formulating a system of linear equations results from deriving equations that define a system or process. It may also result from the linearization of a nonlinear system of equations, but that is outside the scope of this notebook.

Linearization is acceptable when the solution to a specific operating point or condition is of interest. However, linearization is not acceptable if nonlinear effects must be captured, such as transient events in a highly nonlinear dynamical system. If only small nonlinearities exist, linearization is likely acceptable.

When formulating a system of linear equations, it is possible to produce a system that has no solution, or one whose equations have dependencies that make some of them redundant. In the latter case, the system is said to be *linearly dependent*, which can be visualized as three lines intersecting at a single point for a system with two variables:

System of linearly dependent equations visualized as lines in two dimensions

It is clear that any one of these lines can be discarded and the same solution is still obtained. This means that any of the equations can be derived algebraically from the other equations and are therefore linearly dependent.

When a system of equations is linearly dependent, redundant equations can be removed, or additional constraints introduced, to obtain a well-defined system.

Our kinematics notebook demonstrates how a system of linear equations is derived by assembling the equations required to define the angular velocities for a system of rotationally coupled bodies. Both linearly independent and dependent systems are observed depending on the architecture of connected bodies. Refer to that notebook for interactive examples.

Solving a system of linear equations entails finding values for every variable that satisfies all equations in the system. This requires all coefficients and constants to be known, which is achieved from the process of deriving equations for the system to be solved. Practical examples of this are provided in the kinematics notebook.

Consider a linear system of two lines in a plane, i.e. two equations (lines) of two variables (dimensions). The solution is the point where the two lines intersect, as illustrated in a previous figure. If the two lines were parallel, no solution would exist.

From linear algebra, the *rank* of the *augmented matrix* and *coefficients matrix* can be used to determine what solutions may exist:

- If the rank of the augmented matrix is greater than the rank of the coefficients matrix, the equations are *inconsistent* and no solution exists.
- If the rank of the augmented matrix is equal to the rank of the coefficients matrix, at least one solution exists.
  - If the rank equals the number of variables, the system is considered *full rank* and a unique solution exists.
  - If the rank is less than the number of variables, there are infinitely many solutions.

where the augmented matrix is the concatenation of the coefficients matrix and constants column vector:

$$(A | \pmb{b}) = \left[\begin{array}{c|c} A & \pmb{b} \end{array}\right]$$

For a square coefficients matrix with full rank, the exact solution can be obtained from inverse matrix multiplication:

$$\pmb{x} = A^{-1} \pmb{b}$$

Solution techniques also exist for under- or over-defined systems, but they are outside the scope of this notebook.
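The rank tests and the solve are direct with NumPy; the 2×2 system below is an arbitrary example of two intersecting lines:

```python
import numpy as np

# Example full-rank system: x1 - x2 = 0 and x1 + x2 = 2
A = np.array([[1.0, -1.0],
              [1.0,  1.0]])
b = np.array([0.0, 2.0])

# Rank tests for solution existence and uniqueness
augmented = np.column_stack([A, b])
rank_A = np.linalg.matrix_rank(A)
rank_aug = np.linalg.matrix_rank(augmented)
unique = (rank_A == rank_aug == A.shape[1])

# Solve A x = b (numerically preferable to forming the inverse explicitly)
x = np.linalg.solve(A, b)   # intersection point of the two lines
```

Here `np.linalg.solve` is used instead of `np.linalg.inv(A) @ b`; it is mathematically equivalent for a full-rank square system but better conditioned numerically.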

In cases of infinite solutions, it is recommended to review the system of equations and determine if constraint equations can be included to achieve a well-defined (full rank) system. In the context of rotational mechanics, a constraint equation could simply be a speed condition, e.g. $\omega_1 = 3$.

2) Solve for all system variables.

A rotation matrix is an array of three orthogonal unit vectors in Cartesian coordinates, making its size 3×3. A rotation matrix can be used for:

- Orientation of an object.
- Coordinate transformation of a vector.
- Coordinate transformation of a set of orthogonal vectors (axes).

In matrix form, a set of Cartesian axes can be expressed as a system of equations that define orthogonal unit vectors:

$$ \begin{bmatrix} \hat{x} \\ \hat{y} \\ \hat{z} \end{bmatrix} = \begin{bmatrix} r_{xx} & r_{xy} & r_{xz} \\ r_{yx} & r_{yy} & r_{yz} \\ r_{zx} & r_{zy} & r_{zz} \end{bmatrix} \begin{bmatrix} \hat{X} \\ \hat{Y} \\ \hat{Z}\end{bmatrix} $$

where the coefficients matrix is the rotation matrix. This can be illustrated as:

Cartesian axes oriented in three-dimensional space

It is clear the rotation matrix in this context defines the orientation of one coordinate system *relative to* another. For two coordinate systems *oriented* identically, even if positioned differently, the rotation matrix is equal to the identity matrix:

$$R = I = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix}$$

With rotations applied, the Cartesian vectors must remain orthogonal to each other and of unit magnitude. As an example, Cartesian axes rotated about the y-axis by $90^\circ$ have a rotation matrix equal to:

$$R = \begin{bmatrix}0 & 0 & -1 \\ 0 & 1 & 0 \\ 1 & 0 & 0\end{bmatrix}$$

which can be visualized as:

Cartesian axes rotated about y-axis

This is true since a rotation matrix describes the orientation of Cartesian axes relative to a reference set of axes, independent of position.
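The defining properties of a rotation matrix, orthogonality and unit determinant, can be verified numerically for the example above:

```python
import numpy as np

# Rotation matrix for axes rotated 90 degrees about the y-axis (from above)
R = np.array([[0.0, 0.0, -1.0],
              [0.0, 1.0,  0.0],
              [1.0, 0.0,  0.0]])

# Rows remain orthogonal unit vectors: R @ R.T equals the identity matrix
orthogonal = np.allclose(R @ R.T, np.eye(3))

# Proper rotations have determinant +1 (a determinant of -1 implies a reflection)
determinant = np.linalg.det(R)
```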

Coordinate transformation, sometimes called *rotational transformation*, is the process of applying a rotation matrix to a Cartesian vector or axes (set of vectors) to modify its orientation. The section above has already demonstrated a coordinate transformation, but it was explained in the context of *defining the orientation* of coordinate axes. Here, the rotation matrix will be used in the context of *modifying the orientation*.

It is easiest to visualize a coordinate transformation in two-dimensions, but the same concept and mathematics extend to three-dimensions.

Planar coordinate transformation of axes

From the figure, equations can be derived to express the rotated coordinate system, $\left(\hat{x}', \hat{y}' \right)$, as a function of the original coordinate system, $\left(\hat{x}, \hat{y} \right)$:

$$\begin{bmatrix} \hat{x}' \\ \hat{y}' \end{bmatrix} = \begin{bmatrix} \cos\theta & \sin\theta \\ -\sin\theta & \cos\theta \end{bmatrix} \begin{bmatrix} \hat{x} \\ \hat{y} \end{bmatrix} $$

where the coefficients define the rotation matrix of the coordinate transformation. This formulation corresponds to a *passive transformation*, whereas an *active transformation* can be derived by considering the rotation of a vector or point in a given coordinate system:

Planar coordinate transformation of vector

For the sake of deriving equations, the figure illustrates two scenarios: rotation of unit vector $\hat{a} = [1, 0]$ on the left, and rotation of unit vector $\hat{a} = [0, 1]$ on the right. Equations can be derived to express the rotated vector components, $\left(\hat{a}'_x, \hat{a}'_y \right)$, as a function of the original vector components, $\left(\hat{a}_x, \hat{a}_y \right)$:

$$\begin{bmatrix} \hat{a}'_x \\ \hat{a}'_y \end{bmatrix} = \begin{bmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{bmatrix} \begin{bmatrix} \hat{a}_x \\ \hat{a}_y \end{bmatrix} $$

Notice the rotation matrix of the active transformation is the inverse of the passive transformation, which equals the matrix transpose since rotation matrices are orthogonal.

In two-dimensions, a rotation matrix is straightforward since there is only one plausible angle of rotation, $\theta$, about the axis normal to the plane. In three-dimensions, there are infinitely many possible axes of rotation. The next section explains the rotation matrix for a coordinate transformation about an arbitrary axis in three-dimensions.

One method of applying a coordinate transformation in three-dimensions is to specify a rotation about an axis. The derivation provided in the previous section for two-dimensions extends directly to three-dimensions by recognizing that the rotation occurs about the coincident axes $\hat{z}$ and $\hat{z}'$, both directed out of the plane of the page. In this case, the rotation matrix of the passive transformation is written as:

$$R = \begin{bmatrix} \cos\theta & \sin\theta & 0 \\ -\sin\theta & \cos\theta & 0 \\ 0 & 0 & 1 \end{bmatrix}$$

where a rotation vector, $\vec{\theta} = \left[0, 0, \theta \right]$, defines a rotation about the z-axis. This method can be extended to consider the rotation about *any* axis in three-dimensional space, as illustrated below.

Rotation about an arbitrary axis

The rotation matrix for the passive transformation about an arbitrary axis of rotation, with magnitude $\theta$ and direction $\hat{\theta} = u_x\hat{x} + u_y\hat{y} + u_z\hat{z}$, is calculated as:

$$R = \begin{bmatrix} u_x u_x (1-\cos\theta) + \cos\theta & u_x u_y (1-\cos\theta) + u_z \sin\theta & u_x u_z (1-\cos\theta) - u_y \sin\theta \\ u_y u_x (1-\cos\theta) - u_z \sin\theta & u_y u_y (1-\cos\theta) + \cos\theta & u_y u_z (1-\cos\theta) + u_x \sin\theta \\ u_z u_x (1-\cos\theta) + u_y \sin\theta & u_z u_y (1-\cos\theta) - u_x \sin\theta & u_z u_z (1-\cos\theta) + \cos\theta \end{bmatrix} $$

Applying a rotation matrix to a Cartesian vector has different interpretations depending on the coordinate transformation being active or passive. By designating the rotation matrix of a passive transformation as $R$, the following equation rotates a vector, $\vec{v}$, within its current coordinate system to get $\vec{v}'$:

$$\pmb{v'} = R^\top \pmb{v}$$

where $R^\top$ is the rotation matrix of an active transformation. To express the vector in a rotated coordinate system, defined by $R$, calculate as:

$$\pmb{v'} = R \pmb{v}$$

Similar to vectors, applying a rotation matrix to orthogonal axes has different interpretations depending on the coordinate transformation being active or passive. By designating the rotation matrix of a passive transformation as $R$, the following equation rotates the axes, $A$, within its current coordinate system to get $A'$:

$$A' = R A$$

To express the axes in a rotated coordinate system, defined by $R$, calculate as:

$$A' = R^\top A$$

where $R^\top$ is the rotation matrix of an active transformation.
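A sketch implementing the axis-angle (passive) rotation matrix and both transformation conventions; the function name is mine, and the matrix-form identity used here is equivalent to the component formula above:

```python
import numpy as np

def passive_rotation(axis, theta):
    """Passive rotation matrix for angle theta about an axis (axis-angle form)."""
    u = np.asarray(axis, dtype=float)
    u = u / np.linalg.norm(u)
    c, s = np.cos(theta), np.sin(theta)
    skew = np.array([[0.0, -u[2], u[1]],
                     [u[2], 0.0, -u[0]],
                     [-u[1], u[0], 0.0]])   # cross-product matrix of u
    return c * np.eye(3) + (1.0 - c) * np.outer(u, u) - s * skew

# 90-degree rotation about the z-axis
R = passive_rotation([0.0, 0.0, 1.0], np.pi / 2)

v = np.array([1.0, 0.0, 0.0])
v_in_rotated_frame = R @ v   # passive: same vector, expressed in new coordinates
v_rotated = R.T @ v          # active: vector rotated within the same frame
```

For this choice of axis and angle, `R` reproduces the z-axis passive rotation matrix given earlier with $\theta = 90^\circ$.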

- Rotation matrix defining the orientation, $A$, of the body relative to the ground reference frame.

- Vector defining the position, $\vec{r}$, of the body relative to the ground reference frame.

- First, rotation is applied to the rigid body about an axis in its local reference frame.

- Then, rotation is applied to the rigid body about an axis in the global reference frame.

2) Position, $\vec{r}''$, and orientation, $A''$, of the body after the global rotation, $\Theta$, is applied.

Continue reading the Drivetrain Hub | Notebook Series to learn more about rotational mechanics and drivetrain engineering.

Accurately model the geometry and rotational mechanics of gear systems in 3-dimensional space with the Drivetrain Hub | Gears App, a modern drivetrain modeling environment 100% online at drivetrainhub.com.

Our notebooks are publicly hosted in GitHub repositories, available for anyone to view and propose edits.

- Engineering Mechanics: Dynamics, 12th Edition. Russell C. Hibbeler
- Matrix Analysis and Applied Linear Algebra. Carl D. Meyer
- Wolfram MathWorld - Dot product
- Wolfram MathWorld - Cross product
- Wolfram MathWorld - Euler formula
- Wolfram MathWorld - Rotation matrix
- Wolfram MathWorld - Euler angles
- Wikipedia - Dot product
- Wikipedia - Cross product
- Wikipedia - Euler's formula
- Wikipedia - Rotation matrix
- Wikipedia - Euler angles