1 Vectors & Matrices

1.1 Basics

Reading: Stewart Chapter 12, Thomas Calculus Chapter 12, Active Calculus Chapter 9

You should be able to answer the following questions after reading this section:

  • What is a vector?

  • What does it mean for two vectors to be equal?

  • How do we add two vectors together and multiply a vector by a scalar?

  • How do we determine the magnitude of a vector?

  • What is a unit vector?

  • How do we find a unit vector in the direction of a given vector?

Typically, we talk about 3-dimensional vectors (as discussed in Stewart and Thomas). However, since talking about \(n\)-dimensional vectors doesn’t require much more effort, we will talk about \(n\)-dimensional vectors instead.

Definition 1.1 An \(n\)-dimensional Euclidean space \(\mathbb{R}^n\) is the Cartesian product of \(n\) copies of the real line \(\mathbb{R}\).

Definition 1.2 An \(n\)-dimensional vector \(\textbf{v}\in \mathbb{R}^n\) is a tuple \[\begin{equation} \textbf{v} = \langle v_1,\dots, v_n \rangle \,, \end{equation}\] where \(v_i \in \mathbb{R}\).

In dimensions less than or equal to 3, we represent a vector geometrically by an arrow, whose length represents the magnitude.

Remark. A point in \(\mathbb{R}^n\) is also represented by an \(n\)-tuple but with round brackets. A vector connecting two points \(A= (a_1, \dots, a_n)\) and \(B=(b_1, \dots, b_n)\) can be constructed as \[\begin{equation*} \textbf{x} = \langle b_1-a_1, \dots, b_n - a_n \rangle \,. \end{equation*}\]

We denote the above vector as \(\vec{AB}\) where \(A\) is the tail (initial point) and \(B\) is the tip/head (terminal point). We denote \(\textbf{0}\) to be the zero vector, i.e., \[\begin{equation*} \textbf{0} = \langle 0, \dots, 0 \rangle \,. \end{equation*}\]
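As a quick numerical check of the \(\vec{AB}\) construction, here is a sketch in plain Python (the sample points are made up for illustration):

```python
# Hypothetical points A and B in R^3
A = (1.0, 2.0, 3.0)
B = (4.0, 6.0, 3.0)

# Vector AB = <b1 - a1, ..., bn - an>
AB = tuple(b - a for a, b in zip(A, B))
print(AB)  # (3.0, 4.0, 0.0)
```

Note that the same vector \(\vec{AB}\) arises from any pair of points with the same coordinate-wise differences; a vector records displacement, not position.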

Definition 1.3 The length of a vector \(\textbf{v}\) (denoted by \(| \textbf{v}|\)) is defined to be \[\begin{equation} |\textbf{v}| = \sqrt{ v_1^2 + \dots + v_n^2} \,. \end{equation}\]

Definition 1.4 A unit vector is a vector that has magnitude 1.

Exercise 1.1 Turn a vector \(\textbf{v} \in \mathbb{R}^n\) into a unit vector with the same direction.
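The exercise above amounts to dividing \(\mathbf{v}\) by its own length. A minimal sketch in plain Python (assuming \(\mathbf{v} \neq \mathbf{0}\); the function names are mine, not from the text):

```python
import math

def magnitude(v):
    """|v| = sqrt(v1^2 + ... + vn^2)  (Definition 1.3)."""
    return math.sqrt(sum(vi * vi for vi in v))

def normalize(v):
    """Unit vector v / |v| with the same direction; assumes v is nonzero."""
    m = magnitude(v)
    return tuple(vi / m for vi in v)

v = (3.0, 4.0)
print(magnitude(v))  # 5.0
print(normalize(v))  # (0.6, 0.8)
```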

Rules to manipulate vectors

Let \(\textbf{a}, \textbf{b} \in \mathbb{R}^n\) and \(c,d \in \mathbb{R}\). Then,

\[\begin{equation*} c( \textbf{a} + \textbf{b}) = \langle c a_1 + c b_1, \dots, c a_n + c b_n \rangle = c\textbf{a} + c\textbf{b} \,, \end{equation*}\] and \[\begin{equation*} (c+d) \textbf{a} = c\mathbf{a} + d\mathbf{a} \,. \end{equation*}\]

These formulas are deceptively simple. Make sure you understand all the implications.
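One way to internalize the rules is to verify them componentwise on concrete numbers. A sketch in plain Python (the vectors and scalars here are arbitrary sample values):

```python
# Componentwise vector addition and scalar multiplication
def add(u, v):
    return tuple(ui + vi for ui, vi in zip(u, v))

def scale(s, u):
    return tuple(s * ui for ui in u)

a = (1.0, -2.0, 5.0)
b = (3.0, 0.5, -1.0)
c, d = 2.0, -3.0

# c(a + b) = c a + c b
assert scale(c, add(a, b)) == add(scale(c, a), scale(c, b))
# (c + d) a = c a + d a
assert scale(c + d, a) == add(scale(c, a), scale(d, a))
```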

Because of this rule, it is sometimes convenient to write vectors in terms of the elementary (standard basis) vectors: \[\begin{equation*} \mathbf{u} = u_1 \mathbf{e_1} + \dots + u_n \mathbf{e_n} \,, \end{equation*}\] where \(\mathbf{e_i} = \langle 0,\dots, 1, \dots, 0\rangle\) is the vector whose entries are all zero except the \(i^{th}\), which is 1.

In 3D, \[\begin{equation*} \mathbf{e_1} = \mathbf{i} \,, \qquad \mathbf{e_2} = \mathbf{j} \,, \qquad \mathbf{e_3} = \mathbf{k} \,. \end{equation*}\]

Properties of vector operations

Read the book.

(Make sure you understand the geometric interpretation.)

1.2 Products

1.2.1 Dot product

You should be able to answer the following questions after reading this section:

  • How is the dot product of two vectors defined and what geometric information does it tell us?

  • How can we tell if two vectors in \(\mathbb{R}^n\) are perpendicular?

  • How do we find the projection of one vector onto another?

Definition 1.5 The dot product of vectors \(\textbf{u} = \langle u_1, \dots, u_n \rangle\) and \(\textbf{v} = \langle v_1, \dots, v_n \rangle\) in \(\mathbb{R}^n\) is the scalar \[\begin{equation*} \textbf{u} \cdot \textbf{v} = u_1 v_1 +\dots + u_n v_n \,. \end{equation*}\]

Properties of dot product

Let \(\textbf{u}, \textbf{v}, \textbf{w} \in \mathbb{R}^n\). Then,

  1. \(\textbf{u}\cdot \textbf{v} = \textbf{v}\cdot \textbf{u}\),

  2. \((\textbf{u} + \textbf{v})\cdot \textbf{w} = (\textbf{u}\cdot \textbf{w}) + (\textbf{v}\cdot \textbf{w})\),

  3. If \(c\) is a scalar, then \((c \textbf{u})\cdot \textbf{w} = c (\textbf{u}\cdot \textbf{w})\).

Theorem 1.1 (Law of cosines) If \(\theta\) is the angle between the vectors \(\textbf{u}\) and \(\textbf{v}\), then \[\begin{equation*} \textbf{u}\cdot \textbf{v} = |\textbf{u}|| \textbf{v}| \cos \theta \,. \end{equation*}\]

Corollary 1.1 Two vectors \(\textbf{u}\) and \(\textbf{v}\) are orthogonal to each other if and only if \(\textbf{u} \cdot \textbf{v} = 0\).
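Solving Theorem 1.1 for \(\theta\) gives a way to compute angles from coordinates. A sketch in plain Python (nonzero sample vectors chosen so that the answer is recognizable):

```python
import math

def dot(u, v):
    """u . v = u1 v1 + ... + un vn  (Definition 1.5)."""
    return sum(ui * vi for ui, vi in zip(u, v))

def angle_between(u, v):
    """theta = arccos( u.v / (|u||v|) ); assumes u, v are nonzero."""
    norm_u = math.sqrt(dot(u, u))
    norm_v = math.sqrt(dot(v, v))
    return math.acos(dot(u, v) / (norm_u * norm_v))

u = (1.0, 0.0)
v = (1.0, 1.0)
print(angle_between(u, v))  # approximately pi/4

# Corollary 1.1: perpendicular vectors have zero dot product
assert dot((1.0, 0.0), (0.0, 7.0)) == 0.0
```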


Let \(\textbf{u}, \textbf{v}\in \mathbb{R}^n\) with \(\mathbf{v} \neq \mathbf{0}\). The component of \(\textbf{u}\) in the direction of \(\textbf{v}\) is the scalar \[\begin{equation*} \mathrm{comp}_{\mathbf{v}}\mathbf{u} = \frac{\mathbf{u}\cdot \mathbf{v}}{|\mathbf{v}|} \,, \end{equation*}\] and the projection of \(\mathbf{u}\) onto \(\mathbf{v}\) is the vector \[\begin{equation*} \mathrm{proj}_{\mathbf{v}}\mathbf{u} =\left( \mathbf{u}\cdot \frac{\mathbf{v}}{|\mathbf{v}|}\right) \frac{\mathbf{v}}{|\mathbf{v}|} = \frac{\mathbf{u}\cdot \mathbf{v}}{\mathbf{v} \cdot\mathbf{v}} \mathbf{v} \,. \end{equation*}\]
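The second formula for \(\mathrm{proj}_{\mathbf{v}}\mathbf{u}\) translates directly into code. A sketch in plain Python (sample vectors chosen so the geometry is easy to see):

```python
def dot(u, v):
    return sum(ui * vi for ui, vi in zip(u, v))

def proj(u, v):
    """proj_v(u) = (u.v / v.v) v; assumes v is nonzero."""
    s = dot(u, v) / dot(v, v)
    return tuple(s * vi for vi in v)

u = (2.0, 3.0)
v = (4.0, 0.0)  # points along the x-axis
print(proj(u, v))  # (2.0, 0.0): only the horizontal part of u survives
```

Projecting onto the \(x\)-axis simply keeps the first coordinate, which matches the picture of dropping a perpendicular from the tip of \(\mathbf{u}\) onto the line through \(\mathbf{v}\).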

1.2.2 3D special: Cross product

This concept is very specific to \(\mathbb{R}^3\). It will not make sense in other dimensions.

Definition 1.6 Let \(\mathbf{a}, \mathbf{b} \in \mathbb{R}^3\). The cross product of \(\mathbf{a}\) and \(\mathbf{b}\) is defined to be \[\begin{equation*} \mathbf{a} \times \mathbf{b} = \langle a_2 b_3 - a_3 b_2, a_3b_1 - a_1 b_3, a_1b_2 - a_2b_1 \rangle \,. \end{equation*}\]

Theorem 1.2 Let \(\theta\) be the angle between \(\mathbf{a}\) and \(\mathbf{b}\). Then, \[\begin{equation*} | \mathbf{a} \times \mathbf{b} | = |\mathbf{a}||\mathbf{b}| \sin\theta \,. \end{equation*}\]

Theorem 1.3 The vector \(\mathbf{a}\times \mathbf{b}\) is orthogonal to both \(\mathbf{a}\) and \(\mathbf{b}\).
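Definition 1.6 and Theorem 1.3 can be checked numerically. A sketch in plain Python using \(\mathbf{i} \times \mathbf{j} = \mathbf{k}\) as the test case:

```python
def cross(a, b):
    """Cross product (Definition 1.6); only makes sense in R^3."""
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def dot(u, v):
    return sum(ui * vi for ui, vi in zip(u, v))

i = (1.0, 0.0, 0.0)
j = (0.0, 1.0, 0.0)
k = cross(i, j)
print(k)  # (0.0, 0.0, 1.0): i x j = k

# Theorem 1.3: a x b is orthogonal to both a and b
assert dot(k, i) == 0.0 and dot(k, j) == 0.0
```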

1.2.3 Distance from a point

We can use the cross and dot products to measure the distance from a point to a plane or to a line.

Let \(P \in \mathbb{R}^3\) and let \(\vec{r}(t) = \vec{r}_0 + t \vec{v}\) be a line through the point \(R_0\) (with position vector \(\vec{r}_0\)) in the direction \(\vec{v}\). Then the distance from \(P\) to the line is \[ \mathrm{dist} = \frac{| \vec{R_0 P} \times \vec{v}|}{| \vec{v} |} \,.\]
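A sketch of the distance formula in plain Python (the function name and sample data are mine; the line here is the \(x\)-axis, so the distance from \((0,3,4)\) should be \(5\)):

```python
import math

def cross(a, b):
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def norm(v):
    return math.sqrt(sum(vi * vi for vi in v))

def dist_point_line(P, R0, v):
    """Distance from point P to the line through R0 with direction v:
    |R0P x v| / |v|."""
    R0P = tuple(p - r for r, p in zip(R0, P))
    return norm(cross(R0P, v)) / norm(v)

# Line: the x-axis.  Point: (0, 3, 4), a 3-4-5 triangle away from it.
print(dist_point_line((0.0, 3.0, 4.0), (0.0, 0.0, 0.0), (1.0, 0.0, 0.0)))  # 5.0
```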

1.3 Matrices

A matrix is a 2-dimensional array of numbers arranged in rows and columns. An \(m \times n\) matrix has \(m\) rows and \(n\) columns:

\[ A = \begin{pmatrix} A_{11} & \dots & A_{1n}\\ \vdots & & \vdots \\ A_{m1} & \dots & A_{mn} \end{pmatrix}\]

Another way to write out matrix \(A\) is \[ A = (A_{ij})\] where the first index \(i\) represents the row and the second index \(j\) represents the column.

1.3.1 Operations on matrices

  1. Addition: let \(A\) and \(B\) be two matrices of the same dimension \(m\times n\). Then \(A + B\) is an \(m\times n\) matrix such that \[[A + B]_{ij} = A_{ij} + B_{ij}.\]

  2. Scalar multiplication: let \(A\) be an \(m\times n\) matrix and \(c\) a scalar. Then \(cA\) is an \(m\times n\) matrix such that \[[cA]_{ij} = c\,A_{ij}.\]

  3. Matrix multiplication: let \(A\) be an \(m\times n\) matrix and \(B\) an \(n\times l\) matrix. Then the product \(AB\) is an \(m\times l\) matrix such that \[ [AB]_{ij} = \sum_{k=1}^{n} A_{ik} B_{kj} \,.\]
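The three operations above can be sketched in plain Python with matrices stored as lists of rows (sample matrices are arbitrary):

```python
def mat_add(A, B):
    """[A + B]_ij = A_ij + B_ij; A and B must have the same dimensions."""
    return [[a + b for a, b in zip(rowA, rowB)] for rowA, rowB in zip(A, B)]

def mat_scale(c, A):
    """[cA]_ij = c * A_ij."""
    return [[c * a for a in row] for row in A]

def mat_mul(A, B):
    """[AB]_ij = sum_k A_ik B_kj; A is m x n, B is n x l."""
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))]
            for i in range(len(A))]

A = [[1, 2], [3, 4]]
B = [[5, 6], [7, 8]]
print(mat_mul(A, B))  # [[19, 22], [43, 50]]
```

Note that the inner sum ranges over the shared index \(k\): each entry of \(AB\) is the dot product of a row of \(A\) with a column of \(B\).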

1.3.2 Linear transformation

A linear transformation is a function \(f: \mathbb{R}^n \to \mathbb{R}^m\) such that \[ f(a \vec{u} + b \vec{v} ) = a f(\vec{u}) + b f(\vec{v}) \] for all \(a,b \in \mathbb{R}\) and \(\vec{u},\vec{v} \in \mathbb{R}^n\).

It turns out that every linear transformation \(f: \mathbb{R}^n \to \mathbb{R}^m\) can be represented by an \(m\times n\) matrix \(A\), so that \(f(\vec{u}) = A\vec{u}\); the \(j^{th}\) column of \(A\) is \(f(\mathbf{e_j})\).
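As an illustration, rotation of the plane by \(90^\circ\) is a linear map \(\mathbb{R}^2 \to \mathbb{R}^2\), and we can check the linearity property numerically. A sketch in plain Python (the rotation matrix and sample values are my choice):

```python
import math

def apply(A, v):
    """Matrix-vector product: f(v) = A v, with A stored as a list of rows."""
    return tuple(sum(row[j] * v[j] for j in range(len(v))) for row in A)

# Rotation by 90 degrees counterclockwise
theta = math.pi / 2
R = [[math.cos(theta), -math.sin(theta)],
     [math.sin(theta),  math.cos(theta)]]

u, v = (1.0, 0.0), (0.0, 1.0)
a, b = 2.0, 3.0

# Linearity: f(a u + b v) == a f(u) + b f(v)
lhs = apply(R, tuple(a * ui + b * vi for ui, vi in zip(u, v)))
rhs = tuple(a * x + b * y for x, y in zip(apply(R, u), apply(R, v)))
assert all(abs(l - r) < 1e-12 for l, r in zip(lhs, rhs))
```

The columns of `R` are exactly the images of the basis vectors \(\mathbf{e_1}\) and \(\mathbf{e_2}\) under the rotation.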