
Author Topic: Unit 3 Vectors cookbook


RuiAce

Unit 3 Vectors cookbook
« on: July 29, 2019, 04:22:12 pm »

The work done in Unit 1 is continued in Unit 3 mostly through work in three-dimensional space. A \(z\)-axis is now included and each vector has a third component.
Conventionally, the \(z\)-axis points vertically upwards, the \(y\)-axis points horizontally to the right, and the \(x\)-axis points towards you, the observer. (Note that because it's now a 3D setting, it's also possible to move forwards and backwards, not just up/down/left/right!)

Of course, physically drawing this is hard, so we just draw an orthogonal projection on the paper. (Typically the \(x\)-axis will point at a \(45^\circ\) angle against both other axes.)

Unit 1 work in 3D
The vector is typically still written as \( \underset{\sim}{v} \), or possibly typed as \(\mathbf{v}\). It can be expressed in column vector notation, with the third component denoting how far the vector goes in the \(z\)-direction. The following vector is from the origin to the point \( (v_1, v_2, v_3)\):
\[ \mathbf{v} = \begin{pmatrix} v_1\\v_2\\v_3 \end{pmatrix} \]

The standard unit vectors are now \(\mathbf{i} = \begin{pmatrix}1\\0\\0\end{pmatrix}\), \(\mathbf{j}=\begin{pmatrix}0\\1\\0\end{pmatrix}\) and \(\mathbf{k}=\begin{pmatrix}0\\0\\1\end{pmatrix}\).
Each vector can still be decomposed into an equivalent component form: \( \begin{pmatrix} v_1\\v_2\\v_3 \end{pmatrix} = v_1 \mathbf{i} + v_2\mathbf{j} + v_3\mathbf{k} \).

Addition and subtraction are still done component-wise. They can still be represented through tip-to-tail procedures geometrically.
\[ \begin{pmatrix}v_1\\v_2\\v_3\end{pmatrix} + \begin{pmatrix}w_1\\w_2\\w_3\end{pmatrix} = \begin{pmatrix}v_1+w_1\\v_2+w_2\\v_3+w_3\end{pmatrix} \]

Scalar multiplication is also still done component-wise. Multiplication by a positive scalar still represents a dilation (stretch) of the vector, and multiplication by a negative scalar also reflects the vector in the opposite direction.
\[ \lambda \begin{pmatrix} v_1\\v_2\\v_3\end{pmatrix} = \begin{pmatrix}\lambda v_1\\ \lambda v_2\\ \lambda v_3 \end{pmatrix} \]
The vector from a point \(A(a_1, a_2, a_3) \) to another point \(B(b_1, b_2, b_3)\) can also be found by taking the coordinates of \(B\) and subtracting those of \(A\). This gives us \( \begin{pmatrix}b_1-a_1\\b_2-a_2\\b_3-a_3 \end{pmatrix} \).
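These component-wise rules are easy to sanity-check computationally. Here's a minimal Python sketch using plain tuples; the helper names (`add`, `scale`, `vector_AB`) are just my own illustrative choices, not anything from the syllabus.

```python
def add(v, w):
    # tip-to-tail addition: add matching components
    return tuple(vi + wi for vi, wi in zip(v, w))

def scale(lam, v):
    # scalar multiplication: dilate every component by lambda
    return tuple(lam * vi for vi in v)

def vector_AB(A, B):
    # vector from point A to point B: coordinates of B minus those of A
    return tuple(b - a for a, b in zip(A, B))

print(add((1, 2, 3), (4, 5, 6)))        # (5, 7, 9)
print(scale(-2, (1, 0, 3)))             # (-2, 0, -6)
print(vector_AB((1, 1, 1), (4, 0, 2)))  # (3, -1, 1)
```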

The midpoint of the line segment \(AB\) can still be found the usual way: \( \frac12(\mathbf{a}+\mathbf{b}) \), where \(\mathbf{a} = \overrightarrow{OA}\) and \(\mathbf{b}=\overrightarrow{OB}\), with \(O\) the origin \((0,0,0)\).

Length is defined through a three-dimensional version of Pythagoras's theorem. The formula is
\[ |\mathbf{v}| = \left| \begin{pmatrix}v_1\\v_2\\v_3\end{pmatrix} \right| = \sqrt{v_1^2 + v_2^2 + v_3^2}. \]
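As a quick computational check of the length formula, a short Python sketch (the `magnitude` name is just illustrative):

```python
import math

def magnitude(v):
    # 3D Pythagoras: |v| = sqrt(v1^2 + v2^2 + v3^2)
    return math.sqrt(sum(vi * vi for vi in v))

print(magnitude((2, 3, 6)))  # 7.0, since 4 + 9 + 36 = 49
```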
Direction needs to be defined more carefully now for the polar form \(\mathbf{v} = [r, \theta, \phi] \), where \(r = |\mathbf{v}|\).

The angle \(\theta\) refers to the angle in the \(x\)-\(y\) plane. Imagine that the vector is projected (pushed) vertically downwards so that its \(x\)- and \(y\)-components are unchanged, but its \(z\)-component is now 0. Essentially, we are considering the vector \( v_1\mathbf{i} + v_2\mathbf{j} + 0\mathbf{k} \).

\(\theta\) is then obtained as though we were considering \(\theta\) for the 2D vector \( \begin{pmatrix}v_1\\ v_2\end{pmatrix} \). This is the same manner as we saw in Unit 1.

The angle \(\phi\), known as the altitude angle, is measured from the positive \(z\)-axis (the one pointing up) towards the negative \(z\)-axis (a hidden one pointing downwards), so it only takes values between \(0\) and \(180^\circ\).

To actually convert \(\mathbf{v} = v_1\mathbf{i} + v_2\mathbf{j} + v_3\mathbf{k}\) from rectangular (Cartesian) form to polar form, we know that \(r = \sqrt{v_1^2+v_2^2+v_3^2} \). Drawing a right-angled triangle in the \(x\)-\(y\) plane (for \(\theta\)) and another in the vertical plane containing \(\mathbf{v}\) (for \(\phi\)) essentially illustrates the idea for the angles. (As hinted above, \(\theta\) works the same way as in two dimensions.)

So we have
\begin{align*}
r &=\sqrt{x^2+y^2+z^2}= \sqrt{v_1^2 + v_2^2 + v_3^2}\\
\tan \theta &=\frac{y}{x}= \frac{v_2}{v_1}\\
\cos \phi &= \frac{z}{r} = \frac{v_3}{r}
\end{align*}
Going the other way, we first have
\[v_3 = z = r\cos\phi. \]
The other two require a bit more thinking: projecting the vector down onto the \(x\)-\(y\) plane gives a 2D vector of length \(r\sin\phi\), which then plays the role of \(r\) in the usual 2D conversion. Essentially:
\begin{align*}
v_1 &= x = r\cos\theta\sin\phi\\
v_2 &= y = r\sin\theta\sin\phi
\end{align*}
(Observe how \(r\cos\theta\) shows up for \(x\) and \(r\sin\theta\) shows up for \(y\).)
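The two conversions can be sketched in Python as follows, assuming a nonzero vector (`to_polar` and `to_cartesian` are my own illustrative names):

```python
import math

def to_polar(v):
    # returns (r, theta, phi) for a nonzero vector v
    x, y, z = v
    r = math.sqrt(x * x + y * y + z * z)
    theta = math.atan2(y, x)  # atan2 handles all quadrants, unlike a bare tan^-1(y/x)
    phi = math.acos(z / r)    # altitude angle, measured from the positive z-axis
    return r, theta, phi

def to_cartesian(r, theta, phi):
    # x = r cos(theta) sin(phi), y = r sin(theta) sin(phi), z = r cos(phi)
    return (r * math.cos(theta) * math.sin(phi),
            r * math.sin(theta) * math.sin(phi),
            r * math.cos(phi))

print(to_polar((0, 0, 2)))  # (2.0, 0.0, 0.0): straight up the z-axis
```

Note the use of `atan2` rather than a bare inverse tangent, so that \(\theta\) lands in the correct quadrant.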

The scalar product is defined in a similar way for two vectors in three dimensions. It turns out that it satisfies the same geometric properties.
\begin{align*}
\mathbf{v}\cdot \mathbf{w} &= v_1w_1 + v_2w_2 + v_3w_3\\
\mathbf{v}\cdot \mathbf{w} &= |\mathbf{v}||\mathbf{w}|\cos\theta
\end{align*}
where here, \(\theta\) is the angle between the two vectors.
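A common use of the two expressions together is rearranging for the angle between two vectors. A quick Python sketch (function names are just illustrative):

```python
import math

def dot(v, w):
    # component formula: v1 w1 + v2 w2 + v3 w3
    return sum(vi * wi for vi, wi in zip(v, w))

def angle_between(v, w):
    # rearrange v.w = |v||w| cos(theta) for theta
    mag = lambda u: math.sqrt(dot(u, u))
    return math.acos(dot(v, w) / (mag(v) * mag(w)))

print(dot((1, 2, 3), (4, -5, 6)))  # 4 - 10 + 18 = 12
print(math.degrees(angle_between((1, 0, 0), (0, 1, 0))))  # 90.0
```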

The corresponding unit vector to a vector \( \mathbf{v} \) is denoted \( \hat{\mathbf{v}} \), and given by the formula \( \hat{\mathbf{v} } = \dfrac{\mathbf{v}}{|\mathbf{v}|} \). (Recall that this is the vector in the direction of \(\mathbf{v}\), but with length 1.)

The vector projection is also found the same way through the formula below. It represents the same "squashing down" of one vector onto another.
\[ \mathbf{v}\text{ on }\mathbf{w} = (\mathbf{v}\cdot \hat{\mathbf{w}}) \hat{\mathbf{w}} \]
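In code it's convenient to avoid the square root entirely, since \( (\mathbf{v}\cdot\hat{\mathbf{w}})\hat{\mathbf{w}} = \frac{\mathbf{v}\cdot\mathbf{w}}{|\mathbf{w}|^2}\mathbf{w} \). A minimal Python sketch of this rearrangement:

```python
def project(v, w):
    # vector projection of v onto w, computed as ((v.w) / |w|^2) w
    dot_vw = sum(a * b for a, b in zip(v, w))
    w_len2 = sum(b * b for b in w)  # |w|^2, so no square root is needed
    lam = dot_vw / w_len2
    return tuple(lam * b for b in w)

print(project((3, 4, 5), (1, 0, 0)))  # (3.0, 0.0, 0.0): squashed onto the x-axis
```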

Parallel vectors in 3D are also vectors that are scalar multiples of each other, i.e. \( \mathbf{v} \parallel \mathbf{w} \) if \( \mathbf{v} = \lambda \mathbf{w} \) for some nonzero scalar \(\lambda\).

Perpendicular vectors in 3D are also vectors whose dot product equals 0, i.e. \( \mathbf{v} \cdot \mathbf{w} = 0 \).

New results in Unit 3
Instead of describing lines and curves through Cartesian means, we can alternatively describe them through vector forms.

A vector function describes how a curve behaves according to a parameter. The parameter may just be a placeholder (such as \(t\), \(k\) or \(\lambda\)), or it may hold some physical significance such as time. The function will be of the form
\[ \mathbf{r}(t) = x(t) \mathbf{i} + y(t)\mathbf{j} \]
in the 2D case, or
\[ \mathbf{r}(t) = x(t) \mathbf{i} + y(t)\mathbf{j}+z(t) \mathbf{k} \]
in the 3D case.

This is really just a "shorthand" for
\begin{align*}
x &= x(t)\\
y &= y(t)\\
z &= z(t),
\end{align*}
which shows that in reality, each of the components \(x\), \(y\) and \(z\) depends on \(t\) separately.

In the 2D case, it is often possible to convert these vector equations into a Cartesian form. The objective is to eliminate the parameter involved in the question. This can be done through various techniques.

For example, the unit circle is defined through a vector form by \(\mathbf{r}(t) = \cos t \mathbf{i} + \sin t \mathbf{j} \). This converts to
\[ x = \cos t\text{ and }y = \sin t, \]
and we may retrieve a Cartesian form through a Pythagorean identity:
\[x^2+y^2=\cos^2t+\sin^2t=1. \]
This is not always doable in the 3D case, and there are only a few situations where Cartesian equations are required in 3D contexts, as mentioned below.

A line can instead be described through a point \(\mathbf{a}\) it passes through, and the direction \(\mathbf{d}\) the line goes in. It takes the form:
\[ \mathbf{r}(t) = \mathbf{a} + t \mathbf{d}. \]
To find these vectors in the 3D case, suppose we want the line through \( A(a_1, a_2, a_3)\) and \(B(b_1, b_2, b_3)\).  Then the direction can be found by considering
\[ \mathbf{d} = \overrightarrow{AB} = \begin{pmatrix}b_1-a_1\\b_2-a_2\\b_3-a_3\end{pmatrix} \]
and whilst you can really pick any of the points to reflect your \(\mathbf{a}\), I would choose the one that corresponds to the first point in the direction vector. Here, my direction vector was \(\overrightarrow{AB}\), so I would consider \(\mathbf{a} = \overrightarrow{OA} = a_1\mathbf{i}+a_2\mathbf{j}+a_3\mathbf{k}\).
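Here's a small Python sketch of building \(\mathbf{r}(t) = \mathbf{a} + t\mathbf{d}\) from two points (the `line_through` name is my own, and the points are made up):

```python
def line_through(A, B):
    # r(t) = a + t d, with a = OA and d = AB
    d = tuple(b - a for a, b in zip(A, B))
    return lambda t: tuple(a + t * di for a, di in zip(A, d))

r = line_through((1, 0, 2), (3, 1, 5))
print(r(0))    # (1, 0, 2): the point A
print(r(1))    # (3, 1, 5): the point B
print(r(0.5))  # (2.0, 0.5, 3.5): the midpoint of AB
```

Restricting \(t\) to \([0,1]\) then gives exactly the line segment \(AB\) rather than the whole line.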

Converting this back into a Cartesian equation can only really be done in an awkward way, as seen in your formula sheet. The corresponding Cartesian equation to \(\mathbf{r}=\mathbf{a} + t\mathbf{d}\), using the same method of attempting to eliminate parameters, is
\[ \frac{x-a_1}{d_1} = \frac{y-a_2}{d_2} = \frac{z-a_3}{d_3}. \]
Vector equations of lines in 2D are found through similar means. However, their Cartesian equations will just be of the form
\[ \frac{x-a_1}{d_1} = \frac{y-a_2}{d_2} \]
and it is possible to make either \(x\) or \(y\) the subject of this equation.

A line segment between two points \(A\) and \(B\) reflects only the bit between those two points on the line. It turns out that its equation is also
\[ \mathbf{r}(t) = \mathbf{a}+t\mathbf{d}. \]
The difference between this and the above lies in the parameter. This time round, the parameter has a domain restriction - it is only allowed to take on values between \(0\) and \(1\). In general:
\begin{gather*}
\text{For a straight line, it is assumed that }t\in \mathbb{R}.\\
\text{For a straight line segment, it is assumed that }t\in [0,1].
\end{gather*}
(Note that other vector equations may have different domain restrictions on \(t\).)


When the parameter \(t\) reflects time, and \(\mathbf{r}(t)\) reflects the displacement of a particle at said time, we can determine if:
- the paths of two particles ever intersect, and
- if the particles ever collide in their motion.

To do this, suppose we have particles \(A\) and \(B\) with equations of motion \(\mathbf{r}_A(t)\) and \(\mathbf{r}_B(t)\). To see if the paths overlap, we need to equate and solve
\[ \mathbf{r}_A(s) = \mathbf{r}_B(t). \]
Carefully note how the parameter in one of the equations has been substituted out for a different variable! This is because having their paths overlap doesn't mean they need to be at the same spot at the same time! You can also alternatively consider \( \mathbf{r}_A(t) = \mathbf{r}_B(s) \) if you prefer.

To see if they do happen to collide, we can either:
- simply equate and solve \( \mathbf{r}_A(t) = \mathbf{r}_B(t) \) without changing parameters, or
- if we've already found where they overlap, see if in a corresponding pair of points \(s\) and \(t\), \(s\) actually equals to \(t\).
(Note: With the second method, you may get one solution, say, \(s=1\) and \(t=2\). Your other solution may be \(s=2\) and \(t=1\). This is NOT useful, as the values of \(s\) and \(t\) aren't matching for the same pair of points.)
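The distinction is easiest to see in a concrete (entirely made-up) example. In this hypothetical Python sketch, the two paths cross, but at different parameter values, so the particles never actually collide:

```python
from fractions import Fraction  # exact arithmetic, so equality tests are safe

def r_A(t):
    # particle A: position (t, 2t, 0) at time t
    return (t, 2 * t, 0)

def r_B(t):
    # particle B: position (t + 1, 6 - t, 0) at time t
    return (t + 1, 6 - t, 0)

# Paths: solve r_A(s) = r_B(t) with DIFFERENT parameters.
#   s = t + 1 and 2s = 6 - t  =>  2(t + 1) = 6 - t  =>  t = 4/3, s = 7/3.
s, t = Fraction(7, 3), Fraction(4, 3)
print(r_A(s) == r_B(t))  # True: the paths do intersect at this point
# Collision: r_A(t) = r_B(t) forces t = t + 1 in the first component,
# which has no solution, so the particles never collide.
print(s == t)            # False
```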

Planes and spheres are examples of surfaces. Describing them through parametric vector equations would actually require two parameters, and hence we define their vector equations through other means.

A sphere is defined through its centre and radius \(\rho\). In Cartesian form, suppose that the centre is at \( C(h,k,\ell) \). Then its equation is given by
\[(x-h)^2 + (y-k)^2 + (z-\ell)^2 = \rho^2. \]
For a vector form, we rely on the fact that every point on a sphere is at an equal distance away from the centre. (That distance being, as expected, the radius \(\rho\).) Hence, we can assign the coordinate vector \(\mathbf{c} = (h,k,\ell)\) for the centre of the sphere. Then the sphere's equation can be described using the magnitude as
\[ |\mathbf{r} - \mathbf{c}| =\rho. \]
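The vector form translates directly into a membership test: a point lies on the sphere exactly when its distance from the centre equals \(\rho\). A quick Python sketch (the `on_sphere` name is illustrative; `math.dist` needs Python 3.8+, and a tolerance guards against floating-point error):

```python
import math

def on_sphere(point, centre, radius, tol=1e-9):
    # |r - c| = rho, up to floating-point tolerance
    return abs(math.dist(point, centre) - radius) < tol

print(on_sphere((3, 4, 12), (0, 0, 0), 13))  # True: 9 + 16 + 144 = 169 = 13^2
print(on_sphere((1, 1, 1), (0, 0, 0), 13))   # False
```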


In 2D, the Cartesian plane was the only plane you could ever have. (The rest were points, lines, curves and so on.) A plane in 3D space is basically an instance of the Cartesian plane found within the 3D context.

Planes are uniquely characterised by a point, and two direction vectors that "span" the plane. (Note how there's 2 direction vectors here, whereas lines only had 1. This is why we actually need two parameters to describe a plane in a parametric vector form.)

Given two vectors, we can always find a plane in the direction of the two vectors. (As for how high up or low it is, we also need the point.) Ignoring where the plane is placed, however, we note that there is one and only one direction perpendicular to said plane.

This vector is perpendicular to the two vectors \(\mathbf{v}\) and \(\mathbf{w}\) that made the plane, and is given by the vector product (a.k.a. cross product) of the two:
\[ \mathbf{v}\times \mathbf{w} = \begin{pmatrix}v_2 w_3 - v_3 w_2\\ v_3 w_1 - v_1 w_3\\ v_1 w_2 - v_2 w_1 \end{pmatrix}. \]
It may be worth noting that the magnitude of this vector, i.e. \( |\mathbf{v} \times \mathbf{w}| \), equals the area of the parallelogram spanned by the vectors \(\mathbf{v}\) and \(\mathbf{w}\). It is given by the formula \( |\mathbf{v}\times \mathbf{w}| = |\mathbf{v}| |\mathbf{w}| |\sin\theta|\). The direction is the main thing useful to us though.

It may be worth noting that parallel vectors have \(\mathbf{v} \times \mathbf{w} = \mathbf{0} \).
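A Python sketch of the component formula, checking both the perpendicularity and the parallel-gives-zero facts (the `cross` name is just illustrative):

```python
def cross(v, w):
    # component formula for the vector (cross) product v x w
    v1, v2, v3 = v
    w1, w2, w3 = w
    return (v2 * w3 - v3 * w2,
            v3 * w1 - v1 * w3,
            v1 * w2 - v2 * w1)

n = cross((1, 0, 0), (0, 1, 0))
print(n)  # (0, 0, 1): i x j = k
# the result is perpendicular to both inputs (here, dot with i is 0)
print(sum(a * b for a, b in zip(n, (1, 0, 0))))  # 0
print(cross((1, 2, 3), (2, 4, 6)))  # (0, 0, 0): parallel vectors give the zero vector
```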


It turns out that every plane can be described by considering a point it passes through (with coordinate vector \(\mathbf{a}\)), but also by a vector that is perpendicular to the plane. The vector perpendicular to the plane can usually be computed through the vector product. It is then labelled as \(\mathbf{n}\).

The point-normal form is then the vector form we use to describe planes. It takes the form:
\[ \mathbf{r}\cdot \mathbf{n} = \mathbf{a}\cdot \mathbf{n} \]
The formula arises from the equivalent form \( \mathbf{n}\cdot (\mathbf{r}-\mathbf{a})=0 \). The expression \(\mathbf{r}-\mathbf{a}\) is a vector lying within the plane; translating back to the origin first like this is mandatory, because it is this within-the-plane vector (not \(\mathbf{r}\) itself) that is perpendicular to \(\mathbf{n}\). (Then recall that a dot product of 0 implies perpendicularity.)

Expanding the dot product actually gives us a Cartesian form for the plane. In general, it will be of the form
\[ n_1x+n_2y+n_3z = k \]
where \(k\) is a fixed (constant) scalar; in particular \(k = \mathbf{a}\cdot \mathbf{n}\).
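Expanding \( \mathbf{r}\cdot \mathbf{n} = \mathbf{a}\cdot \mathbf{n} \) is purely mechanical, as this small Python sketch shows (the point and normal are made up for illustration):

```python
def plane_cartesian(a, n):
    # point-normal form r.n = a.n, expanded as n1 x + n2 y + n3 z = k
    n1, n2, n3 = n
    k = sum(ai * ni for ai, ni in zip(a, n))  # k = a.n
    return n1, n2, n3, k

# hypothetical plane through the point (1, 2, 3) with normal (2, -1, 4)
print(plane_cartesian((1, 2, 3), (2, -1, 4)))  # (2, -1, 4, 12): 2x - y + 4z = 12
```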
« Last Edit: August 05, 2019, 09:40:08 am by RuiAce »