

### AuthorTopic: First Year University Mathematics Questions  (Read 410 times)


#### Shadowxo

• MOTM: FEB 17
• Posts: 875
• Graphing is where I draw the line.
• Respect: +458
##### First Year University Mathematics Questions
« on: September 10, 2017, 11:04:09 pm »
+7
Here's a place to ask all your first year uni mathematics-related questions!
I couldn't find any others, so I thought it would be a good idea to make one.
Completed VCE 2016
2015: Biology
2016: Methods | Physics | Chemistry | Specialist Maths | Literature
ATAR : 97.90
2017: BSci (Maths or Engineering: To be determined) at MelbUni
Feel free to pm me if you have any questions!

#### Shadowxo

• MOTM: FEB 17
• Posts: 875
• Graphing is where I draw the line.
• Respect: +458
##### Re: First Year University Mathematics Questions
« Reply #1 on: September 10, 2017, 11:06:41 pm »
0
I'll start it off with a question of my own:
Do the vectors in a basis of the row space of a matrix, together with the vectors in a basis of its null space (solution space), form a basis of $\mathbb{R}^n$, where $n$ is the number of columns in the matrix?

#### Sine

• Victorian Moderator
• Part of the furniture
• Posts: 1723
• Respect: +348
##### Re: First Year University Mathematics Questions
« Reply #2 on: September 10, 2017, 11:12:26 pm »
0
I'm first year uni but doing a 2nd year maths unit LOL, I'll probably be using this thread soon. Great idea Shadowxo

#### RuiAce

• HSC Lecturer
• HSC Moderator
• Great Wonder of ATAR Notes
• Posts: 5927
• I don't write essays. I write proofs.
• Respect: +1071
##### Re: First Year University Mathematics Questions
« Reply #3 on: September 10, 2017, 11:36:19 pm »
+8
I'll start it off with a question of my own:
Do the vectors in a basis of the row space of a matrix, together with the vectors in a basis of its null space (solution space), form a basis of $\mathbb{R}^n$, where $n$ is the number of columns in the matrix?
$\text{This would impose the condition that the matrix in question, say, }A\\ \text{is a square matrix.}\\ \text{In general, a matrix need not map vectors from }\mathbb{R}^n\text{ to }\mathbb{R}^n.\text{ We may have }\mathbb{R}^m \to \mathbb{R}^n.$
This can be achieved by simply changing the dimensions of the matrix.
$\text{In addition, quite often the vectors of the row space are written down}\\ \text{as row vectors. Hence, if we desire a basis,}\\ \text{we should exploit the transpose.}$
$\text{Therefore, let }A = \begin{pmatrix}\textbf{v}_1^T \\ \vdots \\ \textbf{v}_n^T\end{pmatrix}\\ \text{where }\textbf{v}_k\in \mathbb{R}^n\qquad \forall k=1,\dots,n\text{ are column vectors}$
________________________________________
$\text{The null-space, however, is the set of vectors }\textbf{x}\text{ such that}\\ A\textbf{x} = \begin{pmatrix}\textbf{v}_1\cdot \textbf{x}\\ \vdots \\ \textbf{v}_n \cdot \textbf{x}\end{pmatrix} = \textbf{0}\\ \text{where the dot product formula can be proven by expanding }\textbf{x}\text{ as the }n\text{-tuple vector, and same for }\textbf{v}_k^T$
$\text{And at the same time, the row-space is the set of linear combinations of}\\ S = \{ \textbf{v}_1, \dots, \textbf{v}_n \}.$
$\text{Because }\textbf{v}_k \cdot \textbf{x} = 0\\ \text{for any vector }\textbf{x}\text{ in the null-space}\\ \text{and any row vector }\textbf{v}_k\text{ of the matrix,}\\ \text{it follows that every vector in the null-space is orthogonal to each of the rows.}$
$\text{But by extension, because we know that the dot product is a linear operator}\\ \text{this affirms that EVERY linear combination of the vectors in }S\\ \text{i.e. EVERY vector in the row-space is orthogonal to those in the null-space}$
$\text{Hence, it follows that the row space and the null space}\\ \text{are orthogonal complements of each other.}$
________________________________________
$\text{But a very standard linear algebra theorem says that}\\ \text{for any subspace }S, \, S \oplus S^\perp = V\\ \text{where }S^\perp\text{ is the orthogonal complement}\\ \text{and }V\text{ is the original vector space.}$
$\text{From properties of direct sums}\\ \text{this immediately implies that the union of the bases for}\\ \text{the row space and the null space}\\ \text{forms a basis for }\mathbb{R}^n.$
Handwavy - Some results are assumed trivial and left as an exercise. Also potentially poorly explained with my 11:36PM dead brain.
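As a concrete sanity check (my own sketch, not part of the argument above; the example matrix is arbitrary), sympy can compute bases for both spaces and verify that stacking them together gives full rank:

```python
# Sanity check (illustrative): for a real square matrix, a basis of the
# row space together with a basis of the null space should span R^n.
import sympy as sp

# Arbitrary 3x3 example with rank 2, so the null space is non-trivial.
A = sp.Matrix([[1, 2, 3],
               [2, 4, 6],
               [1, 0, 1]])

row_basis = [v.T for v in A.rowspace()]   # rowspace() gives row vectors; transpose to columns
null_basis = A.nullspace()                # already column vectors

# Place all basis vectors side by side; a basis of R^3 means rank 3.
combined = sp.Matrix.hstack(*(row_basis + null_basis))
print(combined.rank())  # 3, so together they do form a basis of R^3
```

Here the null space is spanned by $(-1,-1,1)^T$, which is orthogonal to both rows in the row space basis, exactly as the argument predicts.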
« Last Edit: September 11, 2017, 04:25:45 am by RuiAce »
Gone overseas! Inbox me if you wish

ATAR: 98.60
Currently studying: Bachelor of Science (Advanced Mathematics)/Bachelor of Science (Computer Science) @ UNSW
Formerly studying: Bachelor of Actuarial Studies/Bachelor of Science (Advanced Mathematics)

#### AngelWings

• MOTM: NOV 17
• Victorian
• Posts: 647
• "Angel wings, please guide me..."
• Respect: +156
##### Re: First Year University Mathematics Questions
« Reply #4 on: December 07, 2017, 09:40:42 pm »
0
Started elsewhere:
Spoiler
Still need help with Taylor series/ approximations here. Learnt it back in first year maths (MTH1030) and need to revise this. Forgotten most of it. Mostly I just need a proof and how it works again. I've also forgotten mostly about limits, so yeah... that'd be great if you could help.
Which parts were you expected to prove? The existence of the $k$-th order Taylor expansion, or that if remainder -> 0 then f is represented by the Taylor series?
To be honest, I’ve forgotten some of the basics. The most common one in these books, after a bit of dissecting, incorporates Taylor series on e^x, giving approximately 1 + x + x^2 + x^3 +... Not so sure how we got from Point A to B and would just like to see how to do it again, how we can prove this and so forth.
$\text{But that's the series for }\frac{1}{1-x}.\\ \text{The Taylor series for }e^x\text{ is }1+x+\frac{x^2}{2!}+\frac{x^3}{3!}+\dots$
Did you mix some of them up?
That is entirely possible considering the only line of working I have been given is: $e^x$ is approximately $1 + x$, where any term of order 2+ is ignored because it'd be ridiculously small and thus negligible. ($x$ is meant to be tiny, e.g. $10^{-8}$, hence why it'd be negligible. At least in the context where I got these from - a genetics book. See below.)
Intending on a theoretical genetics project for Honours, which involves some first year math, parts of which my memory stalls on. After previous experience, my intended supervisor advised that during this break, I should go through two genetics books. Both of them indirectly expect you to use Taylor approximations, which I can't remember how it works or how to do them. Hence the revision.

I got that answer because my notes are old and written. I must've been haphazardly copying them down quickly during the lectures. Must've missed the factorials.
Still not quite so sure how we would get the one you wrote for $e^x$ above though, but maybe it's because I've forgotten large chunks of content.
2013 - 14: Psychology | English Language | LOTE | Mathematical Methods (CAS) | Further Mathematics | Chemistry
2015 - :

#### RuiAce

• HSC Lecturer
• HSC Moderator
• Great Wonder of ATAR Notes
• Posts: 5927
• I don't write essays. I write proofs.
• Respect: +1071
##### Re: First Year University Mathematics Questions
« Reply #5 on: December 07, 2017, 10:00:25 pm »
+3
Started elsewhere:
Spoiler
That is entirely possible considering the only line of working I have been given is: $e^x$ is approximately $1 + x$, where any term of order 2+ is ignored because it'd be ridiculously small and thus negligible. ($x$ is meant to be tiny, e.g. $10^{-8}$, hence why it'd be negligible. At least in the context where I got these from - a genetics book. See below.)
I got that answer because my notes are old and written. I must've been haphazardly copying them down quickly during the lectures. Must've missed the factorials.
Still not quite so sure how we would get the one you wrote for $e^x$ above though, but maybe it's because I've forgotten large chunks of content.
$\text{Well ok, when it comes to approximations}\\ \text{you just approximate using whatever's reasonable enough.}$
$\text{As for the series itself,}\\ \text{perhaps the series of the following four functions should be memorised:}$
\begin{align*}\frac{1}{1-x}&=1+x+x^2+x^3+\dots\\ e^x&=1+\frac{x}{1!}+\frac{x^2}{2!}+\frac{x^3}{3!}+\dots\\ \sin x &= x-\frac{x^3}{3!}+\frac{x^5}{5!}-\frac{x^7}{7!}+\dots\\ \cos x &= 1-\frac{x^2}{2!}+\frac{x^4}{4!}-\frac{x^6}{6!}+\dots\end{align*}
Optionally you may also memorise the one for $\ln(1-x)$, but that can be obtained by integrating $\frac1{1-x}$ term by term (note the sign: $\int \frac{dx}{1-x} = -\ln(1-x)$).
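As a quick check (my own illustration, using sympy rather than anything from the thread), `sympy.series` reproduces both the geometric series and the $\ln(1-x)$ series you get from integrating it:

```python
# Illustrative check with sympy: the Maclaurin series of 1/(1-x) is the
# geometric series, and integrating it term by term gives -ln(1-x),
# so ln(1-x) has all-negative terms.
import sympy as sp

x = sp.symbols('x')
geom = sp.series(1/(1 - x), x, 0, 5)            # geometric series up to x**4
log_series = sp.series(sp.log(1 - x), x, 0, 5)  # -x - x**2/2 - x**3/3 - ...
print(geom)
print(log_series)
```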
Remark on first order approximations
This means, that for small $x$, the following are reasonable:
\begin{align*}e^x&\approx 1+x\\ \sin x &\approx x\\ \cos x&\approx 1\end{align*}

These four are generally regarded as the most useful. The first one really isn't anything fancy as it's literally just the geometric series, but exponentials and sinusoids appear quite commonly in nature. All the other stuff either stems out of these (e.g. $\sinh$), is an inverse of these (e.g. $\tan^{-1}$), or is just random shit mathematicians use for convenience.
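Those first-order approximations are easy to check numerically (a small sketch of my own; the value of $x$ is arbitrary, chosen to be small):

```python
# Quick numeric check of the first-order approximations for small x.
import math

x = 1e-4
print(math.exp(x) - (1 + x))  # error is roughly x**2/2, i.e. about 5e-9
print(math.sin(x) - x)        # error is roughly x**3/6, even smaller
print(math.cos(x) - 1)        # error is roughly x**2/2
```

The errors match the first dropped term of each series, which is exactly why "terms of order 2+ are negligible" works in the genetics-book context above.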
$\text{With that being said, the actual computation of a Taylor series}\\ \text{is just done by quoting and using the formula.}$
$\text{In general, a Taylor polynomial about a point }x=a\text{ takes the form}\\ P_n(x)= \sum_{k=0}^n \frac{f^{(k)}(a)}{k!}(x-a)^k.\\ \text{Assuming the limit exists as }n\to \infty,\text{ a Taylor series is what happens when we do exactly that.}\\ \text{i.e. }P_{\infty}(x)=\sum_{k=0}^\infty\frac{f^{(k)}(a)}{k!}(x-a)^k$
We omit justification as to why that is the case for the sake of mere computations.
$\text{If the series about }x=0\text{ exists and converges to }f\text{ (i.e. the remainder vanishes), taking }a=0,\\ \text{then }f\text{ is }\textit{exactly}\text{ represented by this series.}\\ \text{That is to say, }\boxed{f(x) = \sum_{k=0}^\infty\frac{f^{(k)}(0)}{k!}x^k}$
So all we really need to compute is $f(0), f^\prime(0), f^{\prime\prime}(0), f^{\prime\prime\prime}(0), f^{(4)}(0)$ and so on - in short, all of the derivatives of $f$, evaluated at $0$.
The actual computation starts here.
$\text{But if }f(x) = e^x\text{ then this is easy.}\\ \text{We know that the derivative of }e^x\text{ is still }e^x.\\ \text{Hence, the first, second and so on derivatives must STILL be }e^x,\\ \text{i.e. }\boxed{f^{(k)}(x)=e^x}$
$\text{Which consequently implies }f^{(k)}(0)=e^0=1\\ \text{Hence, }\boxed{e^x = \sum_{k=0}^\infty \frac{1}{k!}x^k}$
Expanding this sum out gives $1 + \frac{x}{1!}+\frac{x^2}{2!}+\dots$ as required.

Small note - The factorials basically appear because they're actually a part of the Taylor series formula.
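To see the convergence concretely, here is a small sketch (my own, not from the post) that evaluates the partial sums $\sum_{k=0}^n \frac{x^k}{k!}$ and compares them with `math.exp`:

```python
# Partial sums of the Taylor series of e^x about 0, compared with math.exp.
import math

def taylor_exp(x, n):
    """n-th order Taylor polynomial of e^x about 0: sum of x**k / k!."""
    return sum(x**k / math.factorial(k) for k in range(n + 1))

x = 1.0
for n in (1, 2, 5, 10):
    print(n, taylor_exp(x, n))  # approaches e = 2.71828... as n grows
```

Even at $x=1$, where nothing is "tiny", ten terms already agree with $e$ to about seven decimal places, because the factorials shrink the tail so quickly.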
« Last Edit: December 07, 2017, 10:14:44 pm by RuiAce »