Thank you so much! My teacher said we need to know eigenvalues to be able to apply them to Leslie matrices for our exam, and also in our investigative report where we have to use Markov chains and eigenvalues.
Sorry my response took so long - my laptop recently died and it took a while to get a new one as I live in Perth and we just went into lockdown...
Anyway, interesting point from your teacher. I question whether you actually do need eigenvalues for Leslie matrices, but there are still a lot of unknowns about QCE exams, so I'm still more than happy to discuss them. For Leslie matrices, there are two questions you could be asking:
1. When do I need to use eigenvalues?
2. What does the eigenvalue of a Leslie matrix tell me?
I emphasise this because it's really important that you're NOT asking the first question. The first question seems good, like it'll give you all the information you need, but the truth is it's a distraction. The key to doing well in maths is to look beyond the algorithms, look beyond the equations, and figure out what they intuitively tell you. If you can figure this out, not only will you have a deeper understanding of the topic at hand, but you'll also know when to use the eigenvalues - whenever the question is asking about something the eigenvalues can tell you.
Now, I've not worked with Leslie matrices before, so figuring out what they mean is as much a challenge for me as it is for you - thankfully, we don't need to be experts in this. There are a million people around the world - and, more importantly, on the internet - who have figured it out for us.
Here's a handy powerpoint I found, with some very interesting results. It tells us that the dominant (largest) eigenvalue and its corresponding eigenvector tell us:
a) if the population will grow or decay
b) the final proportions of each age class
I encourage you to do some more reading! There might be other results you can find that I couldn't. It's also worth checking with your teacher what they meant when they said you need to know eigenvalues for Leslie matrices - there are probably some specific applications they had in the back of their mind.
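If you want to see those two results in action, here's a small sketch in Python. The Leslie matrix itself is completely made up, just for illustration - swap in whatever fecundity and survival rates your problem gives you:

```python
import numpy as np

# A hypothetical 3-age-class Leslie matrix (made-up rates):
# row 1 = average births per individual in each age class,
# sub-diagonal = probability of surviving into the next age class.
L = np.array([
    [0.0, 1.5, 1.0],   # fecundities
    [0.8, 0.0, 0.0],   # survival from class 1 to class 2
    [0.0, 0.5, 0.0],   # survival from class 2 to class 3
])

eigvals, eigvecs = np.linalg.eig(L)
dominant = np.argmax(eigvals.real)      # index of the dominant eigenvalue
growth_rate = eigvals[dominant].real    # > 1 means growth, < 1 means decay
stable = eigvecs[:, dominant].real
stable = stable / stable.sum()          # normalise: long-run age-class proportions

print("dominant eigenvalue:", round(growth_rate, 3))
print("stable age proportions:", np.round(stable, 3))
```

Here the dominant eigenvalue comes out a bit above 1, so this made-up population grows, and `stable` gives the proportions each age class settles into - exactly results (a) and (b) above.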
---
The next topic I have some information about: Markov chains. I love Markov chains - I actually spent a fair amount of time at university purely devoted to learning about Markov processes. Really interesting stuff! In the simple case of the discrete-time Markov chain, there is one eigenvalue we care about - the one where \(\lambda=1\). (Note: I highly doubt you're covering continuous-time Markov chains, as they're quite a complex topic. But if your teacher said you need to learn about them, let me know and I'll see if I can find some simple resources for you.) From here, I'm assuming you have basic Markov chain knowledge; I'll use \(T\) for our transition matrix and \(x\) for our distribution over states.
We care about this one because, if a transition matrix has a solution of the form \(Tx=x\), then we call \(x\) a stationary distribution. That is, if you're ever in this distribution, your distribution will never change. E.g. if \(x=(0.5,0.2,0.3)\) is a stationary distribution, then \(T^nx=(0.5,0.2,0.3)\) for every \(n\) - it will not change, no matter HOW MANY transitions you go through. Essentially, every eigenvector of the transition matrix corresponding to the eigenvalue 1 (rescaled so its entries sum to 1) is a stationary distribution.

There's also an interesting result that's easy to show: if a transition matrix has more than one stationary distribution, it has infinitely many. Here's a hint - suppose \(x_1\) and \(x_2\) are stationary distributions of \(T\); in other words, \(Tx_1=x_1\) and \(Tx_2=x_2\). Now consider a third vector \(x_3=ax_1+(1-a)x_2\), where \(a\) is any real number between 0 and 1. Two questions:
1. Is this new vector \(x_3\) a probability distribution? (do all the probabilities in it sum to 1?)
2. Does this vector also solve the equation \(Tx=x\)?
If both answers are yes, then the \(x_3\)'s give you infinitely many other stationary distributions - one for each value of \(a\).
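Here's a quick sketch of finding a stationary distribution with NumPy. The transition matrix is made up; note its columns sum to 1, which matches the convention \(Tx=x\) with \(x\) as a column vector of state probabilities:

```python
import numpy as np

# Hypothetical 2-state transition matrix. Each COLUMN sums to 1, matching
# the convention Tx = x where x is a column vector of state probabilities.
T = np.array([
    [0.9, 0.2],
    [0.1, 0.8],
])

eigvals, eigvecs = np.linalg.eig(T)
idx = np.argmin(np.abs(eigvals - 1))   # the eigenvalue (numerically) equal to 1
x = eigvecs[:, idx].real
x = x / x.sum()                        # rescale so it's a probability distribution

print(np.round(x, 4))                  # → [0.6667 0.3333]
print(np.allclose(T @ x, x))           # True: applying T doesn't change it
```

One catch worth remembering: `np.linalg.eig` returns eigenvectors of length 1, not probability distributions, which is why the rescaling step is needed.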
So, I've spoken a lot about stationary distributions, but there's one more really important bit about Markov chains: for some specific chains, you will always approach a stationary distribution as the number of transitions goes to infinity. (For real-life applications, after a large number of transitions you'll usually be close enough to the stationary distribution that the next state is essentially the same as the one before it.) Defining exactly which chains converge would take ages, sorry, and unfortunately I don't have a lot of time to go through everything - but the wiki page is pretty good at discussing the requirements.
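To see that convergence in action, here's a sketch with a made-up two-state chain: start from any distribution, apply the transition matrix many times, and numerically you land on the stationary distribution:

```python
import numpy as np

# Hypothetical 2-state chain; its stationary distribution works out to (2/3, 1/3).
T = np.array([
    [0.9, 0.2],
    [0.1, 0.8],
])

x = np.array([1.0, 0.0])   # start entirely in state 1
for _ in range(100):       # "a large amount of transitions"
    x = T @ x              # one transition: new distribution = T times old one

print(np.round(x, 4))      # → [0.6667 0.3333]
```

Try a different starting distribution - for this chain you'll reach the same answer, which is exactly the behaviour described above (this chain happens to satisfy the convergence requirements; not every chain does).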