
Lecture 8 - Monte Carlo Methods | Reinforcement Learning Phase | Reasoning LLMs from Scratch
Hello everyone, and welcome to this lecture of our course Reasoning LLMs from Scratch. We are currently in the reinforcement learning phase of the course, and we have started to look at methods you can use to estimate the value function for a given policy and subsequently find the optimal policy for a reinforcement learning problem.

In the previous lecture we looked at the first such method, which is called dynamic programming. In dynamic programming, we take the values of the states and update them based on the values of the states that follow each particular state.

This process is called bootstrapping, because we use the values of subsequent states to estimate the values of the states that come before them. Within dynamic programming we looked at two types of problems: prediction problems and control problems. In the prediction problem we are given a policy and we try to predict the value function for that policy. Remember how we did this.

We said: let us say there are five states the agent can go through. These are the five states, and we initialize the value functions for all of them to zero.

We call this value function V0. Then we use V0 to estimate the next value function V1, use V1 to estimate V2, and so on. Eventually this method converges to the true value function for the given policy.

Now how did we estimate V1 from V0, and V2 from V1? We used the Bellman equation. Remember what the Bellman equation says: it gives you a way to express V_pi(s) in terms of V_pi(s'), where s' is the next state.

This bootstrapping is at the core of dynamic programming methods. What we are essentially doing is changing the equation and saying: this is V_{k+1}, and it is a function of V_k(s), where k+1 denotes the next update.
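As a reference, the iterative policy-evaluation update described here can be written out (in standard notation, with gamma the discount factor and p the environment dynamics) as:

```latex
% Bellman expectation equation for a given policy \pi
v_\pi(s) = \sum_{a} \pi(a \mid s) \sum_{s', r} p(s', r \mid s, a)\,\bigl[\, r + \gamma\, v_\pi(s') \,\bigr]

% Iterative policy evaluation: bootstrap v_{k+1} from v_k until convergence
v_{k+1}(s) = \sum_{a} \pi(a \mid s) \sum_{s', r} p(s', r \mid s, a)\,\bigl[\, r + \gamma\, v_k(s') \,\bigr]
```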
For example, here this will be V0; we use V0 to calculate V1, then V1 to calculate V2, and so on. This is the policy evaluation step of the dynamic programming method. After that, we say: I was given a policy, but that policy is not optimal.

So now I want to improve my policy. To do that, I take my state; from my state I have all possible actions. Let's say my current policy tells me to follow this particular action, but I find out that this action is not optimal, and that instead this other action yields the maximum reward.

If this happens, I say that my policy is not optimal, and whenever my agent reaches state s it should take action a1 instead of action a. I do this for all the different states: I check whether my policy is optimal there, and if not, I change my policy for that particular state. So essentially we use pi_0 to calculate V_{pi_0}; this is policy evaluation.

Then we use V_{pi_0} to calculate the improved policy pi_1, and we keep doing this until you get the optimal policy and you also get the optimal value function. So essentially you start with a policy, you get a value function, then you update the policy, you update the value function, and these two processes run together until you get the optimal policy.
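Schematically, one round of this alternation is a greedy policy-improvement step followed by a fresh evaluation, and the whole chain looks like this (again in standard notation):

```latex
% Greedy policy improvement with respect to the current value function
\pi_{k+1}(s) = \arg\max_{a} \sum_{s', r} p(s', r \mid s, a)\,\bigl[\, r + \gamma\, v_{\pi_k}(s') \,\bigr]

% Policy iteration: evaluate, improve, evaluate, improve, ...
\pi_0 \xrightarrow{\text{evaluate}} v_{\pi_0} \xrightarrow{\text{improve}} \pi_1 \xrightarrow{\text{evaluate}} v_{\pi_1} \xrightarrow{\text{improve}} \cdots \longrightarrow \pi_*,\; v_*
```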
This process is called policy iteration, and it is also called generalized policy iteration, or GPI. But one of the core learnings of the last lecture was that this approach is heavily dependent on the model of the environment. In other words, we cannot solve problems with dynamic programming unless we know the complete model of the environment. Remember, let's say you are teaching a reinforcement learning agent to play the game of chess.

Before we even start playing the game of chess, how can I know the exact probabilities of transitioning from one state to another? And this happens in a lot of cases: in most real-world cases you do not have the model of the environment available; the model has to be learned through experience.

Monte Carlo methods are our first learning methods, because they require only experience. They do not require any prior knowledge about the dynamics of the environment. I am interacting with the environment, I am experiencing the environment, and I am getting better and better. This is the essence of Monte Carlo methods, and this is how they differ significantly from dynamic programming.

Let's take an example. Imagine we have an agent, a rover, which is sent to Mars. Would you use dynamic programming to calculate the optimal trajectory or optimal policy for this rover, or would you use Monte Carlo? You cannot use dynamic programming, because you do not know the model of the environment on Mars. Only when the rover interacts with the environment do you get to know that, okay, this patch is rocky, I should avoid this patch, and so on.

So there would be a lot of learnings, but all of them would happen only after experience interacting with the environment. This is why Monte Carlo methods are way more practical compared to dynamic programming methods, which offer a good window for finding optimal policies in a mathematical fashion, as we saw in the last lecture.
So Monte Carlo methods are useful in real-life practical cases, but we will also build our intuition by taking a very specific scenario. Imagine you have been given this task: we are this yellow circle, or let's say this is an agent, and this agent has to reach this goal while avoiding the obstacles.

It has to find the best possible way to get from the starting position to the goal while collecting maximum rewards. So how will you start solving this problem? Okay, first let's try to see what the states and the actions are here.

The states are: if I am here, this is a state; if I am here, this is another state; if I am here, this is another state. The actions are whether I want to go up, down, left, or right. So there are four possible actions for every single state, and then you can calculate the optimal value functions using two methods.

The first method is dynamic programming, and as we saw before, in the dynamic programming method you would need to know these transition probabilities. For example, if I am in this state, what is the probability that my agent goes to that state? In this case, since we are assuming that all actions are equally probable, the probability can be taken to be one in four, that is 0.25.
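In other words, for the equiprobable random policy assumed here (and assuming each move lands deterministically in the neighbouring cell, a detail the lecture does not spell out), the relevant probabilities are simply:

```latex
\pi(a \mid s) = \tfrac{1}{4} \quad \text{for } a \in \{\text{up}, \text{down}, \text{left}, \text{right}\},
\qquad
p(s' \mid s, a) = 1 \ \text{for the neighbouring cell that } a \text{ points to}
```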
So we do know the transition probabilities in this case, and that lets us solve this problem using dynamic programming. When we use dynamic programming, we will directly get one final answer by solving this problem with policy iteration. Now we are not going to focus on the details of how to do this with dynamic programming; rather, we are going to focus on the differences between dynamic programming and the Monte Carlo method.

So this is method number one, in which we solve for these value functions in an iterative fashion using our knowledge of the environment: we say that we already know the transition probabilities, we solve this system of equations, and we directly get one result, which is the final optimal value function. In the second method, the Monte Carlo method, you say: I do not know these transition probabilities, so I am simply going to learn through experience.

So you start, and you start to take random walks. Let us say I first do something like this: in episode one I take this path. I first visit this state, then go here, then down, left, down, right, down, right, down, and right, and finally I reach my goal. After every episode is completed, let us say you focus on this state, which is marked as 1 in this figure.

What you do is you want to update the value of this state, so you can add up all the subsequent rewards. Remember the definition of the value function of a state: it is the expected value of G_t, and G_t is the sum of all the rewards the agent receives in the future. And I actually know all these rewards, because this is the reward I get at 1, this is the reward I get for reaching this particular step, then I get a reward for reaching 3, then my reward at 4 will be negative, because this is an obstacle, remember.
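Written out, the quantities being estimated here are the return and its expectation (with gamma as the discount factor, which the lecture leaves implicit):

```latex
G_t = R_{t+1} + \gamma R_{t+2} + \gamma^2 R_{t+3} + \cdots = \sum_{k=0}^{\infty} \gamma^k R_{t+k+1},
\qquad
v_\pi(s) = \mathbb{E}_\pi\!\left[\, G_t \mid S_t = s \,\right]
```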
So I will get an estimate of G_t for this state, and similarly I can get estimates for all of the states. So this is what I do after my first episode ends: I calculate these estimates of the expected return.

Let me write that in green for all states. Now what I do is I generate another episode.

I again interact with this environment, and this time I follow a different route. This time, from 1 I take this right turn, then I go down, and so on. I follow a different route and again do the same thing. Now you might ask me the question: Rajat, okay, this is fine, but if you look at this state, in the first episode you took this action at this state, and in the second episode you took that action, and it led to a different path.

So you will get a different expected return. For example, let us call this state s_3: I get a return of 0.5 for this episode and a return of 0.2 for that episode. Then what I do is simply take an average. So my current estimated return for state number 3 is going to be (0.5 + 0.2) / 2, which is equal to 0.35.

If this state is visited again in the next episode, I will get an additional entry to add to these two terms, so I will have a third term. And as the number of episodes increases, my estimate of the expected return for the state will slowly start to approximate the true value function of that state.
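This running average is just the sample mean of the observed returns, and it can also be maintained incrementally, which is essentially the same averaging trick as in the bandit lecture; here V_n(s) denotes the average of the first n returns observed from s:

```latex
V(s) \approx \frac{1}{N(s)} \sum_{i=1}^{N(s)} G_i(s),
\qquad
V_n(s) = V_{n-1}(s) + \frac{1}{n}\bigl( G_n(s) - V_{n-1}(s) \bigr)
```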
And this is exactly the intuition behind Monte Carlo methods. In the Monte Carlo method you do not care about the state transition probabilities, because you are not aware of how the environment behaves mathematically. Instead, you decide: I am going to experience the environment myself, and every time I visit a state and then reach the end goal, I am going to update my estimate for that state.

So here the estimates are updated after every episode is completed, because only once an episode is completed can you get an estimate of this expected return. If you have not understood the mathematics here, it is completely fine. I want to explain the intuition here; that is the main objective of the initial section of this lecture. The crucial difference between Monte Carlo and dynamic programming is the availability versus non-availability of the environment model.

And we compensate for the non-availability through repeated experience via different episodes. Every episode brings our estimated return closer and closer to the true value function. Does this remind you of something? We have looked at an example which is quite similar to this, although in that example there was only one single state.

Here you have multiple states. In that example we had only one single state, but we used to visit that state, and there were multiple actions possible from it. What we did was keep adding new estimates to the previous estimates, and finally we took an average and got the true values for all the actions taken in that state. This is an example of the bandit methods.

Let me see if I have an image. So this is the bandit problem, which we looked at in the second lecture of the reinforcement learning phase. Here you see there is a lever which I am pulling. And we looked at a case where we had a multi-armed bandit, which means we had these four different levers.
Lever one, lever two, lever three, and lever four. We asked ourselves the question: which is the best lever to pick, which lever is going to give me the maximum value? The answer was not very clear. So what we did was keep pulling the levers, and with every pull we kept updating our estimates for these actions.

So this is quite similar to multi-armed bandits. The only difference is that here, instead of one single bandit, you have multiple bandits: the number of states you have in the game equals the number of bandits. This is how the multi-armed bandits lecture is very crucially related to Monte Carlo methods.

Okay, so let us now get this intuition firmly fixed in our minds, and we will move on to Monte Carlo prediction and Monte Carlo control. This is the same approach that we followed for the dynamic programming methods as well. Remember that in the prediction problem we estimate the value function v_pi(s) for a given policy pi.

And this is something that all of you will now be able to write based on the previous example we just saw. The value function for a state can be easily estimated by following the simple algorithm mentioned over here. Let us say we have five different states, and this is the returns array.

So you initialize, let us say, 0, 0, 0 for all these states. Okay, then you generate an episode, and for every state in the episode you calculate the return. For example, for state number one the return is 20, so you append it to the list.

So it becomes 0, 20 over here; let us say the return for state 2 is 10, so that becomes 0, 10; this one is 0, 5; this one 0, -2; and 0, 5. Then you run one more episode and you get one more estimate of the returns for each state, so the lists become 0, 20, 10; 0, 10, 5; 0, 5, -2; 0, -2, 3; and 0, 5, 1.

You do this a large number of times, so you get a lot of entries, and finally you take an average of all of these entries. For example, here I compute (0 + 20 + 10) / 3, (0 + 10 + 5) / 3, and so on.
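A minimal sketch of this returns-averaging procedure in Python, using the common first-visit convention (the lecture does not specify first-visit versus every-visit); the toy five-state walk, its rewards, and the function names are made up for illustration, not taken from the lecture:

```python
import random
from collections import defaultdict

# Toy episodic environment (hypothetical): a 1-D walk over states 0..4,
# starting in state 2. Stepping off the left end gives reward -1 and ends
# the episode, off the right end gives +1; every other step gives 0.
ACTIONS = ["left", "right"]

def step(state, action):
    next_state = state - 1 if action == "left" else state + 1
    if next_state < 0:
        return None, -1.0   # terminal, negative reward
    if next_state > 4:
        return None, +1.0   # terminal, positive reward
    return next_state, 0.0

def random_policy(state):
    return random.choice(ACTIONS)

def generate_episode(policy, start_state=2):
    """Return one episode as a list of (state, reward) pairs."""
    episode, state = [], start_state
    while state is not None:
        action = policy(state)
        next_state, reward = step(state, action)
        episode.append((state, reward))
        state = next_state
    return episode

def mc_prediction(policy, num_episodes=5000, gamma=1.0):
    """First-visit Monte Carlo prediction: average sampled returns per state."""
    returns = defaultdict(list)          # state -> list of observed returns
    for _ in range(num_episodes):
        episode = generate_episode(policy)
        first_visit = {}                 # state -> index of its first visit
        for t, (s, _) in enumerate(episode):
            first_visit.setdefault(s, t)
        g = 0.0
        # Walk backwards through the episode, accumulating the return G_t.
        for t in reversed(range(len(episode))):
            s, r = episode[t]
            g = gamma * g + r
            if first_visit[s] == t:      # record G_t only at first visits
                returns[s].append(g)
    return {s: sum(gs) / len(gs) for s, gs in returns.items()}

if __name__ == "__main__":
    values = mc_prediction(random_policy)
    print({s: round(v, 3) for s, v in sorted(values.items())})
```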
So finally what you get over here is the converged, true value function for all the states. This is how we calculate the value function for a given policy. Now imagine that you have the value function and you tell me: okay, I have the value function for all these states. Then I ask you: okay, you have the value function for state 1.

Why don't you tell me, should I go up, down, right, or left from this state? You would have to say that this information is not available in your value function. In order to answer this question, you need to estimate not the state value function but the values of each action, so that we can determine which action to take in a given state.

This is in fact the definition of the action value function, which, remember, we defined as Q(s, a). The value function was just V_pi(s), the value of a given state, but the action value function is the value for a given state and an action taken in that state.

Now if you have this action value function, you easily know the values of the different actions, so you can get closer to finding an optimal policy, which is what we finally want. So now our objective is to estimate Q_pi(s, a), and this is done in exactly the same way that we estimated the value function.

The only difference is that we are now going to look not just at visits to a state, but at visits to a state together with an action taken from that state. For example, let's say in the example above I am interested in state s_1 and the action of going up.

So what I will do is generate a lot of episodes in which my agent reaches this state and goes up, and then I calculate the return from that state to the end of the episode, and I average all these returns in exactly the same way. But here I am neglecting all those averages for (s_1, down), (s_1, right), and so on, which we considered for the value function; now I am not interested in all of those.
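In symbols, the quantity being averaged is now conditioned on the action as well; here N(s, a) counts the recorded visits to the pair (s, a) and G_i(s, a) is the return observed after the i-th such visit:

```latex
q_\pi(s, a) = \mathbb{E}_\pi\!\left[\, G_t \mid S_t = s,\ A_t = a \,\right]
\;\approx\;
\frac{1}{N(s, a)} \sum_{i=1}^{N(s, a)} G_i(s, a)
```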
I am only interested in one state and one action from that state, and this is very much possible to determine using exactly the same algorithm. There might be one thought playing in your mind though: you ask me, Rajat, this is fine, but what if some state and action is never encountered in any of my experiences?

Then how do I solve this problem? Remember, we had the same question in the multi-armed bandit lecture, where we said: there are these five levers, and what if my agent keeps pulling the first and second levers and never reaches levers 3, 4, and 5? There is a very nice way to handle this, and it is exactly the same one we saw in the multi-armed bandit lecture.

So now we have solved the prediction problem, that is, we have learned how to determine the action value function for a specific policy. The objective of the control problem is to find the policy which is optimal, and here we are going to use exactly the same approach of policy iteration that we used for dynamic programming. In dynamic programming, remember, we took a policy, calculated its value function, improved the policy, again calculated the value function, and repeated these steps a number of times.

We are going to do the same, with the only difference that the state value functions are replaced by the action value functions. So we take a policy, we calculate the action value function for that policy, and then we say: okay, but this policy is not optimal. So we push this policy towards a better policy, calculate the action value function for that policy, and repeat this until we get the optimal policy.

This is the same process of policy iteration that we looked at for dynamic programming. Now, one common issue with this algorithm: what if we never explore some options? What if, let us say, this path is never explored, which is quite possible? We have seen this before; you can pause here and try to recollect what the solution to that problem was.

Imagine you went to play a game with four levers and you just keep pulling one lever because it is giving you good returns. Unless you explore the other levers, you will never learn that maybe you can get good returns from them as well. So there is a balance between exploitation and exploration. Exploitation is where I look at this state, calculate the action value function for left, right, up, and down, pick the action which gives me the maximum value, take that action, and apply the same logic for all the states.
This is one way for the agent to interact with the environment, but this is only exploitation, not exploration; it will not allow you to explore. It is exploitation because you are exploiting the best policy in every state.

Exploration is where you are a bit more random: you say that even if this action is optimal, once in a while I am going to choose a random action which might not be optimal for that state, and what that allows you to do is explore different paths.

This is what we saw for the bandit problems: there is a balance between exploitation and exploration. Exploitation is good, but exploitation alone will not get you the optimal policy in the end, because it will not allow you to reach some states at all.

Let me see if I can show you the results from the multi-armed bandit lecture. This is what we saw there: if you have a policy which is completely greedy, that is, a policy which is only exploiting, shown in green, your average reward in the long term is going to be much lower than if you are also exploring along with exploiting.

So epsilon equal to 0.1 performs the best, because it achieves a fine balance between exploration and exploitation, and this is exactly what we want to do for Monte Carlo simulations as well. We want our policy to pick the action with the maximum action value most of the time, but once in a while to pick a random action, and this is called an epsilon-greedy policy.

For example, if epsilon is 0.1, then out of 100 moves, 10 moves will be exploratory random moves, and this makes sure that all the moves are tried and the estimated values of the actions converge to their true values. So imagine that you are here and you are trying to reach this goal over here, and in between there are 100 moves.

So let's say I reach this particular state. Since 10 moves out of 100 are going to be random, even if my action value function is telling me that up is the best action for this state, once in a while I will say no, I will not select up, I will select right instead. I will do this about 10 times, because epsilon is equal to 0.1, so I am making random moves 10 times out of 100 possible moves. This is called an epsilon-greedy policy, and it is used to make sure that all the different possible actions are explored and we are indeed converging to the optimal policy.
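A minimal sketch of epsilon-greedy action selection in Python; the Q-value dictionary and the specific numbers are hypothetical placeholders, not taken from the lecture's simulation:

```python
import random

ACTIONS = ["up", "down", "left", "right"]

def epsilon_greedy(q_values, state, epsilon=0.1):
    """Pick a random action with probability epsilon, else the greedy one.

    q_values is a dict mapping (state, action) -> estimated action value.
    """
    if random.random() < epsilon:
        return random.choice(ACTIONS)                                 # explore
    return max(ACTIONS, key=lambda a: q_values.get((state, a), 0.0))  # exploit

# Example: with these made-up estimates, "up" is chosen about 92.5% of the
# time (90% greedy picks plus a 1-in-4 share of the 10% random picks).
q = {((0, 0), "up"): 0.31, ((0, 0), "down"): 0.02,
     ((0, 0), "left"): -0.10, ((0, 0), "right"): 0.05}
print(epsilon_greedy(q, (0, 0), epsilon=0.1))
```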
This method is called the on-policy Monte Carlo control method. On-policy means that the policy we are optimizing is the same as the policy used to generate the data. We will understand the meaning of on-policy better when we talk about off-policy a little later, but for now it is enough to know that on-policy means the data we have comes from the same policy that the agent is following while interacting with the environment.

When we discuss off-policy methods, we will see that there is a behavior policy and a target policy. The behavior policy is the policy used to generate the data we have, and the target policy is the policy we want to optimize.

But for now, focus on the epsilon-greedy nature of our policy rather than the type of policy, and we are going to look at a very interesting interactive simulation to understand this. The objective of this simulation is to understand how the on-policy Monte Carlo control method works. This is a 4 by 4 grid; you might count 1, 2, 3, 4, 5, 6, 7, 8 and think it is an 8 by 8 grid, but it is only a 4 by 4 grid.

Now, the reason there are these colors is that for every state, for example this one, I can go up, down, left, or right, so there are four possible actions; for this state also there are four possible actions, and similarly for all states. The colors indicate whether the action value is positive or negative for each action. For example, if I am in this state and all the colors are negative, it means the action values for all the actions in this state are negative.

So this is what our environment looks like: it has a total of 4 times 4, that is 16 states, for every state there are 4 possible actions, and we want to maximize the reward. As you can see, this cross means there is a negative reward when the agent reaches here, and this means there is a positive reward when the agent reaches here.
We are now going to use the Monte Carlo algorithm to update the action values of all the states using an epsilon-greedy policy. First we initialize the action values randomly. So, remembering that we have 16 states, we initialize the Q values randomly, or let's say to zero for all these states. After that, for each state-action pair in an episode: we generate an episode, where an episode is a walk from the starting point to the end point of the grid.

So I generate an episode, and for every state and action encountered in this episode I update my array; for example, for this state I will update Q(s, down). This array is represented like this: these are the Q values, so on the left-hand side you have the state, which is (0, 0), (0, 1), and so on, 16 in total, and on the right-hand side you have the values for up, down, left, and right.

So there are four values for each state, and you have 4 times 16, that is 64 values in total. This is the Q-value table, which is very important for us. The initialization of this table is random, but after every episode is generated, the table is updated. And you might ask: the table is updated, but where is the epsilon-greedy policy being implemented here?

What we are doing is that after every episode we calculate the return for each state and action, and we update the Q table by taking the average of the returns, which we looked at before; this is what is mentioned over here. Then we want to update our policy, and the way we update it is that we look at each state and say: for this state (0, 0) the best action is obviously right, because it has the maximum value. But with epsilon, which here is 0.2, 8 out of 10 times I will choose the action with the maximum value, and 2 out of 10 times I will choose a random one, whose value is maybe 0.02 or 0.03.
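A compact sketch of this on-policy first-visit Monte Carlo control loop in Python. The 4x4 grid, its start, goal, and penalty cells, the step cost, gamma, and the episode cap are made up for illustration; the simulation shown in the lecture may differ in these details:

```python
import random
from collections import defaultdict

# Hypothetical 4x4 grid: start at (0, 0); reaching GOAL gives +1, the
# penalty cell gives -1, every other step costs -0.04, moves are
# deterministic, and bumping into a wall keeps the agent in place.
SIZE, START, GOAL, PENALTY = 4, (0, 0), (3, 3), (1, 2)
ACTIONS = {"up": (-1, 0), "down": (1, 0), "left": (0, -1), "right": (0, 1)}

def step(state, action):
    dr, dc = ACTIONS[action]
    r = min(max(state[0] + dr, 0), SIZE - 1)
    c = min(max(state[1] + dc, 0), SIZE - 1)
    nxt = (r, c)
    if nxt == GOAL:
        return nxt, 1.0, True
    if nxt == PENALTY:
        return nxt, -1.0, True
    return nxt, -0.04, False

def epsilon_greedy(Q, state, epsilon):
    if random.random() < epsilon:
        return random.choice(list(ACTIONS))
    return max(ACTIONS, key=lambda a: Q[(state, a)])

def generate_episode(Q, epsilon, max_steps=200):
    episode, state, done, t = [], START, False, 0
    while not done and t < max_steps:
        action = epsilon_greedy(Q, state, epsilon)
        nxt, reward, done = step(state, action)
        episode.append((state, action, reward))
        state, t = nxt, t + 1
    return episode

def mc_control(num_episodes=20000, epsilon=0.2, gamma=0.99):
    Q = defaultdict(float)               # (state, action) -> value estimate
    returns = defaultdict(list)          # (state, action) -> observed returns
    for _ in range(num_episodes):
        episode = generate_episode(Q, epsilon)
        first = {}
        for t, (s, a, _) in enumerate(episode):
            first.setdefault((s, a), t)
        g = 0.0
        for t in reversed(range(len(episode))):
            s, a, r = episode[t]
            g = gamma * g + r
            if first[(s, a)] == t:       # first-visit update of the average
                returns[(s, a)].append(g)
                Q[(s, a)] = sum(returns[(s, a)]) / len(returns[(s, a)])
    return Q

if __name__ == "__main__":
    Q = mc_control()
    # Read the greedy policy off the learned Q table, row by row.
    for row in range(SIZE):
        print([max(ACTIONS, key=lambda a: Q[((row, col), a)]) for col in range(SIZE)])
```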
So this is how we are using the epsilon-greedy policy here. Now I am going to run the simulation, and after I run it I want all of you to focus your attention on how these values update. So let us start. You can see now, after each episode is completed, episode number 1, episode number 2, episode number 3, and so on, just focus your attention on this table over here.

These values are updating, because for whichever states and actions are visited, the observed returns are being averaged into the Q values, and we are also changing our policy. For example, now for (0, 0) I am getting up as the action with the maximum value, so this is my policy, and I am using an epsilon-greedy policy, which means that for (0, 0), 8 out of 10 times I will choose the action with the maximum value, which is 0.31, and 2 out of 10 times I will choose randomly.

As we run this simulation more and more, you can see these colors also change, and if you increase the speed, the changing of the colors happens very fast, and you will slowly find that the colors stabilize to the true value function for all the states encountered in this reinforcement learning environment. So this is one example which I thought would be great for demonstrating how an epsilon-greedy policy actually works, how the Q-value tables are updated, and how the policy is chosen for the different states using the epsilon-greedy algorithm.

So let us move ahead. In the last section of today's lecture we are going to focus on off-policy Monte Carlo methods. In off-policy prediction problems we again have to estimate the value functions for a policy, but the episodes are generated from a different policy. So there is a behavior policy, which is denoted by mu, and there is a target policy, which is denoted by pi.

This is called off-policy learning. It is quite non-intuitive, and I will show you some examples of off-policy prediction a bit later in this course, but you will find this terminology a lot in the RL literature, even for large language models: whether the policy is on-policy or off-policy. Here, since we have a behavior policy which is quite different from the target policy, but we have to estimate the value functions for the target policy, what we do is say: okay, first let us see what the relative probabilities of the behavior trajectory and the target trajectory are, and then the estimated returns will be weighted accordingly.
This relative probability is captured by a ratio called the importance sampling ratio. This is something we can understand better using an example, so let us look at an interactive example. Here the target policy is denoted by pi and the behavior policy by b (the same policy we called mu a moment ago). The ratio of the target and behavior probabilities is called the importance sampling ratio.

Now let us imagine step 1 and action A1, and for this my behavior probability is 0.587 and my target probability is 0.775. So my ratio is about 1.32, which means that, to take an example, if my estimated return under my behavior policy is, let us say, 1.5,

then I calculate the estimated return for my target policy by multiplying 1.5 by the importance sampling ratio, which is about 1.32 in this case.

Similarly, all the estimated returns for the target policy are calculated in this manner. This is the basic difference between on-policy and off-policy behavior: there is a ratio, called the importance sampling ratio, which keeps track of the relative probabilities.

Let us say an action under my target policy is 10 times more likely than under the behavior policy. It means that the return observed along that trajectory should count correspondingly more towards my target policy's estimate, because the probability of the action being taken is 10 times higher, so the return should be weighted 10 times more, and this weighting is captured by the importance sampling ratio.
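For a full trajectory the ratio multiplies across time steps, and the weighted returns give the off-policy estimate. This is a minimal statement of ordinary importance sampling, reusing the single-step numbers from the example above:

```latex
\rho_{t:T-1} = \prod_{k=t}^{T-1} \frac{\pi(A_k \mid S_k)}{b(A_k \mid S_k)},
\qquad
V(s) \approx \frac{1}{N(s)} \sum_{i=1}^{N(s)} \rho^{(i)}\, G^{(i)}
```

With the numbers shown in the example, rho = 0.775 / 0.587, which is about 1.32, so a behavior-policy return of 1.5 contributes roughly 1.32 x 1.5, which is about 1.98, to the target-policy estimate.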
So that is the prediction problem for off-policy Monte Carlo. We are not going to look at the control problem in this lecture; it is not crucial for moving ahead in this course. What is crucial is, firstly, to understand the difference between Monte Carlo and dynamic programming, and to understand how the prediction and control problems are solved with on-policy Monte Carlo control methods, which was the primary focus of this lecture.

And finally, we looked at the difference between on-policy and off-policy methods. One of the crucial learnings of this lecture, inspired by the multi-armed bandits problem, was that to ensure all the actions are covered properly, Monte Carlo methods usually use epsilon-greedy policies. In an epsilon-greedy policy, for every state we mostly choose the action with the maximum action value, but once in a while we choose a random action, which allows us to explore the game environment more.

So this brings us to the end of this lecture, in which we discussed Monte Carlo methods. In tomorrow's lecture we are going to look at something which comes somewhere in the middle. Remember, in dynamic programming we were bootstrapping: we were using the values of next states to estimate the values of the previous states, or in other words, the value function for the next iteration depended on the value function from the previous iteration. We did not have bootstrapping in the Monte Carlo methods.

In the Monte Carlo methods we continuously updated the value functions after each episode was completed by estimating the returns, and we did not have a model of the environment either. In the next lecture we are going to see a mixture of the two, where again we do not have a model of the environment but we still have bootstrapping, and it is also a learning-based method like Monte Carlo.

So it is a mixture of dynamic programming, because bootstrapping is included, and Monte Carlo, because it is a learning-based method. That is called the temporal difference method, which is a very crucial method for understanding how to solve reinforcement learning problems. Thank you very much everyone, and I will see you again in the next lecture.