Welcome to our lecture on series. We are going to start off with something that may seem pretty obvious, but it gets confusing very quickly, so we have to be careful here. Go back to the day when you first learned how to add, and let's start with one of the easiest addition problems we can: 1 plus 1. Don't think too hard on this one; we say 2. Why? Why is this the case? Why does this even work? If you go back to an elementary textbook, addition is called a binary operation, if you want to get fancy about it. The key is that you can always add two things, and you have all these properties like associativity and commutativity. But what happens if I start to add more than two things, like 1 plus 1 plus 1? Now you have three things. Normally what you do is wrap parentheses around two of them — as a reminder, this is called the associative property — and you work inside the parentheses first, still using the rule that you add things two at a time. This becomes 2 plus 1, and then of course we all know the rest, and that's fine. You can add as many numbers together as you'd like, and you will always get the same result no matter which order you do it in, but the key here is that it must be a finite sum. This is probably something you never even thought about when you were adding your numbers, doing your math homework back in the day. But you have to have a finite sum. What we're going to ask now is: what happens if I have infinitely many terms? What does this expression even mean if I want to keep adding ones forever and ever? We need to come up with some precise definition of how to do this, and we can't use the old definition of addition, because that definition was only defined for finitely many terms. When I have an infinite sum, it is known as an infinite series.
The title of this section is infinite series, but really we're going to study what it means to add infinitely many things. Just to show you how weird and confusing this can get, I want you to consider an example. Take 1 plus minus 1 plus 1 plus minus 1, and continue this forever. We haven't defined what this means yet, but try to guess: what should this equal? One approach: come along and put parentheses around consecutive pairs of terms, all the way down. When I rewrite this, I get 0 plus 0 plus 0, and you just keep getting zeros. What does it mean to add zeros all day? That should obviously be zero. Not a bad answer. But then what happens if someone else comes along? Let me rewrite this infinite series for a second. Your buddy says: what if I put the parentheses around the second and third terms, and keep pairing like that down the line? Now there's a 1 out front that is left alone, and then each pair of terms cancels to give zero, and I get zeros for the rest of them — so wouldn't this sum equal one? Of course zero is not equal to one, so how do you justify this? How do you resolve what's going on here? We're at a problem. The problem is that there's something more going on when you deal with infinite series. You're dealing with the infinite; it's a dangerous place to go. Don't go there unsupervised — bring your chaperone of calculus with you. You can't just take the naive approach of "it worked for finite cases, so I'll do the same thing." You run into problems very quickly when you do that. Let's look next at how to describe these infinite sums, these infinite series.
When you work with infinite series, there are a couple of notational conventions you need to know. First off, suppose I wanted to add the following infinite number of terms: a_1, a_2, a_3, all added up. We use the capital Greek letter Sigma to denote the sum. I write this as Sigma — we say "the sum" — from n equals 1 to infinity of a sub n. This is how we write the expression in sigma notation. The Sigma is like the Riemann sums we saw before: it tells you to add the terms of the sequence. The index starts at the bottom value, and the top value tells you where to finish. Keep in mind that sums don't have to be infinite; we saw this when we did finite Riemann sums. Let's do an example. Take the sum from n equals 1 to 3 of n over n plus 1. This is a common finite case. What does it mean? First you plug in n equals 1 and get 1 over 1 plus 1; then the sigma says to add, so plug in 2 and you get 2 over 2 plus 1; and then you have one more, because it stops at three: 3 over 3 plus 1. I started at the lower index and stopped at the top index. Simplify that and of course you get one-half plus two-thirds plus three-fourths — whatever that is, some number. The point is, you stop when the top tells you to stop. That's a finite example. When you have the infinity sign on top, that is notation saying the terms go on forever, so you're going to write down — I know it sounds a little counterintuitive — infinitely many things. As another example of an infinite sum, or a series as we're going to call them, how about 1 over n? This notation means I want to study the sum 1 over 1, plus 1 over 2, plus 1 over 3, and it goes on forever and ever. You can write down as many terms as you like, but there is no ending point; it just goes on to infinity.
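If it helps to see the bookkeeping, here is a quick sketch in Python (mine, not part of the lecture) of evaluating that finite sum of n over n plus 1 from n equals 1 to 3 by plugging in each index:

```python
# Evaluate the finite sum of n/(n+1) for n = 1, 2, 3 by plugging in
# each index value, just as described above.
terms = [n / (n + 1) for n in range(1, 4)]   # n runs 1, 2, 3
total = sum(terms)
print(terms)   # the three terms 1/2, 2/3, 3/4
print(total)   # 23/12, about 1.9167
```

The exact value is 1/2 + 2/3 + 3/4 = 23/12.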
Again, you read this as the sum from n equals 1 to infinity of 1 over n, or you can say the series 1 over n; either one is okay. There is nothing that requires the sum to start at one. I could start at three, or four, or five, or wherever I want. Now, let's talk about how to actually make sense of what such a sum equals. Of course, whenever we're working with infinity, we're going to look at limits. We're going to add up an infinite series, so let's write it in summation notation: the sum from n equals 1 to infinity of a sub n. Remember, what I'm writing here is shorthand for a_1, the first term, plus a_2, the second term, plus a_3, and so on. To make sense of adding this thing up, I'm going to build a sequence — we studied sequences previously — called the sequence of partial sums. What does this mean? I'm going to make a new sequence. Let the first term be a_1. Let the second term be the first two terms added together, a_1 plus a_2 — this is a finite sum, which I can do. The third term will be a_1 plus a_2 plus a_3, and so forth. The general term, the mth partial sum, is a_1 plus a_2 plus dot, dot, dot, all the way to a_m. The last term of each sum has the same index as the partial sum itself: the third partial sum adds up three things, and the mth partial sum adds up m things. If you notice, all these sums are well-defined; there's no ambiguity here. I'm asking you to take the terms of the sequence and add them up in a finite way. When you do this, you create a new list of numbers, called the sequence of partial sums, and then I can ask you the question I asked you before: what is the limit of this sequence?
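As a small illustrative sketch (the helper name and the example sequence are my own, not the lecture's), here is how you might generate a sequence of partial sums from the first few terms of a series:

```python
from itertools import accumulate

def partial_sums(terms):
    """Return [s_1, s_2, ...] where s_m = a_1 + a_2 + ... + a_m."""
    return list(accumulate(terms))

# Example with the (hypothetical) sequence a_n = 1/2**n for n = 1..5:
a = [1 / 2**n for n in range(1, 6)]
print(partial_sums(a))   # [0.5, 0.75, 0.875, 0.9375, 0.96875]
```

Each entry of the output adds one more term of the original sequence, exactly as the definition says.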
What we're going to say is that the infinite series from n equals 1 to infinity of a sub n will be defined as — we will call it — the limit, as m goes to infinity (it doesn't matter what variable we pick), of the partial sums S_m. This is a limit of a sequence; it is well-defined, and I have the tools of calculus to help me compute it. If this limit exists, then I'll say the infinite sum, the series, equals that limit. When the limit converges to some number, we say the series converges; if the limit does not exist, we say the series diverges. So to study infinite sums, to study series, it's really a question of whether the limit of a sequence exists — I just have to form the sequence of partial sums, which is well-defined. Let's look at our old friend again with this understanding in place. Look at the sum 1 plus minus 1 plus 1 plus minus 1 — the thing that gave us a little confusion before. I'm going to write this in sigma notation, just so we get used to writing things in sigma notation: the sum from n equals 1 to infinity. How do I get something that alternates back and forth, starting at positive 1? Let's write it as minus 1 to the n plus 1. Now, this series has a name, by the way. It's sometimes called the devil's series — you'll also see it called Grandi's series — because it perplexed mathematicians for so long. There were two camps: some said no, this is zero; some said no, this is one; or maybe zero equals one — fistfights ensued, I'm sure. The idea is to look at this correctly in terms of partial sums. I'd like to know if this thing converges, so let's look at its partial sums. The first partial sum is just the first term of the sequence, which is 1. The second partial sum is the first two terms added together, which is zero, and the third partial sum is the first three terms added together, which is one.
You can quickly see the pattern that follows: 1 plus minus 1 plus 1 plus minus 1 is 0, and so on. So what is this new sequence of partial sums that I'm looking at? It alternates between one and zero: depending on whether the index is odd or even, the partial sum will be 1 or 0. What I'm asking you to find is the limit of this sequence. The series will equal the limit, as the index goes to infinity, of the sequence of partial sums. Written out as a list, that's 1, 0, 1, 0, dot, dot, dot — it switches forever. Now this is an easier question. Where does this list of numbers want to go? Does it converge to 1? Does it converge to 0? No, it just bounces back and forth between them. It doesn't want to settle in one place. Remember, limits are unique. This sequence does not approach a single value, so the limit does not exist, and we say the series diverges. This is an example of a divergent series. That's why you couldn't conclude anything from those parentheses tricks earlier: you can't tell me it's 0, you can't tell me it's 1. By our definition of series convergence using partial sums, this thing is divergent — sorry, no value. Now, this approach of finding partial sums works out really nicely when you have a very simple series like this one. But in truth, it's usually really hard to add the terms together and get a nice pattern. Only in special cases will that turn out to be possible; a lot of series problems out there cannot be evaluated this way directly. We are going to need to develop some tools, and this is what the rest of this section on series is going to cover: tests to find out what's going on with these series. We're going to have tests to find out if they converge or diverge, how they diverge, how bad they are, how good they are. There's a lot to be said about series. It's one of my favorite sections to talk about because it's fascinating.
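To see the oscillation numerically, here's a little sketch (mine; it uses minus 1 to the n plus 1 so the first term is +1, matching the expanded form above):

```python
# Partial sums of 1 - 1 + 1 - 1 + ...: they bounce between 1 and 0 forever,
# so the sequence of partial sums has no limit and the series diverges.
s, sums = 0, []
for n in range(1, 11):
    s += (-1) ** (n + 1)   # +1 for odd n, -1 for even n
    sums.append(s)
print(sums)   # [1, 0, 1, 0, 1, 0, 1, 0, 1, 0]
```

No matter how many terms you take, the list never settles on a single value.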
This is a really nice study of infinity that we get to see. Let's look at an example of a very friendly series, one of the nicest ones we can get: the geometric series. There's a finite version and an infinite version; you may have seen the finite one before, since some courses cover it earlier. I want to look at a very special type of series where the index starts at one, goes all the way to infinity, and the general term is some fixed number raised to the nth power. Let's start with a specific example — how about one-tenth? You can pick any number you want here, but instead of doing it in the abstract, let's do it in the specific. Remember what this means: I'm studying the infinite series where I replace n with each index value. I take the first term, 1 over 10 to the 1; I add the second term — n is two, so I square it — 1 over 10 squared; I take the third term, 1 over 10 cubed; and then I do this forever and ever. Here's my question to you: does the series converge? If it does, to what? Can you tell me the limit of its partial sums? This is a geometric series, a very nice one, and we will be able to answer these questions. In general it gets much tougher, but to find the sum of this geometric series, let's start by studying the partial sum: S_m will be the sum of the first m terms. What does that mean? I have one-tenth, plus one-tenth squared, plus one-tenth cubed, and we do this for all m terms, up to one-tenth to the m. The key, though, is that it's finite, so it's all well-defined. That's perfectly well and good. Then I want to do a little bit of algebraic trickery. I want to multiply each term by one-tenth and subtract — that is, let's compute S_m minus one-tenth times S_m. I send the one-tenth through each term, and what does that give me?
If I multiply everything by one-tenth, the one-tenth S_m piece becomes — we'll keep the minus factored out front for a minute — one-tenth squared, plus one-tenth cubed, plus one-tenth to the fourth, and so on: just imagine pushing the one-tenth through every term, and you get one more power of one-tenth at the end, one-tenth to the m plus 1, with a negative sign across all of it. What you notice quickly is that when you subtract these two expressions, stuff cancels. The one-tenth squared terms cancel. The cubes cancel, bye-bye. Everything cancels down the line, including the one-tenth to the m terms. The only things left over at the end of the day are the very first one-tenth and the last term, one-tenth to the m plus 1. So when you subtract, what do we get? S_m minus one-tenth S_m — that's nine-tenths of S_m — equals what remains: 1 over 10, minus 1 over 10 to the m plus 1. All the other terms cancel. Why is this nice? Now multiply both sides by ten-ninths, and we get S_m equals 10 over 9 times the quantity one-tenth minus one-tenth to the m plus 1. Now distribute the ten-ninths: the tens cancel on the first piece, so we get one-ninth, minus ten-ninths times 1 over 10 to the m plus 1. This is our expression for the mth partial sum. I've isolated the partial sum, which helps me understand what it looks like in its simplest form — much simpler than the sum of m terms I started with. Now the question is: does this partial sum have a limit as m approaches infinity? Let's look at that for a second: the limit as m goes to infinity of the mth partial sum. If this limit exists, then the original series converges.
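We can sanity-check the closed form S_m = 1/9 − (10/9)(1/10)^(m+1) against a direct term-by-term sum. This is just a quick numerical sketch of mine, not part of the derivation:

```python
# Compare the derived closed form for the m-th partial sum of sum (1/10)^n
# with the partial sum computed directly, term by term.
for m in [1, 2, 5, 10]:
    direct = sum((1 / 10) ** n for n in range(1, m + 1))
    closed = 1 / 9 - (10 / 9) * (1 / 10) ** (m + 1)
    print(m, direct, closed)   # the two columns agree (up to float rounding)
```

For m = 1, both sides give 0.1, and the agreement continues for every m.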
If it doesn't, then the series diverges. This is what we study when we actually try to determine what's going on with an infinite sum. I have one-ninth minus ten-ninths times one-tenth to the m plus 1. Can you figure out this limit — as m gets really, really big, what's happening? If I keep multiplying by one-tenth, the denominator runs through 10, 100, 1,000, and gets very, very large, which means that whole expression of course goes to zero. The right-hand piece goes to zero, and we're left with one-ninth. The limit of the mth partial sums is one-ninth. That says that the infinite series, the sum from n equals 1 to infinity of 1 over 10 to the n, is in fact one-ninth. We'd say the sum from one to infinity of 1 over 10 to the n equals one-ninth. What this really means is that the limit of the partial sums goes to one-ninth. You'll hear people say all the time, "if you add up infinitely many terms, the sum is one-ninth" — but realize that whenever you see the word infinite, we're of course talking about limits. Now let's generalize the last example to any geometric series: any fixed number raised to the nth power. That number is called the ratio; we tend to use r, but you can use any variable you want. Using the same steps as in the last example, we can show two things. One: the series converges to r over 1 minus r if the absolute value of r is less than one. If you go through the steps in general, you need that last term to go to zero, just as the 1 over 10 to the m plus 1 term did, and if the absolute value of r is less than one, everything works out and you get the formula r over 1 minus r. Two — I'll just put a D here for divergence — if the absolute value of r is greater than or equal to one, the series diverges. Let me show you in our last example that this is actually true.
Once we have this general formula, we can use it; we don't have to work it out every single time. In the last example, the ratio was 1 over 10. I look at my options here: one over 10 is the ratio, its absolute value is less than one, so the series converges. You can check: one-tenth divided by one minus one-tenth becomes one-tenth divided by nine-tenths. To remember how to divide fractions, keep the first one, change division to multiplication, and flip — and you get, as promised, one-ninth. That's an example of a convergent geometric series. If you wanted to, you could still talk about divergent ones; they just aren't very interesting. What if my ratio were three, and I had three to the n? What if I tried to do the infinite summation, three to the one plus three squared plus three cubed? Let your intuition guide you here: this becomes 3 plus 9 plus 27 plus dot, dot, dot. It makes sense that this thing diverges — does not converge to some specific number — and working through the same steps as the last example, the absolute value of the ratio is greater than one, so I could have told you it diverges from the beginning. Just be very careful with the starting index when working with infinite geometric series. What if the index starts at zero — say n equals 0 to infinity? That changes things a little bit. When I start plugging in, I get the ratio to the zero, which is one, then r, then r squared, and so on and so forth. The same rules about convergence and divergence apply, but the formula changes slightly, to 1 over 1 minus r. The numerator in the convergence formula depends entirely on the starting index: when the index starts at one, the sum is r over 1 minus r; when it starts at zero, it's 1 over 1 minus r. And again, these formulas only hold when the absolute value of r is less than one.
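The two index conventions can be wrapped up in one small helper. This is a sketch of mine (the function name is made up), just to make the two formulas concrete:

```python
def geometric_sum(r, start=1):
    """Sum of r**n for n = start, start+1, ... to infinity.
    Only valid when |r| < 1; otherwise the series diverges."""
    if abs(r) >= 1:
        raise ValueError("geometric series diverges for |r| >= 1")
    if start == 1:
        return r / (1 - r)    # index starting at n = 1
    return 1 / (1 - r)        # index starting at n = 0 (extra r**0 = 1 term)

print(geometric_sum(1 / 10))      # 1/9, as computed above
print(geometric_sum(1 / 10, 0))   # 10/9 (the n = 0 term adds 1)
```

Note the two answers differ by exactly 1, the n = 0 term.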
If the absolute value of r is greater than or equal to one, then of course the series diverges. You'll also sometimes see, in both of these forms, a constant out front: some constant a — some real number, who cares what it is — that is not affected by the index. In that case, you can factor it out. We'll show this in a little bit; it's one of the properties of series. You can bring it outside or keep it in — it doesn't matter. Like derivatives, like integrals, constants come along for the ride, so in the case starting at zero you get a times 1 over 1 minus r, which you'll often see written as a over 1 minus r. If there's a constant up front, it comes along for the ride and usually gets put in the numerator. Again, this convergence formula is only true if the ratio has absolute value less than 1. Another example of a friendly kind of series that we're going to like — and these are few and far between, a very specific case and definitely not the common one — is a telescoping series. A telescoping series is one that, when you write it out in expanded form, looks like b_1 minus b_2, plus b_2 minus b_3, plus b_3 minus b_4, and so on: each term repeats with the opposite sign, so there's a lot of cancellation happening. That's why it's called a telescoping series — it collapses like a telescope. You might say, "when does that ever happen?" Let me show you an example that might not be obvious in sigma notation but will be very obvious as soon as you write it out in expanded form. Take the series from n equals 1 to infinity of 1 over n times the quantity n plus 1. I claim I can rewrite this series so the minus signs appear. Think about when we had an integral of 1 over n times n plus 1 — what was the technique we used to solve that integral? We used partial fraction decomposition.
Now I can hear the moaning and groaning already, but this example is pretty easy: the decomposition is 1 over n minus 1 over n plus 1. We use partial fraction decomposition to write the term as a difference. It still may not be immediately obvious, but start plugging in numbers: you get 1 minus a half, plus a half minus a third — now you see how it starts looking like the general definition — plus, plugging in 3, a third minus a fourth, and so on. You see the cancellation happening much better in expanded form, so if you ever don't see something, write it out in expanded form. The one-halves cancel, the one-thirds go away, this sort of thing. The important takeaway for these very friendly kinds of series is that you can write out the partial sums really nicely; in general, this is very difficult to do. For the mth partial sum, look at what's left: there's always the 1 that stays out in front, and everything else cancels except the very last negative term. So the formula for the mth partial sum becomes 1 minus 1 over m plus 1. This is what happens with telescoping series, and you'll know them when you see them, because you just start to see cancellation all over the place. In particular, now that I have an expression for the partial sum, I can ask the question: does the series converge or diverge? I can answer this by studying the partial sums directly. So let's take the limit as m goes to infinity of these partial sums — a good old single-variable limit, 1 minus 1 over m plus 1. As m goes to infinity, the denominator gets really large, that fraction goes to zero, and I'm left with 1. Since this limit exists, the series converges, and the limit of the partial sums is 1.
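Here's a quick check of that telescoping formula (a sketch of mine, using exact rational arithmetic so no rounding noise hides the pattern):

```python
from fractions import Fraction

# Telescoping check: the m-th partial sum of sum 1/(n(n+1)) should be
# exactly 1 - 1/(m+1), because everything in between cancels.
for m in [1, 3, 10, 100]:
    s = sum(Fraction(1, n * (n + 1)) for n in range(1, m + 1))
    print(m, s, s == 1 - Fraction(1, m + 1))   # True every time
```

For m = 1 the partial sum is 1/2, for m = 3 it is 3/4, and in every case it matches 1 − 1/(m+1) exactly.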
We have a nice convergent series, and we would say that the series from n equals 1 to infinity of 1 over n times the quantity n plus 1 equals 1. If you can express a series in telescoping form, you have a really good chance of finding its exact value, which is nice — it's even better than just saying whether it converges or diverges. Geometric series and telescoping series are our first two examples, and you see they're super friendly; we like them. Not only can we tell you that they converge, we can tell you what they converge to. We will see very quickly that this luxury goes away: a lot of the time you have no idea what a series converges to, or it takes a lot of work to figure that out. As an example of a not-so-friendly series, I'll introduce you to the harmonic series. I don't want to say too much about it now, because we'll study it more later, but I want to put it on your radar. The harmonic series — the one that gets a name, the classically studied one that confused mathematicians for a long time as well — is the sum of 1 over n. I believe one of the Bernoullis was among the first to prove what I'm about to tell you. This series gets a name, so you should know it by sight: is it the harmonic series? It's this particular one. That's what it looks like in sigma, or closed, form; in expanded form it's 1 plus a half plus a third plus dot, dot, dot. It's hard to work out a formula for its partial sums — it's not immediately obvious, and you can play around with it and see why it stumped mathematicians for a while. We're going to show you why this is true later, with a different test, but right now I'm just going to tell you: this is a classically famous example of a series that diverges. If you add the terms up, the sums grow to infinity, so this is the divergent harmonic series.
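Here's a numeric sketch (mine, not from the lecture) of why the divergence is sneaky: the partial sums do grow without bound, but only at roughly the pace of the natural logarithm, so short computations make the series look like it's settling down.

```python
import math

# Harmonic partial sums grow without bound, but very slowly -- roughly
# like ln(m) -- which is part of why divergence fooled people for so long.
for m in [10, 1000, 100000]:
    h = sum(1 / n for n in range(1, m + 1))
    print(m, round(h, 4), round(math.log(m), 4))   # h stays close to ln(m)
```

Even after 100,000 terms the partial sum is still only around 12, yet it will eventually pass any number you name.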
Compare with geometric series: not every geometric series converges, but you can tell pretty quickly which ones do. The harmonic series just straight up diverges. You can look up the history of this series to see why it gave people so much trouble; for now just know it — we'll see the proof later. And if you say this looks familiar: if you replace the infinite sum with an integral sign, which is something we're going to study later, think of the p-test. Enough on harmonic — it's the classic example of a divergent series, so know it when you see it. Now I want to write down our first theorem, the test for divergence, and here it is: if the series from its starting index to infinity of a sub n is convergent, then the limit as n goes to infinity of a sub n has to equal zero. Here is the intuition behind that statement. If I'm adding things and the total is approaching some number, I can't be adding bigger and bigger pieces — I can't keep adding 3,927. I have to be adding smaller and smaller pieces, so that eventually I'm adding atomically small amounts that barely change the actual amount I have. If I had a cup of coffee and added one more atom of coffee, could you make the argument that I have the same amount of coffee? Something like that. The idea is that the terms have to get infinitesimally small. You can flip the statement — it's called the contrapositive — and say, equivalently: if the limit of the terms being summed is not zero, then the series has to diverge. This is going to be our test for divergence. Basically, if you give me a series and I want to know if it diverges, I'm just going to take a limit. I'm good at taking limits; I have fancy tools for taking limits. If I get some number that's not zero, I'm done. I look you right in the eye and I say: sorry, this series diverges.
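Here's a rough numeric sketch of using the test (the function name, the cutoff N, and the tolerance are all my own inventions): probe whether the terms are heading toward zero.

```python
def terms_tend_to_zero(a, N=10**7, tol=1e-6):
    """Crude numeric probe: evaluate a(N) for one large N. If the value is
    clearly not near 0, the test for divergence says sum a(n) diverges.
    A True result proves NOTHING about convergence -- the test can only
    ever conclude divergence."""
    return abs(a(N)) < tol

print(terms_tend_to_zero(lambda n: n / (n + 1)))   # False: terms -> 1, so the series diverges
print(terms_tend_to_zero(lambda n: 1 / n))         # True: no conclusion (and harmonic diverges anyway!)
```

The second line is exactly the warning coming up next: terms going to zero tells you nothing by itself.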
Now, it's important to realize — and this is a common mistake people make — that this test is called a test for divergence. It is wrong to conclude that a series converges from the test for divergence, though people do it all the time, so I'm just going to put a little warning here. If the limit of the terms is actually zero, then you get nothing. Then who knows? Maybe the series converges, maybe it diverges — the harmonic series is exactly a case where the terms go to zero and the series still diverges. The point is, I cannot make any conclusion from that. Just as an example, let's go back to our devil's series: the sum from n equals 1 to infinity of minus 1 to the n plus 1, the alternating 1, minus 1, 1, minus 1 series. We saw, using partial sums, that this diverges. But here's where the tests are a little nicer: you can use the test for divergence to see this as well. Take the limit as n goes to infinity of minus 1 to the n plus 1. What do you notice? The terms are 1, negative 1, 1, negative 1 — this limit does not exist, so in particular it's not zero. Therefore, I can stop right away and say the series diverges. And when you use a test, give credit where credit is due: say "by the test for divergence." Whenever you use a test, say why. Lastly, let's talk about properties of series. You're going to notice they behave a lot like other things in calculus that we've studied. Suppose you have two convergent series — and watch how I write this, by the way: the sum of a sub n and the sum of b sub n, with infinity on top but no starting index written. The point is, I don't care about the starting point; usually when that's the case, you don't write the subscript, because it could be 1, 0, 2, who cares. If these two are convergent, then so are the following two other series.
The first one is any constant c times the terms, and the second one is the term-by-term sum of the two series — so the sum of convergent series is convergent. That's nice. And of course differences work just as well as sums, so you get one of those two for free. The idea with the first one is that, with series as with derivatives and with integrals, constants come right out front: the sum of c times a sub n is c times the sum of a sub n, so if the series converges to some number, then c times that number is also some number. For the second one, if I have two convergent series, I can distribute the summation across the sum — think like derivatives, think like integrals; after all, these are just (infinite) sums as well. If both of them converge, then their term-by-term sum converges as well, and so I can start handing you, say, the sum of a geometric piece and a telescoping piece. And in the other direction, if one piece converges and the other diverges, the whole thing diverges. Now, to get better at these rules, we will do lots of examples, but we'll do that in the next video. Enough theory for now — we'll see you next time.
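Here's a quick numeric sketch of both properties (mine, with a made-up constant c = 5 and two convergent geometric series, sum of (1/2)^n = 1 and sum of (1/3)^n = 1/2):

```python
# Numeric sketch of the two linearity properties of convergent series.
N = 60   # enough terms that the leftover tails are far below float precision
a = sum((1 / 2) ** n for n in range(1, N))   # ~ 1
b = sum((1 / 3) ** n for n in range(1, N))   # ~ 1/2
c = 5.0

# Property 1: sum of c*a_n equals c times (sum of a_n).
scaled = sum(c * (1 / 2) ** n for n in range(1, N))
print(abs(scaled - c * a) < 1e-12)       # True

# Property 2: sum of (a_n + b_n) equals (sum a_n) + (sum b_n).
combined = sum((1 / 2) ** n + (1 / 3) ** n for n in range(1, N))
print(abs(combined - (a + b)) < 1e-12)   # True
```

The truncation at N terms is the only approximation here; the properties themselves are exact statements about the limits.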