There are many other types of trees that have been studied, and it's a huge topic. I've really only talked about binary trees and binary search trees. I talked at the beginning about general trees and how they relate to binary trees, and some of the results translate over, and many of the techniques do as well. But we're going to save a lot of that for analytic combinatorics in part two, where we'll do many other examples. Still, I do want to talk briefly about the other types of trees that can come up, to prepare people for other kinds of ideas that we'll be looking at.

Classically, mathematicians study the free tree: that's just a connected graph with no cycles. If you distinguish a root node, that's called a rooted tree. And then there's the ordered tree, where the order of the subtrees is significant. What I defined at the beginning is really an ordered tree, because in computer implementations we usually pick an order, since we have to represent the thing in some way.

This little diagram illustrates the distinction between these types of trees for five-node trees. There are only three different free trees on five nodes. Again, these trees are defined just in terms of their connections; there's no root and there's no order. So either the four other nodes are all connected to one node, or there's one node with three others connected to it and one more hanging off, or the nodes come in a line. Those are the only three free trees on five nodes. If you pick a root but don't care about the order of the subtrees, then there are nine different rooted trees. In the first case there's no difference, but in the second case you can see that for the tree on the left, the root has three children and the extra node hanging off could be attached to any one of them; for rooted trees that's not significant, so there are only nine of them. And then, as we saw, there are 14 ordered trees, the Catalan number, where the order is significant.

Each of these gives us a counting problem, and people have solved these counting problems for these types of trees. You can also label the nodes and treat them as labelled structures, impose orders on the labels, and many other kinds of problems come up. We'll examine plenty of those in analytic combinatorics, so stay tuned for part two of the course.

In algorithmics there are plenty of natural ideas that come up as well. What about having more than two children? Maybe I want exactly three children, or exactly four, and there are plenty of applications where we might want that. So there's the idea of the t-ary tree, where each node has exactly t children; or maybe there's just a bound on the number of children, so each node has at most t children but not necessarily exactly t; that's a t-restricted tree. And then there's the 2-3 tree, which is actually the method of choice in symbol-table implementations and which generalizes to multiway trees, where you can have more than one key per node. It's kind of a strange structure, but it's easy to represent as a binary tree, and I'll show that in a second. Again, all of these give rise to enumeration problems for any value of t, and to path length and all sorts of other parameters that we might want to analyze.
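As a quick check on that last count: the number of ordered trees with n nodes is a Catalan number, and the standard formula gives exactly the 14 mentioned above (the worked instance for five nodes is my addition, not from the lecture):

\[
\#\{\text{ordered trees with } n \text{ nodes}\} \;=\; C_{n-1} \;=\; \frac{1}{n}\binom{2n-2}{n-1},
\qquad\text{so for } n=5:\quad \frac{1}{5}\binom{8}{4} \;=\; \frac{70}{5} \;=\; 14 .
\]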
With all these variations — so far we've talked about a dozen different types of trees — you can see why we need general tools like analytic combinatorics that can get us solutions to problems without necessarily going into all the details. That's why we need the general theorems that we've talked about a little and will talk about more later. You can already see how easy it is to specify structures like this. I'm not writing it down here, but you can certainly take the defining construction for binary trees and extend it to ternary or quaternary trees, and you know that if you can do that, then you have a generating function equation. A lot of these generating function equations are not as simple to solve, or to extract asymptotics from, as the binary tree case, but we will learn techniques for that. At least I think people in the course are convinced that we can get to generating functions pretty quickly for all of these types of structures. But again, that's a topic for part two of the course.

I want to finish by talking about one unsolved problem related to the things we've been discussing. In the real world it's not enough to rest our faith in an algorithm on the assumption that the keys are going to come in a random order. They might not, and we want guaranteed performance. There is a method for getting guaranteed performance based on trees, called balanced trees, and it's the method of choice because the search code is extremely simple: you can represent balanced trees with BSTs, so much of the code is very much the same, with only a slight extra overhead for insertion. You can read all the details in Section 3.3 of Algorithms. With that slight overhead, you can guarantee that no matter in what order the keys are inserted, the height is going to be less than 2 lg N, so less than twice the optimal, guaranteed. Most implementations use 2-3 or 2-3-4 tree representations, which translate right into binary trees. Without going into the details, this is an example of left-leaning red-black trees, which are the ones described in detail in Algorithms. There's the 2-3 tree, where sometimes you have two keys per node, and there's its binary tree representation: you represent a two-key node just by a left-leaning node with a special link on it (a small sketch of that representation follows below). So now we have binary trees with a little extra overhead, because we have to maintain those links in a particular way, but otherwise that's the method of choice for symbol tables.

Now, even though we have the guarantee, we're still interested in the performance if we assume that the keys come in random order, because that is a reasonable model. It would be good to know whether the height is asymptotic to lg N with a coefficient of one, because then the structure would be twice as fast as the guarantee implies, and when something is in the inner loop of a major computation, speeding it up by a factor of two is important. So we'd still like to know the answer to this question: what's the average cost of a balanced tree built from a random permutation? Again, that's not a property of trees; it's a property of the permutation and the algorithm. The tree is just a container for the data.
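To make the "left-leaning node with a special link" idea concrete, here is a minimal sketch of the node representation, in the spirit of the left-leaning red-black BSTs of Algorithms, Section 3.3. It is a simplified illustration, not the book's full code; the class name LLRBSketch and the omission of the insertion logic are my choices.

```java
// Sketch: a 2-3 tree encoded as a BST. A 3-node (two keys) becomes two BST
// nodes joined by a "red" link that leans left; the color bit on each node
// records the color of the link from its parent.
public class LLRBSketch<Key extends Comparable<Key>, Value> {
    private static final boolean RED   = true;
    private static final boolean BLACK = false;

    private class Node {
        Key key;
        Value val;
        Node left, right;
        boolean color;                 // color of the link from the parent

        Node(Key key, Value val, boolean color) {
            this.key = key;
            this.val = val;
            this.color = color;
        }
    }

    private boolean isRed(Node x) {
        return x != null && x.color == RED;
    }

    // One of the local transformations used during insertion to keep red
    // links leaning left -- part of the "slight extra overhead" mentioned above.
    private Node rotateLeft(Node h) {
        Node x = h.right;
        h.right = x.left;
        x.left = h;
        x.color = h.color;
        h.color = RED;
        return x;
    }
}
```

Search in such a tree is ordinary BST search that simply ignores the color bits, which is why the search code stays so simple.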
The plot that we've been using as our logo is related to that unsolved problem. There's one kind of balanced tree called an AVL tree, which again can be represented as a binary tree, and this is the normalized plot of the probability that the root has rank k in a random AVL tree. That's the random tree model, just like the random Catalan tree model: you can specify what an AVL tree is, take each tree to be equally likely, develop generating function relationships, and produce that plot. What the plot shows is that sometimes the root is in the middle, but sometimes it's at about one-third or two-thirds; there are two peaks. It's quite a fascinating plot to stare at. And that's encouraging, because remember that for random binary trees we have the Catalan distribution, which was not balanced at all — the root was more likely to be near an end — and for random permutations the distribution is just flat. So this looks good, because at least this model tends to put the root near the middle. But it's still a challenge to characterize that analytically, and that's an unsolved problem. That's one plot, and this is an empirical plot for LLRB trees; it seems to have some of the same characteristics, but we are still not close to understanding how to analyze balanced trees built from a random permutation. The other model, where a random AVL tree or random LLRB tree is drawn with each tree equally likely, has been studied with analytic combinatorics, and we'll touch briefly on that in part two. So that's a brief summary of other types of trees.

Now I'll talk about a few exercises that you might do before the next lecture. There are plenty of exercises on tree enumeration to check how comfortable you are with using the symbolic method for enumeration problems. Here's one: we've looked at lots of forests, but maybe we're interested only in forests that have no singleton nodes. The question is: what proportion of forests with n nodes have no trees consisting of a single node? The sketch shows that for n = 1, 2, 3, and 4 the answer is zero, one-half, two-fifths, and three-sevenths, respectively. Find an asymptotic formula for that proportion. Again, the beauty of the symbolic method is that you can take the basic derivation, modify it, and the basic structure still follows through to give you a generating function equation that you can find asymptotics, or an explicit form, for. This exercise is just practice in doing that.

The next one is a computation based on our observation about permutations and binary search trees: what's the chance that you get a perfectly balanced binary search tree? That's just a fun computation. And then this one is the same kind of idea. We did parameters of trees before; this is parameters of BSTs built from a random permutation. The red points are the average number of internal nodes whose children are both internal, and the blue points are the average number of internal nodes with one internal and one external child. For random Catalan trees we solved this before and got a constant fraction, so this exercise is to compute the same quantities for random binary search trees. We talked about that in lecture five, and you can compare these results against what we got there. There's also reading for the next lecture.
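Here's a small numerical check for that first exercise, under the assumption that a forest is a sequence of ordered trees, so forests with n nodes are counted by the Catalan number C_n and trees with k nodes by C_{k-1}. The code is my own sketch, not part of the lecture, and the class name is arbitrary.

```java
// Proportion of n-node forests that contain no single-node trees.
// Expected output starts 0/1, 1/2, 2/5, 6/14 = 3/7, matching the values above.
public class NoSingletonForests {
    public static void main(String[] args) {
        int N = 12;
        long[] cat = new long[N + 1];          // cat[n] = Catalan number C_n
        cat[0] = 1;
        for (int n = 1; n <= N; n++)
            for (int k = 0; k < n; k++)
                cat[n] += cat[k] * cat[n - 1 - k];

        // g[n] = number of n-node forests whose trees all have >= 2 nodes;
        // a tree with k nodes can be chosen in C_{k-1} ways.
        long[] g = new long[N + 1];
        g[0] = 1;
        for (int n = 1; n <= N; n++)
            for (int k = 2; k <= n; k++)
                g[n] += cat[k - 1] * g[n - k];

        for (int n = 1; n <= N; n++)
            System.out.printf("n=%2d  %d/%d = %.4f%n",
                              n, g[n], cat[n], (double) g[n] / cat[n]);
    }
}
```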
Another thing that some people might want to do is run experiments to validate the mathematical results we've talked about. For example, generate some random permutations, build binary search trees, look at the average cost that you get, and check that it's close to 2 n ln n (a code sketch of such an experiment follows below). It's always worthwhile to do that; it's a little bit of programming, but for most programmers that's no problem at all. Maybe a little harder is to do the same thing for random binary trees: is the internal path length close to n√(πn)? How close? How big does n have to be before the asymptotic estimate is accurate, and so forth. It's always worthwhile to run experiments, and some people might enjoy doing that. And again, just to check your understanding of the material, it's a good idea to write up solutions to exercises like the ones that I gave.

And that's an introduction to trees.
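As a footnote, here is a minimal sketch of the first experiment suggested above: build BSTs from random permutations and compare the average internal path length with 2 n ln n. The code is mine, not from the lecture; the class and method names are placeholders.

```java
import java.util.Random;

// Build BSTs from random permutations and compare the average internal
// path length (sum of node depths) against the asymptotic estimate 2 n ln n.
public class BstExperiment {
    static class Node {
        int key;
        Node left, right;
        Node(int key) { this.key = key; }
    }

    static Node insert(Node x, int key) {
        if (x == null) return new Node(key);
        if (key < x.key) x.left = insert(x.left, key);
        else             x.right = insert(x.right, key);
        return x;
    }

    static long pathLength(Node x, int depth) {
        if (x == null) return 0;
        return depth + pathLength(x.left, depth + 1) + pathLength(x.right, depth + 1);
    }

    public static void main(String[] args) {
        int n = 100000, trials = 10;
        Random rnd = new Random();
        double total = 0;
        for (int t = 0; t < trials; t++) {
            int[] a = new int[n];                 // identity permutation 0..n-1
            for (int i = 0; i < n; i++) a[i] = i;
            for (int i = n - 1; i > 0; i--) {     // Fisher-Yates shuffle
                int j = rnd.nextInt(i + 1);
                int tmp = a[i]; a[i] = a[j]; a[j] = tmp;
            }
            Node root = null;
            for (int key : a) root = insert(root, key);
            total += pathLength(root, 0);
        }
        System.out.printf("average path length = %.0f   2 n ln n = %.0f%n",
                          total / trials, 2.0 * n * Math.log(n));
    }
}
```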