In the following several videos we'll talk about issues of non-integer arithmetic. They can be far more involved than simple integer overflow. Let's start with some simple math. If you take a, divide it by b, and then multiply by b back, you will get exactly a. Of course, we want such basic equalities to hold in our programs too. But do they? It turns out that if you take 1, divide it by 49, and then multiply it back, you will get something which is not equal to 1. So why does this happen, and what should we do if even such basic properties don't hold? To answer that, you need to know how non-integers are stored in computers. Well, we know how to store integers, so what could we do with non-integers? First of all, consider rational numbers: numbers of the form a over b, where both a and b are integers. They arise naturally when we try to divide integers, like 1 over 49. Storing them is pretty straightforward: as the numerator and the denominator are both integers, we can just store them as a pair of integers. Now, with a fraction given by its numerator and denominator, we can do basic arithmetic operations. On the slide, you see the rules for adding and multiplying fractions of this form: a/b + c/d = (ad + bc)/(bd), and (a/b) · (c/d) = (ac)/(bd). The good thing about this approach is that the numbers are stored exactly: there is a one-to-one correspondence between rational numbers and numerator-denominator pairs, if you consider only irreducible fractions. So our example would be all right. If we divide 1 by 49, we get the fraction 1/49. If we multiply it back, we get the fraction 49/49, which is equal to 1 after reducing by the common divisor. However, numbers are usually not stored this way in computers. At first glance this approach seems very good, but in fact there are some issues. First, remember the implementation of integers: the maximum value it can store is bounded, because there is no way to store infinitely many values in finite space.
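The broken identity from the opening, and the exact rational-number fix just described, can both be checked directly. Here is a minimal sketch using Python's standard-library `fractions.Fraction` (Python is used purely as an illustration; the course's language is not specified):

```python
from fractions import Fraction

# With ordinary floating-point division, (a / b) * b is not always a.
x = (1 / 49) * 49
print(x == 1)       # False: the result is slightly less than 1

# Storing the numerator and denominator as integers keeps the value exact.
f = Fraction(1, 49) * 49   # 49/49, automatically reduced to 1
print(f == 1)              # True
```

The `Fraction` type stores exactly the numerator-denominator pair discussed above and reduces it by the greatest common divisor after every operation.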
But with rational fractions, we also cannot get as close to zero as we want, because the denominator is an int, and so is bounded by 2 to the k minus 1, where k is equal to 31 for the type int. So we limit not only the magnitude, but also the precision. In fact, there is no way of storing numbers arbitrarily close to 0 exactly, because even the numbers of the form 1 over k alone are infinitely many. So limited precision will be an issue for any approach, although some approaches achieve much better precision in the same space. However, there is also an issue specific to storing rational numbers. Both numerator and denominator are integers, so we have to store them in some standard integer type, like int. But even short sums of small fractions produce huge values. For example, the sum 1 + 1/2 + ... + 1/25 has numerator and denominator of several billions, so there would be an overflow if the type int were used. This happens because adding ordinary fractions requires multiplication of integers: we need to get to the common denominator. And that is a big problem, in fact, as we would like to have more than a dozen additions in our programs. Second, there are many irrational numbers, those that just can't be represented as a ratio of integers. Important examples include square roots, which appear every time you want to calculate some length; pi, which is needed for arc length and trigonometry; the base of the natural logarithm, e; and so on. Next, let's look at decimal fractions, numbers like 2.37 or 0.125. They are just a special case of rational fractions where the denominator is some power of 10. In general, any decimal fraction is just the number written without the point, divided by 10 to the power of how many digits there are after the point. For example, 2.37 is 237 divided by 100, because there are two digits after the point. The good thing is that any real number can be written as some decimal fraction. But there is a problem, because these fractions are often infinite.
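The overflow claim about the sum 1 + 1/2 + ... + 1/25 is easy to reproduce. A short sketch in Python, whose integers never overflow, lets us inspect how large the reduced numerator and denominator actually get (assuming a 32-bit signed int caps out at 2 to the 31st minus 1):

```python
from fractions import Fraction

INT_MAX = 2**31 - 1  # maximum value of a 32-bit signed int

# Compute 1 + 1/2 + 1/3 + ... + 1/25 exactly.
s = sum(Fraction(1, k) for k in range(1, 26))

print(s.numerator)              # several billions
print(s.denominator)            # several billions
print(s.denominator > INT_MAX)  # True: a 32-bit int would overflow
```

Even though every summand is tiny, bringing 25 fractions to a common denominator multiplies their denominators together, which is exactly the blow-up described above.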
For example, the square root of 2 is 1.41421 and so on, infinitely, and two-thirds is 0.666 and so on. Still, this gives us a way of representing any real number as a finite decimal fraction. Say we want to have three digits after the point, so we round anything we get to the third digit. The square root of 2 becomes 1.414, and two-thirds becomes 0.667. Of course, after rounding it's not the same value anymore, but it's pretty close to it. The error, that is, the difference between the actual value and the rounded one, is not greater than 10 to the -3rd over 2. And if we keep more digits, the error becomes even smaller. In fact, that's the key idea of non-integer arithmetic in computers: we round any value to some fixed number of digits after the point, at the cost of small errors. However, for computers, base 2 is far more useful than base 10, so it makes sense to consider binary fractions instead. They behave pretty much the same as decimal fractions, only the base is 2. For example, 1.01 in binary is 101 in binary, which is 5, divided by 4, that is, 5/4. And 0.001 in binary is 1 over 8, or one eighth. The general scheme remains: we take the whole number without the point and divide it by 2 to the power of how many digits there are after the point. The only thing that changed is the base, from 10 to 2. The number two-thirds will be written as the infinite binary fraction 0.101010 and so on. We need to round here as well: if we round to the third digit, we get 0.101. And similarly, the error will be no more than 2 to the -3rd over 2, which is 2 to the -4th. So in this video we've seen how integers help represent non-integer numbers and fractions. In the next video, we'll cover in more detail how binary fractions are used and how the errors behave.
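The rounding of two-thirds to three binary digits, and the error bound of 2 to the -4th, can be checked exactly with rational arithmetic. A small Python sketch (the helper `round_binary` is made up here for illustration):

```python
from fractions import Fraction

def round_binary(x, digits):
    """Round x to the given number of binary digits after the point."""
    scale = 2 ** digits
    return Fraction(round(x * scale), scale)

two_thirds = Fraction(2, 3)            # 0.101010... in binary
rounded = round_binary(two_thirds, 3)
print(rounded)                          # 5/8, i.e. 0.101 in binary

error = abs(two_thirds - rounded)
print(error <= Fraction(1, 16))         # True: error is at most 2**-4
```

Multiplying by 2 to the number of digits, rounding to the nearest integer, and dividing back is precisely the "fixed number of digits after the point" scheme from the video, and the actual error here, 1/24, indeed stays below the bound 1/16.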