Let's look at a method to estimate the value of pi. Here, I have a square and inside it I have a circle. Suppose that the radius of the circle is one; that makes the area of the circle pi times one squared, which is pi. The side of the square is then two, so the area of the square is two squared, which is four. Now, we can also look at one quarter of this entire picture. Then the quarter of the circle has one quarter of the circle's area, and the quarter of the square has one quarter of the square's area. So, if we compute the ratio between these two, between the fraction of the circle and the fraction of the square, we again get pi over four. So, let me denote this value by lambda.

Suppose we want to estimate lambda. What we can do is randomly sample points inside the entire square, and then see what fraction belongs to the circle. Or, alternatively, we can randomly sample points inside one quarter of the square and see what fraction falls into that quarter of the circle. In either case, we will be obtaining an estimate of this value lambda. So, if we want to get an estimate of pi, we just need to multiply that estimate by four.

Here's sequential code that implements this idea. Here we use pseudo-random generators from the Scala library in order to implement the sampling of points. Given two instances of the Random class, we can use the method nextDouble to obtain a floating-point number in the interval [0, 1). If we get two such floating-point numbers, we obtain a point inside the quarter of the square we considered. And then, to test whether these two coordinates correspond to a point inside the circle, we just check whether this condition holds: whether the distance to the origin is less than one. If this is the case, we increase the counter. So we have some total number of attempts, denoted here by iter, and we compute how many times we ended up inside the circle. Given this ratio, the number of times inside the circle divided by the total number of attempts, we get our estimate of the lambda from the previous slide. And if we multiply it by four, we get our estimate of pi.

Now, let's look at how we would parallelize this kind of code. Here's the parallel version of computing the same estimate. Here, I'm using the parallel construct to simultaneously compute these counts, each for one quarter of the iterations. Once I get these counts, I can simply add them up, divide them by the total number of iterations that I had, and multiply the value by four. In this case, all four computations proceed in parallel. And, in fact, they do not really have much of a shared resource, since they all proceed independently doing this computation.

So this is a nice example, because we do not have a shared resource, and also because the amount of work can be easily subdivided into four approximately equal parts. This is because the computation is such that, given the value of the parameter, we can easily estimate the amount of work; in this case, it is in fact linear in this parameter, the number of iterations. Unfortunately, not all parallel computations have this property, which means that it can be difficult to ensure that different parallel computations are balanced.
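
Here is a minimal sketch of the sequential version described above, using scala.util.Random; the names mcCount and monteCarloPiSeq are my own and not necessarily the ones used on the slide:

```scala
import scala.util.Random

// Count how many of `iter` random points in the unit quarter-square
// fall inside the quarter-circle of radius 1.
def mcCount(iter: Int): Int = {
  val randomX = new Random
  val randomY = new Random
  var hits = 0
  for (_ <- 0 until iter) {
    val x = randomX.nextDouble() // coordinate in [0, 1)
    val y = randomY.nextDouble()
    if (x * x + y * y < 1) hits += 1 // distance to origin less than 1: inside the circle
  }
  hits
}

// hits / iter estimates lambda = pi / 4, so multiplying by 4 gives an estimate of pi.
def monteCarloPiSeq(iter: Int): Double =
  4.0 * mcCount(iter) / iter
```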
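
And here is a sketch of the parallel version. The parallel construct mentioned in the lecture comes from the course's own library, so its exact signature is an assumption here; I include a simple Future-based placeholder with the assumed behavior (evaluate both arguments concurrently and return the results as a pair). The sketch reuses mcCount from the sequential snippet above.

```scala
import scala.concurrent.{Await, Future}
import scala.concurrent.duration.Duration
import scala.concurrent.ExecutionContext.Implicits.global

// Placeholder for the lecture's `parallel` construct (assumed signature):
// evaluate both by-name arguments concurrently and return their results as a pair.
def parallel[A, B](a: => A, b: => B): (A, B) = {
  val fb = Future(b)       // run b on another thread
  val ra = a               // run a on the current thread in the meantime
  (ra, Await.result(fb, Duration.Inf))
}

// Run four independent sampling tasks, each on roughly a quarter of the iterations,
// then combine their hit counts into a single estimate of pi.
def monteCarloPiPar(iter: Int): Double = {
  val ((c1, c2), (c3, c4)) = parallel(
    parallel(mcCount(iter / 4), mcCount(iter / 4)),
    parallel(mcCount(iter / 4), mcCount(iter - 3 * (iter / 4))))
  4.0 * (c1 + c2 + c3 + c4) / iter
}
```

Calling, say, monteCarloPiPar(4000000) runs the four sampling tasks concurrently and should return a value close to pi; since each task works on an independent random generator and its own count, there is no shared state to coordinate.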