[MUSIC] All right, welcome back. Where we are heading now is the exponential learning curve of the machines. It might be more useful, in some sense, to figure out what the learning curve means and how it works in humans before we extend it to machines.

So what is the learning curve? The learning curve basically relates time and effort to levels of learning in new skills. A very broad, simple definition. And it holds true for new-to-the-world skills, for things you might never have tried before. Here's a nice example, of possibilities rather than skills in this case. Until 1954, it was believed that humans could not run a mile in less than four minutes. There were no real physical constraints; there was nothing to say it was not possible. It was just a belief that everybody shared: it's just too fast, it's too much effort, it's probably not doable. We know what happened in 1954, though. In 1954, Roger Bannister breaks that barrier, and he writes the bestselling book The Four Minute Mile. Okay, that's fine, but what happened after that is actually quite interesting. By 1957, 16 other runners had broken the barrier. Until 1954, nobody in recorded history had. The moment the barrier breaks, within three years you have 16 people breaking it. The implication: when the impossible is demonstrated to be doable, the old mental model breaks down and collective intuition is reset. Which means that the moment something is achieved in one part of the world, it basically spreads everywhere. It happened in mountaineering, it happened in computer processing, it happened in fuel efficiency, and in a lot of other things. It reminds me of a Nelson Mandela quote: it always looks impossible until it is done.

Now we know about the learning curve, and you've seen how it operates in humans. How does it operate in machines? Well, time to head there. Let me, in some sense, demonstrate the machine learning curve with a nice little motivating example. So let's come to the exponential learning curve. The date is the 13th of March, 2004. The venue is the Mojave Desert in California. It is the site of the DARPA Grand Challenge, DARPA standing for Defense Advanced Research Projects Agency. The Grand Challenge, with its $1 million prize money, is basically this: there is a 150-mile race course in the desert with numerous small obstacles, and what you have to do is get a fully autonomous vehicle, fully autonomous, no remote controls, nobody sitting inside, to sense its surroundings on its own and complete the race. If it comes first, you win the prize money. What do you think happened? Well, it was a disappointment. None of the 15 participants, there were 15 autonomous vehicles competing, could complete the course. CMU, Carnegie Mellon University's modified Humvee, went the farthest: about 7.5 miles before falling into a ditch. So basically we are talking about 5% of the course completed by the best candidate. DARPA declared the whole thing a bust and kept the prize money. However, it did see some promise. And so, about a year and a half later, on October 8, 2005, at the same venue, a rematch happened. This time there were 12 participants, the prize money is now $2 million, and the obstacles are tougher: there are tunnels, there are narrow roads, there are cliff edges where, if you fall, it's over. What happened? What do you think?
Right, barely 18 months ago, nobody could complete even 5% of a simpler race, in some sense. What happens now? Okay, 18 months later, five racers completed the race, and four did so within 7.5 hours, which works out to an average of about 20 miles per hour. Stanford's entry, Professor Sebastian Thrun's creation, emerges the winner by a ten-minute margin. Who is number two? CMU's modified Humvee. Carnegie Mellon's Humvee is number two.

Okay, so this is October 2005. Fast forward a little over a year, well, two years in this case, to November 10, 2007. Rematch, and this time it is in an urban setting, which makes things more challenging. The rules are that these autonomous vehicles must obey all of California's traffic laws. They must demonstrate the ability to merge into traffic, to park by the curb, to sense road signs and follow them, all of that. You might think, how could they allow this? It might endanger other humans. But what they actually did was hire an entire town for a day and hire 300 professional drivers to act as regular traffic, so in some sense it was a controlled environment there too. What happened? The cars actually came out well. This was two years plus since 2005, and most of the participants actually completed the race. The one that came first was Stanford's, Sebastian Thrun's creation, again. And who came second? CMU's Humvee. However, Sebastian Thrun's vehicle, Stanford's vehicle, basically breaks one law: there is a stop sign that it misses, and hence it is docked a few points by the judges, and hence CMU's vehicle wins. But that's fine, right? Now it gets more interesting. So what happens next? This was 2007. What happened next, in 2008? Google's self-driving car project was launched, with Sebastian Thrun as its head.

The point of all this? The exponential learning curve of the machines. The machines are learning, and they're getting smarter. Yep, okay, let's see. All right, so the point of all this: well, basically there's a very famous quotation attributed to Bill Gates which says most people overestimate what they can do in one year and underestimate what they can do in ten. The learning curve in technology is astonishingly steep. I mean, we saw that just now in the examples that preceded. The possibilities are immense. However, there will always be a layer of hype, of misconceptions, and of mistaken bets placed on particular technologies, a fact that is very well captured in Gartner's well-known Technology Hype Cycle. So let me, in some sense, bring this one up. That is the shape of Gartner's Hype Cycle. What it basically says is: a technology debuts, and there is this Peak of Inflated Expectations, where people overestimate what it can do in a year, in a short time. Then disappointment follows, and it crashes. The pendulum swings the other way, and it falls below where it actually should be. And over time, it matures and reaches what is called the Plateau of Productivity. Gartner publishes this annually. This is the Gartner Hype Cycle as of July 2015, and you can see a number of technologies debut and place themselves at different points on this curve. I want you, in some sense, to read every tech label occurring in the 2015 hype cycle and answer the following questions. [MUSIC]