There are many contenders, and it was a little challenging trying to determine where to start. I found some information in InformationWeek; this is their list of the top eight operating systems: RIOT, Windows 10, VxWorks, Google Brillo, ARM Mbed, Apple iOS and Mac OS X, Mentor Graphics Nucleus RTOS, and Green Hills Integrity. I've used Mentor Graphics products as part of chip design at various points over the years, and I had no idea that they developed a real-time operating system. That was news to me.

Arrow Electronics is heavily involved in helping companies build IoT systems in the industrial segment, and as you might guess, there's a lot of free Linux out there, so this shouldn't come as any surprise. This is from a 2016 survey that they performed of all of their customers: 73 percent of them were using Linux. The next one, at 23.1 percent, is no OS, or bare metal. Then the next one is FreeRTOS. Heard of that one? FreeRTOS? Yeah. Then they have this "other" category at 11 percent, and then there's Windows Embedded and Mbed and Contiki and TinyOS and "don't know," and we see RIOT down at 5.6 percent.

The reason bare metal sits here next to Linux: if your demands for deterministic latency and throughput are low enough, Linux is a great way to go, because you can take the operating system, put it in there, and with the right drivers you can talk to devices, get your system up and going very quickly, and build your application on top of that. But there are applications where the operating system gets in the way; if you were to put the operating system in there, it would preclude you from meeting your product requirements.

Let me give you an example from my day job. In high-performance solid-state drives, if the design engineers were to decide, "we're going to put an operating system into the SSD and then our code on top of that," we would not be able to hit the performance targets we're trying to meet. So here's the example. Here's a host, and it talks to a drive. Each command the host sends to the drive is called an I/O, one I/O, and the transfer happens in units of blocks; they can be 512-byte blocks or they can be 4,096-byte blocks. If the host wants to read data from the drive, it sends a read command that says, "give me one block," and there's an address associated with it, a logical block address. The drive takes that address, goes and finds the sector data for it, and returns it to the host. That's called one I/O.

Hard drives: 15,000 RPM means the platters are rotating 15,000 times a minute. Enterprise hard drives, even when using only the outer-diameter tracks, because that's where the maximum speed is, get about a thousand IOPS or so. That's it. SSDs are much higher. We started out with our first drives aiming for 50,000 IOPS, then we went to 100,000 IOPS, and then we went to 200,000 IOPS. When you get to one million IOPS, one over one million is equal to one microsecond. Now, the host has to give the drive enough commands to drive enough parallelism across all the flash chips connected on the back side of the drive to get to this number, but the drive has to be completing one I/O every microsecond to hit one million IOPS.

Okay, one microsecond isn't very long. So imagine the SSD just had a single CPU running it. I'm just going to make this number up: 533 megahertz. Calculate the period of that: one over 533 megahertz, and we'll call it a 1.88-nanosecond period. Imagine your CPU is able to execute one instruction per clock once the pipeline is full. Take one microsecond and divide it by 1.88 nanoseconds: you only have enough time to execute about 533 instructions.
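To make that concrete, here's a little C sketch. It models the read command I just described (the struct and its field names are my own illustration, not any real drive's interface; real protocols like NVMe carry a lot more) and works out the instruction-budget arithmetic, with that made-up 533 MHz clock as an assumption.

```c
/* Back-of-the-envelope: the per-I/O instruction budget at one million IOPS.
   The read_cmd struct and the 533 MHz clock are illustrative assumptions. */
#include <stdint.h>
#include <stdio.h>

/* A simplified block-read command: "give me nblocks starting at this
   logical block address (LBA)." This is the essence of one I/O. */
struct read_cmd {
    uint64_t lba;        /* logical block address of the first block */
    uint32_t nblocks;    /* transfer length in blocks */
    uint32_t block_size; /* typically 512 or 4,096 bytes */
};

int main(void) {
    double iops     = 1e6;    /* performance target: one million IOPS */
    double clock_hz = 533e6;  /* assumed single-CPU clock */

    double us_per_io    = 1e6 / iops;      /* 1/1,000,000 s = 1 us per I/O */
    double ns_per_cycle = 1e9 / clock_hz;  /* ~1.88 ns per instruction */
    double insns_per_io = us_per_io * 1000.0 / ns_per_cycle; /* ~533 */

    struct read_cmd cmd = { .lba = 1234, .nblocks = 1, .block_size = 4096 };
    printf("one I/O: read %u block(s) of %u bytes at LBA %llu\n",
           (unsigned)cmd.nblocks, (unsigned)cmd.block_size,
           (unsigned long long)cmd.lba);
    printf("budget: %.2f us per I/O / %.2f ns per cycle = ~%.0f instructions\n",
           us_per_io, ns_per_cycle, insns_per_io);
    return 0;
}
```

Run it and you get roughly 533 instructions per I/O, which is the whole point: there is no room in that budget for operating-system overhead.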
That's not very many. How much work can you get done in 533 instructions? Not much, right? You can't get much work done. Now try to put an operating system in there with all of its overhead: you're never going to get anywhere near that number. So the developers in that survey who chose no OS, or bare metal, had performance issues they were concerned about, real-time performance issues, latency issues. They probably went off and did an exploration, tried deploying a real-time operating system, and realized that they couldn't meet their product requirements, so they chose to go the bare-metal route. Others down the list were happy with the response times and performance levels they were getting, and so forth. I just wanted to share an example from my work about how performance can drive the technology decisions you make when you're attempting to craft a product.

Fortunately, in an SSD we don't just have one CPU. If you look inside SSD controllers (there are companies that manufacture them and sell them on the open market, and there are proprietary solutions that the flash vendors create themselves), it's very common to see many CPUs in there. It started out with two or four, then the numbers went up: six, eight, 10, 12, 16. We had an SSD one time that I think had 34 CPUs in it. A lot of them were Cortex-M0s, which have now been replaced with M3 CPUs. They were just tiny, itty-bitty little things, so the area cost to put an M0 into our chips was essentially free. The real cost was the SRAM we would put down next to the CPU to hold its instructions and data; it was larger and consumed more power than the CPU itself. So the cost was the SRAM. But we sprinkled M0s all over our designs, and then when the firmware team got in a bind and needed to create some additional parallelism, they could write some code to offload work to some of those other CPUs. With that divide-and-conquer approach, creating more and more parallelism, we could get up close to that one-microsecond number. But you cannot get anywhere near it with a single CPU. I'll sketch that offload pattern at the end of this part.

Again, this information is from Arrow. It's an overview of potential open-source OSes for IoT sensor nodes. I'm not going to read through all of it, but you can read it on your own; I'll just talk about the architectures. This one is monolithic. They call RIOT a microkernel real-time operating system, and the same goes for FreeRTOS. Contiki is cooperative; RIOT is preemptive and tickless. I've always meant to dig into that a little more to find out exactly what it means. Normally an operating system uses a timer to generate an interrupt every so often, which the scheduler uses to switch between the threads, switch between the processes. Does anybody know what tickless means? Because I don't. Have you heard that term before? If I find out by the end of the semester, I'll let you know. There's a sketch of the tick business at the end of this part as well.

Then on the apps side, I found this list: Nano-RK, LiteOS, Nimbits (I like that name), OpenAlerts, Thingsquare Mist (never even heard of it), and ThingSpeak, which I've heard of before; it supports Arduino and the Raspberry Pi. Also IoT Toolkit, Nitrogen, Argot (I think that's how that's pronounced), and Dat. This data is also from Arrow Electronics.
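Here's the offload pattern I mentioned, as a rough sketch. In the real controller each side runs on its own core, polling a queue sitting in shared SRAM; here both sides are just functions in one program, and all the names and the queue layout are made up for illustration.

```c
/* Divide and conquer: the main CPU posts work into a mailbox queue and a
   helper CPU (one of those little M0s) drains it in parallel. This is a
   single-program illustration of the pattern, not production firmware. */
#include <stdint.h>
#include <stdio.h>

#define QUEUE_DEPTH 8u               /* power of two keeps the wrap cheap */

struct work_item { uint32_t opcode; uint32_t arg; };

static struct work_item queue[QUEUE_DEPTH];
static volatile uint32_t head;       /* written only by the main CPU */
static volatile uint32_t tail;       /* written only by the helper CPU */

/* Main CPU: hand a unit of work to the helper instead of doing it inline. */
static int offload(uint32_t opcode, uint32_t arg) {
    if (head - tail == QUEUE_DEPTH)
        return -1;                               /* queue full */
    queue[head % QUEUE_DEPTH] = (struct work_item){ opcode, arg };
    head++;                                      /* publish the item */
    return 0;
}

/* Helper CPU: spin on the queue and process whatever shows up. */
static void helper_poll(void) {
    while (tail != head) {
        struct work_item w = queue[tail % QUEUE_DEPTH];
        printf("helper: opcode %u, arg %u\n",
               (unsigned)w.opcode, (unsigned)w.arg);
        tail++;                                  /* retire the item */
    }
}

int main(void) {
    for (uint32_t i = 0; i < 4; i++)
        offload(1, i);   /* main CPU queues four pieces of work */
    helper_poll();       /* helper CPU drains them */
    return 0;
}
```

On real silicon you'd add memory barriers and a doorbell interrupt instead of pure polling, but the shape is the same: the main CPU stays inside its instruction budget by pushing everything it can onto the other cores.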
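And here's the tick business sketched out, for what it's worth. Normally the kernel takes a periodic timer interrupt and runs the scheduler on every tick; as far as I can tell, a tickless kernel instead programs the timer for the next actual deadline, so idle time isn't chopped up by pointless wakeups (FreeRTOS, for instance, exposes this as its configUSE_TICKLESS_IDLE option). This is a generic illustration in simulated time, not any particular RTOS's code.

```c
/* Periodic tick versus tickless, in simulated time. Generic illustration. */
#include <stdio.h>

#define TICK_PERIOD_US 1000UL   /* a classic fixed 1 ms scheduler tick */

static unsigned long now_us;                    /* simulated clock */
static unsigned long next_deadline_us = 5000;   /* next thread due at 5 ms */

/* Classic approach: the timer fires every tick and the scheduler runs,
   whether or not anything is due yet. */
static void periodic_tick_isr(void) {
    now_us += TICK_PERIOD_US;
    printf("tick at %lu us: scheduler checks for work\n", now_us);
}

/* Tickless approach: program the timer for the next real deadline and
   let the CPU sleep through the gap in one shot. */
static void tickless_sleep(void) {
    printf("timer set for %lu us: one wakeup, right on the deadline\n",
           next_deadline_us);
    now_us = next_deadline_us;
}

int main(void) {
    while (now_us < next_deadline_us)   /* five wakeups, mostly wasted... */
        periodic_tick_isr();
    now_us = 0;
    tickless_sleep();                   /* ...versus exactly one */
    return 0;
}
```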
One of the questions you have to ask when considering deploying an embedded OS is: is real-time performance required? Much like what I was just talking about. Does the system have strict requirements on it or not? Is there a benefit to deploying an OS? Do I get to my end solution quicker, and still meet all of the product's performance and latency requirements, by deploying an operating system? So you have to ask that question: is real-time performance required? What hardware resources are available in my system, and what are the requirements? How much memory do I have? How much DRAM, for instance, do I have? Is a memory management unit required or not? If you're doing bare-metal programming, many times an MMU is not required. When I was at LSI Logic back in the 90s, I helped many companies embed microprocessors into their chips, MIPS microprocessors at the time, and I can't recall any of them deploying an operating system. They were all writing their own code, bare-metal programming, and I think only one of them added the memory management unit that included a translation lookaside buffer, and almost all of them did not use it. Same with caches and so forth: what are the hardware resources that are available to me? Security requirements; we'll see a lot more about that the week after next. How is the device powered? Is it battery powered? Solar powered? Just plugged into the wall? Communications and networking requirements. All of these things have to be considered. This one is really crucial in the industrial space: the ability to interface to enterprise-wide systems. Again, merging operational technology, the factory floor, the oil and gas field, with IT.

A gentleman on LinkedIn wrote this paper, and I snatched it up; I'll post it to D2L for you to read. I'm not going to read all of it, but I just highlighted this section. His primary argument was that real-time systems actually do not care about better or faster (this is just his opinion); they care about deterministic responses. Deterministic latency is very important in many systems. In the drive industry that I work in during the day, years ago drives were measured by their raw IOPS performance: how many random read IOPS can you do? What's your sequential streaming write or read throughput? It was all about faster IOPS, more and more IOPS. More IOPS meant a better drive. Now what we're seeing is that customers have found that the raw throughput of the drive isn't nearly as important as deterministic latency. I left the storage business for a couple of years, and now I'm back in it again, and we're seeing that customers are very interested in deterministic latency. They send a command to the SSD and they want to see a response in so many microseconds. Let's just say, and I'll pick a number out of the air, that it's 10 microseconds. Every time they send an I/O to the drive, it takes the drive 10 microseconds to respond. If they send a command to the drive and it takes 100 microseconds, that's an outlier, their system takes a little hiccup, and they don't like that. So they've been asking their suppliers to squeeze down that distribution of latency, which they refer to as quality of service, and which ties in exactly with this notion here that they care about deterministic responses.
They want to see this very tight distribution on the responses from the drives.
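To put numbers on "tight distribution": quality of service usually gets specified as latency percentiles. Here's a small sketch that boils a set of I/O latency samples down that way; the samples echo the 10-microsecond drive with one 100-microsecond outlier, and everything here is made up for illustration.

```c
/* Summarize I/O latency samples into percentiles, the way drive
   quality of service is reported. Sample values are invented. */
#include <stdio.h>
#include <stdlib.h>

static int cmp_double(const void *a, const void *b) {
    double x = *(const double *)a, y = *(const double *)b;
    return (x > y) - (x < y);
}

/* Nearest-rank percentile over a sorted array; rough, but fine for
   an illustration. */
static double percentile(const double *sorted, int n, double p) {
    int idx = (int)(p / 100.0 * (n - 1));
    return sorted[idx];
}

int main(void) {
    /* Mostly ~10 us responses, with one 100 us outlier. */
    double lat_us[] = { 10.1, 9.8, 10.0, 10.3, 9.9,
                        10.2, 10.0, 10.1, 9.7, 100.0 };
    int n = (int)(sizeof lat_us / sizeof lat_us[0]);

    qsort(lat_us, n, sizeof lat_us[0], cmp_double);
    printf("p50 = %.1f us, p90 = %.1f us, worst = %.1f us\n",
           percentile(lat_us, n, 50.0),
           percentile(lat_us, n, 90.0),
           lat_us[n - 1]);
    return 0;
}
```

The median looks great; the worst case is what the customer's "hiccup" sees, and squeezing that tail down toward the median is exactly what they mean by deterministic latency.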