Additional hardware techniques can be implemented in the computer system. Implementing a memory protection unit is a good idea, especially if you're running some kind of hardened system where you don't want one thread being able to look at the memory of another thread or process. Say Coke and Pepsi are running on the same machine, and you don't want them to be able to see each other's data, for instance. Run application code in user mode and the kernel, the operating system, in supervisor mode, and only in supervisor mode do you have the instructions to access and modify the memory protection unit. You have to be in kernel mode to make those modifications. This prohibits rogue application code from accessing other applications' memory.

Now, you've probably heard about these recently. Raise your hand if you haven't heard about Intel's Spectre and Meltdown exploits. You haven't? You haven't? Okay, well, there's a link there; you can go read about it. Spectre is a problem that leaks information due to code being speculatively executed by Intel processors. Meltdown is an issue with the memory management system: it's possible to trick the system into allowing one thread to read another thread's memory structures. Despite their best attempts to keep that from happening, someone figured out a way to do it. You can go read that link. So you can have a memory protection unit in there, and there still might be a way around it. As I said, there are some very dedicated people out in the world who will work hard on trying to break these systems.

Other protection methods: you may require certain FIPS certifications; maybe that's a product-level requirement. One way to help get the certification is to have a tamper-evident sticker. Okay. Believe it or not.
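The user-mode/supervisor-mode idea can be sketched in software. This is a toy Python model, not real hardware: the region table, process IDs, and method names are all invented for illustration. The point it shows is that programming the protection unit is a privileged operation, while every access is checked against the table.

```python
# Toy model of a memory protection unit (MPU). Memory regions are owned
# by process IDs, and only code running in supervisor mode may change
# the region table -- just like the privileged instructions the lecture
# describes. All names here are illustrative, not a real MPU interface.

class ProtectionFault(Exception):
    pass

class ToyMPU:
    def __init__(self):
        self.regions = {}            # region base address -> owning process id
        self.supervisor_mode = False

    def map_region(self, base, owner_pid):
        # Programming the MPU is privileged: kernel mode only.
        if not self.supervisor_mode:
            raise ProtectionFault("MPU can only be programmed in supervisor mode")
        self.regions[base] = owner_pid

    def check_access(self, base, pid):
        # Every memory access is checked against the region table.
        if self.regions.get(base) != pid:
            raise ProtectionFault(f"process {pid} may not touch region {base:#x}")

mpu = ToyMPU()
mpu.supervisor_mode = True             # the kernel sets up the mappings...
mpu.map_region(0x1000, owner_pid=1)    # "Coke's" region
mpu.map_region(0x2000, owner_pid=2)    # "Pepsi's" region
mpu.supervisor_mode = False            # ...then drops to user mode

mpu.check_access(0x1000, pid=1)        # fine: process 1 touches its own region
try:
    mpu.check_access(0x2000, pid=1)    # process 1 tries to read process 2's data
except ProtectionFault as e:
    print("blocked:", e)
```

A real MPU does this in hardware on every bus access, of course; the sketch only captures the policy.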
The idea behind the sticker is that the owner of the equipment can go around and perform periodic inspections of their drives, or whatever electronic devices have these stickers wrapped around them, and if they see one that's torn, they say, oh, this device might be compromised. I'd better pull it out of there. Shut the system down, pull it out, replace it with a known-good unit, and bring the system up again. It can be as simple as that.

You can get more aggressive with epoxy resins and this material called conformal coating. It's a deterrent, and much like a sticker, you've got to go around and look at it periodically. It's a real pain for returning failed units for failure analysis, though. If you've got this epoxy resin over a number of components, or the board is covered in conformal coating, and the customer says, I want you to root-cause why this product failed — and that happens a lot; storage companies and many device manufacturers get their products back, and the customer wants to know what failed, how it failed, what happened with this — you get your product back and it's all covered in glue, this epoxy coating, and you've got to drill your way through it and scrape down to finally get to the chips. Nobody on the failure analysis side at a company likes epoxy resins or conformal coating, I can assure you. However, it can be a deterrent. Again, it all depends on the value: how far is this person going to go, how much value is inside this thing they're trying to get to? If it's a toaster, well, it's no big deal. If it's a bank account, or a system that controls the locks at Fort Knox where a bunch of gold is stored, it's a much bigger deal.

Proximity detection — I've heard about customers doing this. They'll build in mechanisms; it's probably a battery-operated device.
Even though power is disconnected, if it detects it's being taken apart, it might destroy itself. I don't know if anyone's ever seen the old Mission: Impossible shows, where they got their assignment on a tape, and they listened to the tape, and at the end, after they got their assignments, it said, this message will self-destruct in five seconds, and then smoke came out of this old device, like a cassette tape machine, as it destroyed itself. So you could build in proximity detection and protection against enclosure breach. Whether that makes sense for your particular product needs to be evaluated.

Software techniques. Again, I'm repeating myself, and I am repeating myself because it's important: every communication channel in and out needs to have its security concerns addressed. Like the Seagate hard drive that had its port left open, and people figured out, hey, I can just hook a terminal emulator up to it and start talking to the drive — look at all the cool things I can do.

Bounds check everything. This is very important, especially for people writing software and firmware: check the range of values that are input from the outside world, whether they come from a human being or another machine. Never assume the values are in range. You go through all this design work, right? Architecture, you develop all the hardware and the software, and you write a specification saying the range of this value will be from 1 to 32,000, and then you get a value that's 38,000, and your system acts erratically because that value was outside the bounds you wrote. It was illegal, but somebody sent it to you anyway, and it causes your device to fail. You've got to bounds check, you've got to range check, everything. Here's an example: what happens when an attacker enters one million characters into a web page login where the username goes?
Usernames are usually short — your email address or a name or something, usually under 128 characters, right? Well, what happens if an attacker grabs a whole bunch of text from a huge text file and pastes it into your web page as the user ID? Does the code on the receiving end handle it appropriately? Does it range check and go, oh, I'm over 128 characters, I'm just going to throw those extra characters away and reject this login session? Or a command packet arrives at your device that is a million times larger than what you specified as the maximum size. That's not supposed to happen. Range check and bounds check every single thing.

Raw values on the stack: it may be possible for one process to look at values left on the stack by another process, and that should be prohibited. It might be an avenue for an attacker to explore — a leak, a way to extract some information, possibly. Again, apply security analysis and that thought process to all levels of all protocols and all interfaces.

Design reviews can be very, very handy for finding problems. You should conduct numerous design reviews across cross-functional teams: hardware design, software design, mechanical design, anybody you can get to come into a room and participate in a design review looking for weaknesses, applying the security mindset. How can this be broken? What are the weaknesses? What are we trying to protect? Have we done too much? Maybe we've gone too far, or we don't really need to protect that, so we can shorten our design cycle because we don't have to protect against this particular threat, and you get your product out the door faster.

Can you think of any unanticipated access channels that an attacker might use that I haven't talked about yet? Power — yeah, absolutely. RF — oh, I guess I do have it on there; I forget what I have on my slides. Scan chains.
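The range-checking advice above can be sketched as a couple of validators. The specific limits here (a 128-character username, a 4096-byte maximum packet) are illustrative assumptions, not from any real product specification.

```python
# A minimal sketch of "bounds check everything": reject oversize input
# at the boundary instead of letting it flow into the rest of the system.
# The limits below are made up for illustration.

MAX_USERNAME_LEN = 128
MAX_PACKET_LEN = 4096

def validate_username(username: str) -> bool:
    # Empty or over-long usernames are rejected outright.
    return 0 < len(username) <= MAX_USERNAME_LEN

def validate_packet(packet: bytes) -> bool:
    # A command packet a million times the spec'd size is simply dropped.
    return 0 < len(packet) <= MAX_PACKET_LEN

print(validate_username("alice@example.com"))    # True
print(validate_username("A" * 1_000_000))        # a million characters: False
print(validate_packet(b"\x01\x02\x03"))          # True
print(validate_packet(b"\x00" * 4_000_000_000))  # False
```

The point is where the check lives: at the interface, before the value touches anything that trusted the written spec.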
Raise your hand if you know what a scan chain is. Okay, I've got time for this. This is a little tiny baby example to get the point across. Normally, in logic design, you've got the clock signals of all the flip-flops hooked up to a common clock source — let's just say there's one clock domain, running at 50 megahertz, okay, great. Inputs come in, combinational logic performs computations on those values, the results feed the D inputs of the flip-flops, the Q outputs come out over here, and everything's great.

In manufacturing test, before the parts are diced — I referred to this earlier — a probe comes down, powers up each die, and runs some tests on it. For this, the flip-flops have a multiplexer built into them that reroutes each Q output to the next flip-flop, forming a serial chain. Each flip-flop has an additional input besides D, called TI, the test input. The first TI in the chain comes out to a primary input, the last flip-flop in the chain goes to a primary output, and there's a control signal on the flip-flops called TE, test enable — thus the name — that also comes out to a primary input the tester can drive. The tester controls the clock as well. So when the tester asserts a one on TE, all these flip-flops turn into one big serial shift chain. What the tester does is scan a known pattern of ones and zeros into these guys, in an attempt to find stuck-at faults in the logic: stuck-at-one faults and stuck-at-zero faults. When you took logic design, you hopefully learned a little bit about stuck-at faults that can occur during the manufacturing process.

Well, if you don't do anything about it, the scan chain can be used by an adversary. If you're storing your encryption keys in flip-flops, an attacker may be able to use this scan chain mechanism to scan out information — your keys, who knows what. They can be very dedicated.
So, if that's something you're concerned about, you need to think about whether you want to design in a mechanism to prohibit that from happening. But it has to work at manufacturing time; otherwise your silicon won't get tested, and then you won't get any chips. So it has to support scan-chain testing at manufacturing.

The JTAG port — I'm sure most of you are familiar with the JTAG port. That could be another avenue an attacker could use. Temperature: rapid heating and chilling. Years ago, when I worked at a graphics company, we had a graphics chip on a board, and the board was acting weird. I'm not kidding, we had a hair dryer and a can of freeze spray. I don't remember what bug we were trying to track down, but I just remember I was in there with this hair dryer, heating up the chip and then really quickly cooling it down, and heating it up again and cooling it down, and that rapid thermal cycling either fixed the problem or caused the problem — I can't remember which way it went. But very rapid temperature gradients can be used as an attack. There might be others.

So, we left off here last time. We talked about various threat vectors, what a scan-chain attack might look like, temperature attacks; there could be others. There's a class of attacks called side-channel attacks. One of them is known as differential power analysis, where an attacker will put a current meter on the power supplies and monitor the minuscule variations in current drawn, or equivalently the power dissipated, and a system can leak information through that power analysis. As unbelievable as that sounds, it's true. RF analysis is another.
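Here's a toy illustration of why power traces leak anything at all. In CMOS, current draw tracks how many bits toggle, which is often modeled as the Hamming weight of an intermediate value. Everything below — the key byte, the leakage function, the scoring — is an invented simplification to show the principle, not a real differential power analysis implementation.

```python
# Toy leakage model behind differential power analysis. We pretend the
# device leaks the Hamming weight of (data XOR key) through its power
# draw, and show that an attacker who can measure that leakage can rank
# key guesses by how well each one predicts the measurements.
# All values are illustrative.

SECRET_KEY = 0xA7   # the byte the attacker wants to recover

def leaked_power(data: int) -> int:
    # Simplified leakage: Hamming weight of the intermediate value.
    return bin(data ^ SECRET_KEY).count("1")

# Attacker observes (input, power) pairs...
inputs = list(range(256))
traces = [leaked_power(d) for d in inputs]

# ...then scores every candidate key by how many traces it predicts exactly.
def score(guess: int) -> int:
    return sum(bin(d ^ guess).count("1") == t for d, t in zip(inputs, traces))

best_guess = max(range(256), key=score)
print(hex(best_guess))   # 0xa7 -- the leakage singles out the key
```

Real attacks work with noisy analog traces and statistical correlation over many measurements, but the underlying idea is the same: data-dependent power draw is information.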
So, this company — I can read it on the slide, it says Cryptography Research Incorporated — came into Seagate one day, trying to sell us their cryptography solutions, and they had an app on their iPhone that was running the AES-256 encryption algorithm, along with a whole bunch of other apps. So they swiped through and showed that they were killing them, showed that all the other apps were gone, so the only thing running was this app that they wrote, performing AES-256 encryption. I don't believe this was a magic trick, like an illusionist or something; I believe it was legit. They had a PC set up — I don't know if it was a National Instruments card — and there was an RF receiver antenna wand, and they brought the wand over and held it over the surface of the phone for 10, 15, 20 seconds, something like that. Not very long. They were monitoring how much data they were collecting, and then the guy doing the demonstration took the wand away. Then the PC went to work crunching on it. We chit-chatted, and it took, I don't know, six to ten minutes, something like that. It got done, and he projected on the screen what the AES-256 key was, because they knew what it was: a series of hexadecimal characters, 256 bits' worth. Then right underneath that, they displayed the result of the processing they did on the RF signal they picked up off the phone, and it had almost every single hexadecimal character correct. I thought, wow, that's pretty cool that you were able to extract enough information from that RF signal to guess — if I had to make a rough estimate — 90 percent of the key. That takes that 10-to-the-47-years brute-force search down to a very dramatically lower number for an attacker, compared to just spinning through key values.
Maybe if they had monitored the RF signal longer, they would have been able to predict all the bits in the key. Nonetheless, it was a very impressive demonstration. So systems can leak information through power, and they can leak information through RF analysis.

Another attack that's been known and talked about for a long time is power glitching. Many engineers think, well, power comes up to my system and it's stable and it's great. But an adversary who gets hold of your system can glitch the power supplies to your design and attempt to put the logic — the states of the flip-flops and the memories — into some state you never, ever intended, and then use that to exploit the system and extract information. I don't know how to defend against a power glitching attack. I mean, you've got a chip with 10 million flip-flops in it, and if you start glitching the power rails to that design, who knows what state those flip-flops can end up in. That's a tough one. Once they get it into this compromised state, who knows what they can do: get in there and start poking around, extract encryption keys and who knows what personal information, and upload rogue code.

Radiation, again, could be another, as mentioned on the previous slide. RF energy directed at a chip could cause it to go into a strange state, or radiation can do the same thing: cause bits to flip. Particle strikes — shoot proton beams or alpha particle streams at a chip — will cause upsets in flip-flops and storage elements and, again, put the chip into some kind of compromised state that an attacker might then be able to glean information from.

I put the link in here to Cryptography Research. They were a separate company, and some years ago they got acquired by Rambus. You can go out there, look at their offerings, and see what they're all about. The RF analysis one was pretty cool.
Some additional background information for you: there's the Cloud Security Alliance. Here's the link; you can go out there and see what they're all about and what initiatives they're trying to drive in the cloud space. These next two are articles that I've found very illuminating. Bloomberg — it's a financial periodical — wrote this article called What Is Code, and here's the link to it; it's a really fascinating read.

Yes? I've heard about that — where did I read that? I can't remember the source — where photons are given off by CMOS circuits switching. If you de-cap the chip, you can analyze the photons being emitted by the chip, in the visible wavelengths, I guess, or maybe it was infrared; I'm not sure which wavelength. But again, yeah, those are areas of vulnerability where systems can leak information. That's not something we normally think about. Normally in our engineering we don't think like that. That's developing the security mindset: thinking sideways, thinking orthogonally, backing up and coming at a problem or an analysis from a different angle. Good point.

Again, I strongly encourage you — you don't have to do it now, but maybe over the summer when you're off, pull up the slide deck when you've got time and go read this article. It's really illuminating. This one, well, the Wall Street Journal published this one: the history of hacking and the evolution of cybercrime. Great stuff. I hope those articles stay up for a long time and stay free, so you don't have to pay for them.