Welcome to our lab here on the introduction to neural networks. We're going to start off with an exercise based on different logic gates, such as the AND functionality or the OR functionality. And then, as we'll see later on, we'll work with XOR to find out whether we can actually use a single perceptron, as we discussed in the lecture, to come up with these AND and OR functions, as well as more complex ones such as XOR. The first thing we want to do is import our libraries, so we call import numpy as np and matplotlib.pyplot as plt. Then we're going to introduce the sigmoid function, which we discussed in lecture, where sigmoid(x) is equal to 1 over 1 plus e to the -x, and we can pass in values of x. If that value of x is very high, we'll end up with an output very close to 1, and if that value of x is very low, very negative, we'll have an output close to 0. And as we saw with the graph, it ranges between the values of 0 and 1, but can't go any lower than 0 and can't go any higher than 1. So we're just going to define that sigmoid function, which is just 1 over 1 + e to the -x. We're then going to plot it out: all we do is take 100 values equally spaced between -10 and 10, take the sigmoid of each one of those values, which we just defined above, so that we get the activation, and then plot the different values on the x-axis versus the activation on the y-axis. The rest is just drawing lines, creating a grid, and ensuring that we have the right y limits, so that we don't go too far negative or too far positive. So we run that. And we see that as the value gets close to -10, our blue line is essentially at 0, and as our x-value gets close to 10, our y-value gets very close to positive 1. So keep in mind how the sigmoid function works.
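The notebook cells aren't shown verbatim in this transcript, but based on the description, the sigmoid definition and the plot might look something like this sketch (the variable names are assumptions):

```python
import numpy as np
import matplotlib.pyplot as plt

def sigmoid(x):
    # 1 / (1 + e^{-x}); output is always strictly between 0 and 1
    return 1 / (1 + np.exp(-x))

# 100 values equally spaced between -10 and 10
x = np.linspace(-10, 10, 100)
activation = sigmoid(x)

# plot x versus the activation, draw the grid, and cap the y limits
plt.plot(x, activation)
plt.axhline(0.5, color='gray', linestyle='--')  # the 0.5 threshold line
plt.grid(True)
plt.ylim(-0.1, 1.1)
plt.xlabel('x')
plt.ylabel('sigmoid(x)')
plt.show()
```

Running this produces the S-shaped curve described above: essentially 0 near x = -10 and essentially 1 near x = 10.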
The idea is that, as we move higher, again, we get closer to 1, and as we go negative we get closer to 0. Just to highlight as well: at the point x = 0 itself, the output is exactly 0.5. As you see from the grid line at 0.5 on the y-axis, that's exactly a 50/50 chance of being either 1 or 0, if we were to create a threshold there. Now, a logic gate is going to take in two Booleans, so two different inputs, usually true or false; we're going to define true as 1 and false as 0. And then, with those two inputs, it'll return either a 0 or a 1, depending on the rule we defined for that gate. So we have here the truth table for a logic gate, which shows the output given that we're working with an OR gate. If we think about an OR gate, if either one of our two inputs is equal to 1, then our output should be true, and 1 is equivalent to true. The only time we should get false, or 0, is if both of our inputs are equal to 0. So what we want to know is: can we come up with a neuron that uses the sigmoid activation function we just defined, which outputs values between 0 and 1, that will allow us to always output the appropriate value of either 0 or 1? The idea is that if the activation is over 0.5, then the sigmoid predicts 1, and if it's less than 0.5, then it predicts 0. So we pass in x1 and x2, each of which will take on the value of either 0 or 1, as well as the intercept, and then we multiply each input by a certain weight. Now again, by limiting the inputs x1 and x2 to be either 0 or 1, we can simulate the effects of the logic gate that we just saw in the table above. The goal is to find the weights, represented by the question marks we have here in this image, such that the neuron returns an output close to 0 or 1 depending on what the inputs are. So the idea would be, if we think through the OR problem: if both x1 and x2 are equal to 0, then we want to output a 0.
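The single neuron described here, with the 0.5 threshold on its sigmoid activation, can be sketched as follows (the function names are my own, and the weights are left as arguments since at this point they're still the question marks in the image):

```python
import numpy as np

def sigmoid(x):
    return 1 / (1 + np.exp(-x))

# a single neuron: z = w1*x1 + w2*x2 + b, squashed through the sigmoid
def neuron(x1, x2, w1, w2, b):
    z = w1 * x1 + w2 * x2 + b
    return sigmoid(z)

# threshold at 0.5: activation above 0.5 predicts 1, below predicts 0
def predict(activation):
    return 1 if activation > 0.5 else 0
```

With all weights zero, z = 0 and the neuron sits right at the 50/50 activation of 0.5; the goal of the exercise is to pick w1, w2, and b so the thresholded output reproduces a gate's truth table.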
Otherwise, if either of them is equal to 1, then we want to output a 1. So we have to think about what those weights should be. And if we think about the plot that we have above, if z is going to be very negative, again -10 or less, then we'll have a value very close to 0, and if it's very positive, positive 10 or more, then it's very close to 1. So that's our goal. Thinking this through, you can see it in the picture here below. We already have the weights, but let's talk through how these weights will actually work in outputting the value that we want for this OR gate. If x1 and x2 are both equal to 0, the only value that's going to affect z, this equation that we have over here, is that intercept term b. And because we want the result for 0, 0, when both x1 and x2 are equal to 0, to be close to 0, b should be negative. It has to be less than 0 to ensure that our sigmoid function outputs a value less than 0.5. Now, if either x1 or x2 is 1, we want the output to be close to 1, and that means the weights associated with x1 and x2 should be enough to offset that -10 that we have for b. So if we give b that value of -10, w1 and w2 each have to be greater than 10. So we set them each to 20: if x1 is equal to 1, then we have -10 + 20, which is positive 10. We pass that through the sigmoid function, and that outputs a value very close to 1. The same would hold if we had x2 equal to 1 and x1 equal to 0. And if both of them are equal to 1, then we end up with positive 30, and once we pass positive 30 through the sigmoid function, we again get a value very close to 1. So as long as either x1 or x2 is equal to 1, given the weights 20 and 20 and the intercept -10, we output the value 1. And if both of them are 0, then we output the value 0 after passing z through our sigmoid function. So here we see how we can come up with the appropriate weights to ensure that we actually complete this OR functionality. So I run this.
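The OR-gate arithmetic walked through above can be checked directly with a short sketch, using the weights from the walkthrough (w1 = w2 = 20, b = -10):

```python
import numpy as np
from itertools import product

def sigmoid(x):
    return 1 / (1 + np.exp(-x))

# OR gate weights from the walkthrough: each weight alone offsets b = -10
w1, w2, b = 20, 20, -10

for x1, x2 in product([0, 1], repeat=2):
    z = w1 * x1 + w2 * x2 + b          # -10, +10, +10, or +30
    out = int(np.round(sigmoid(z)))    # round the activation to 0 or 1
    print(f"{x1} OR {x2} -> {out}")
```

Only the 0, 0 row leaves z negative (z = -10), so only that row rounds down to 0, matching the OR truth table.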
The idea that we have here is that we create a function for the logic gate that will take our w1 and our w2 as well as our b, and then return the sigmoid of w1 times x1 + w2 times x2 + b, which is what we hope to ultimately output. We're passing the z of w1x1 + w2x2 + b into our sigmoid, running the sigmoid, and then hoping for a value of 1 or 0, depending on what we want the output to be, whether we're using an OR gate or an AND gate, and so on and so forth. Then we're going to test it by saying: for each one of these input pairs, 0 0, 0 1, 1 0, and 1 1, what is the actual output for a and b given our test? Again, if we pass z through that sigmoid, then we will end up with a value close to 1 once we pass in 1, 0 or 0, 1. When we call np.round, it'll start off, maybe, with 0.9, and then we round it to ensure that we get 1. So we have our OR gate, which is just going to be equal to our logic gate, which we defined as the sigmoid of that linear combination, and we pass in our weights w1 and w2 and our intercept of -10. We test that OR gate and we get our different outputs, and we see that they match up with what we saw our OR gate should actually be. Now let's quickly look at the AND gate and how we can come up with it. With the AND gate, when we look at the table that we have here: if both inputs are false, which is our 0, 0, then it should output false. If only one of them is true, then they're not both true, and that's the point of the AND gate, right? With the AND gate, you want both the first input and the second input to be true. So we'd still have a 0, and it would stay 0 unless both the inputs are true. So can we come up again with the appropriate weights to ensure that if they're both true, then we end up with a true value, and otherwise we get a false value? So, as we see here, we set b equal to a negative value.
It's negative enough that even once we add on just one of these weights, whether it's w1 or w2, which would be the equivalent of just one input being true, we still have a negative output. So, for b plus w2 * 1, we'd still have -20 + 10, and we'd end up with -10. As long as z is negative, our sigmoid function will output a value less than 0.5, and we round that down to 0. The only way that we end up with a positive value is if both of these inputs are true: then we have -20 + 11 + 10, and we end up with positive 1. We pass positive 1 into our sigmoid function and we have a value greater than 0.5. Now, these w1 and w2 values can essentially be any numbers with the right properties. But first, let's show that this works. We see that it outputs 0 for every input pair except for 1, 1, as it should with our AND gate. We can also see that, if we wanted to, we could make these weights any values that are each less than 20 in absolute value, so that once just one is added on, z remains negative, but once both of them are added on, z becomes positive. So each weight has to be less than 20 on its own, and the two of them have to add up to something greater than 20. We can run this and see that again we get all the correct values. Now we're going to do the same thing for the NOR gate and the NAND gate. NOR and NAND just mean NOT OR and NOT AND, so the opposite of OR and the opposite of AND. And you'll see why this is important once we get to the next exercise. NOT OR is just the opposite of OR: for any of the three input pairs in this table which would have been true for OR, we set the output to false, and we only keep it true if both inputs are equal to 0. Thinking through which weights will work, we just need to ensure that we have a positive z if both inputs are equal to 0, and otherwise a negative z. So we just have to ensure that the weights are negative and that their absolute values are both greater than b. And then we have our NOR gate.
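Going by the arithmetic in the transcript (-20 + 10 = -10 for a single true input, -20 + 11 + 10 = +1 for both), the AND weights appear to be w1 = 11, w2 = 10, b = -20; the exact cell isn't shown, so treat this as a sketch under that assumption:

```python
import numpy as np

def sigmoid(x):
    return 1 / (1 + np.exp(-x))

# AND gate weights inferred from the walkthrough: neither weight alone
# can offset b = -20, but together they push z to -20 + 11 + 10 = +1
w1, w2, b = 11, 10, -20

for x1 in (0, 1):
    for x2 in (0, 1):
        z = w1 * x1 + w2 * x2 + b
        out = int(np.round(sigmoid(z)))
        print(f"{x1} AND {x2} -> {out}")
```

Any pair of weights each under 20 in absolute value but summing past 20 would work just as well, which is the flexibility the transcript points out.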
We'll double-check the outputs, and we see that 0, 0 gives 1, and otherwise the outputs are all 0. Then, finally, we're going to close out this video with a NAND. Again, this is just the opposite of the actual AND: AND would only be true if both inputs were equal to true, and now that we do the opposite, it's true every time except when both inputs are equal to true. So what we need to do is ensure that, as long as both inputs are 1, they will cancel out the b that we have here, and otherwise we always have a positive value. We do that by saying that w1 and w2 added together will outweigh b, while on their own they can never outweigh that b value, and that will ensure that we always have what we see here in the NAND truth table. And we can see that that holds as well. Now, in the next video, we're going to pick back up and discuss why there's a limit to only working with a single neuron, and how we can build off of a single neuron, create another layer of neurons as we do with our multilayer perceptron, and come up with this XOR functionality, which we'll discuss in the next video. Alright, I'll see you there.
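The exact NOR and NAND weights aren't spelled out in the transcript, but mirroring the pattern described (flip the signs of the OR and AND weights), a sketch like this satisfies both truth tables; the specific numbers here are assumptions:

```python
import numpy as np

def sigmoid(x):
    return 1 / (1 + np.exp(-x))

# bake the weights into a gate and round the activation to 0 or 1
def gate(w1, w2, b):
    return lambda x1, x2: int(np.round(sigmoid(w1 * x1 + w2 * x2 + b)))

# NOR: positive intercept; each weight's magnitude alone exceeds b,
# so any true input drives z negative
nor_gate = gate(-20, -20, 10)

# NAND: positive intercept that only the two weights together can
# cancel out (-11 + -10 outweighs +20, but neither does alone)
nand_gate = gate(-11, -10, 20)

for x1 in (0, 1):
    for x2 in (0, 1):
        print(f"{x1}, {x2}: NOR={nor_gate(x1, x2)} NAND={nand_gate(x1, x2)}")
```

NOR fires only on 0, 0; NAND fails only on 1, 1. These four gates are the building blocks the next video combines into a multilayer network for XOR.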