Arrays are similar to lists in the sense that they can be used to represent a collection of things. They differ from linked lists, however, in that they are contiguous in memory. Let's dive into what that means.

A linked list can exist anywhere in memory, and that's actually why we have a next pointer. Say a single node lives at a certain address, call it address 1. The next node can be anywhere in memory where the computer found space to create it. The first node can point to the address of the second one, so that when the computer is attempting to go from cherry to pea, it knows where to look for pea. Cherry might be at address 1, pea might be at address 10, apple might be at address 20, but as long as each node contains the address of the next node, you can follow the chain. There's a short sketch of this idea below.

With an array, we don't follow chains like that. Arrays are allocated upfront. An array representing those three fruits would have three contiguous blocks in memory: one for cherry, one for pea, one for apple. As an example, let's take a look at the array shown here. All of the numbers in this array, all 12 of them, are contiguous in memory.

Now you might ask, "What is the benefit of having objects right next to each other, instead of following the list-like structure where they could be anywhere? What are the pros and what are the cons?" By the way, this is another interview question that will come up soon: when would I use an array and when would I use a list?

An array consists of a number of elements, which are the squares in this illustration. Each element holds a piece of data; here we're using numbers. Each element also has an index, which is a number starting from zero and incrementing by one for each subsequent element.

Let's look at those common operations again, starting with accessing an element in an array by its index. For example, the sixth element in this array is 90, and the way you would find that number is by indexing into the array. Note that computers are actually zero-indexed, so index five here corresponds to the sixth element, because we're counting from zero. It takes the computer only one operation to access the sixth element. Contrast that with a linked list, where it would take six operations, because you have to jump around in memory five times before you finally get to where that element is. Again, this is because of the contiguous nature of the array, where you know exactly where every element is, versus a linked list, where the nodes are scattered all over the place.

Next, iterating through the length of the array. Another way of phrasing this requirement is: if you had to print every number, how long would that take? In this case, even though the elements are contiguous, it would still take O(n) time, because you have to visit every single number.

Next, let's take a look at what it would take to add a new element to an array. You might think that because we just want to add one element, it would take constant time, but this can actually be a trick question. An array is a pre-allocated block of a certain size; it can't grow or shrink. That means that if you want to add a new element to an array, you actually have to allocate a brand new block whose size is one more than the previous one, and allocating that new block already takes n operations, because you have to copy every existing element over.
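To make the pointer-chasing concrete, here's a minimal sketch of the cherry/pea/apple chain in Java. The Node class and the traversal are illustrative assumptions, not the implementation of any particular library:

    // A minimal singly linked list node: each node stores a value
    // plus a reference to (the address of) the next node.
    class Node {
        String value;
        Node next;
        Node(String value) { this.value = value; }
    }

    public class LinkedListDemo {
        public static void main(String[] args) {
            // The three nodes can live anywhere in memory...
            Node cherry = new Node("cherry");
            Node pea = new Node("pea");
            Node apple = new Node("apple");

            // ...as long as each node records where the next one lives.
            cherry.next = pea;
            pea.next = apple;

            // Getting from cherry to apple means following the chain.
            for (Node current = cherry; current != null; current = current.next) {
                System.out.println(current.value);
            }
        }
    }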
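And here's a matching sketch for arrays, covering the operations just described: constant-time access by index, linear-time iteration, and linear-time growth. The values are made up for illustration, except that index 5 holds 90 as in the example:

    import java.util.Arrays;

    public class ArrayDemo {
        public static void main(String[] args) {
            // Twelve contiguous blocks; index 5 (the sixth element) is 90.
            int[] numbers = {3, 7, 12, 25, 41, 90, 8, 16, 23, 42, 4, 15};

            // Access by index: one operation, O(1), because the element's
            // address can be computed directly from the index.
            System.out.println(numbers[5]); // prints 90

            // Printing every number: O(n), we must visit each element.
            for (int n : numbers) {
                System.out.println(n);
            }

            // "Adding" an element: the fixed-size block can't grow, so we
            // allocate a new block one larger and copy everything, O(n).
            int[] bigger = Arrays.copyOf(numbers, numbers.length + 1);
            bigger[bigger.length - 1] = 99;
        }
    }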
So adding a new element to an array takes both compute time and storage, because you have to allocate new space for the array; it's actually O(n).

Removing an element from an array is a similar problem. You could try taking an element out, but then you end up with a blank spot in the middle of your array. To do this properly, you would actually want to create a new array that is one smaller than the previous one and copy the remaining numbers over. Again, this is a linear-time operation (sketched below).

What we will do is provide you a table of common operations that compares their run-times and space requirements between linked lists and arrays. You can take a look at that for yourself.

You might ask, "Is there a way to get the best of both worlds, where the structure can be expanded like a linked list, but also has the fast indexing properties of an array?" The answer is yes. There's actually a hybrid implementation called an ArrayList. As a challenge question, take a look at the implementation of an ArrayList.

Next, let's talk about arrays of arrays. Remember, an array is not just for numbers; an array is just a collection of things. Each cell or element of an array can actually be another array altogether. Where could we use an array of arrays? Think about a grid: the outer array holds the rows, and each row is itself a nested array holding the columns for that row. A natural example would be representing a chessboard (see the sketch below).

Next, I want to turn your attention to a very popular data structure called a HashMap. HashMaps are a collection of what we call key-value pairs, where a key names a certain property and a value is the content of that property. Let's take a look at this tweet from Donald Trump. We see that there is a text property, a location property, and a name, and the corresponding things on the right side of the colons are the values. The name of the tweeter is Donald Trump, the location is Washington DC, and the text is the tweet that Donald Trump has tweeted.

The reason why HashMaps are useful is that they provide a keyed way of representing your data. Contrast this with an array, where your keys are effectively indices: incremental numbers from zero up to n - 1. With a HashMap, you can naturally ask for the text property of that particular map, or you can ask for the location. Normally the design of a HashMap is conducive to adding new key-value pairs as well; that's another benefit that most map implementations have. From that perspective, they act similarly to ArrayLists.

In addition to providing a natural mapping between keys and values, and making it easy for developers to interact with the data in a map, another very desirable property of a HashMap is that lookups are effectively instant. Looking up the name or location property of a map usually takes constant time, which means that it doesn't matter how big the map is or how many key-value pairs it has; it usually takes only a constant number of operations to look up the value of a certain key.

How does a HashMap achieve this? This is also a popular interview question. HashMaps use something called a hash code. The idea is that for a given key, the computer first converts that key into a number. Then it looks up that number in an internal array and figures out what the value is at that particular cell. As you've probably guessed by now, a HashMap is actually internally backed by an array.
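Going back to removal for a moment, here's a small sketch of the copy-into-a-smaller-array approach described above (the method name remove is just an illustrative choice):

    public class RemoveDemo {
        // Returns a new array with the element at removeIndex left out.
        // Every surviving element is copied once, so this takes O(n)
        // time and O(n) extra space.
        static int[] remove(int[] source, int removeIndex) {
            int[] result = new int[source.length - 1];
            for (int i = 0, j = 0; i < source.length; i++) {
                if (i != removeIndex) {
                    result[j++] = source[i];
                }
            }
            return result;
        }
    }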
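The chessboard idea might look like this, with the outer array holding rows and each inner array holding that row's columns (the piece names are just placeholders):

    public class GridDemo {
        public static void main(String[] args) {
            // An 8x8 chessboard as an array of arrays.
            String[][] board = new String[8][8];
            board[0][0] = "rook";   // row 0, column 0
            board[0][1] = "knight"; // row 0, column 1
            System.out.println(board[0][0]);
        }
    }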
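And here's a very rough sketch of the hash-code idea. Real HashMap implementations also handle collisions and resizing, which this leaves out entirely, and the bucket count of 16 is an arbitrary assumption:

    public class HashLookupDemo {
        public static void main(String[] args) {
            // Step 1: convert the key into a number (its hash code).
            String key = "location";
            int hash = key.hashCode();

            // Step 2: map that number onto an index in the internal
            // array (masking the sign bit keeps the index non-negative).
            int buckets = 16;
            int index = (hash & 0x7fffffff) % buckets;

            // Step 3: the value would be stored at that cell.
            System.out.println("\"" + key + "\" lands in bucket " + index);
        }
    }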
Now, the common operations for a HashMap, as we've outlined before. You might want to access an element in a map by its key; for example, as we previously covered, the key "name" would return the value "Donald J. Trump". Because of how HashMaps are normally implemented, finding an element by key is usually O(1) time. If you want to iterate through a map and print all key-value pairs, that will take O(n) time: however quickly a HashMap can find a specific element, you still have to visit every pair. Finally, inserting a new key-value pair into a HashMap is something most implementations will also let you do in constant time.
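To tie the three operations together, here's a short usage sketch with Java's built-in HashMap, reusing the tweet example from above (the tweet text itself is a placeholder):

    import java.util.HashMap;
    import java.util.Map;

    public class HashMapDemo {
        public static void main(String[] args) {
            Map<String, String> tweet = new HashMap<>();

            // Inserting a key-value pair: usually O(1).
            tweet.put("name", "Donald J. Trump");
            tweet.put("location", "Washington, DC");
            tweet.put("text", "An example tweet");

            // Looking up a value by key: usually O(1).
            System.out.println(tweet.get("name")); // Donald J. Trump

            // Iterating over every pair: O(n), each pair must be visited.
            for (Map.Entry<String, String> entry : tweet.entrySet()) {
                System.out.println(entry.getKey() + ": " + entry.getValue());
            }
        }
    }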