We've covered why it is important to break potential design solutions. Let's take a look at some examples before going a bit deeper into the specific methods for how this can be done. The examples in this video are chosen to illustrate some fundamental approaches to the breaking part of the design process. The approaches are: designing with breaking in mind, evaluating designs with users, and assessing solutions against design principles.

The first approach, designing with breaking in mind, requires embracing failure. Instead of striving for a final, perfect solution, the process focuses on creating multiple representations along the way. Each iteration adds a level of fidelity, meaning it might include more refined features, more functionality, or more polished materials. In 1959, British industrialist Henry Kremer created a prize for designing a human-powered aircraft that could fly a figure-eight course around two poles half a mile apart. Despite more than 50 official attempts, the prize went unclaimed for almost two decades. In 1977, Paul MacCready, an aeronautical engineer, completed the challenge by looking at the problem from a different perspective. While everyone else was trying to build the perfect human-powered plane that could fly a figure eight around two poles, he built a plane that could be crashed and rebuilt within hours. His team would often break the plane several times a day, and from those failures they learned how to improve their approach. The solution was a lightweight plane that could fly very slowly. Constantly breaking their concept sped up the design team's process of finding a new, successful solution.

The second approach, evaluating designs with users, ensures that people will understand how to use a product or service. This is referred to as the usability of a product. The 2000 United States presidential election and its outcome, which brought George W. Bush into office, has been linked to a ballot design with poor usability. One county in Florida used a so-called butterfly ballot, which listed presidential candidates on both sides of a single column of punch holes in the middle of the ballot sheet. Punching the first hole cast a vote for the Republican candidate, George W. Bush, who was listed first. However, punching the second hole did not cast a vote for the second listed name, Democratic candidate Al Gore, but instead for the candidate listed at the top right. Reform Party candidate Pat Buchanan, who held that top-right position, received an unusually high number of votes. The county also saw a high number of invalid votes from voters who noticed the error and punched a second hole, thus invalidating their ballot card. Testing the ballot card design with only a handful of people before the election would have quickly exposed this design flaw, which seems small but had a dramatic effect not only on the US but on global politics.

The third approach, assessing solutions against design principles, is a quick and easy way to avoid fundamental design flaws, but it requires knowledge of those principles and a certain level of experience in applying them. With more experience, it becomes easier to spot violations of those principles. A false missile attack alarm caused a brief moment of panic across Hawaii in January 2018. The alarm was accidentally triggered by someone selecting the option to trigger the real alarm instead of the option to test it.
The two interface control elements looked similar and sat next to each other in a long list of options, making it an easy mistake to select the wrong one. Alan Cooper, an American interaction designer, described the underlying design principle in his book About Face. It's known as hiding the ejector seat levers. The principle suggests not only that interface controls providing different functionality should be designed to look clearly different, but also that the design should ensure a user doesn't inadvertently trigger the wrong action, especially if that action has severe consequences, such as ejecting a fighter jet's seat or triggering a statewide missile attack alarm.

Over time, many evaluation methods have been developed to support the three approaches we just covered. Martin Maguire from Loughborough University in the UK provides a great overview of such evaluation methods in his 2001 article. Common evaluation methods include evaluation workshops, walkthroughs, heuristic evaluation, and controlled user testing. Some methods are better suited early in the design process for testing low-fidelity designs; others work better at later stages, when prototypes are more refined and of higher fidelity. An important distinction between methods is whether they involve users or experts. User-based evaluation can identify usability issues, in other words, whether people will understand how to use a product or service. Expert-based methods are quicker and cheaper to carry out. They involve, for example, expert designers assessing a design solution against certain criteria. These methods are best applied before involving users, so that any severe issues are picked up before time is spent on user testing. There's much more depth to each of these methods, and today evaluation methods are used beyond assessing usability to focus on other aspects of user experience, such as delight and joy.

In the next video, we will uncover common design principles and also look at how they can be used to evaluate everyday products.