Hi. In this video we're going to learn about the test planning process.

First, a review of the stages. Unit testing is where developers test their own code, primarily for error-prone constructs and very low-level functionality assurance. Design verification testing covers the integration of modules through integration testing, which is done by developers, and functional testing, which is completed either by the developers or by the testing team. System validation testing is a test-team task: once the system has finished, or nearly finished, development, it is tested for high-level behavior and non-functional performance. And finally, customer acceptance involves the testing team and the customer, ensuring that the final product is, or will be, acceptable to the customer.

Now let's focus on just design verification and system validation. This is where we find test documentation most useful, and the following applies to these two stages. Some experts talk about the test plan as a product and some as a tool. The focus here is on the test plan as a tool. These are things that a test plan can do for you, your team, and your project.

So, be specific about who does what and when. List all the tests you can think of; this is very valuable if you can do it. If development is responsible for creating any test artifacts or tools, that would go here as well. However, I do not always include why particular tasks are being done. There's really no need to justify here; that justification can come during a meeting. Be careful to list all the testable requirements, too; that helps here.

We always include our requirements matrix, and so should you. Especially as your projects get larger, it becomes more and more likely that some requirement will be inadvertently missed when you are building your tests. A requirements matrix, sometimes also called a traceability matrix, helps you find the requirements that aren't tested and identify the test cases that don't really tie to a requirement, and that does happen. Engineers are smart people. You're a smart person. We'll test something that we know needs to happen even if it isn't explicit in the requirements. This is an opportunity to go to the managers and designers and expose a potential problem in the project: the requirements are missing something, and that might have far-reaching implications. I'll show a quick sketch of that kind of traceability check in a moment.

By all means, ask for feedback from both. What am I missing? What's the best data to use? Is the schedule acceptable? If things go off track later, at least they all saw it. You can't be haphazard or ad hoc in your methods and expect to be able to measure anything. Remember that management is the practice of observing risk; you can't properly manage what you don't know. You have to have some benchmark to compare against, and that's the test plan. It takes careful planning and experience to do this well. It's also a good idea to have all of your choices reviewed.

Consider these two statements: option one, "there isn't enough time to test the product," or option two, "there is no way I can run 10,000 test cases in two weeks with three people working half-time on this project." When you have the deadline, the number of test cases to be run, and your resource availability, you can show rather than tell, and that's a big deal. I'll show a back-of-the-envelope version of that calculation in a moment, too.

Here, by concerns, I also mean risks. These are all important considerations when formulating your test plan. For some of them you can take preventative action; for others you can only be prepared to react.
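Here is a minimal sketch of the traceability check described above, assuming requirements and test cases are kept in simple dictionaries keyed by IDs. The REQ-/TC- identifiers and the sample data are hypothetical; in practice this data usually comes from a spreadsheet or test management tool export.

```python
# A minimal traceability-matrix check (hypothetical data).

requirements = {
    "REQ-001": "User can log in with a valid password",
    "REQ-002": "Account locks after five failed login attempts",
    "REQ-003": "Password reset email is sent within one minute",
}

# Each test case lists the requirement IDs it claims to cover.
test_cases = {
    "TC-01": ["REQ-001"],
    "TC-02": ["REQ-001", "REQ-002"],
    "TC-03": ["REQ-999"],  # ties to a requirement that doesn't exist
}

covered = {req for reqs in test_cases.values() for req in reqs}

# Requirements no test case points at -- candidates for missed coverage.
untested = [r for r in requirements if r not in covered]

# Test cases that don't trace to any known requirement -- candidates for a
# conversation with the designers about what the requirements are missing.
untraced = [tc for tc, reqs in test_cases.items()
            if not any(r in requirements for r in reqs)]

print("Untested requirements:", untested)        # ['REQ-003']
print("Tests with no requirement:", untraced)    # ['TC-03']
```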
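And here is a rough, back-of-the-envelope version of the capacity argument from the 10,000-test-case example. The five-minutes-per-test figure is an assumption for illustration; swap in your own historical numbers.

```python
# A rough capacity check: 10,000 test cases, two weeks, three people at
# half-time. minutes_per_test is an assumed average for manual execution,
# including setup and logging.

test_case_count = 10_000
people = 3
availability = 0.5        # half-time on this project
working_days = 10         # two weeks
hours_per_day = 8
minutes_per_test = 5      # assumption -- adjust to your own history

available_hours = people * availability * working_days * hours_per_day
required_hours = test_case_count * minutes_per_test / 60

print(f"Available: {available_hours:.0f} person-hours")              # 120
print(f"Required:  {required_hours:.0f} person-hours")               # 833
print(f"Over capacity by: {required_hours / available_hours:.1f}x")  # 6.9x
```

With the numbers laid out like this, either the scope, the schedule, or the staffing has to change, and the conversation is with the plan rather than with the tester.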
Now, back to those concerns. Rapid change means too many builds or too many requirements changes. Too often, incomplete requirements get reviewed just so a milestone can be checked off, even though it's known that changes are coming. Never accept a requirement with a TBD, a "to be determined," in it. That's a lose-lose for everybody.

And speaking of lose-lose situations: if testing keeps finding problems, test or QA is blamed for holding up the release. But if testing stops too soon and problems that should have been found in test are found in the field, QA is blamed for that too, yet there are always problems left to find. And just one more thing on this list: remember that you can never use testing results to prove an absence of bugs. Testing only finds bugs; it doesn't prove that they are all gone.

Having a plan, and having the other stakeholders agree to that test plan, is an important step. And be prepared: it's hard to say no. It's hard to stand up and say this product is not ready for release. That's one of the disadvantages of testing in waterfall-style methods. There is tremendous pressure on testers to prove that a product is not ready for release, but that's why we have test managers.

Remember that all of software engineering is the application of rigor to ensure quality, and the same thing applies here. This is a process to improve quality by ensuring that a proper testing procedure is followed. We make a plan and we stick to it. Does that mean we guarantee that the product is good? Certainly not. But we are far better off making a plan and following it than the alternative.