So, welcome to the last section. Here, we want to talk about quality assurance: validation. How do we assess the quality of an integrated testing strategy? We now have more than 20 years of experience in assessing the validity of a single test; that is what more than 50 successful international validation studies have brought about. But not a single testing strategy has ever been validated. And here a few very simple rules apply. The question is: do we need to validate the building blocks, the individual tests, or do we validate the entire integrated testing strategy? There is increasing consensus that reproducibility needs to be assessed on the building blocks; that we also need to validate mechanistically, on the level of a building block, that we have a relevant mechanism covered; and that we need to assess the similarity of building blocks to make them interchangeable. But the relevance and predictive capacity of the overall integrated testing strategy are established on the entire testing strategy; we cannot deduce them from assessing the relevance and predictive value of the building blocks. There remains a problem with regard to flexibility: validation and flexibility are somehow antagonistic. You can only validate what is frozen in time, and flexibility creates some problems.

A key element of quality is shown in the next graph. I love this picture, which simply says: trash in, trash out. If we feed our testing strategies, our algorithms, our functions on trash, they cannot produce something better than the trash we put in. This holds true both for the in vitro data and for the in vivo data we are comparing to. So it is important that we optimize the quality of these different elements. But it gets even worse when we use computational tools to combine these things. So we have to work both on the quality of our information sources, the test systems, and on their combination with the appropriate type of computational tools. (The first sketch at the end of this section illustrates this point in a small simulation.)

For the validation, it is good to come back for a second to what validation is about. If you have a new test method, or a test strategy in this case, you compare it to reference tests: the things of the past, the animal tests we want to replace. And we assess different qualities. The first quality assessed is reproducibility: is our test method reproducible? Does it give the same result in my lab tomorrow? And is somebody else in another lab able to reproduce these results? It is also about the scientific relevance of the novel method; this is the coverage of adverse outcome pathways and similar things we have been discussing. And last but not least, it is about how well we predict what we know to be toxic: our point of reference, the animal tests, the toxic substances we know. (The second sketch below shows how these quantities are computed.)

For an ITS, we can typically do a reproducibility assessment of the building blocks. But we very often have the problem that we cannot directly compare to an animal test, because very often our test strategies predict something else, only a part of the animal test. For this reason, we cannot assess predictive capacity in the traditional way. But here a very interesting option opens up: to stress more the scientific basis, the pathways of toxicity, the adverse outcome pathways, the mode-of-action knowledge. This is something which has been developing over the last few years, and it has led to thoughts about how to validate our test systems and testing strategies on a scientific basis.
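To make the trash-in, trash-out point concrete, here is a minimal simulation sketch. Everything in it is hypothetical: the prevalence, the sensitivities and specificities of the three building blocks, the majority-vote integration rule, and the reference-test error rates are assumptions for illustration, not real study values.

```python
# "Trash in, trash out" sketch (all numbers hypothetical): three noisy
# in vitro building blocks are combined by majority vote and scored both
# against the underlying truth and against an imperfect animal reference.
import random

random.seed(0)
N = 10_000                  # hypothetical number of chemicals
P_TOXIC = 0.3               # assumed prevalence of true toxicants

def noisy_call(truth, sensitivity, specificity):
    """Binary test call given the true hazard and the test's error rates."""
    if truth:
        return random.random() < sensitivity   # true positive with prob = sensitivity
    return random.random() >= specificity      # false positive with prob = 1 - specificity

truth = [random.random() < P_TOXIC for _ in range(N)]

# Three building blocks with assumed (sensitivity, specificity).
blocks = [(0.85, 0.80), (0.75, 0.85), (0.80, 0.75)]
calls = [[noisy_call(t, se, sp) for se, sp in blocks] for t in truth]

# Integration rule for the sketch: simple majority vote.
its = [sum(block_calls) >= 2 for block_calls in calls]

# The in vivo reference test is imperfect too ("trash" on both sides).
reference = [noisy_call(t, 0.90, 0.85) for t in truth]

def accuracy(pred, gold):
    return sum(p == g for p, g in zip(pred, gold)) / len(gold)

print(f"ITS vs. underlying truth: {accuracy(its, truth):.2f}")
print(f"ITS vs. animal reference: {accuracy(its, reference):.2f}")

# "Trash in": degrade every building block and the integration follows.
worse = [[noisy_call(t, se - 0.20, sp - 0.20) for se, sp in blocks] for t in truth]
its_worse = [sum(block_calls) >= 2 for block_calls in worse]
print(f"Degraded ITS vs. truth:   {accuracy(its_worse, truth):.2f}")
```

Degrading the building blocks degrades the integrated prediction correspondingly, and the imperfect reference caps the agreement we can ever measure; this is why we have to work on the quality of the information sources and not only on the combination.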
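And here is the second small sketch, of the classical validation quantities just mentioned: between-laboratory reproducibility, expressed as simple concordance, and predictive capacity, expressed as sensitivity and specificity against a reference test. The binary calls are invented for illustration.

```python
# Illustrative computation of classical validation quantities on binary
# toxic/non-toxic calls (hypothetical data, not a real validation study).

def concordance(calls_a, calls_b):
    """Fraction of chemicals on which two runs or labs agree."""
    return sum(a == b for a, b in zip(calls_a, calls_b)) / len(calls_a)

def predictive_capacity(test_calls, reference_calls):
    """Sensitivity and specificity of a test against a reference standard."""
    tp = sum(t and r for t, r in zip(test_calls, reference_calls))
    tn = sum(not t and not r for t, r in zip(test_calls, reference_calls))
    pos = sum(reference_calls)
    neg = len(reference_calls) - pos
    return tp / pos, tn / neg

# Hypothetical calls for ten chemicals (True = called toxic).
lab1   = [True, True, False, True, False, False, True, False, True, False]
lab2   = [True, True, False, False, False, False, True, False, True, True]
animal = [True, True, False, True, False, True, True, False, False, False]

print(f"Between-lab reproducibility: {concordance(lab1, lab2):.2f}")
se, sp = predictive_capacity(lab1, animal)
print(f"Sensitivity: {se:.2f}, Specificity: {sp:.2f}")
```

For an ITS, as said, the predictive-capacity part of this comparison often cannot be made in the traditional way, which is exactly why the mechanistic route becomes attractive.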
In the context of a conference organized here at Hopkins in 2010 on validation for 21st century toxicology, the paper on evidence-based toxicology, the toolbox for validation for the 21st century, was a starting point for what is now the Evidence-based Toxicology Collaboration, which was created a year later following our conference. I will not go into too much detail here. But it is important that out of these thoughts on how to apply systematic reviews, and how to apply some of the quality assurance elements of evidence-based medicine, a concept of mechanistic validation was developed. It seems to offer some opportunities for validating integrated testing strategies and novel types of approaches, and it is at this moment starting to be embraced by some of the validation bodies.

So, I hope I have shown you in this lecture how the combination of various tailored tests in an integrated fashion can actually give more information, and more solid information, than any test alone or the blind combination in a battery of tests. It is the quality of the building blocks and the quality of the integration which determine how predictive we are. And the key element for the quality of these assays is that they cover mechanism, that they cover the way these hazards manifest. The adverse outcome pathway concept, promoted by the OECD and embraced by regulatory agencies throughout the world, is a prime opportunity to bring this type of mechanistic thinking, first of all, into the composition of testing strategies, but then also into the validation of testing strategies as a mechanistic type of validation.

So let me close for today with T.S. Eliot, who once wrote, "Where is the wisdom we have lost in knowledge? Where is the knowledge we have lost in information?" What we are trying to do by creating testing strategies is a kind of reverse Eliot: we want to go from information, from the data produced by the building blocks, the component tests, back to wisdom, which is the evidence integration. And this is what an integrated testing strategy is trying to achieve. Thank you very much for listening, and I hope to have you back in our lecture series next time.