Welcome to module three of our course, Framework for Data Collection and Analysis. In this third module, we'll introduce a quality perspective on data products. A good resource to start with, wherever you are in the world, is the EUROSTAT Handbook on Quality Assessment Methods and Tools. It's a beautiful document with a lot of general guidelines on how to think about data quality and the quality of the statistics that you produce. This visualization here shows you elements of quality management systems, and each of these pieces has a corresponding principle in the European Statistics Code of Practice. You can download this document freely, and I highly recommend taking a look, in particular if you're involved in statistics production in a government setting.

A shorter summary can be found in a piece by Paul Biemer in Public Opinion Quarterly. It captures the most important categories to think about when you think about data quality. One is accuracy, and accuracy is what the total survey error framework, which we're going to introduce in a moment, is designed to minimize; it is really all honed in on accuracy. However, there are other elements to quality as well.

For example, you want your data to be credible. It does matter who puts out the data and whether that source is seen as a trustworthy entity by the community, not just by people using surveys, but also by government officials, public spokespersons, private companies and enterprises, any of those.

Another aspect of quality is comparability. In particular, if you're interested in long time series, or in comparing data across countries to make investment or policy decisions, you want to know that you're comparing similar things. Two data sources can each be accurate and good in their own way, but they might lack comparability because, for example, different things are measured. That's something to keep in mind.

Usability is another aspect. Are the data documentation and accessibility organized in a way that allows easy use?

Relevance is an important one. We don't want to invest money in collecting data, designing data, or even scraping found data that lack relevance. You are in charge of demonstrating that relevance, or at least of having that conversation with stakeholders about relevance.

Accessibility I already mentioned; we discussed earlier the importance of the data foundation and the research data centers.

Timeliness is an interesting aspect. Some statistics have fixed reporting dates (we'll show you an example in module four), and data delivery on a schedule is helpful. Some of the data sources that you might want to use don't have this property.

Also completeness: are the data rich enough that you can really follow through with the analysis objective that we discussed in module two? And are the estimates coherent? Can you combine those data and results with other data sources, or is there a mismatch that will confuse any user of the resulting statistics?

There are various frameworks here. One is that of Continuous Quality Improvement, also discussed in Paul Biemer's paper. It is a suggestion for making sure that quality is always at the forefront when you produce your statistic or data. The first piece is the preparation of a workflow diagram, which can help to visualize the process and identify key process variables. Those process variables can be identified and monitored over time and, if need be, changes introduced to improve the process.
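To keep these quality dimensions in view while a data product is being built, it can help to record an explicit assessment for each one. Below is a minimal sketch in Python; the dimension names follow the list above, but the class, the rating labels, and the example product are assumptions made purely for illustration, not part of any official tool.

```python
from dataclasses import dataclass, field

# Quality dimensions summarized above (accuracy is the one the total survey
# error framework targets). The rating labels and example are illustrative.
DIMENSIONS = [
    "accuracy", "credibility", "comparability", "usability", "relevance",
    "accessibility", "timeliness", "completeness", "coherence",
]

@dataclass
class QualityAssessment:
    """A per-product record of how each quality dimension was judged."""
    product: str
    ratings: dict = field(default_factory=dict)  # dimension -> "ok" | "concern" | "not assessed"
    notes: dict = field(default_factory=dict)    # dimension -> short justification

    def rate(self, dimension: str, rating: str, note: str = "") -> None:
        if dimension not in DIMENSIONS:
            raise ValueError(f"unknown quality dimension: {dimension}")
        self.ratings[dimension] = rating
        if note:
            self.notes[dimension] = note

    def open_issues(self) -> list:
        """Dimensions that are unassessed or were flagged as a concern."""
        return [d for d in DIMENSIONS if self.ratings.get(d, "not assessed") != "ok"]

# Hypothetical example: a monthly labor market series
assessment = QualityAssessment("monthly labor market series")
assessment.rate("accuracy", "ok", "total survey error review completed")
assessment.rate("timeliness", "concern", "no fixed delivery schedule yet")
print(assessment.open_issues())
```

A checklist like this captures the product-level view; the Continuous Quality Improvement idea just introduced, and the process-control flowchart discussed next, look at the production process itself.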
You want to make sure that you have control over your process, and for those of you who really want to dive deep into that literature, there's a whole segment on process control and quality control. All of these keywords are represented in a flowchart that Morganstein and Marker developed, which is quite useful to think about and to look at.
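To make the idea of monitoring key process variables concrete, here is a minimal sketch of a Shewhart-style control chart check in Python. It is not a reimplementation of the Morganstein and Marker flowchart; the monitored variable (a daily interviewer response rate), the baseline values, and the three-sigma limits are assumptions chosen for illustration.

```python
import statistics

# Minimal sketch of monitoring one key process variable over time.
# The variable (daily interviewer response rate) and all numbers are
# hypothetical; the limits follow the usual Shewhart three-sigma rule.
baseline = [0.61, 0.63, 0.60, 0.62, 0.64, 0.61, 0.62, 0.63]  # a stable reference period
center = statistics.mean(baseline)
spread = statistics.stdev(baseline)
upper, lower = center + 3 * spread, center - 3 * spread

new_observations = {"day 9": 0.62, "day 10": 0.59, "day 11": 0.49}
for day, value in new_observations.items():
    in_control = lower <= value <= upper
    status = "in control" if in_control else "out of control, investigate the process"
    print(f"{day}: response rate {value:.2f} -> {status}")
```

In practice you would track several such variables (for example response rates, edit failure rates, or coding error rates) and feed out-of-control signals back into changes to the process, which is exactly the improvement loop that Continuous Quality Improvement describes.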