Numbers have their own beauty. Numbers are orderly. Nature has figured out that if you start with a square, add another square of the same size, then keep adding squares whose side is the sum of the two before, you develop some very wonderful things. It's the shape of the nautilus, the Fibonacci spiral, which is based on what you see here: the Fibonacci sequence. In architecture it creates a form we call the golden rectangle, with a ratio of about 1 to 0.618. What makes it a golden rectangle is that its proportions are particularly pleasing to the human eye, across cultures. That's important, because when I'm thinking about cost, one of my sayings is that it doesn't cost any more to select a proportionally correct window than one that's not proportionally correct. That's numbers: there's a certain beauty to them, they're orderly. Building typology is not.

These are nine buildings, roughly similar in typology: science buildings for university academia. They're very different. How do you derive a cost-per-square-foot value from each of these and make sense of them when they're so different? Not to mention they were built in different places and at somewhat different times, though within a narrow window of about a decade.

That's where the normalization process comes in. If a design intent cost $X at a different time and place, what does it cost here and now, so that I can project it out for an estimate for when our project will go ahead? We have to take into account regional differences. Means and methods differ: in parts of the United States, for example, site work uses tracked, Caterpillar-type equipment; in other places it's rubber-tired equipment, based on the soil conditions. User expectations will be different: Class A is a term we use for office buildings, and Class A in a suburban setting can be very different from Class A in the central business district of a major city. And we have to consider, of course, cost escalation, particularly the output index. It doesn't help us in a seller's market to know that the costs going into a project are rising gently if bid prices are rising sharply. So the normalization process is a kind of squaring of the circle, if you will: beginning with something and then figuring out what it means to something else entirely.

The first step in normalization is to see the big picture. In this column we see the costs of a project, segmented. The cost model estimators typically work with covers the construction hard costs: the trade values and general conditions, as construction cost is defined. But there are costs that we say are below the line, and you ignore these at your peril if you're an estimator. Because at the end of the day there is one bag of money, a single allocation, typically, that an owner has, which also includes their contingency, which they may never tell you about. We need, as estimators, to make sure that we understand this big picture, so that we are not working on our cost model in isolation. Especially since, for very complex buildings such as hospitals or laboratories, things like owner FF&E have to be part of that cost model, because they are essential to the function of the facility.

The next step is to benchmark our systems. I've highlighted just a few here because they're appropriate to some examples we'll see a little later on. What we're looking at here is along the lines of a UniFormat system: what are the major building systems? How do we deal with low-to-high ranges that can be quite big spreads?
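To make that here-and-now adjustment concrete before we dig into those spreads, here is a minimal sketch in Python. The index values and location factors are illustrative assumptions, not data from any of the buildings shown.

```python
# A minimal sketch of time-and-place normalization.
# All index values and location factors below are illustrative assumptions.

def normalize_cost_psf(historical_psf, index_then, index_now,
                       source_location_factor, target_location_factor):
    """Restate a historical $/SF benchmark in here-and-now terms."""
    time_adjusted = historical_psf * (index_now / index_then)
    return time_adjusted * (target_location_factor / source_location_factor)

# Example: a science building that bid at $450/SF when the output index
# stood at 180, normalized to an index of 220 and a costlier market.
print(round(normalize_cost_psf(450, 180, 220, 0.95, 1.10)))  # ~637 $/SF
```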
We need to understand what happened at that building, and a way to understand it is to segment. When we segment a cost model, we think in three broad chunks. What's the base building? Its structure, its core facilities: switchgear room, mechanical equipment room, chillers, that sort of thing. The stuff that goes into the base building. Then we have to ask ourselves, what are the tenants doing to fit out their space, or what is the fit-out for an owner-occupied building? For some building types, like a laboratory or a hospital or an academic facility, the fit-out can be quite expensive. And thirdly, we have to ask ourselves, what's the site value? Am I doing significant rock removal? I did front-end work on a project up at West Point, the last remaining buildable site, which the Corps of Engineers was going to deliver design-build. One of the difficult things was helping to explain to Congressional oversight why twenty-plus million dollars in rock removal was necessary to use that last remaining site in the historic core of the campus, so that 650 cadets could have a barracks close to their classrooms.

So we segment the cost model. We look at the site: what are the geotechnical conditions, what is the access like, are we going to be able to have a laydown area and stage here? We look at the base building: what's its massing? Remember those typologies, the very different shapes? A building that is a cube has, practically speaking, the best surface-area-to-volume ratio: it minimizes the amount of envelope relative to the volume, or floor area, it encloses. That has significant cost impacts. A long, skinny building, all things considered, would cost more than a cube, but a cube might not function for its occupants. Occupants may need, as we all do, access to light and views: sunlight, grass, blue sky. The footprint and then the height affect the base building. When we look at those typologies, the disorderliness, if you will, of the benchmarks, we have to consider what's different in the base building.

Now, curiously enough, the fit-out is often the thing that has the most commonality. Yes, there will be differences over time and differences regionally. But an acute care hospital in one place is very much like an acute care hospital in another place on the inside, in terms of its space program, the user requirements, and the net-to-gross ratio. That net-to-gross area reflects the intent of the users; it has a very big impact on the base building, but it's driven by the fit-out. For example, the footprints we saw earlier when we looked at typology were a subset of a larger set we benchmarked a few years back on science buildings. Depending on the nature of the facility, and the nature of the business and the organization, some were looking to be very grand and spacious, because they were in a war for talent, looking for researchers. Others recognized that they needed to be a bit more austere. So something that might not be easy to see in a space program is: what is the value proposition? Where are they headed in terms of circulation, for example? Just how efficient do they want their building to be? That will affect the overall cost, and the unit cost might not reflect it, so you do need to spend some time to understand what they're trying to do on the inside. That's why we segment the building into chunks for the estimate.
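To see how that efficiency can hide inside a unit cost, here is a minimal sketch; the program area, net-to-gross ratios, and $/SF rate are all illustrative assumptions.

```python
# A minimal sketch of how the net-to-gross ratio moves total cost even
# when the gross $/SF unit cost stays the same. All numbers are
# illustrative assumptions.

net_program_sf = 60_000      # space the users actually need
gross_unit_cost = 500        # $/gross SF, held constant for comparison

for efficiency in (0.60, 0.70):          # net-to-gross ratio
    gross_sf = net_program_sf / efficiency
    total = gross_sf * gross_unit_cost
    per_net_sf = total / net_program_sf
    print(f"eff {efficiency:.2f}: {gross_sf:,.0f} GSF, "
          f"total ${total:,.0f}, ${per_net_sf:,.0f}/net SF")

# eff 0.60: 100,000 GSF, total $50,000,000, $833/net SF
# eff 0.70:  85,714 GSF, total $42,857,143, $714/net SF
```

Same gross unit cost, two very different answers for the owner: that is why the value proposition behind the circulation matters.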
And step four: we produce a parametric estimate. This is the predictive cost model that determines the future market value of a design intent, using that incomplete, ambiguous, and, as we now see, variable data. We look at the big picture. We go systems-based. We segment the building into its chunks. And we produce a parametric estimate: a high-level estimate with just enough detail for decision making. So the good estimator does a little bit of crystal-ball gazing. They certainly make use of our technology. But the good estimator in many respects is both an artist and a scientist. There's a great deal of judgment required to start with something, benchmark data, and normalize it so that we can actually make good predictions going forward for your specific project.

Let's go over some useful definitions. Precision is a consistently reproduced measurement. Accuracy is a measurement that matches the actual value. We want to be accurate and we want to be precise when we compare the bids to our estimate. Contingency is an unknown possibility that must be prepared for. Escalation is the increase, the enlargement, the intensification of cost; a flip way of looking at it is the decreasing purchasing power of your money, which we call inflation, the persistent decline in the purchasing power of money. We call it an escalation factor when we're looking at an estimate and trying to project out to the future. An index is a value derived from the trends of observed facts over time. The output index is what bidders will bid, what they will say they want to do your project for; the input index is the cost of the materials and labor going into your project. Two different indices.

The budget is your allowable cost tied to a specific, validated scope and schedule. If you have a pot of money to do a project that's not tied to a specific, validated scope and schedule, then you've got an allocation; you don't yet have a budget. Estimators help us determine whether the pot of money we have is an allocation or a true budget. Budget, scope, and schedule hang together. An allowance, unlike contingency, is a known possibility that must be prepared for. Contingency is an unknown: we think it could happen. An allowance is a known: we know a project is going to have a certain level of finishes in its public spaces, so when we're doing early parametric estimating we might allow a certain amount of money for natural stone products, for example. Benchmarking is how we use relevant past data to predict the future, and it has to be normalized. Normalization makes that disparate data tell the same story.
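Putting the escalation and index definitions to work, here is a minimal sketch; the 4% annual rate and the timeline are illustrative assumptions.

```python
# A minimal sketch of an escalation factor: compounding a flat annual
# rate, ideally derived from an output (bid) index rather than an input
# index. The rate and timeline below are illustrative assumptions.

def escalate(present_cost, annual_rate, years_to_midpoint):
    """Project today's cost to the midpoint of construction."""
    return present_cost * (1 + annual_rate) ** years_to_midpoint

# $10M in today's dollars, 4%/yr, bids 2.5 years out: ~$11.03M
print(f"${escalate(10_000_000, 0.04, 2.5):,.0f}")
```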
And value engineering? There is no such thing. That's probably somewhat controversial; folks are probably writing their dissertations on value engineering right now. It certainly began as a process to add value to projects. It has, unfortunately, devolved into something much less. It is commonplace in the industry that if you are value engineering, you're cost cutting; there is no such thing as value engineering, and if you're doing it, you've gotten into trouble. And this is why lean is necessary. We're going to talk about lean shortly, because value engineering no longer works: it's cost cutting by another name. Let's think about it as a form of waste. In lean scheduling we talked about flow, and one of the things we do not want is backflow. Consider why value engineering happens on a project: we design, we blow the budget, we value engineer, and then we redesign. Those steps are backflow. They are waste. They are not adding value.

Much better to build the cost in properly while we are designing, at the start. That's what target value design and lean estimating are all about.

So to recap: an estimate is a predictive cost model to guide design, procurement, decision making, and project delivery. Benchmarking with normalization allows the effective comparison of project costs across the variables of time, location, typology, and scope. Now, this part of the recap is really important: the four-step normalization process. See the big picture; understand all the costs that could, in fact, impact your estimate. Define the major systems. Segment the cost model; think about site, base building, and fit-out as the three major chunks. And produce a parametric estimate. These steps will continue in lean estimating; they're part of target value design. But value engineering is truly symptomatic of system failure.
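To close, here is a minimal sketch pulling the four steps together into one parametric roll-up. Every quantity, unit rate, and percentage is an illustrative assumption, not a benchmark.

```python
# A minimal sketch of a parametric estimate built from the three chunks:
# site, base building, and fit-out. All quantities, unit rates, and
# percentages below are illustrative assumptions.

segments = {
    "site":          {"qty": 1,       "unit_cost": 3_500_000},  # lump sum
    "base_building": {"qty": 120_000, "unit_cost": 310},        # $/GSF
    "fit_out":       {"qty": 120_000, "unit_cost": 145},        # $/GSF
}

hard_cost = sum(s["qty"] * s["unit_cost"] for s in segments.values())

# Below-the-line costs you ignore at your peril: one bag of money.
soft_cost   = 0.25 * hard_cost   # fees, permits, owner costs (assumed %)
contingency = 0.10 * hard_cost   # unknown possibilities (assumed %)
escalation  = 0.08 * hard_cost   # projected to construction midpoint

total = hard_cost + soft_cost + contingency + escalation
print(f"hard cost ${hard_cost:,.0f}, all-in ${total:,.0f}")
```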