0:20
The first is to broaden your sample.
And this flows directly out of the point we were just making
about using multiple signals when tapping into these new tests and algorithms.
So just as we said with good performance evaluation, you want to
expand the sources and signals when you're trying to evaluate your talent.
So additional opinions, additional performance metrics, additional projects,
and assignments if possible.
The key is that we want them from maximally diverse sources.
You want uncorrelated signals as much as possible.
The most valuable signal that you can add to the portfolio that you have
is something completely uncorrelated to everything else.
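To see why an uncorrelated signal is the most valuable addition to the portfolio, here's a small Python sketch — my illustration, not from the lecture — using the standard formula for the variance of an average of equally correlated signals. The function name and the example numbers are invented.

```python
# Variance of the average of n signals, each with variance sigma2 and
# pairwise correlation rho: sigma2 * (1 + (n - 1) * rho) / n.
# With rho = 1 (fully redundant signals), averaging buys you nothing;
# with rho = 0 (uncorrelated signals), the noise shrinks by a factor of n.

def variance_of_average(sigma2: float, n: int, rho: float) -> float:
    """Variance of the mean of n equally correlated noise terms."""
    return sigma2 * (1 + (n - 1) * rho) / n

# One signal, versus four correlated ones, versus four uncorrelated ones:
print(variance_of_average(1.0, 1, 0.0))   # a single signal: variance 1.0
print(variance_of_average(1.0, 4, 0.8))   # four correlated signals: ~0.85
print(variance_of_average(1.0, 4, 0.0))   # four uncorrelated signals: 0.25
```

Four highly correlated opinions barely improve on one; four independent ones cut the noise by a factor of four.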
1:00
Another implication,
another way to broaden the sample, is to give people second chances.
And in fact, to give people third chances.
It follows directly from all the challenges we said above
that it's really hard to evaluate a person's performance,
whether they have the ability to do what you want them to do,
whether they're the right person for
your organization, until you've seen them in multiple settings.
For example, we know that a person's boss greatly affects their performance.
We also know that in almost all circumstances a new employee has no
control over who their boss is.
So, how can we truly evaluate
a person's ability on a job until we've seen them work for more than one person?
Or maybe even more than two, or three people?
We need to see people in different settings before we can really say
reliably how good a fit they are,
how much talent they actually have for this particular organization.
So, that suggests second chances, third chances.
It follows directly from everything we've been talking about.
A second prescription, and it's related, is to find and
create what economists call exogenous variation.
You gotta find outside influences, you gotta find outside sources of variation.
So, the only truly valid way to tease out causation
is to manipulate an employee's environment.
If you don't manipulate it, then you can never know for
sure what actually is driving the results that you observe.
So, the ideas here require a trade-off.
So, we can't sadly, [LAUGH] we can't just run around running experiments in
organizations to see who's best at a job or who's truly good at a job.
You still have to run the business.
So, we understand that, but we wanna push back a little bit: your
long-term success in an organization is gonna depend on identifying the best people.
And developing people, identifying strengths, and weaknesses, and
developing their weaknesses.
Those require intentionality around this kind of variation.
You need to see people in different situations, in order to know what they're
strong at and you can build on, and what they're weak at and you need to work on.
You put them in different situations to identify
who's best at a particular job.
You can't do that unless you get this exogenous variation.
So, it requires a little trade-off, but some of that trade-off is worthwhile:
you give up a little bit of operational efficiency
to learn something about your talent.
This search for exogenous variation is one of the major motivations for
rotational programs.
So in many organizations, especially when people are first brought on to
an organization, they get sorted into various environments over say,
their first three months, or six months, or sometimes two years, three years.
IBM, for years, ran a training program, a leadership development program,
that was years long.
They explicitly wanted to put people in a different context to see how they perform,
and they wanted to commit ahead of time to doing that, and doing it at set times.
This is exogenous variation; this is a great example
of kinda taking this experimental approach to leadership development.
4:00
So, [SOUND] the gold standard for
sussing out the true ability, and the true strengths and
weaknesses, of your employees is to commit to that kind of rotation.
However, we know that you can't always do that, and you can't do that for
all your employees.
And later in life, and later in their career you can't do it as readily.
So, you need to look for lesser variations.
Look, this is a rationale for changing teams, pushing people around to different
teams, changing direct reports, changing projects, changing offices.
Any source of variation changes up the environment, and
gives you a better lens into the true underlying ability of the employee.
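The "better lens" idea can be sketched as a toy simulation — my illustration, not from the lecture — assuming a simple additive model of true ability, a boss effect, and noise, with all numbers invented for the example:

```python
# Toy simulation of why exogenous variation helps: each employee's observed
# output mixes true ability with a boss effect. If every observation comes
# from the same boss, the boss effect never averages out; rotating across
# bosses lets it wash out of the estimate.
import random

random.seed(7)

N_EMPLOYEES, N_PERIODS = 400, 8

def simulate(rotate: bool) -> float:
    """Mean squared error of ability estimates (lower is better)."""
    sq_errors = []
    for _ in range(N_EMPLOYEES):
        ability = random.gauss(0, 1)
        fixed_boss = random.gauss(0, 1)          # boss effect if never rotated
        obs = []
        for _ in range(N_PERIODS):
            boss = random.gauss(0, 1) if rotate else fixed_boss
            obs.append(ability + boss + random.gauss(0, 0.5))
        estimate = sum(obs) / len(obs)           # naive average as the estimate
        sq_errors.append((estimate - ability) ** 2)
    return sum(sq_errors) / len(sq_errors)

print("MSE, same boss every period:", simulate(rotate=False))
print("MSE, rotated across bosses: ", simulate(rotate=True))
```

Under one boss, the boss's influence is baked permanently into the estimate; under rotation it averages toward zero, so the estimate tracks true ability much more closely.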
4:43
A third prescription, and this is a very general one,
is to reward in proportion to the signal.
The idea here is that you should match the duration and
complexity of rewards to the duration and complexity of the past accomplishments.
So, we've talked a lot about how it's tough to get these strong signals,
about the challenges of pulling true inference from the data. So if you're stuck
with only weak signals, you need to base only weak rewards on those signals.
Conversely, suppose you're able to do some of these more sophisticated things:
running experiments, pushing people around into different environments,
working with someone for years in lots of different situations.
Once you have that kind of signal, substantial,
clear signal, then you can offer substantial rewards in proportion.
So for short, noisy signals, it's better to give things like bonuses than raises.
Better to give praise than promotions because they're based on relatively flimsy
information.
So, a couple of notes here.
One, most signals are noisy, and
importantly we're prone to underestimate that noise.
So, we talked a lot about this in the performance evaluation module.
Individuals underestimate the role of chance,
we underestimate the noise in these signals.
Therefore, we're inclined to think we understand which employees are best, and
which are worst, more than we actually can.
We're overconfident in our ability to identify one from the other,
so what's the implication?
We probably over-reward; we give too much in response to these signals.
We need to learn to dampen that.
You gotta recognize when you've got weak signals, and
you need to match the rewards, the longevity of what
you're giving employees, in proportion to the signal they're based on.
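One way to formalize "match rewards to the signal" is the standard reliability-weighted shrinkage from classical test theory — a sketch of the general technique, not something the lecture prescribes, and the variance numbers below are invented for illustration:

```python
# Shrink the observed performance toward the group average by a reliability
# factor before basing rewards on it. With ability variance var_ability and
# per-observation noise var_noise over n observations, the standard shrinkage
# weight is var_ability / (var_ability + var_noise / n).

def reliability(var_ability: float, var_noise: float, n_obs: int) -> float:
    """Fraction of an observed signal you should actually credit."""
    return var_ability / (var_ability + var_noise / n_obs)

def adjusted_signal(observed: float, average: float,
                    var_ability: float, var_noise: float, n_obs: int) -> float:
    """Shrink an observed performance toward the group average."""
    w = reliability(var_ability, var_noise, n_obs)
    return average + w * (observed - average)

# One noisy quarter: credit only a little. Years of observation: credit most.
print(reliability(1.0, 4.0, 1))    # 0.2   -> praise or a bonus at most
print(reliability(1.0, 4.0, 20))   # ~0.83 -> a promotion can be justified
```

The longer and cleaner the observation window, the closer the weight gets to 1, which is exactly the "substantial rewards only for substantial signals" rule in miniature.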
6:24
Second note is that, well, on the other hand you have to retain people.
So, one of the reasons you give people promotions,
one of the reasons you give people bonuses is to keep them from leaving the firm.
You've gotta compete in this external labor market.
Fine, good, but there's a premium on,
a competitive advantage in, being an accurate evaluator of talent.
And if someone else wants to come in and
overreact to a signal that you happen to know is noisy, a signal that's not
based on some of these deeper principles that we've been talking about,
then maybe it's okay to let them be hired away.
You wanna be the accurate judge, you wanna have a comparative advantage in
evaluating people better, and if others wanna pay a premium for these mistakes, you should let them.
It's only gonna make your company stronger.
In general, draw major distinctions, and
grant major rewards, only when following major signals.
This is the principle behind partnership in consulting and law firms.
It's also the principle behind academic tenure, and
academic tenure is almost irrevocable.
7:21
So, you wanna be very careful about granting that.
Who do they grant it to?
They grant it to relatively few, and they grant it only after 5 or
10 years' worth of, essentially, probation.
Junior faculty are essentially on probation, and it's years long,
[LAUGH] with no guarantees along the track, but that's what it takes.
That's why the organizations do it that way; hundreds of years of
tradition have built it that way, because the reward on the other side is so heavy.
You don't wanna give that strong a reward, you don't wanna give such a long
lasting reward, unless it's based on a very strong signal.
7:55
An example of this done well was the succession of Jack Welch as CEO at GE.
So, Jack Welch is probably the most celebrated CEO in the history of the US.
He was CEO of GE for 20 years, and
led that organization to the very top of the multinational world.
And in about 2000, 2001, he was going to retire, and
he had to decide who was going to succeed him.
And now, they'd done planning, of course, for years, but
in the end, it came down to three people.
And they had to somehow assess, which of these three should take over for
the great Jack Welch.
So, what do they do?
They ended up running what is known as a tournament.
So, this notion of a tournament, of course, many people know it from sports.
But actually Ed Lazear and Sherwin Rosen, economists,
came up with this notion in personnel evaluation: that in some situations,
the best way to evaluate personnel is to put them in a tournament.
And that's essentially what Welch did with these three.
So, he had Jeff Immelt, Jim McNerney, and Bob Nardelli all as viable contenders, but
he's got all the challenges we've been talking about,
everything we've been talking about holds here, so what does he do?
He basically announced to them, and
to the world, that these were the three finalists, and that he was gonna observe
their performance over the next period of time to see who should be the CEO.
This is obviously a big reward, right?
This is a big, long-standing reward, so
it needs to be based on big, clear signals. How does he do that?
He made each one the head of a large division within GE.
9:30
And he made the game clear, and then he followed their performances for
not a short period of time, but for a year.
Of course he had a career of watching them to that point, but
the tournament lasted a year, and
it was a way to get a very substantial signal in separating these guys.
They were all working in the same economy, of course.
It's not perfectly apples to apples; he still had to do whatever he could
to level the playing field, to contextualize each of their performances.
But within the constraints he faced,
this was a fantastic way to get that comparison apples to apples.
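The apples-to-apples value of a tournament can be sketched in a toy simulation — my illustration, not Welch's method — assuming a simple additive model where a common economic shock hits all contenders equally (all numbers invented):

```python
# Toy illustration of why a tournament helps: when contenders face the same
# common shock (the economy), ranking them against each other cancels that
# shock, while judging each on absolute numbers from different conditions
# lets the shock dominate.
import random

random.seed(3)

def best_wins_rate(shared_shock: bool) -> float:
    """Fraction of trials where the truly best of three contenders wins."""
    hits, trials = 0, 2000
    for _ in range(trials):
        abilities = [random.gauss(0, 1) for _ in range(3)]
        if shared_shock:
            # Tournament: everyone shares one big shock, so it cancels
            # in the ranking; only small idiosyncratic noise remains.
            shock = random.gauss(0, 3)
            scores = [a + shock + random.gauss(0, 0.5) for a in abilities]
        else:
            # No common frame: each measured under a different big shock.
            scores = [a + random.gauss(0, 3) + random.gauss(0, 0.5)
                      for a in abilities]
        best = abilities.index(max(abilities))
        hits += scores.index(max(scores)) == best
    return hits / trials

print("Best wins, shared shock (tournament):", best_wins_rate(True))
print("Best wins, separate shocks:          ", best_wins_rate(False))
```

With the shared shock, relative comparison picks out the truly best contender far more often than absolute numbers gathered under different conditions would.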
10:04
So, McNerney joked at the end of this; Welch reports this in one of his books.
McNerney didn't get it, Jeff got it, and he's still the CEO of GE.
The first thing McNerney said was, I want you to know I wanted the job, but
I also want to tell you I think the process was fair.
So, this was the biggest competition for
CEO in US history, and one of the losers here reported that not only
did those on the outside think the process was an effective way to identify the best;
McNerney, as one of the people participating in it, thought it was
a fair way.
So, prescription number four is to emphasize development.
Most of this module we've been talking about assessment, but talent analytics is
at least as much about development, as it is about assessment and selection.
So, we wanna emphasize that it's not just finding the next Jeff Immelt.
It's also about all your employees: identifying strengths and weaknesses,
building on the strengths, and shoring up the weaknesses.
So, you can see this within an industry that is famous for selecting and
making bets: venture capital.
Often people think of them as just standing back,
and assessing which firm to bet on, which entrepreneurs to bet on.
But they spend considerable time actually investing in developing those managers,
investing in developing the entrepreneurs; it's not just about selection.
Even that world, as selection-dominant a world as venture
capital is, is significantly oriented towards development.
Another example comes from sports.
Anthony Davis is one of the best young basketball players in the NBA.
Many recognize him as the best, and he plays for the New Orleans Hornets organization.
And consider his development: even though he was very highly regarded coming out,
there are many players who are highly regarded that don't develop.
It was noted recently by the basketball analyst Kirk Goldsberry
that part of Davis's success has been not just because he is intrinsically good,
but because the New Orleans Hornets are well known for developing players.
So Goldsberry says,
player development is one of the least understood elements of NBA success.
Everybody knows it's important, and
everyone knows which organizations excel at it.
Some organizations routinely transform draft picks into good players,
while perpetual lottery teams, these are bad teams, commonly sit and
watch them fail to pan out.
So, even in an environment as closely watched, and
with as much at stake as NBA teams, there's great variation in firms that
understand and value the importance of development.
It's not just about assessment,
it's not just about selection, it's also about development.
There are many more good tools now; some firms are using those tools just for
selection, and they're missing an opportunity.
You can use those good tools for development as well.
12:48
The fifth and final prescription is to ask the critical question.
So, I wanna end this module in the same way that we ended the performance
evaluation module, which is to give you a battery of questions
that should be useful across a wide range of situations.
Sometimes you're gonna be the analyst,
sometimes you're gonna be the one consuming somebody else's analytics,
sometimes you're just gonna be the person kibitzing on the side.
These questions should help you do any of those jobs better, and
it also provides a way for
us to summarize what we've been talking about over the course of this module.
The first question is, are we comparing apples to apples?
Always bring this question to the table.
Have we sufficiently adjusted for context?
Right along with this, know that we tend to not adjust enough for context.
That's why the question is so valuable.
We're prone to not adjust for context, we need to always push back and
ask, are we comparing apples to apples?
Second, what impact have other people had on this person's work?
How interdependent are these measures?
Again, we tend to attribute performance to individuals, and
we need to always push and ask, what was the context for this?
And in particular,
an especially important context is the team that they're involved in.
And the team might not be the formal team,
it might just be the informal team that influenced the work.
13:59
Third question, how have expectations colored our evaluations?
To what extent have successes and failures been influenced
by the way we've treated people, the situations we've put them in?
This isn't meant to rob people of accountability and responsibility for
their actions, but it is meant to underscore the influence individuals and
organizations have on employee performance.
And again, we underestimate the impact we have,
many of us miss this altogether, and so this is a check and
a push to evaluate what impact your expectations are having on your employees.
Finally, are the factors we believe lead to success and failure,
are they truly causal?
We tell stories, we're prone to tell stories; we see correlations and
believe that there are causal stories. Do we know that?
Again, a check.
We should know we're prone to go that way.
We need to push back and ask, is this truly causal?
Is there evidence to that, is there evidence to the contrary?
So, those are the questions for you.
They should help as a way of remembering what we've talked through, but
also as a set of tools you can bring to any conversation about talent analytics.
And especially to the interpretation of data that comes out of talent analytics.
We wish you the best with your work.