Saturday, April 10, 2004
"It is the customary fate of new truths to begin as heresies & to end as superstitions."
-- T.H. Huxley (1880)
One of the greatest dangers to effective strategy is tradition based on good sense. The more common-sensically apt the tradition was at its inception, the more vile and dysfunctional it will become over time. There's a good reason for that.
In baseball, there's a rule traditionally used to measure and evaluate pitchers' performance: Earned Run Average. And it was a brilliant rule when it was invented. It was invented for the 1888 season, an exciting year for futurists, because the first of the modern science fiction novels, Looking Backward by Edward "Pudge" Bellamy, was published that year. In it, he looked back on changes over time as seen from the future, the end of the 20th Century in fact.
Michael Wolverton at Baseball Prospectus this week wrote about the Earned Run Average scoring tradition and dissed it as being fruitless. He's right, of course, but he managed to throw out the 1887 baby with the 2004 bathwater. His argument is that it has no validity, which is not true; in reality, it has no validity in the current baseball-playing environment. In spite of what he thinks, it did at one time. That's very important to know and acknowledge (which he didn't in his piece, though he probably knows it), because that's the reason it's going to be so hard to get rid of.
Here's the essence of Wolverton's piece (I recommend reading the whole thing -- I'm bound to quote some things out of context here in the interest of brevity).
Errors will happen. Good pitchers will minimize the damage caused by them. That is, a good pitcher will allow fewer runners on base before the errors happen (so there aren't runners to score on the errors), and will allow fewer hits and walks after errors happen (so the runners who reached on errors won't score).
This isn't a hypothesis, it's a fact. Preventing unearned runs is a skill that pitchers have, and it usually comes hand-in-hand with the ability to prevent earned runs. I took all the pitchers since 1900 who pitched more than 2,000 innings, and compared their earned run averages (ERA) to their unearned run averages (UERA). (In both cases the values were normalized to the league averages for the years in which they played). A couple of results:
- The correlation between ERA and UERA was 0.36 -- not overwhelming, but pretty strong. That suggests pitchers who are good at preventing earned runs are also generally good at preventing unearned runs.
- Of the top 50 pitchers in career (normalized) ERA, 46 of them were better than average at UERA.
What this means is that when you throw out unearned runs, you're throwing out part of the pitcher's performance. In other words, ERA is understating the run prevention abilities of the best pitchers in the league, and overstating it for the worst.
By his measure, the better pitcher is the pitcher with the lowest RA (total run average, that is, earned and unearned runs combined). One problem is that he used data from 1900 on -- excluding the context in which the rule was established (though the rule lapsed for a while and then reappeared in 1917). That's like saying there was no voter fraud in Chicago in the 1950s because the incidence of it is undetectable in Boise in the 1960-2000 period.
Had he taken the time to look at 1887 (the last year before the rule was put into effect -- what is past is prologue and all that), I think he'd have seen the 1888 virtue of the scoring rule.
The 1887 season was representative of its time in that there was a pandemic of fielding errors. Burnt toast gloves not even worthy of a Mexico City barrio playground, groundskeeping practices as random and dangerous as those found in a contemporary Fallujah city park, and widespread use of performance-altering substances (alcohol, opiates) all combined to make baseball defenses the WMDs of that decade.
Just as a taste, I'll contrast some info I built between 1887 and 1987 National League averages.
          Errors/   Steals/   Total Runs   Earned Runs   % of Runs
          Game      Game      /Game        /Game         Unearned
1887       3.6       2.6        6.1           4.0           35%
1987       0.9       1.0        4.5           4.0            9%
I included stolen bases, because the debate raged about whether a run was earned against a pitcher if the run resulted from a stolen base, a much more important aspect of the 1887 game than it is now and something perceived then as not being the pitcher's "fault". From 1890-96, runs that would not have scored without a stolen base to advance a runner who ultimately scored were considered Unearned.
The 1887 rules builders were trying to isolate out what a pitcher's pitching meant. It wasn't clear then. It isn't now, by the way...Baseball Prospectus is still rasslin' with the enigma 117 years later, gawd bless 'em.
And while Wolverton's findings (that 0.36 correlation) tend to support his thesis -- that good pitchers suffer less damage from errors, that preventing unearned runs is pitching pure and simple, no different a skill from preventing earned runs -- that just wasn't true in 1887. If it were true, you'd expect the pitchers with the best skill at preventing runs of any kind to also be better at preventing unearned runs. They weren't in 1887.
I took all the pitchers with at least 120 innings (the season was 120-something games for most teams, so I'm lowering the criterion for qualifying). I divided into three groups for Runs Allowed Average (RA, earned and unearned together) highest-third, middle-third and lowest third. I also measured them by percentage of those runs that were unearned and divided them into thirds (high, middle, and low). If Wolverton's ideas held for 1887, one would expect the best RA pitchers to also be the ones least affected by unearned runs.
Of the 10 qualifiers with the best RA:
3 were in the best third (lowest percentage of unearned runs)
3 were in the middle third
4 were in the worst third (highest percentage of unearned runs)
Being good at preventing runs didn't correlate well with preventing unearned runs.
Of the 10 qualifiers with the worst RA:
6 were in the best third (lowest percentage of unearned runs)
1 was in the middle third
3 were in the worst third (highest percentage of unearned runs)
Being bad at preventing runs didn't correlate well with allowing unearned runs.
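The tercile cross-check described above can be sketched as follows. The pitcher names and numbers here are invented placeholders standing in for the real 1887 pitchers with 120+ innings; only the binning mechanics are the point.

```python
# Sketch of the 1887 tercile cross-check -- pitcher data below is
# hypothetical, standing in for real 1887 RA and unearned-run figures.

pitchers = [  # (name, runs-allowed average, pct of runs unearned)
    ("A", 3.1, 0.40), ("B", 3.4, 0.28), ("C", 3.6, 0.33),
    ("D", 4.0, 0.37), ("E", 4.3, 0.25), ("F", 4.8, 0.41),
    ("G", 5.1, 0.30), ("H", 5.5, 0.36), ("I", 6.0, 0.27),
]

def terciles(values):
    """Map each value to 0 (lowest third), 1 (middle), or 2 (highest)."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    third = len(values) // 3
    rank = {i: pos for pos, i in enumerate(order)}
    return [min(rank[i] // third, 2) for i in range(len(values))]

ra_bin = terciles([ra for _, ra, _ in pitchers])       # run prevention
ue_bin = terciles([pct for _, _, pct in pitchers])     # unearned-run share

# If run prevention and unearned-run prevention went hand in hand,
# the best-RA pitchers (bin 0) would cluster in unearned bin 0.
# A scattered spread is the 1887 pattern the post describes.
for ra_group in range(3):
    spread = [ue_bin[i] for i in range(len(pitchers)) if ra_bin[i] == ra_group]
    print("RA tercile", ra_group, "-> unearned terciles:", sorted(spread))
```

With these placeholder numbers the best-RA group scatters across all three unearned terciles, which is the kind of non-relationship the 1887 counts above show.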
So the rule happened. For a reason that was logical and reasonable in the context in which it was made. And that's making ERA as a criterion for performance almost impossible to get rid of, even though the current context, as Wolverton explained, makes it useless and untenable. Undoubtedly, if you nose around your own organization, you can find old sensible decisions that because of their very validity when they were writ in stone are almost impossible to dump now even though in the current context, they're asphyxiating the organization's life-force. I'll cover the most obvious example we all get to see, the dead octopus in the middle of the room, the Danny Ozark coaching 3rd of institutions, the Special Olympians Edition of Fear Factor...that is, a bad idea so obvious even Bud Zelig would be hard pressed to pursue it.
BEYOND BASEBALL - THE HEALTH CARE SYSTEM FLUSTER-CLUCK
The current context for the design for the U.S. health care system is as useless and untenable as the Earned Run Average.
The underpinning idea was to tie health coverage to employment. The assumption was made in the 1950s, when the economy and employment were growing, and jobs tended to be long-term for most white-collar people and union members (the middle class). Employers as a class were diverse; health care providers as a class were fragmented and, therefore, competitive. This tended to keep cost inflation down. And if you lost your job (and with it your health care), you would likely find another job soon.
People dreamed this would go on forever, like burnt toast mitts and potholes in the outfield must have seemed to baseball players in 1887.
Now, the context has changed. The number of health care providers making administrative, medical, and pricing decisions has gone way down, enervating competition. Middle-class employment is contracting, and there are fewer employers making administrative and health-care-shopping decisions, eroding choice. The high costs of oligopolistic providers, exacerbated by auto-mechanic-style charge tables, are crippling entrepreneurial businesses' ability to afford health care for their employees and stunting growth in every part of the economy (perhaps even in health care, though it's not looking that way at the moment). The system provides incentives to export jobs and to strip-mine employees (maximize short-term extraction, then move on to new human resources).
Almost no one loves this system, not patients, doctors, or employers. And for good reasons. The outcomes are awful (the U.S. is 1st in expenditures per capita for health care, 21st among developed nations for life expectancy from birth, 18th-best in infant mortality). In short, we're paying a ton and getting mediocrity at best. While there are a fair number of people willing to suffer through the status quo, it seems more are afraid of the alternatives than actually like the current system. And yet, among developed countries, there are 20 better market-based and socialized health systems that produce more health for less money than ours does.
And yet changing this system requires the same massive level of effort to overcome inertia that fixing the Earned Run Average tradition in baseball does, & for the same reason: it was so sensible when decisionmakers put it in place.
How about your organization? Are they Looking Backward when they could be looking at the present? Do you have any of these survivals? What are you doing about them?