Saturday, March 19, 2005
Observation: It's possible to collect accurate metrics that support false conclusions, and false conclusions backed by accurate measures are harder than average to dislodge.
I was blessed recently with the chance to talk with Effern, maestro of The Vision Thing weblog, and one of the little nuggets I came away with was a reminder about the difference between "Correct" and "Useful".
Effern does business process and process management in a big corporation for a living, so it's part of his job to assess and analyze workflows and behaviors. A behavior he has noted in general, not just at his own workplace, is that managers frequently latch on to an historical effort that resulted in success and conclude that if they can recreate the conditions and decisions that worked in that previous event, they have a path to repeating that success.
Nowhere is the way this hinky pattern emerges clearer or more instructive than on the baseball field.
Back when I co-managed a U.S. Congressional League softball team, I was blessed with this wonderful ringer, Della. In a league where you had to have at least three women among your defense at all times, the general rule was that the team with the most-skilled women players tended to win. This was amplified in the Congressional League because too many of the women who played were more concerned about breaking their fingernails or mussing their coiffs than breaking up a double play. Ironically, the men who played for most of the teams had the same passion about hair and clothes, but they seemed to believe they could mess themselves up in a game and then put on their faces later.
Della was a poor batter and a fair fielder, but she had two great talents that were several standard deviations above the norm: she ran like her feet were on fire, and she had very good judgement about fielders' reliability and their arms. So while she was the woman on our team who hit the ball least forcefully, she could put the ball in play and, without the ball ever leaving the infield, she could score. A typical Della first at-bat in a game was a couple of (intentional) foul balls and then a hot grounder to 3rd base. She'd pour up the line, and instead of running straight through the base, she'd make an almost perfect turn as though she'd hit a double. If the ball didn't beat her to first and retire her, she'd surprise the infielders, frequently generating a throwing error or fielding error as the surprised opponents tried to put her out at second base. More often than not, she'd head for third. If she thought there was at least a 10% chance she would be safe, she'd just put the pedal to the metal. Most opponents didn't know how to execute a run-down play properly (which saved her bacon, because she ran so fast she couldn't change direction readily).
Opponents would respond to this suicidal approach to baserunning in a typical way: self-destructively, in exactly the pattern Effern described.
The first man up for the opposing team in the following inning would frequently try the Della maneuver. If not the first, then the second. It would almost never work. He usually didn't have Della's speed and most certainly fell short of her basepath instincts. Most of all, that kind of "run until they put you out" is just a bad play, a low percentage way to use up an out, even when Della, who'd mastered it to its full potential, ran it. The benefits we accrued from her doing it were (1) entertainment, and (2) the effect on the other team's psyche for the rest of the game.
When a manager goes simplistic and assumes success by duplicating a tactic or decision, he's putting himself in the same position as that opposing hitter - he's ignoring the fact that a single success doesn't prove the viability of an approach. I'm not suggesting you ignore the past and just pretend it's all random. Nor am I suggesting you collect data for years before you start focusing on a few choices for any decision.
Earl Weaver's approach is the one I advise. Like the Baltimore Orioles' successful skipper, keep your mind open even while acting with past performance in mind. Before it was economical or logistically reasonable to have computer access in the clubhouse, Weaver would compile batter-versus-pitcher history on index cards. With a card for every opposing pitcher, he tracked how each of his hitters performed against that pitcher.
In his book Weaver on Strategy, co-authored with Terry Pluto, Earl lays out his decision strategy, which is anchored in a bit of optimism. If a batter is, for example, 2-for-3 against a pitcher, Weaver would try to use him; he knows the batter is at least capable of getting a hit off the pitcher. If the batter was 1- or 2-for-9, he would be guarded, but he wouldn't stop testing to see if the batter might still learn to succeed. Remember, while 1-for-9 is an anemic batting average of .111, if the batter gets 3 hits in his next 5 at-bats, that comes up to an acceptable .286 (4-for-14). Weaver liked to let a player get 20 at-bats against a pitcher to see if he could learn to hit the hurler, so he would use players with a short string of unsuccessful experiences but not commit to using them all the time. Of course, a player with success over 20 at-bats would get every opportunity to bat against the pitcher he did well against.
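The arithmetic behind Weaver's patience is worth seeing laid out. This little sketch (mine, not from the book) just shows how quickly a short cold streak can turn into a respectable average:

```python
# Illustrative sketch of Weaver's small-sample caution --
# not anything from Weaver on Strategy itself.

def batting_average(hits, at_bats):
    """Batting average, conventionally shown to three decimals."""
    return hits / at_bats

# 1-for-9 looks anemic...
before = batting_average(1, 9)          # .111

# ...but 3 hits in the next 5 at-bats makes it 4-for-14.
after = batting_average(1 + 3, 9 + 5)   # .286

print(f"{before:.3f} -> {after:.3f}")   # prints "0.111 -> 0.286"
```

With only a handful of at-bats, one or two hits swing the average wildly, which is exactly why Weaver wanted 20 at-bats before drawing a conclusion.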
In short, Weaver was aware enough of his data and confident enough in himself to vary his approach on any given spring-training or regular-season day. He kept his eyes open, tracked the success or failure of sets of historical precedents, and didn't let whatever happened most recently lock his course onto auto-pilot.
Managers who get lazy and clone decisions based on a single recent success are common. The context changes, the environment changes, but the manager doesn't want to think it through, so the decision stays the same.
The 90s were filled with dot-bombs that thought you could take any kind of retail operation, strip out the customer service, throw it onto the Internet, and reap the success Amazon or eBay did. The venture-capitalist graveyards are filled with on-line grocers, on-line toy stores, and on-line pizza joints.
The opposite pattern, never again giving a chance to something that didn't work out the last time one tried it, is even more common. My favorite case was a small savings bank I worked with. They had contracted from twelve branches to just three and needed to reposition themselves to corner the neighborhood depositor. I worked with their marketing woman to develop a set of direct-mail pieces. When the pieces passed over the president's desk, he killed the program. He'd once worked at a bank that had done direct mail, and the program failed. Abjectly. He didn't know why; I didn't know why.
I have a technique I like to use in cases like this: design a set of small test mailings, just big enough to see what kind of returns you might get. If it succeeds, you proceed. If it doesn't, you're out maybe $300. I even offered not to bill them for my time if the test failed to break even. The prez said "no"; he was immovable. As far as he was concerned, direct mail couldn't work for a savings bank because it had failed once.
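For the curious, the back-of-the-envelope math behind a test mailing like this is simple. The numbers below are invented for illustration - they aren't the bank's actual figures - but they show why a small test caps the downside at a few hundred dollars:

```python
# Hypothetical figures only -- not the savings bank's real numbers.
test_pieces     = 1000    # size of the small test mailing
cost_per_piece  = 0.30    # printing + postage, in dollars
value_per_reply = 25.00   # rough margin from one new depositor

total_cost = test_pieces * cost_per_piece   # $300 at risk, worst case

# The test "succeeds" if the observed response rate clears break-even.
break_even_rate = total_cost / (test_pieces * value_per_reply)

print(f"break-even response rate: {break_even_rate:.2%}")
```

If the test mailing's response rate clears that small break-even threshold, you scale up; if not, you've bought a cheap answer instead of an expensive failure.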
About six months later, a savings-and-loan with the same market need did a ton of direct mail into my client's neighborhood. That mailing worked beautifully, it seemed, and within six months they expanded into the neighborhood, which cut into my client's growth prospects. The president stuck to his guns, though. He was decisively dysfunctional and dysfunctionally decisive.
Either model - crazy-gluing yourself to some approach that worked once, or inflexibly rejecting some approach that failed once - makes it harder to succeed. There are too many variables, shifts in the environment in which a decision plays out, to close yourself off to any chance of experimenting with a respectable idea again.
Just because some leadfoot got thrown out trying the "run until they get you out" mayhem play didn't mean Della would ever stop doing it or notching a few "inside-the-park homers" on balls that never got past the infield dirt.