
Managing to Get the Most Out of a Ballclub

"You've got a ballclub here that's playing 15 games a year below their ability. Eventually Sparky is going to get fired (wouldn't you think?) and then the truth of what I am saying will become generally known."

Bill James on the Detroit Tigers
in the '83 Baseball Abstract

By Kenneth Broder
Los Angeles Herald Examiner
May 15, 1988

Two years later Sparky Anderson's Detroit Tigers won the World Series and a couple years later copped another division title before losing to the eventual World Series champion Twins. The Tigers played at a .575 clip over a five-year period and Sparky wasn't fired.

Sparky still doesn't understand the basics of platooning, positioning fielders or making out a lineup card. He still destroys the brightest prospects in the farm system like Chris Pittaro and Nelson Simmons, and insists on playing career journeymen like Tom Brookens at key positions. But he keeps on winning.

Which brings up probably the biggest rap against Gene Mauch. Never mind that he turned around losing franchises in Philadelphia, Montreal, Minnesota and California. He still lost a lot of games in the hardball wars and a lot of crucial battles. Last year's final disappointing spinout into mediocrity after an encouraging start was apparently enough to drive him out of the game.

When stat freaks point an accusing finger at manager Mauch, they usually back it up with a fistful of studies about the futility of "little ball" strategies like the stolen base and the sacrifice. Pete Palmer has pretty much established that stealing a base has historically been worth about three-tenths of a run to a team, but getting caught in the act has cost twice as much. And an exhaustive Palmer study of box scores back to 1900, cited in this space last week, showed that, statistically, you're sacrificing the percentages if you use the sacrifice.
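Palmer's run values imply a break-even point for the stolen base. A minimal sketch, taking the column's figures at face value (about +0.3 runs per steal and twice that, -0.6 runs, per caught stealing; the exact values are Palmer's, the function names are mine):

```python
# Break-even stolen-base success rate implied by Pete Palmer's
# run values: roughly +0.3 runs per successful steal and about
# twice that (-0.6 runs) per caught stealing.

STEAL_VALUE = 0.3    # runs gained per successful steal
CAUGHT_COST = 0.6    # runs lost per caught stealing

def runs_per_attempt(success_rate):
    """Expected run value of a single steal attempt."""
    return success_rate * STEAL_VALUE - (1 - success_rate) * CAUGHT_COST

# Setting the expectation to zero: a runner must succeed about
# two-thirds of the time just to break even.
break_even = CAUGHT_COST / (STEAL_VALUE + CAUGHT_COST)
print(round(break_even, 3))              # 0.667
print(round(runs_per_attempt(0.80), 2))  # 0.12 runs per attempt at 80%
```

Which is the stat freaks' point: a runner who steals at a merely decent clip is actively costing his team runs.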

What they don't tell you is how the effectiveness of these strategies varies with each manager. The stat freaks may be right about the general counterproductiveness of playing for one run at the cost of a big inning, but that doesn't mean that some manager, like Mauch, isn't using it selectively enough to work for him. As far as I can tell, no one has done a study that accurately second-guesses how many runs a team should have scored with proper managing versus what they did score. And that includes me.

Last week's column gave oh-too-prominent play to a question I'd wanted to explore for a long while. Do some skippers consistently manage teams that fail to score the number of runs predicted by the following formula?

Runs = (Hits + Walks) x (Total Bases) / (At-Bats + Walks)

That formula, known as Runs Created, has been found to predict with well over 90 percent accuracy the number of runs a team will score in the course of a season. If a manager's teams deviated radically from the predicted total year after year, I wondered, perhaps it could be ascribed to his on-field moves.
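The formula is simple enough to check by hand. A sketch of the calculation as printed above, applied to an illustrative set of season totals (the numbers are hypothetical, not any particular club's):

```python
def runs_created(hits, walks, total_bases, at_bats):
    """Bill James' basic Runs Created, as given in the column:
    (Hits + Walks) x Total Bases / (At-Bats + Walks)."""
    return (hits + walks) * total_bases / (at_bats + walks)

# Illustrative season totals (hypothetical, for demonstration only):
rc = runs_created(hits=1500, walks=550, total_bases=2300, at_bats=5500)
print(round(rc, 1))  # 779.3 -- the formula's projected runs scored
```

Comparing that projection with the runs a team actually scored, year after year under one manager, is the study described here.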

It was a dubious proposition to begin with whose meaning, if any, was totally obscured by a simple mathematical error on my part that reversed all the results for Mauch. Of the seven managers I studied, Mauch's teams were the second-most efficient at converting raw production into runs, not the worst.

But even if you establish that Gene Mauch teams consistently scored more runs than the formula predicts, what have you proven? Is it better to score a runner from first with a couple of sacrifices (that don't show up in the formula) and a base hit than to drive him home with a triple (that does)?

I don't know and shouldn't have guessed, though I suspect I'm on to something with this study. It is curious that Whitey Herzog and Mauch, considered the master strategists of their time, rank Nos. 1 and 2, and that Tommy Lasorda is the only manager with a negative runs differential. Then again, maybe my calculator batteries were just running low by the time I got to the pasta master.

What piqued my interest most about the discrepancy between runs scored and projected runs scored was its apparent defiance of the "Johnson effect," named for Toronto journalist Bryan Johnson, who first speculated about the phenomenon (though it took a stat freak to establish its existence). The "Johnson effect" originally took note of the fact that teams tend to have won-loss records in proportion to the number of runs they score versus how many they give up.

It states that when a team outperforms its projection by any significant amount, the team probably will pay a price the following year and win fewer games than projected.

The law of averages will catch up to it; its luck will run out. The converse is also true for teams that win fewer games than the projection; they should outperform the projection the following year.

Bill James took it one step further and came up with some strong evidence that the "Johnson effect" also holds true for scoring runs. If it is indeed true, teams should not consistently outperform or underperform their projected runs scored year after year. While the Herzog and Mauch teams adhere to the "Johnson effect" for winning games, they seem to defy it in a big way for scoring runs.

Using a formula that projects a team's won-loss record by figuring the ratio between the square of its runs scored and runs allowed, then multiplying it by games played, it appears last year's luckiest teams were the Expos, Phillies and Cubs in the National League; Twins, Brewers and Yankees in the American League. The unluckiest teams were the Padres, Giants and Braves in the NL; and Blue Jays, Red Sox and White Sox in the AL.
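The projection works like this. A minimal sketch of the squared-runs formula just described, run against the Blue Jays' figures from the table at the end of the column (96-66 record, 845 runs scored, 655 allowed):

```python
def projected_wins(runs_scored, runs_allowed, games=162):
    """Project wins from the ratio of squared runs scored to the
    sum of squared runs scored and squared runs allowed."""
    rs2, ra2 = runs_scored ** 2, runs_allowed ** 2
    return games * rs2 / (rs2 + ra2)

# 1987 Blue Jays: 845 runs scored, 655 allowed, actual record 96-66.
luck = 96 - projected_wins(845, 655)
print(round(luck, 1))  # -5.2  -- about five wins shy of the projection
```

A negative difference marks an "unlucky" team; by the Johnson effect, it should win more games than projected the following year.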

At the end of the year, we'll see how well their luck really held up.

As Luck Would Have It

The Fortunate Ones

Team        Won  Loss  Runs  Opp. Runs  Games Diff.



The Unfortunate Ones

Team        Won  Loss  Runs  Opp. Runs  Games Diff.

Blue Jays    96    66   845     655        -5.2
Red Sox      78    84   842     825        -4.7
White Sox    77    85   748     746        -4.2