When a batter steps up to the plate two things can happen: He will hit the ball or he will not hit the ball. If he does hit the ball it will lead to something like a ground out, fly out, or base hit. If he does not hit the ball that means he has been hit by the pitch, walked, struck out, you get the idea.
We know that the best pitchers—the Roy Halladays and Randy Johnsons of the world—strike out a lot of hitters and do not walk a lot of hitters. Weaker pitchers will do just the opposite. The Mike Mohlers and Tyler Greens (sorry guys!) walk almost as many guys as they strike out.
But when the ball is hit, we also know that a ball thrown by Randy Johnson is less likely to result in a base hit than one thrown by Tyler Green. Good pitchers are better than weak pitchers whether or not the ball is hit; this has been common knowledge for a hundred years. The only problem is: it is not true.
Most people reject this the first time they hear it; I was skeptical too. So let’s bring statistics into the discussion to settle it. The numbers do not lie.
First, Randy Johnson struck out 28% of the batters he faced; Tyler Green struck out 15%. Both observations hold, and by all accounts Johnson is the better pitcher, but that is not in question. The question is: when a batter hits a ball thrown by Randy Johnson, is it more likely to become an out than when that batter hits a ball thrown by Tyler Green?
Common stats like batting average use all at-bats, regardless of the outcome. We need to consider only a subset of at-bats: the ones in which the ball is hit into play. The stat for this is Batting Average on Balls In Play (BABIP). Home runs are not counted as balls in play for BABIP because fielders have no opportunity to turn them into outs.
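The standard BABIP formula is (H − HR) / (AB − K − HR + SF). A minimal sketch in Python; the stat line below is made up purely for illustration, not any real pitcher's numbers:

```python
def babip(hits, home_runs, at_bats, strikeouts, sac_flies=0):
    """Batting Average on Balls In Play.

    Home runs are excluded from both the numerator and the
    denominator because fielders have no chance to turn them
    into outs. Strikeouts are excluded because the ball was
    never put into play at all.
    """
    balls_in_play = at_bats - strikeouts - home_runs + sac_flies
    return (hits - home_runs) / balls_in_play

# Hypothetical line against a pitcher: 180 hits, 20 HR allowed,
# 700 at-bats, 200 strikeouts, 5 sacrifice flies.
print(round(babip(180, 20, 700, 200, 5), 3))  # 160 hits on 485 balls in play
```

Note that sacrifice flies are added back into the denominator: they are not at-bats, but they are balls put into play that fielders had a chance to catch.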
Our hypothesis, based on that ‘common knowledge of the past hundred years,’ is that hitters who faced Johnson would have a much lower BABIP than those who faced Green.
Thanks to Fangraphs, which provides every pitcher’s BABIP for every season, we can test this pretty easily. In 1995, Johnson pitched 214 innings, led the league with a 2.48 ERA, and won the Cy Young Award. The hitters he faced had a BABIP of .301. In other words, 30% of the batters who put a Johnson pitch into play reached base safely; the other 70% were turned into outs.
That same season, Green threw 140 innings with an ERA of 5.31, which was good for 19th best on the Phillies. The hitters he faced had a BABIP of .313, meaning 31% of the batters who put a Green pitch into play reached base safely. Green’s BABIP differs from Johnson’s by just 12 hits per 1,000 balls in play; the two are virtually identical.
Obviously this is a fluke, right? I just picked two seasons I knew would work… Let’s try it again.
In 1998, Johnson split time between the Mariners and the Diamondbacks, but his season-long ERA of 3.28 would have ranked in the top 10 in either league. Hitters who faced Johnson in 1998 had a BABIP of .320.
Green’s 1998 ERA was not fantastic at 5.03, and this would be the last season Green pitched in the Majors. Yet hitters who faced Green in 1998 had a BABIP of .254, almost seven percentage points lower than Johnson’s.
You are welcome to look at the numbers yourself, but this example is no fluke. Whether a pitcher is ‘good’ or ‘poor,’ the typical pitcher BABIP sits between .290 and .300. As we have seen with Green and Johnson, individual seasons often fall outside that range, and there is little correlation from one season to the next. Over a career, though, things tend to average out.
Johnson pitched for over 20 years, with season BABIPs ranging from .247 (in 1990) all the way up to .348 (in 2003). Based on those numbers it appears his BABIP increased over his career, but the summer after that 2003 high it fell back down to .264, the second-lowest of his career. Johnson’s career BABIP ended up at .291. Meanwhile, Green pitched just four seasons in the 1990s, and his career BABIP came out to .289. Again, virtually identical to Johnson’s.
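A career BABIP is just the season BABIPs weighted by how many balls were put into play each year. A small sketch of that averaging, using entirely hypothetical balls-in-play counts (not Johnson's real totals) to show how wildly varying seasons can still settle near the league norm:

```python
# Each tuple: (balls in play that season, season BABIP).
# These counts are invented for illustration only.
seasons = [
    (520, 0.247),
    (505, 0.310),
    (540, 0.348),
    (498, 0.264),
    (530, 0.290),
]

total_bip = sum(bip for bip, _ in seasons)
total_hits_in_play = sum(bip * rate for bip, rate in seasons)
career_babip = total_hits_in_play / total_bip

print(round(career_babip, 3))  # lands near the typical .290-.300 band
```

The individual seasons here swing by over a hundred points, yet the weighted average comes out close to .290, which is the same pattern the article describes for Johnson and Green.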
Since a fellow named Voros McCracken first discovered this in the late 1990s, people have been just as surprised as you probably are, and they have been trying to disprove it ever since. No one has, though numerous studies have shown that pitchers do have some effect on balls hit into play, albeit far less than our original hypothesis assumed.
Numerous factors play a role in whether a ball in play falls in for a hit: the quality of the fielders, the size of the park (remember, home runs don’t count as balls in play), and the pitcher himself (groundball pitchers do tend to have lower BABIPs).
The biggest factor, which tends to be frustrating for a lot of people, continues to be luck. Sometimes a batter will hit a line drive—absolutely drill the ball—right into the glove of a shortstop. Other times the shortstop is two feet to the left and the ball flies into the gap for a double. The difference between those two scenarios is luck (possibly it is good positioning by the shortstop, but more likely luck).
This should, in theory, extend to everyone. If you or I walked out to the mound in a Major League game and started lobbing the ball in there, the balls hit into play would not fall for hits 95% of the time, as we might assume. The BABIP would likely be higher than .300, but perhaps not much higher. In fact, this is demonstrated every year during the Home Run Derby. Put a normal defense behind the Derby pitcher, who wants the batter to kill the ball, and the batter would post a good batting average but would still be put out plenty of times.