This is not so much a blog post as it is an extended comment on these three thought-provoking posts from the three keenest tempo-free analysts in the known universe. Before we proceed, I'd encourage you to right-click on each of the three hyperlinks in that green phrase to open the posts in new tabs, read the three posts in their entirety, and then come back here.
[Sips on coffee while waiting for readers to return.]
OK, you're done? In contemplating the broad question of how exactly tempo-free efficiency measures can/should be used in evaluating college basketball teams, I've had two thoughts bouncing around in my head. These thoughts may (or may not) have some utility to other observers, so I'll throw them out there for your consideration.
Thought #1: Tempo-free stats are more useful for telling us why a team is good (or bad) than they are for telling us whether a team is good (or bad).
As of Thursday, the top ten teams in the KenPom rankings were Duke, Kansas, Syracuse, BYU, Purdue, West Virginia, Kentucky, Ohio State, Kansas State, and Texas. While most commentators would no doubt rearrange the order those ten teams are listed in, there's not an entry on that list that seems dramatically out of place. (Texas is only ranked #21 in the current AP poll, but that's because poll voters tend to overweight recent performance.)
Similarly, Mr. Gasaway's most recent conference-only efficiency numbers identify Duke, Kansas, West Virginia/Syracuse, Ohio State/Wisconsin/Purdue, Cal, and Kentucky as the top teams in the six respective BCS conferences. Again, nary a shocker on the list.
Tempo-free statistics can provide a guide to which teams may be slightly over- or underrated relative to conventional wisdom, but if they didn't exist, college basketball fans would not struggle to figure out which teams are the good ones. (I'm pretty sure I'm subconsciously plagiarizing that assertion--probably from Mr. Gasaway.)
What the numbers can tell us in a more counterintuitive fashion is what exactly it is that makes the good teams good. This year's figures, for example, demonstrate that:
- Despite Bob Huggins' reputation as a coach who specializes in teaching defense, West Virginia is excelling more because they score efficiently than because they stop their opponents from doing so.
- While Texas may play a fast, exciting brand of basketball, their recent struggles have been more a function of offensive problems than defensive letdowns.
- Wisconsin (KenPom's #11 team) plays methodically, but their method has been somewhat more effective on offense than on defense. (You knew that one, right?)
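For readers new to the tempo-free approach, the offense/defense decomposition behind those bullets is simple arithmetic: estimate possessions from the box score, then express points scored and points allowed per 100 possessions. Here's a minimal sketch--the box-score numbers below are invented for illustration, not real stats for any of the teams named above:

```python
# Minimal sketch of tempo-free efficiency, per-100-possessions style.
# All box-score numbers here are made up for illustration.

def possessions(fga, orb, to, fta):
    # Common estimate: field-goal attempts, minus offensive rebounds
    # (which extend a possession), plus turnovers, plus a fraction of
    # free-throw attempts (not every FTA ends a possession).
    return fga - orb + to + 0.475 * fta

def efficiency(points, poss):
    # Points per 100 possessions.
    return 100.0 * points / poss

# Hypothetical single-game lines for a team and its opponent.
team_poss = possessions(fga=58, orb=11, to=12, fta=22)
opp_poss = possessions(fga=60, orb=9, to=15, fta=18)

off_eff = efficiency(78, team_poss)  # offensive efficiency (higher is better)
def_eff = efficiency(65, opp_poss)   # defensive efficiency (lower is better)

print(f"Possessions: {team_poss:.1f}")
print(f"Offense: {off_eff:.1f} pts/100 | Defense: {def_eff:.1f} pts/100")
print(f"Efficiency margin: {off_eff - def_eff:+.1f}")
```

Because both sides of the ball are on the same per-possession scale, you can see at a glance whether a team's margin comes from its offense, its defense, or both--which is exactly the "why" question these numbers answer better than the "whether" question.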
So that's my first thought: We should pay less attention to the order in which the tempo-free numbers rank basketball teams and more attention to the specific reasons the teams find themselves in their respective ranking positions.
Thought #2: It's still about wins and losses, baby.
My first point notwithstanding, tempo-free numbers, because they account for margin of victory, have predictive value (as do Sagarin's PREDICTOR ratings, even though they're not technically tempo-free). They tell us which teams have played fairly well but come up short in close games.
That's a useful tool for purposes of looking ahead at a team's prospects down the road, but once the buzzer sounds at the end of the game, a team has either won or it's lost. It's a binary world. I'm wading into the realm of subjective philosophy here, but I think that when you look back at a team's performance (for, say, purposes of determining NCAA Tournament suitability/seeding), you have to err on the side of rewarding teams that came out on the right side of contests with narrow score differentials.
Does that mean some teams will be punished for what boils down to bad luck? It certainly does. But luck is part of the game--and part of what makes competition (and, dare I say, life) so thrilling.
As a committed fan of a particular college basketball team, I'd much rather see our Spartans go 14-4 in Big Ten play by winning a half dozen close games (still a possibility!) than finish 12-6 having blown out the vast majority of the 12 opponents they beat but lost 6 close games. Tempo-free analysis will tell me the 12-6 team is fundamentally better than its record says it is (and vice versa for the 14-4 team), but it doesn't change the results in the scorebook.
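The 14-4 versus 12-6 scenario can be made concrete with a quick sketch. The two 18-game margin lists below are entirely invented, but they capture the shape of the argument: one team wins its close games, the other piles up blowouts and drops the nail-biters:

```python
# Two invented 18-game conference schedules illustrating the scenario.
# Positive margin = win, negative = loss. Numbers are made up.

close_game_winners = [2, 1, 3, 2, 1, 2,            # six narrow wins
                      8, 10, 7, 9, 6, 8, 11, 9,    # eight comfortable wins
                      -5, -6, -4, -7]              # four losses

blowout_kings = [18, 20, 15, 17, 19, 16,           # twelve blowout wins
                 14, 21, 18, 16, 15, 20,
                 -2, -1, -3, -2, -1, -2]           # six narrow losses

def record(margins):
    wins = sum(1 for m in margins if m > 0)
    return wins, len(margins) - wins

for name, margins in [("Close-game winners", close_game_winners),
                      ("Blowout kings", blowout_kings)]:
    w, losses = record(margins)
    print(f"{name}: {w}-{losses}, total margin {sum(margins):+d}")
```

The blowout team finishes 12-6 with a far larger total point differential, so any margin-based rating will rank it ahead of the 14-4 team--even though the scorebook, and the conference standings, say otherwise. That tension is precisely the one I'm describing.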
Tempo-free efficiency measures can tell us something about the future, but they can't change the past--and success can only concretely lie in the past.
I've perhaps stated the second part of my thinking on this matter too starkly. I don't have any problem if the gentlemen on the NCAA Selection Committee choose to take a gander at the KenPom ratings as they deliberate. (My favorite computer ratings are the merged Sagarin Ratings, which do a nice job of blending a binary-outcome-based perspective with a point-differential-is-king perspective.) If nothing else, the computer ratings are probably the best way to judge strength of schedule--which most definitely should be accounted for by the selection committee.
But I think we tempo-free adherents should recognize that our efficiency measures are ultimately a means for analysis, and not an end unto themselves. (Note that this sentence successfully defeats a straw man argument, as I don't think any basketball observer, no matter how statistically-inclined, would argue that teams should be judged solely on cumulative point differential. Hey, I warned you these were just thoughts bouncing around in my head.)
Two cents deposited.