After reading Heck's excellent post about recruiting and player development, I had a few things to add and hopefully improve upon. Long, first post with graphs, so this should be good. I also used quite a few things from MGoBlog's Mathlete.
Two things stood out to me that I either disagreed with or wanted to improve upon: the formulation of the average recruiting ranking, and also the way we view "success."
First, I feel like Heck's use of average recruiting rankings is useful but could be improved. The general question is, "do coaches get the most out of their players?" - so in the most general sense, we're comparing how good a team's recruiting classes were to that team's results. Importantly, we're viewing success over the 2007-2011 seasons, so the players in the 2012 recruiting class can be thrown out of this analysis for now.
Further, players from the 2003 recruiting class forward all contributed to team success between 2007 and 2011. But these players did not all contribute evenly, simply because each recruit has only four years of eligibility. My proposition is this - let's weight each recruiting class from 2003 forward by the number of seasons a recruit from that class could have contributed. A player in the 2003 class, if they redshirted, could only have contributed in 2007, so the 2003 class gets a weight of 1. A player in the 2004 class could have redshirted and then played in 2007 and 2008, so that class gets a weight of 2, and so forth, up to the four-season eligibility cap. (Important note - 2006, 2007, and 2008 all have weights of 4.) In this way, the 2006, 2007, and 2008 classes carry the most weight in terms of recruiting.
We'll sum each class's ranking times its weight, then divide by the sum of the weights (24). Here's Heck's table with my addition:
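The weighting scheme above can be sketched in a few lines of Python. This is a minimal sketch of the idea; the class rankings at the bottom are hypothetical placeholders, not any team's actual numbers:

```python
# Sketch of the class-weighting scheme: a recruit in class Y can (with a
# redshirt) appear in seasons Y through Y+4, but in at most 4 of them.
# The weight is how many of those seasons fall in the 2007-2011 window.

def class_weight(class_year, start=2007, end=2011):
    possible_seasons = set(range(class_year, class_year + 5))  # redshirt + 4 years
    return min(4, len(possible_seasons & set(range(start, end + 1))))

weights = {y: class_weight(y) for y in range(2003, 2012)}
# 2003 -> 1, 2004 -> 2, 2005 -> 3, 2006-2008 -> 4, 2009 -> 3, 2010 -> 2,
# 2011 -> 1; the weights sum to 24, the divisor used above.

# Hypothetical class rankings for one team (illustration only):
ranks = {2003: 30, 2004: 25, 2005: 28, 2006: 20, 2007: 22,
         2008: 18, 2009: 24, 2010: 26, 2011: 21}

weighted_avg = sum(weights[y] * ranks[y] for y in weights) / sum(weights.values())
```

The nice property of this setup is that swapping the window endpoints (say, to re-run the analysis for 2008-2012 once the 2012 class has played) requires no other changes.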
Not too many major differences here. Illinois' recruiting in '06-'08 brings them up to MSU's level, while OSU, UM, and PSU all separate from the pack a bit. Our relatively unheralded classes over the heavily weighted years bring us down (and make our recent success even more impressive).
Using these revised average class rankings, here's Heck's chart of teams with average All Big Ten players against average recruiting class ranking:
I left out specific team labels in favor of the regression line. This shows that average recruiting class rankings still don't explain the number of All Big Ten performers very well. Those three teams bunched together? Iowa, Wisconsin, and Michigan State. Dantonio wasn't kidding about modeling his program after the ones in Madison and Iowa City. The only other team to outperform their recruiting ranking expectation (by having their data point above the regression line)? Penn State.
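For anyone who wants to see what "above the regression line" means mechanically, here's a rough sketch using `numpy.polyfit`. The data points and team labels are made up for illustration; the logic is the part that matters:

```python
import numpy as np

# Invented data for illustration (not the post's actual numbers):
# x = weighted average class rank, y = average All Big Ten players per year.
avg_class_rank = np.array([10., 15., 25., 30., 35., 40.])
all_big_ten = np.array([8., 7., 5., 6., 5.5, 3.])
teams = ["A", "B", "C", "D", "E", "F"]

# Fit the regression line and compute each team's residual.
slope, intercept = np.polyfit(avg_class_rank, all_big_ten, 1)
predicted = slope * avg_class_rank + intercept
residuals = all_big_ten - predicted

# A positive residual puts the point above the line: more All Big Ten
# performers than the team's recruiting rank predicts ("overperforming").
overachievers = [t for t, r in zip(teams, residuals) if r > 0]
```

Since a lower class rank means better recruiting, the fitted slope comes out negative, and the overachievers are exactly the points sitting above the line.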
On to my second point - how we characterize success. I wholeheartedly agree with Heck that using NFL draft success as an indicator of college success is not a great way to view player development. However, I also don't believe that using All Big Ten teams is a great way to view success either. Most generally, I'm not particularly interested in whether or not my team has a bunch of All Big Ten performers, I'm interested in whether or not my team as a whole is any good or not. Further, All Big Ten teams seem to me to have the same problems as an MLB All-Star team: the voters try to spread the wealth, and at certain positions it's tough to tell if one player is necessarily better than another.
So, I'm gonna use Heck against himself here - New Math!
I think a better way to view success (outside of wins) is Football Outsiders' F/+. F/+ is a combination of the Success Rate and EqPts-per-play measures Heck uses to analyze games, and FEI, which is used to evaluate whole drives. It's important to note here that I'm using F/+ rank over the five years we're trying to analyze (2007-2011), where lower = better. MOAR CHARTS/GRAPHS:
[Table: teams ranked by 5-year F/+, 2007-2011]
Ohio State was so dominant before 2011 that they're still on top here. Michigan outrageously underperformed as I chuckled every Saturday in 2008 and 2009. Wisconsin and Penn State were quite solid, while both MSU and Iowa were a bit up and down. Illinois was buoyed by one solid season (2007) and well, everyone else was pretty bad but Northwestern made a habit of winning a ton of close games.
Uh oh. Recruiting rankings are actually very good at describing whether or not a team ranked well in terms of F/+. The three teams bunched above the regression line are Wisconsin, Iowa, and Michigan State, joined by Penn State. Illinois, hilariously, performed just as well as expected given their recruiting (and so did Ohio State), while Michigan was extremely disappointing as expected. There's not much else to say here - in a very noisy data set, teams that recruited well played well. An important note, however, is that teams which maintained staff continuity during this time tended to at least meet expectations, including even the Zooker. And even in a graph where recruiting appears to have a large effect, it's impossible not to see the effect of coaching.
(Here is the listing of F/+ rankings from last season and some other links to the explanation of that statistic. I used Football Study Hall's 5-year average F/+ ranking from their listing of previews and statistical profiles here, which are awesome.)
Just for funsies, I wanted to check out how these recruiting rankings tested against wins and conference wins, which really are the ultimate test of success. Guess what I brought extras of?
[Table: wins per year and conference wins per year for each team, plus season-by-season wins and conference wins, 2007-2011]
Couple neat things - we've won more conference games than anybody in the last five years if you don't count vacated wins. Second, Northwestern wins more games than it should, based on really any metric. Hey Michigan fans, how does it feel to be worse than Northwestern in the last half-decade? Also, Indiana is more pitiful than I thought. Two more graphs and that's it I swear:
Recruiting average does an average job of predicting wins. Better than All Big Ten performers but not as good as F/+. Northwestern becomes the overachiever we'd all expected. Interestingly enough, wins are kind of a poor way to measure how good a team is, due to randomness like fumble luck (or, remember that Halloween game in 2009 where Minnesota caught a long touchdown after the ball bounced off of a lifeless receiver's hands? Unfortunately no stat for that).
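The "better than All Big Ten performers but not as good as F/+" comparison is really a comparison of fit quality, which you can make concrete with R-squared. This is a rough sketch with invented numbers, not the post's actual data:

```python
import numpy as np

def r_squared(x, y):
    # Share of variance in y explained by a linear fit on x.
    slope, intercept = np.polyfit(x, y, 1)
    resid = y - (slope * x + intercept)
    return 1 - np.sum(resid ** 2) / np.sum((y - y.mean()) ** 2)

# Hypothetical values for illustration only:
avg_rank = np.array([10., 15., 25., 30., 35., 40.])
wins_per_year = np.array([9.5, 8.0, 8.5, 7.0, 6.0, 5.5])  # noisier metric
fplus_rank = np.array([12., 20., 30., 38., 45., 55.])     # cleaner metric

r2_wins = r_squared(avg_rank, wins_per_year)
r2_fplus = r_squared(avg_rank, fplus_rank)
```

In this toy setup the F/+ fit explains more variance than the wins fit, which mirrors the ordering in the post: wins carry extra randomness (fumble luck, close games) that a play-by-play efficiency metric averages out.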
I want to confront the notion that recruiting rankings are everything, however. The Mathlete contends that, "Conference championships can be won with middle of the road talent for the conference. Last year Wisconsin won the Big Ten with the 7th highest rated roster in the conference. But over the long haul talent will win out."
I will say, Mathlete, perhaps you're right. But if Wisconsin had the seventh best roster in the conference and lost their two conference games by a hair, why couldn't they have been in the title game and potentially won? Why couldn't an Oklahoma State team have won the national title last year with a below average roster?
The possible answer is that in the BCS system, perception is everything. If you don't have a roster full of elite players according to recruiting services, you probably won't get a shot even if you're Boise and you went undefeated. With the small sample size we have to work with, I believe we'll see a Cinderella sooner or later, especially with the aid of the coming playoff.
In conclusion, here's the picture I want to paint about recruiting: it really helps, but an elite recruiting profile isn't necessary to contend. Further, having a knack for winning close games will inflate your win totals even if you're a merely average recruiting team. If, somehow, you can get all of your close losses out of the way in one year (like MSU in 2009) and save the close wins for a year when you make a run (like MSU in 2010), that is a pretty decent way to go. Staff continuity also appears to be a major boost to outperforming expectations. It's a complicated, noisy scenario, and over the long haul, recruiting doesn't always win.