

Michigan State Spartans Football: How to Fix "Rank By Offers"

Joe breaks down a new recruiting ranking method and shows why it is flawed


There's a new recruiting ranking method making the rounds this week: Rank By Offers. The pitch is that "Rank By Offers" (let's just call it RBO) is an alternative, objective recruiting ranking method. The reason somebody like Mike Griffith cares is that it ranks MSU's class in the top five (No. 4, specifically) in the nation.

The problem is that RBO's algorithm has some major flaws.

The Method

First, RBO uses Sagarin's Elo ratings over the past three years to create a weighted average for every team; the higher the weighted average, the more an offer from that team is worth. The top five weights belong to Ohio State, Florida State, Oregon, Alabama, and Michigan State.
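As a rough sketch of that weighting step (the year weights and the Elo ratings below are my assumptions; RBO doesn't publish its exact formula):

```python
# Hypothetical sketch of RBO's team-weighting step: a weighted average of
# three seasons of Sagarin Elo ratings, with recent seasons counting more.
# The 0.5/0.3/0.2 split and the ratings are made up for illustration.
def team_offer_weight(elo_by_year, year_weights=(0.5, 0.3, 0.2)):
    """elo_by_year: three Elo ratings, most recent season first."""
    return sum(w * elo for w, elo in zip(year_weights, elo_by_year))

print(team_offer_weight([95.0, 94.0, 92.5]))  # 94.2 -- near MSU's cited 94.21
```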

Second, all offers are compiled, and a player receives points based on the offers he holds. An MSU offer, for example, adds 94.21 points to a recruit's total "offer points." (Messiah deWeaver's offer list and total "offer points" are listed here.) Players are ranked according to their total "offer points."

Third, all of a class's offer points are summed to rank classes. A class with more than 25 recruits gets capped; per the site: "Offer points for teams with over 25 commits are calculated as 25 times the average offer points per commit." The 2016 rankings are here.
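Putting steps two and three together, here's a minimal sketch (the team weights are illustrative guesses, aside from MSU's 94.21 cited above; the 25-commit cap follows the site's quoted rule):

```python
# Sketch of RBO's recruit and class scoring, per the description above.
TEAM_WEIGHTS = {"Michigan State": 94.21, "Toledo": 48.0, "Bowling Green": 41.0}

def recruit_points(offers):
    """A recruit's score: the sum of the weights of every offer he holds."""
    return sum(TEAM_WEIGHTS.get(team, 35.0) for team in offers)  # 35 = scale floor

def class_points(commits):
    """commits: one offer list per committed recruit. Classes with more than
    25 commits are capped at 25 times the average points per commit,
    per the site's stated rule."""
    scores = [recruit_points(offers) for offers in commits]
    if len(scores) > 25:
        return 25 * (sum(scores) / len(scores))
    return sum(scores)
```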

The Issues

Rankings

The most glaring problem to me is that going back just three years is simply not a good heuristic for offer value. I'm sorry, but tradition and resources are meaningful; only in my wildest fever dreams are offers from Utah State or Northern Illinois more valuable than one from Michigan. Going back further in time would help, and incorporating other numbers, like revenue and/or the ability to send players to the NFL, would probably help even more.

Aggregation

The other major issue is that offer points are aggregated, and team weights run on a scale of 35 to 99. This means that a recruit holding offers from both Bowling Green and Central Michigan can rank higher than a recruit offered by Alabama alone. For this reason, it isn't surprising that MSU's RBO list is basically just a list of who has the greatest number of offers. A marginal offer from a MAC school doesn't change how valuable a player is; a marginal offer from a traditional powerhouse does. In scatterplot form:

[Scatterplot: Offers Versus Offer Points, Per RBO]
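To put numbers on it (the weights here are made up, but sit on RBO's 35-to-99 scale):

```python
# Two mid-tier MAC offers can outscore a single elite offer under summation.
bowling_green, central_michigan, alabama = 50.0, 52.0, 99.0  # hypothetical weights

mac_only_recruit = bowling_green + central_michigan  # 102.0
bama_only_recruit = alabama                          # 99.0
assert mac_only_recruit > bama_only_recruit  # the MAC-only recruit ranks higher
```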

Regionalization

This aggregation also means that players from areas where many teams recruit will have inflated value. It makes total sense to me that Miami (FL) and Florida State have the top two classes for this reason: everyone is recruiting South Florida. It's also not surprising that Abdul Adams is the highest-rated MSU commit by RBO, since the D.C. area is central to many programs' recruiting footprints.

The Fixes

Average, Not Aggregate

How do you evaluate a recruit's offer list? Personally, I don't care if he's got a blank check to any MAC school he chooses; I want to know which offers were the best. For Messiah deWeaver, I care that he was offered by Michigan, Penn State, Louisville, Mississippi State, and Arkansas. I don't really care that he was offered by Toledo, Western Kentucky, and Bowling Green. I'm guessing most people feel the same way.

So here's my proposal: take the average of the top five offers a recruit receives. This solves the aggregation problem. Maybe it isn't perfect, but at least recruits aren't evaluated almost exclusively on the number of offers they hold.
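Here's a minimal sketch of that fix, reusing the hypothetical weights from above:

```python
# Proposed fix: score a recruit by the average of his five best offers,
# so stacking low-value offers can't inflate his ranking.
def recruit_points_top5(offers, weights, floor=35.0):
    best = sorted((weights.get(t, floor) for t in offers), reverse=True)[:5]
    return sum(best) / len(best)

# Under this scoring, the Alabama-only recruit (99.0) now outranks the
# recruit holding just the two MAC offers ((50.0 + 52.0) / 2 = 51.0).
```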

More Ranking Factors

The more difficult problem is accounting for the fact that offers from major programs are an order of magnitude more valuable than those from mid-majors. I have no idea exactly how to solve this, but I can tell you the answer likely includes more than just recent team quality.
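One purely speculative way to encode that order-of-magnitude gap (my idea, not anything RBO does): rescale the team weights exponentially so elite programs pull far ahead of the pack.

```python
import math

# Speculative sketch: map the linear 35-99 scale onto an exponential curve
# so a top-end offer is worth roughly 10x a bottom-end one.
def rescaled_weight(raw, lo=35.0, hi=99.0, spread=math.log(10)):
    """The spread factor is arbitrary; log(10) makes hi worth ~10x lo."""
    return math.exp(spread * (raw - lo) / (hi - lo))

print(rescaled_weight(35.0))  # 1.0
print(rescaled_weight(99.0))  # 10.0 -- an order of magnitude more valuable
```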

The Take-Away

Recruiting rankings are absolutely not perfect. But something like the 247Sports Composite rankings is more likely to smooth out outliers than something objective but flawed like RBO. Kudos to the RBO team for trying something new rather than just complaining about the current ratings, but they've still got some work to do.