Go Joe Bruin’s Nathan Eberhardt unveils his personal NCAA football computer rankings (and formula) of all the teams in the nation, with a focus on those that have an influence on UCLA football.
NCAA football is a notoriously difficult sport to analyze. There are 130 teams, 124 of which are spread across 10 conferences, with wildly disparate financial resources (LSU versus Louisiana-Monroe) and admissions standards (Stanford versus San Jose State).
At the end of the year, the available data set from which to draw conclusions and definitively assess all 130 teams is somewhere between 11 and 13 games. We’re expected to take UNLV, UConn, Utah, and the U and place them in rank order because of what they did in 12 games against no common opponents. No wonder the championship-then-BCS-then-Playoff discussions in this sport have been so contentious.
The downside of human polls is obvious. Geographic differences, time restrictions, pre-existing assumptions, and recency biases all conspire to render a subjective poll insufficient for producing a definitive 1-130 ranking.
Computer polls have been a fixture in college football at least since the dawn of the BCS. Follow this sport long enough, and you’re sure to have heard of Sagarin, Billingsley, Colley, Massey, etc. But the problem with most computer polls is that they fall in love with their own complexity. Just because you can create an algorithm that factors in passing efficiency, yardage differential, and time of possession doesn’t mean those things help determine a more accurate ranking.
There are ball-control teams and quick-strike teams. There are triple option teams and air raid teams. There are bend-but-don’t-break defenses and ball-hawk defenses. The computer polls tend to err by measuring a team by how well it lines up with the designers’ vision of the kind of team that wins games rather than by its own merits and accomplishments.
Part of this problem is the confusion about what the rankings are supposed to measure. Do they measure how good a team is at a given moment? How likely it is to beat another given team on a neutral field? The quality of its resume? Some nebulous combination of all three?
I maintain that the only thing a ranking should measure is quality of resume. It’s pointless to try to separate a team from its record and say, “yeah, but they’re really THIS good.” It’s pointless to try to guess at some idealized, platonic measure of essential quality separate from the results on the field.
If the whole point of the sport is to win games, then all those other measures of quality are really in service of winning games. And we can assess their value by assessing the value of the games won or lost.
To this end, in 2015, I devised a very simple ranking rubric that measures each game on four inputs:
- Did you win or did you lose?
- Was your opponent ranked in the top 10, top 25, or not at all?
- What is the record of your opponent?
- What was the scoring margin of your game?
That’s one input for a team’s record, two for its strength of schedule, and one for margin of victory. The poll is self-informing and retroactive: the top-10 and top-25 designations come from the poll itself, not from any outside source, and as a team proceeds through the year (amassing wins and losses, rising or falling in the poll), the values of inputs 2 and 3 change for every one of its opponents.
For example, when Stanford beat a ranked and undefeated USC, Stanford got credit for beating a ranked and undefeated team. But as USC fell from the rankings and lost again, that win is worth less to Stanford because it now counts as a win over an unranked, two-loss team. That will continue throughout the rest of the season.
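The self-informing loop above can be sketched in code. To be clear, this is my illustration, not the article's actual formula: every point weight below (the 3.0 win value, the 2.0/1.0 ranked-opponent bonuses, the 21-point margin cap, the loss penalty) is a hypothetical stand-in, since the article does not publish its weights. What the sketch does show faithfully is the structure: the four inputs per game, and the iteration that re-ranks teams until the rankings feeding the formula match the rankings it produces.

```python
from collections import defaultdict

def rank_teams(games, max_iters=20):
    """games: list of (winner, loser, margin_of_victory) tuples.
    Returns the teams in rank order, best first.
    All point weights are hypothetical placeholders."""
    teams = sorted({t for w, l, _ in games for t in (w, l)})
    records = defaultdict(lambda: [0, 0])            # team -> [wins, losses]
    for w, l, _ in games:
        records[w][0] += 1
        records[l][1] += 1

    ranks = {t: None for t in teams}                 # nobody is ranked at first
    ordered = teams
    for _ in range(max_iters):                       # iterate to a fixed point
        scores = {t: 0.0 for t in teams}
        for w, l, margin in games:
            cap = min(margin, 21) / 21.0             # input 4: capped scoring margin
            for team, opp, won in ((w, l, True), (l, w, False)):
                r = ranks[opp]                       # input 2: opponent ranked top 10/25?
                bonus = 2.0 if (r and r <= 10) else (1.0 if (r and r <= 25) else 0.0)
                ow, ol = records[opp]
                opp_pct = ow / max(ow + ol, 1)       # input 3: opponent's record
                if won:                              # input 1: win or loss
                    scores[team] += 3.0 + bonus + 2.0 * opp_pct + cap
                else:                                # losing to a ranked, winning team hurts less
                    scores[team] -= max(0.0, 3.0 - bonus - 2.0 * opp_pct) + cap
        ordered = sorted(teams, key=lambda t: -scores[t])
        new_ranks = {t: i + 1 for i, t in enumerate(ordered)}
        if new_ranks == ranks:                       # rankings are self-consistent; stop
            break
        ranks = new_ranks                            # re-value every opponent and repeat
    return ordered
```

Because the whole table is recomputed from scratch each week, the retroactive behavior falls out for free: add USC's later losses to the `games` list, re-run, and Stanford's early win over USC is automatically re-valued as a win over an unranked, two-loss team.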
The poll is not meant to be predictive. In other words, I make no claim as to the likelihood that No. 1 would beat No. 2 on a neutral field X% of the time. Nor is this a projection of how I think the teams will finish the year. The schedules are unbalanced – some are front-loaded, others are back-loaded – so I make no claim about how this week’s ranking will relate to the final one.
In past seasons, my poll has roughly tracked with the final College Football Playoff rankings. Whether this lends credibility to my methodology or merely demonstrates how unnecessary it is – well, that’s up to you. The only consistent difference is that my poll tends to find more value in the resumes of top G5 teams than either the CFP or the AP does.
Here are the top 10 of my poll after Week 5, plus UCLA’s non-conference foes and the rest of the Pac-12: