Who's More Correct? A New Idea in Rankings
I like that college football uses a ranking system to determine its top teams. I think it's more reliable than going by straight wins and losses, as other sports do; I think it acts as a great failsafe against referee or officiating problems; and I think it gives us a better chance of naming the best team as champion. Overall, rankings are necessary given the unique structure of the college football season.
But everybody knows there are major problems with the rankings as they stand now.
There's no real guide as to what the rankings represent - the best team, the most powerful team, the team with the most potential, the team you'd favor to beat any other, and so on. Voters are biased toward certain teams, conferences, or regions, as much as they might say they're not. And then there's the issue we're going to discuss today - the inability, some would call it indifference, of voters to know as much as they can about the games and results in order to rank teams as "correctly" as possible.
You know the stories. There was that guy in the Harris poll who thought Penn State was still undefeated in late November; there are the coaches who don't get to watch any of the games because they're too busy focusing on their own team; there are the sportswriters who cover only one team all season. In general, there's just too much information to process - too many games, too many plays, too many teams, and not enough time to cover everything. So how can we trust these voters' rankings when we know they can't take all of the relevant information into account?
In reality, there is no such thing as a "correct" ranking, because "correct" implies objectivity. Rankings are inherently subjective - you can't "correctly" place football teams in order from best to worst any more than you can place fruit or states in such an order. If you could, the polls might as well just use a single person's rankings. (We could call it the Fred Poll, or the Steve Poll - it would be the one poll to rule them all.) But even though a universal "correct" poll might not be achievable, does that mean we shouldn't even try, or that we can't strive to make our polls as "correct" as possible? No, of course not. Part of the reason the whole exercise isn't futile is that even though the polls aren't objective, they aren't completely subjective either. There are a lot of elements that people can look at and come to the same, universal conclusion about. So while there might not be a "correct" ranking, there certainly are "incorrect" ones, and thus varying degrees of correctness.
For instance, everybody knows that 4-8 Stanford isn't better than 9-4 Oregon - Oregon should obviously be ranked higher by almost any relevant, competitive measure, and if it's not, the poll is obviously incorrect. It's when you start dealing with 9-4 Oregon and 10-3 Arizona State or 9-4 Oregon State that people's lack of information comes into play. In those situations, dealing with teams that have similar records, each voter is going to have his or her own idea of who is better, based on different information. So how do you know which voters' rankings are more "correct" and which are less "correct"? Is there a way to tell?
Yes.
Have the voters predict the winner of each game.
How would this help? Because it would show just how much we should trust each individual voter's opinion. While there might not be "correct" rankings, there certainly are correct predictions and picks - either the team you say is going to win wins, and you're right, or it loses, and you're wrong. Clear cut, black and white. So my overall argument is this: the more correct voters are in their predictions about games, the more "correct" we can assume their rankings are. Again, because rankings are subjective, there's no way to directly connect them to predictions or picks. But the rest of this essay will be an attempt to persuade you that using predictions in this manner is not only a valid way to rate rankings but also a way that eliminates the lack-of-information issue.
For argument's sake, we'll start with the AP Poll's directive that voters base votes on performance - "base your vote on performance, not reputation or speculation". If the goal is to set up a ranking so that the teams that have performed the best are at the top, then using predictions as part of this method is a solid and natural fit. When you ask someone to predict which team will win, you're asking them to determine which team will perform better. It's a simple question that needs only one of two answers - either Team A will perform better and win, or Team B will perform better and win. So when you're predicting an outcome, you're comparing two teams' abilities to perform. If you're right about which team will win, then you have more ability to assess performance than someone whose pick was wrong - and therefore more ability to assess performance for the purposes of ranking teams. It's all about being able to analyze performance - that's my basic line of thinking.
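To make that concrete, here's a minimal sketch of how picks could be graded. All the voter names, matchups, and results below are invented for illustration - this is just one way the bookkeeping could work:

```python
# Hypothetical voters, picks, and results - purely for illustration.
picks = {
    "Voter A": {"Oregon-Stanford": "Oregon", "USC-Stanford": "USC"},
    "Voter B": {"Oregon-Stanford": "Oregon", "USC-Stanford": "Stanford"},
}
results = {"Oregon-Stanford": "Oregon", "USC-Stanford": "Stanford"}

def pick_accuracy(voter_picks, actual):
    """Fraction of games where the voter picked the team that actually won."""
    graded = [winner == actual[game] for game, winner in voter_picks.items()]
    return sum(graded) / len(graded)

for voter, vp in picks.items():
    print(f"{voter}: {pick_accuracy(vp, results):.0%} of picks correct")
# Voter A: 50% of picks correct
# Voter B: 100% of picks correct -> give Voter B's rankings more credit
```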
(A quick aside - one of the ideas that gets tossed around after some games is that the losing team outplayed the winning team. But that's a subjective trap we don't have to fall into. Logically, stripped down to its barest competitive and objective parts, the team that scores the most points performed better. Sure, there are lucky bounces, bad ref calls, yadda yadda. In the end, at its very simplest, when the wins and losses depend only on points, so too can our assessment of performance.)
Let's get to some examples to see this way of viewing rankings in action. The first should show that you're more familiar with this connection between rankings and predictions than you think. How many times have you heard somebody say, or read in a comment box, some version of the following: "Whose rankings are you talking about? Man, he's a dumbass - he thought Texas Tech was gonna crush Mississippi! His rankings are crap." Along those same lines is the more common, "Man, he had USC at #1 and they got crushed by Oregon State - no way they ever deserved to be #1!" You know these, and you may have even used them. I'm just proposing that the flip side - giving a commentator's rankings more credit when they make a correct pick - has to be relevant too. You usually don't think about that one consciously, but it makes sense if the negative one, ripping on the ranker, does.
The next example requires us to define the two different types of "better" within the college football world - on-that-day better and seasonal better. Generally, upsets occur when an on-that-day better team defeats a seasonally better team. For example, one of the biggest upsets in recent memory was #1 USC's loss to Stanford - nobody I know thought the Cardinal were gonna win that one. Had the Trojans performed better up until that point (4-0 vs 1-3)? Yes. Did the Trojans perform better after that point and over the course of the whole season (11-2 vs 4-8)? Yes. USC was undoubtedly seasonally better, both before and after. But just as undoubtedly, Stanford was better on that day. (Incidentally, this distinction is necessary because one of the favorite pastimes of trolls is trying to tie these two "betters" together. As in, "Stanford beat USC so they must be better", or "Why is Oklahoma ranked higher than Texas? They lost to the Longhorns! Texas is better!", or my favorite, "How can you say Florida is better - Mississippi beat them on the field! On the FIELD!" You gotta keep the "betters" separate, people - it'll make things a whole lot easier.)
So after that USC-Stanford game was over, ask yourself whose rankings you would trust more - the voter who thought the Trojans would perform better on that day, or the voter who, for whatever reason, thought the Cardinal would? Most people would probably trust the voter who went with Stanford - anybody picking the Cardinal would have looked like a genius after the fact. That voter should get more performance-analyzing credit, and therefore their rankings should get more credit.
This example brings up an important point - how do we know that the guy who picked Stanford wasn't just picking them in order to be different? Who's to say it wasn't just luck? That one pick might be luck, true - but that's why you have voters pick a lot of games over the course of the whole season instead of just focusing on one single game. Your great-Aunt Myrtle might be able to get three of her March Madness teams into the Final Four, but could she do it for fifteen straight weeks? The more games you have them pick, the less of an issue luck is.
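You can put rough numbers on that intuition. Here's a back-of-the-envelope sketch (the 70% hit rate and game counts are made-up figures) using the binomial distribution to ask: what are the odds a pure coin-flipper hits 70% of their picks as the number of graded games grows?

```python
from math import comb

def prob_at_least(k, n, p):
    """P(X >= k) for a binomial(n, p): the chance of k or more correct picks."""
    return sum(comb(n, i) * p**i * (1 - p) ** (n - i) for i in range(k, n + 1))

for n_games in (10, 50, 150):
    need = int(0.7 * n_games)                  # 70% of the picks correct
    lucky = prob_at_least(need, n_games, 0.5)  # a coin-flipper's chances
    print(f"{n_games:>3} games: coin-flipper hits 70%+ with probability {lucky:.2e}")
# 10 games: roughly a 1-in-6 shot - one lucky weekend is plausible
# 150 games: effectively zero - nobody flukes fifteen straight weeks
```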
This brings us to the aspect of this system that makes the issue of too much information vanish. Hypothetically, let's say Herbstreit's picks are correct 90% of the time, while Corso's calls are correct only 55% of the time - we should probably rely on Herbstreit's rankings more. A lot of people would also assume that Herbstreit knows more than Corso. But here's the beautiful thing about this way of looking at rankings - knowledge has next to nothing to do with it. It's not about how many X's and O's you know (coaches), or how many games you cover (writers) - it's simply about your ability, by whatever method, to correctly assess which teams will perform better. That's it. Coaches and sportswriters SHOULD be able to assess teams better because they have more general and specific college football knowledge, but this knowledge isn't a requirement. The only thing that matters is how well you can choose who's going to perform better.
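One plausible way to cash this out - a sketch, not any real poll's method; the names and the 90%/55% figures come from the hypothetical above, and the position-points scheme is my own assumption - is to weight each voter's ballot by their pick accuracy:

```python
# Weight each ballot's position points by the voter's pick accuracy.
accuracy = {"Herbstreit": 0.90, "Corso": 0.55}  # hypothetical hit rates
ballots = {
    "Herbstreit": ["Oregon", "Arizona State", "USC"],
    "Corso":      ["USC", "Oregon", "Arizona State"],
}

scores = {}
for voter, ballot in ballots.items():
    for position, team in enumerate(ballot):
        points = (len(ballot) - position) * accuracy[voter]  # 1st place = 3 pts here
        scores[team] = scores.get(team, 0.0) + points

for team, pts in sorted(scores.items(), key=lambda kv: -kv[1]):
    print(f"{team}: {pts:.2f}")
# Oregon: 3.80, USC: 2.55, Arizona State: 2.35
# The sharper picker's ballot carries proportionally more weight.
```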
I know that goes against rational thought - that the people best qualified to rank college football teams don't have to know much about the sport. It seems so logical to think that the people who know the most about it, the coaches, writers, and announcers who get paid to know football, are the ones who can produce the most correct rankings. But let me ask you this - if they're so knowledgeable, shouldn't they be able to correctly pick who's going to win? I would argue the opposite should hold as well - wouldn't you think that someone who's able to consistently pick winners is knowledgeable? Of course. It doesn't matter how many games you watch, how many hours you pore over the stats, or how many highlights you see - if it doesn't get your picking percentage up, it doesn't matter. Being a coach doesn't mean you know how to assess performance - it means somebody hired you because they believed you could win games. Being a commentator or analyst doesn't mean you know how to assess performance - it means somebody hired you because they thought people would read or care about what you have to say. Being able to consistently pick who's going to win means you know how to assess performance. So take as much or as little information into account as you want - whatever helps you get the call right.
Now, I'm not naive enough to think that voters would accept this setup, being forced to make picks about who's going to win. Sure, some would be willing to put themselves on the line, but most wouldn't - they're not going to risk a blow to their credibility just to give the poll more merit (even though the polls seem to need all the credibility they can get nowadays). But this type of system could be set up easily and involve anyone who wanted to participate - announcer, coach, fan, anyone. It could be an easy, transparent alternative to the polls we know are broken beyond repair.