Saturday, June 10, 2006

Ways to Fix the Rankings Right Now

This is a list of ways to make the human polls & computer rankings fairer (so that every Division I-A team has the same chance at making it to the National Championship game). These are things that would be relatively easy to implement and could be done between seasons - there's no need to wait until the next post-season incarnation, whether it's another version of the BCS, a playoff, etc.

1) Give official instructions to Official poll voters…

Part of the problem with voter differences in the polls is that voters interpret the poll very differently. One voter may rank teams based on how well they've played so far in the season, another on their potential, another on who would beat whom in a head-to-head matchup (something nobody really knows until they play). It makes for a very uneven poll that is more an average of conflicting philosophies than a near consensus. If voters were given explicit instructions, they would work toward a common goal and the results would be more acceptable to more people. A committee would need to write those instructions, and part of its job would be to settle on a definition of "National Champion".

2) … and then don’t release the Official Rankings until mid-October.

As it is now, the BCS standings don't come out until October, which is a good thing. It shows that people recognize that when polls are released has an impact on how the season plays out. However, the Coaches' poll comes out in August, and since it's part of the BCS rankings, that rather defeats the purpose, doesn't it? Other polls, like the AP, are more than welcome to start in August – but the Official Rankings and all polls that feed into them should not even be considered until mid-October.

3) Give the computer rankings and human polls the same weight in the Official Rankings

After the 2003 season, when USC was left out of the national championship game despite being ranked #1 in both human polls, the weight shifted so that the human polls were worth 2/3 of the BCS rankings and the computers only 1/3. People seem to think this settled the matter, but it deserves more in-depth questioning. Each method, computers and humans, has advantages and disadvantages. Let's list some of each –

Humans: People are able to take into account more flexible data, such as how a team played, luck, bad referee calls, and other intangibles. On the negative side, people are always biased, as much as they claim (or would prefer) not to be, almost always tilting the scales in one direction unfairly.

Computers: Computers are unbiased (or at least only as biased as their programming) and are able to compare huge volumes of concrete data, mathematically examining teams in relation to one another. They're also consistent throughout the season, and usually across years – they cannot be changed from one week to the next. On the negative side, they cannot take much flexible data into account and cannot see the intangibles that humans can.

All in all, we need the computers and the humans to balance each other out. Each has strengths and weaknesses that the other can compensate for, so a balance of power between the two is preferable.
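To make that balance concrete, here's a minimal sketch of an equal-weight combination (the function name and the sample numbers are mine, not the BCS's; both components are assumed to already be on a 0-to-1 scale):

```python
# A sketch of an equal-weight Official Rankings score (hypothetical numbers).
def official_score(human_poll_pct, computer_avg_pct):
    """Combine the human and computer components with equal (1/2) weight.

    Both inputs are on a 0.0-1.0 scale: the human number is a team's share
    of possible poll points, the computer number is an averaged score from
    the computer rankings.
    """
    return 0.5 * human_poll_pct + 0.5 * computer_avg_pct

# Example: a team loved by the voters but doubted by the computers.
print(official_score(0.98, 0.85))  # 0.915
```

Under the current 2/3-1/3 split, the same team would score higher, since the voters' enthusiasm counts double the computers' doubts.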

4) Margin of Victory should be factored into the computers (to an extent)

One of the main problems people have with the computers is that since 2002 they haven't been allowed to use margin of victory (MoV) as a factor. (Some polls the BCS used were even dropped because they refused to give up this component of their rankings.) To human eyes, if Team A beats Team C 54-3 and Team B beats Team C 24-21, Team A deserves more credit and has achieved more. But since the computers cannot factor this in, in theory they would rate Teams A & B the same. What complicates MoV is sportsmanship: part of the reason it isn't allowed is that the powers that be don't want teams running up the score on inferior opponents in an attempt to boost their rankings. But there are ways to factor in MoV without encouraging unsportsmanlike behavior. Putting a cap on the MoV at, say, 21 points, as the BCS previously did, does not encourage teams to run up the score, yet it rewards a sound victory more than squeaking by. (I suggest using tiers based on how many possessions it would take to change the game: 1-3 points (a field goal), 4-8 points (one touchdown), 9-16 points (between one and two touchdowns), and 17+ (more than two touchdowns).)
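As a sketch of how those tiers might work (the tier boundaries come from the paragraph above; the specific credit values are hypothetical):

```python
# A sketch of tiered, capped margin-of-victory credit.
# Tier boundaries follow the post; the 0-4 credit values are made up,
# just to show that credit stops growing past two scores.
def mov_credit(margin):
    if margin <= 0:
        return 0   # no credit without a win
    if margin <= 3:
        return 1   # within a field goal
    if margin <= 8:
        return 2   # within one touchdown (plus a two-point try)
    if margin <= 16:
        return 3   # between one and two touchdowns
    return 4       # more than two scores; running it up adds nothing

# A 54-3 win and a 30-10 win earn the same credit:
print(mov_credit(51), mov_credit(20))  # 4 4
```

The key property is the flat top tier: once a team is up by more than two scores, piling on points buys nothing, so there's no rankings incentive to run up the score.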

5) Official voters' ballots need to be made public at all times

This mainly pertains to the Coaches' poll, which has historically been secretive; only recently have its ballots been made public, and even then only after the regular season is over. Opening them up would allow for more transparency in the system, which is definitely needed given all the issues at hand. Coaches (and the people who may fill out their ballots for them) are human, with biases just like the voters behind every poll taken in college football. A big difference, though, is that coaches stand to gain considerably from their biases – the higher they rank themselves or the teams they've played, the better the chance that their school will reap benefits. That isn't to say all coaches disingenuously fill out their ballots in their own favor, though I'm sure some do. But the temptation is far greater when they're allowed to keep their ballots secret and thus aren't held accountable. Making their ballots public every week would probably reduce the amount of skewing, intentional or otherwise, especially at the end of the season when the stakes are highest.

*A Thought about Rankings... You know, it always grates on my nerves when people gripe and moan about Team A being ranked lower than Team B after Team A beats Team B. Let's think about this logically.

First, we wouldn't ever say that a 1-7 Temple team should be ranked higher than 6-2 Notre Dame, even after knocking the Fighting Irish off (in theory). Why? Because 6-2 is a lot better than 1-7, obviously, and in this case, the teams' records trump the on-field outcome. So right there we have two factors that come into play: 1) head-to-head outcome, and 2) record. Most of the griping comes when teams' records are close, usually within 1 or 2 games of each other, like if Temple had been 6-2 or 5-3 after the game. In this case, people use records to "prove" that the teams were close to equal, and that the head-to-head victor should be ranked higher.

But there's another factor: 3) opponents. If Notre Dame has beaten USC, Ohio State, and Michigan on their way to 6-2, while Temple has beaten Army, Duke, and UConn, people are going to give more credit to Notre Dame because their schedule is (seemingly) harder. I put the "seemingly" in there because strength of schedule (SoS) is one of the most subjective things humans can try to measure without using computers. Computers can objectively compare teams using the same mathematical formula for every team – though even then, people disagree about which formula is the correct or best one for calculating SoS. And precisely because computers weigh factors like SoS, they're often railed against for not automatically putting Team A higher than Team B when all else appears equal.

But is all else ever equal? No, I would argue never. With regard to SoS, unless two teams play exactly the same opponents, which they rarely do, and play them in the same weeks, which is impossible, their SoS will always differ, providing that minute fraction which might keep Team B ranked above Team A. Or the margin of difference could come from some other factor entirely that the computer takes into account.
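For illustration, here's one of the many simple SoS formulas people argue over – combined opponents' winning percentage (the records below are invented, not real results):

```python
# One common, simple strength-of-schedule measure: the combined winning
# percentage of a team's opponents. This is just one of many competing
# formulas, and the opponent records here are made up for illustration.
def sos(opponent_records):
    """opponent_records: list of (wins, losses) tuples, one per opponent."""
    wins = sum(w for w, _ in opponent_records)
    games = sum(w + l for w, l in opponent_records)
    return wins / games

notre_dame_opponents = [(6, 2), (7, 1), (5, 3)]  # hypothetical records
temple_opponents = [(2, 6), (3, 5), (4, 4)]      # hypothetical records
print(sos(notre_dame_opponents) > sos(temple_opponents))  # True
```

Even this trivial formula produces a different number for almost any two teams, which is exactly why "all else equal" almost never holds.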

My point is that the rankings take the whole season into account, not just single weeks. So whenever you see Team A NOT jump Team B in the rankings after beating them, take a second to think about how many different factors are involved in determining those rankings and realize that it's a much more complex issue/calculation than just "Team A beat Team B so Team A is better".

5 comments:

CJH said...

You cannot fix the manner in which teams are ranked until subjectivity is eliminated from the process. Opinion polls are the worst idea to happen to sports ever. The intangibles that humans are able to consider should never be considered; they have nothing to do with determining the winner of a competition. As bad as situations like Oregon-Oklahoma last year are, the rules should only consider that Oregon won the game. Bottom line is that human polls have no value whatsoever: they tell us nothing, produce arbitrary results, and are internally inconsistent. In short, they are no better than pulling names out of a hat.

Mike said...

Great stuff.

One thing the BCS has done to try to even out different ranking methods for the computers is to throw out the highest and lowest scores for each team.

Yet the human polls simply add every ranking by every voter. Of course some people have notions about conference strength, favorite teams, and such, which will creep in. If the human polls did the same thing the BCS does for the computer rankings – discard, say, the top 15% and bottom 15% of rankings for each team – we might see a much more stable (fairer?) set of ratings.

I did propose this to the BCS after last season, and got a nice response saying they would "consider it".
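A minimal sketch of that trimming idea (the 15% figure is the one suggested above; the sample ballot positions are made up):

```python
# A sketch of trimming the top and bottom 15% of a team's ballots before
# averaging, analogous to how the BCS discards extreme computer ranks.
def trimmed_mean(ballots, trim=0.15):
    """Average a team's ballot positions after discarding the extremes."""
    ordered = sorted(ballots)
    k = int(len(ordered) * trim)   # how many ballots to drop from each end
    kept = ordered[k:len(ordered) - k] if k else ordered
    return sum(kept) / len(kept)

# 10 ballots: one homer vote (1st) and one grudge vote (25th) get dropped.
votes = [1, 5, 5, 6, 6, 6, 7, 7, 8, 25]
print(trimmed_mean(votes))  # 6.25
```

A straight average of those same ballots would be 7.6, so the two outlier voters move the team a full spot and a half on their own – exactly the instability trimming removes.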

Ed Gunther said...

I like that idea, Mike. I think that might have a major (and positive) effect on the polls, not only because you're taking a more stable sample, as you say, but also because it might encourage people to be more fair and thoughtful. If I'm a voter and I think my vote for some team might get thrown out because it's too high or too low, I'm really gonna think about where I put them and try to be fair to everyone. (But that's just me.) Thanks for the comment.

Anonymous said...

But what about elections for president, senator, governor, etc.? Aren't they decided by human polls?

Ed Gunther said...

I'm not sure I get your point with the whole president, senator thing. What does that have to do with it?