Cheer up Buckeye fans...
You're National Champs after all!
Even though all of the BCS component polls and the AP Poll have placed the Florida Gators at the top of their respective rankings, there are at least two selectors that do not.
Harry DeVold, who has been rating college football teams since 1945, says that OSU is the number 1 team in the country, despite the fact that the Buckeyes were pretty much embarrassed in every phase of the game on a supposedly neutral field (it wasn't; there were 3 Buckeye fans there for every Gator fan, but they had a two-week head start on purchasing tickets).
So you can buy your championship T-shirts and caps and arrange your parade. You have at least as much of a claim as Alabama does for some of its championships. But you'll have to share the 2006 championship not only with Florida but also with USC.
You see, USC topped Sagarin's "PREDICTOR" ratings. These ratings used to be part of the BCS until the BCS required Sagarin to modify them to exclude margin of victory. That requirement resulted in the creation of Sagarin's "ELO_CHESS" ratings, currently used as one of the components of the overall BCS rankings. Florida is the top team in ELO_CHESS but 4th in the PREDICTOR or "Pure Points" ratings, behind USC, OSU and Louisville. SW!-TECH also picked USC as the top-ranked team in the country.
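The practical difference between the two lists comes down to whether margin of victory feeds the ratings at all. Here's a rough Python sketch of that distinction; the update rules, constants, and the cap on blowout margins below are my own illustrative guesses, not Sagarin's actual (unpublished) formulas.

```python
# Toy contrast between a win/loss-only rating (in the spirit of ELO_CHESS)
# and a points-aware rating (in the spirit of PREDICTOR). Everything here
# is invented for illustration.

def elo_style_update(r_winner, r_loser, k=32.0):
    """Win/loss-only update: margin of victory is ignored entirely."""
    expected_win = 1.0 / (1.0 + 10 ** ((r_loser - r_winner) / 400.0))
    delta = k * (1.0 - expected_win)
    return r_winner + delta, r_loser - delta

def points_style_update(r_winner, r_loser, margin, k=2.0, cap=28):
    """Points-aware update: bigger margins move the ratings more (up to a cap)."""
    margin = min(margin, cap)  # cap blowouts so running up the score has limits
    expected_margin = (r_winner - r_loser) / 25.0
    delta = k * (margin - expected_margin)
    return r_winner + delta, r_loser - delta

if __name__ == "__main__":
    # Two hypothetical 1500-rated teams: a 3-point squeaker and a 30-point
    # blowout look identical to the win/loss-only rating, but very different
    # to the points-aware one.
    for margin in (3, 30):
        elo = elo_style_update(1500.0, 1500.0)
        pts = points_style_update(1500.0, 1500.0, margin)
        print(f"margin {margin:2d}: win/loss-only {elo[0]:.1f}/{elo[1]:.1f}, "
              f"points-aware {pts[0]:.1f}/{pts[1]:.1f}")
```

A team that piles up lopsided scores climbs a points-aware table much faster than a win/loss-only one, which is exactly how the two lists can disagree about the same season.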
Enjoy the parade.
3 comments:
I suspected Sagarin was going to go that way, with his absurd ranking of the Pac-10 as the toughest conference and USC's big win over Michigan.
So, according to Sagarin, you can lose to 2 unranked teams (one of which then gets killed by .500 FSU) and still be the best. The BCS needs to examine its use of his ratings.
As for Harry DeVold, well, what can you say? Everyone has a crazy uncle out there.
To be fair to Sagarin, his conference rankings did change after bowl season and the SEC is now at the top. But the Predictor rankings are still a mystery to me. I'm guessing that since MoV is included, USC's point-heavy wins in the Pac-10 are propping them up. Turns out that maybe the BCS unwittingly forced Sagarin to create a more accurate ranking.
Sagarin seems to be a fairly sharp guy, but his program obviously has a major flaw. Don't be at all surprised if he makes significant tweaks in the off-season. He has said for a long time that what we need to determine rankings is more computer programs, not fewer. For any given year, the apparent strength of schedule and actual margin-of-victory scenarios can cause a mathematical model to fall flat on its face, as happened this year. However, if you have 15 or 20 different algorithms doing evaluations, several of them should be in agreement. Polls, on the other hand, are going to be biased by certain segments of the media playing up certain teams and certain conferences. That is amplified by two factors: dishonest voters in the polls, and honest voters who don't have the time to evaluate teams for themselves and rely on assistants who may have their own agendas or gut feelings based on what they have heard from a biased media.
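The "more computer programs, not fewer" point is essentially an ensemble argument: any single model can be thrown off by one season's schedule quirks, but a consensus across many independent rankings is harder to fool. A minimal sketch of that idea, using made-up rankings (not anyone's real 2006 numbers):

```python
# Toy consensus ranking: combine several independent computer rankings by
# taking each team's median rank. The three model orderings are invented
# purely for illustration.
from statistics import median

rankings = {
    "Model A": ["Florida", "Ohio State", "USC", "Louisville"],
    "Model B": ["Florida", "Ohio State", "Louisville", "USC"],
    "Model C": ["USC", "Florida", "Ohio State", "Louisville"],  # the outlier
}

teams = rankings["Model A"]  # every model ranks the same four teams
consensus = sorted(
    teams,
    key=lambda team: median(order.index(team) + 1 for order in rankings.values()),
)

for place, team in enumerate(consensus, start=1):
    individual = [order.index(team) + 1 for order in rankings.values()]
    print(f"{place}. {team} (individual ranks: {individual})")
```

One outlier model that loves a team's blowout margins can't drag that team to the top of the consensus, which is the whole case for leaning on many algorithms instead of one.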