Ted Glover wrote a post about this topic last week. He linked to Chris Stassen's site, where Stassen has compared the preseason consensus to the final AP poll every year since 1989, when the poll expanded to 25 teams. Stassen uses simple arithmetic to determine which teams have been the most over- and underrated over that period.
A few important notes about Stassen's method are in order. First, he says this explicitly:
> These numbers do not really reflect how good the teams are; they instead reflect how well they lived up to preseason expectations. Any Big West member would have a score of zero (since they never had a team ranked in preseason or final polls during this span) but that would not mean each of those teams are stronger than the ones (such as Nebraska) which ended up with negative numbers in this list.
Second, any team not in the top 25 is counted as #26. That matters a great deal for teams near the bottom of the poll, since they can't drop very far. Third, the converse is also true: a team ranked highly in the preseason has little room to improve but plenty of room to fall. Together, those two points mean teams with low expectations will do well in his analysis while teams with high expectations will fare poorly. Finally, he lists teams by their cumulative total, with no attempt to account for how many times each team appeared in the polls.
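The scoring described above is easy to sketch in code. This is my reading of the method, not Stassen's actual implementation, and the team name and rankings below are made up for illustration:

```python
# Sketch of the cumulative scoring method (sample data is hypothetical).
# Each poll is a dict mapping team -> rank; any team outside the top 25
# is treated as #26, so a lowly ranked team can never fall very far.

UNRANKED = 26

def rank(poll, team):
    """Return a team's rank in a poll, treating unranked teams as #26."""
    return poll.get(team, UNRANKED)

def season_score(preseason, final, team):
    """Positive = finished better than expected; negative = worse."""
    return rank(preseason, team) - rank(final, team)

# Hypothetical two-season history for one team:
seasons = [
    ({"State U": 5}, {"State U": 15}),  # preseason #5, finished #15 -> -10
    ({"State U": 20}, {}),              # preseason #20, unranked at end -> -6
]
cumulative = sum(season_score(pre, fin, "State U") for pre, fin in seasons)
```

Note how the #26 floor caps the second season's loss at -6 even though the team fell out of the poll entirely.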
As expected, some of the big-name teams that regularly sit atop the AP preseason poll turn out to be the most overrated. The bottom 10, in order, were OU (-97.5), USC, UT, FSU, NE, MI, ND, Miami, UF and Clemson (-50). Conversely, the top 10 generally earned their spots by beating low expectations in a handful of seasons. In order, they were OR (80.5), BSU, WSU, Utah, KSU, TCU, BC, UC, Stanford and BYU (33).
Most teams came out with fairly low values: 54 of them landed between -23 and +23 inclusive. For a cumulative total over 23 years, that's not bad.
As a first cut at a deeper look, I made two changes. First, I used the preseason AP poll instead of the consensus value. The same voters select the final AP poll, so it seemed fairer; it's also easier to look up. Second, I report average values rather than cumulative totals. That keeps the teams that are regularly in the poll on more equal footing with the teams that appear only occasionally.
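The reason averaging matters can be shown with invented numbers (both teams and all the per-season differences here are hypothetical): a team ranked every year can build a big cumulative score out of many small misses, while a team ranked twice with huge misses looks tame by comparison.

```python
# Cumulative vs. average scoring (all values hypothetical).
# Each list holds per-season (preseason rank - final rank) differences.
perennial = [3, 2, -1, 4, 2, 3, 1, 2, 4, 3]  # ranked ten times, small misses
occasional = [9, 10]                          # ranked twice, big misses

def average(diffs):
    return sum(diffs) / len(diffs)

# Cumulative totals make the perennial team look more underrated (23 vs 19),
# but per appearance the occasional team is the far bigger miss (9.5 vs 2.3).
```

This is exactly the distortion that reporting averages instead of cumulative totals is meant to remove.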
Average Values for Current Big Ten Teams
- IN: 0 (never ranked)
- IL: 4.0
- IA: 5.2
- MI: -4.0
- MSU: 5.3
- MN: 7.0
- NE: -3.2
- NW: 5.0
- OSU: 0.45
- PSU: 0.83
- PU: 2.6
- WI: 3.3
Only MI and NE come out overrated, which is no surprise given how highly they were ranked in many years. On the other hand, OSU was ranked in 22 of the 23 years and still came out slightly underrated. With 9 of 12 teams above 0, it appears the B10 is, on average, slightly underrated by the AP preseason poll.
But the average isn't the only value of interest. The absolute value of the polling error also matters, since it shows how accurate the preseason polls are. Over these 23 years, across all B10 teams combined, the poll was off by 7.6 places on average. WI, IA and IL had the largest errors, as each had some spectacular flops as well as great years when little was expected. The most accurate predictions were for PU, NW and OSU.
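The two summary statistics used above, the average signed error and the average absolute error, could be computed like this. The per-season differences are invented to make the point that a team can average out to "rated about right" while the poll was still badly off each year:

```python
# Average signed error vs. mean absolute error (sample diffs hypothetical).
# diffs: per-season (preseason rank - final rank) values for one team.
diffs = [10, -12, 3, -3, 8, -6]

# Near zero: over- and under-performances cancel out.
avg_signed = sum(diffs) / len(diffs)

# Large: the poll still missed by 7 spots in a typical year.
avg_abs = sum(abs(d) for d in diffs) / len(diffs)
```

Here the signed average is 0 but the absolute average is 7, which is why both numbers are worth reporting.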
In case anyone is curious, ND came in at 0.80, very similar to PSU.
The next step is to run the same analysis for the other conferences. It would be interesting to know who, if anyone, is chronically overrated. The step after that is to look at how each preseason poll spot has fared in the final polls on average, then compare each team to that baseline. That would separate the mathematical quirks of this method from the teams that are genuinely chronically over- or underrated.