I always find it interesting to compare the CFP committee's decisions to how the BCS would fare in picking the top 4 teams in the country. Wesley Colley, creator of the Colley Matrix, which was part of the BCS computer formula during the BCS era, maintains a "BCS Like" page that shows how the BCS would rank teams if it were still in use today.
http://www.colleyrankings.com/curBcsLike.html
Top 10 according to the "BCS":
Rank | Team | BCS Average |
---|---|---|
1 | Alabama | 0.99936 |
2 | Michigan | 0.93405 |
3 | Clemson | 0.93297 |
4 | Washington | 0.88439 |
5 | Ohio State | 0.82963 |
6 | Louisville | 0.79328 |
7 | Wisconsin | 0.73479 |
8 | Auburn | 0.68691 |
9 | Texas A&M | 0.57611 |
10 | Oklahoma | 0.53020 |
The differences between this top 10 and the CFP committee's are actually pretty small. You have Clemson and Michigan switched, but their BCS averages are nearly identical, so that is justified. You also have Auburn and Texas A&M switched, which is a little more concerning since there is a gap of over 10 points between them in the BCS averages. Even so, if the CFP is overvaluing Texas A&M, it isn't by much (1 ranking position). The only other difference is Oklahoma at 10 versus Penn State by the CFP committee. Those are also flip-flopped, and there is a difference of only 5 points in the BCS between the teams, so again, this isn't much of a miss by the committee.
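For anyone curious how big those gaps actually are, here is a quick sketch using the BCS averages from the table above. It assumes the "points" in the discussion are hundredths of the BCS average (so a 0.11 difference in average reads as an 11-point gap); the function name is just for illustration.

```python
# BCS averages from Colley's "BCS Like" top 10 (table above).
bcs = {
    "Alabama": 0.99936, "Michigan": 0.93405, "Clemson": 0.93297,
    "Washington": 0.88439, "Ohio State": 0.82963, "Louisville": 0.79328,
    "Wisconsin": 0.73479, "Auburn": 0.68691, "Texas A&M": 0.57611,
    "Oklahoma": 0.53020,
}

def gap_in_points(team_a, team_b):
    """Difference between two teams' BCS averages, in hundredths ("points")."""
    return round((bcs[team_a] - bcs[team_b]) * 100, 2)

print(gap_in_points("Michigan", "Clemson"))  # 0.11 -> swapping them is defensible
print(gap_in_points("Auburn", "Texas A&M"))  # 11.08 -> the "over 10 point" gap
```

Running this shows why the Michigan/Clemson swap is a wash while the Auburn/Texas A&M swap is more of a stretch.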
The main thing to notice is that the top 4 is identical, so it would have produced the same matchups. I did a similar analysis after the final CFP rankings in both 2014 and 2015, and the BCS produced the exact same matchups as the final committee rankings in both of those years. So I guess the question is, why do we need a committee?