PSU at 75
Unsurprisingly, this isn’t going over well.
8 Big Ten teams in the Top 20. Lots of opportunity to pick up high quality wins.
Nate Silver weighs in (read his replies)
Including tomorrow night’s opponent (VT).
I have no idea how this ranking system works, but I’m sure it will improve as we get more data points. It’s still REALLY early to do a computer ranking based only on games played this year.
What a disaster these rankings are lol. They are so bad that it’s impossible to know where to start.
We are wayyy too early in the season to draw anything from them. Wait till we are halfway in. There is a reason why the CFP does not release rankings until later in the season.
I mean these rankings are offering no predictive value at all, but they also don’t seem to make any methodological sense either. There’s a real chance NET could be a complete disaster and the committee has to scrap it by March.
I love Jerry Palm but I would challenge his conclusions there. First of all, why is he using a comparison to a flawed RPI system as the measure of what’s good or bad? One of the complaints always held up as an example of why the RPI was bad was situations where a team would beat a team handily but drop in the RPI rankings - the exact scenario that occurred here. All of a sudden, that’s supposed to be a good thing? Houston won, so not dropping them in the rankings might be the correct thing to do.
Secondly, the only way to make the point that an uncapped margin of victory was the issue would be to compare it to what would have happened if the margin of victory had been capped. But the NET formula isn’t published, so there is no way to do that.
NET may very well be flawed, but I sure don’t see the results of Houston’s game as evidence that it is.
Palm’s not really making a conclusion, is he? Actually, more to the point, he’s not implying any kind of value judgment. He’s just pointing out an “isolated example” (and I think he chose those words so that you DO NOT assume he’s making any kind of conclusion).
He’s just pointing out an example of where the two formulas created two very different outcomes… and it’s valuable since we don’t know the formula behind the NET. I bet if you find enough of these outliers, you can start to make predictions of how the “unknown formula” will react in given situations (which is why he tweeted this in the first place).
He’s concluding that the difference can be attributed to an “uncapped MOV”. There’s no way to draw that conclusion without comparing it to what would have happened with a “capped MOV”. There are five factors to the NET. How do we know that the large difference isn’t due to one of the other four?
The really odd thing about his post is that technically MOV is capped. What isn’t capped is “net efficiency”. So he’s also making an assumption that capping MOV would cap net efficiency. I’m not sure that is a valid conclusion unless he were to publish exactly how he believes net efficiency should be capped.
One more thing to add.
KenPom’s predicted score for the game was 80-50, a margin of 30 points. The actual score was 75-44, a margin of 31 points. Since Pomeroy uses teams’ adjusted efficiency scores to predict games, the actual score is a pretty good indicator that Houston’s efficiency numbers probably didn’t change much from pre-game to post-game. The fact that Houston went from 36th to 35th in his rankings is another strong indicator that there wasn’t much anomalous about Houston’s efficiency numbers, and thus performance, in that game…
What’s out of whack is the dive that Houston took in the RPI. If anything, the results of the game should be an indicator that the NET is better than the RPI was. While it is true that there is nothing specific in this particular post that places any value judgment on the difference, Palm has been a critic of the NET, so I believe that there’s an implied value judgment being made.
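To make the efficiency point concrete, here’s a rough sketch of how an efficiency-based prediction in the style of Pomeroy’s ratings works. The AdjEM values and tempo below are made up for illustration, not Houston’s actual numbers, and this is a simplification of the real model:

```python
# Rough sketch of an efficiency-based margin prediction in the style of
# Pomeroy's ratings. AdjEM = adjusted efficiency margin, i.e. expected
# point differential per 100 possessions against an average opponent.
# The values used below are illustrative, not any team's actual ratings.

def predicted_margin(adj_em_a: float, adj_em_b: float, tempo: float) -> float:
    """Expected scoring margin for team A over team B, scaled to the
    expected number of possessions in the game."""
    return (adj_em_a - adj_em_b) * tempo / 100.0

# Hypothetical values: a strong team vs a weak one at 65 possessions.
margin = predicted_margin(adj_em_a=25.0, adj_em_b=-21.0, tempo=65.0)
print(round(margin, 1))  # 29.9 -- in line with the ~30-point spread above
```

When the actual margin lands this close to the prediction, the post-game efficiency update is tiny, which is why the ranking barely moved.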
That’s fair enough, I don’t read all his tweets. All I saw was an outcome that moved the needle on one metric and not the other, so that would be a valuable data point for understanding the differences.
Personally, I’m kind of on the fence about the whole MOV thing. Particularly with college kids, who can be really up and down from day to day, the Win should be the thing, and the only thing. However, we’re past the point of just saying who “won their way in”… that’s already taken care of, so now we’re making value judgments on who should get those handful of at-large bids (and who shouldn’t). While the models that use MOV might be much more accurate in terms of how “good” a team is, I still wrestle with the notion of giving one team credit over another because their MOV is somehow better or more impressive.
Yes. I agree it’s a valuable data point.
There’s no evidence that MOV was the criterion that “moved” (pun intended) the needle. In fact, there’s just the opposite. The NET needle didn’t really move (up two spots), so the 31-point margin of victory apparently didn’t do much at all. What moved was the RPI needle (down 16 spots). MOV isn’t a factor in the RPI, so none of that needle movement can be attributed to margin of victory (Houston would have dropped 16 spots even if they had won by only 1 point)…
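You can see why from the basic RPI formula, which is public. The sketch below is simplified (the real formula weighted home and road results differently), and the winning percentages are hypothetical, but it shows that the score of a game never enters the calculation, only the result and the schedule:

```python
# Simplified RPI, ignoring the home/road weighting the real formula used.
# The point: the game score never appears anywhere -- only who won -- so a
# 31-point win and a 1-point win move a team's RPI identically.

def rpi(wp: float, owp: float, oowp: float) -> float:
    """wp = team's winning %, owp = opponents' winning %,
    oowp = opponents' opponents' winning %."""
    return 0.25 * wp + 0.50 * owp + 0.25 * oowp

# Hypothetical numbers: beating a weak opponent improves your WP slightly
# but drags down your opponents' winning percentage (OWP).
before = rpi(wp=5/6, owp=0.60, oowp=0.52)
after = rpi(wp=6/7, owp=0.55, oowp=0.52)  # identical for any margin
print(after < before)  # True -- the win still lowers the rating
```

That’s the mechanism behind “beat a team handily and drop anyway”: the OWP term, weighted at 50%, can swamp the gain in your own winning percentage.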
I think a better way of handling it isn’t to eliminate or cap margin of victory but to give it a diminishing marginal return.
Right, weight the MOV based on location and rank of opponent.
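One way to sketch that idea: run the margin through a concave curve so extra points of blowout earn progressively less credit, then scale by site and opponent quality. The square-root curve and the weights below are illustrative choices, not the NET’s actual formula (which isn’t published):

```python
import math

# Illustrative sketch of MOV with diminishing marginal returns, weighted
# by game location and opponent rank. None of these weights come from the
# NET; they're made-up values to show the shape of the idea.

def mov_credit(margin: int, opp_rank: int, site: str) -> float:
    site_weight = {"home": 0.8, "neutral": 1.0, "away": 1.2}[site]
    # Better-ranked opponents (lower rank number) earn more credit.
    opp_weight = 1.0 + (100 - min(opp_rank, 100)) / 200
    # sqrt flattens out: each additional point of margin is worth less.
    return math.sqrt(max(margin, 0)) * site_weight * opp_weight

small = mov_credit(10, opp_rank=50, site="home")
large = mov_credit(30, opp_rank=50, site="home")
print(large > small)           # a 30-point win earns more credit...
print(large < 3 * small)       # ...but nowhere near 3x as much
```

Under a curve like this, running up the score past a certain point buys almost nothing, which addresses the sportsmanship objection without throwing the MOV signal away entirely.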
Statistically I agree 100%… I just wonder if using anything other than wins and losses is fair. I’m not a fan of the “beauty contest” aspect of these things.
There are two views toward what ranking systems should be, and they often lead to conflicting results. They fall under the general categories of results-based measures (RPI, for example) and predictive measures (Pomeroy, for example). Both have their good points and their bad points.
I distinctly remember an article, I believe written by Jerry Palm, about being approached by the NCAA for consulting advice when it initially considered replacing the RPI. His question to the NCAA was: “What is it that you want the new measure to do? Reward teams for past performance? Or predict how teams will do in the future?” The NCAA couldn’t answer the question. Palm’s response was that you can’t build an algorithm until you define what you want that algorithm to do.
The new NET system is an amalgam of both approaches. The NCAA seems to feel that it captures the best of both approaches. If you listen to the media criticism so far, you might conclude that NET only captures the worst.
Personally, I believe there’s room in the NCAA selection process for both. Results-based measures are good for determining which teams belong in the NCAA tournament. Predictive measures are best for seeding them.
Selection Sunday will be particularly interesting this year.