NCAA RPI for B1G teams through games of 8 December 2019. They haven't issued the Nitty-Gritty Sheet yet so I'll add Strength of Schedule later.
10 Northwestern (7-1)
12 Ohio State (6-3)
19 Indiana (8-1)
24 Rutgers (8-1)
25 Purdue (7-2)
26 Iowa (6-2)
39 Michigan (8-1)
42 Maryland (8-2)
55 Wisconsin (6-3)
59 Minnesota (7-1)
81 Nebraska (8-1)
100 Michigan State (6-2)
155 Illinois (6-2)
272 Penn State (5-4)
Update: This post has been updated with the correct data, and the comments updated to match. The initial post mistakenly used Warren Nolan data from last year's season.
Iggy, I'll let you fill in the SoS later from NCAA data, but I want to contrast the above current (after game 8) RPIs with the projected end-of-season RPIs. For this I'll use Warren Nolan data. Some of us (including me) are liking that website better than RealtimeRPI and even better than the official NCAA RPI website. For one thing, Warren Nolan has Nitty Gritty sheets and Team Sheets and the whole nine yards. It also seems to update earlier on Monday mornings than the proper NCAA site (two samples show that Gopher data doesn't show up at NCAA until around 10 AM). More importantly, I'm not sure how much I trust the supposedly official NCAA RPI site anymore: I found five of our players missing from the NCAA individual statistics pages, which doesn't inspire confidence that even their won/loss data is correct or up to date (other teams may report their weekend results even later than Minnesota does). And in the past, RealtimeRPI has been shown to be prone to errors.
Importantly for this post, the end-of-season projections that the RealtimeRPI Gamer algorithm produces have been shown to be, well, let's just say wackier than they need to be early in the season (where, admittedly, a bit of wackiness is expected). The end-of-season RPI projection algorithm used by Warren Nolan seems to be quite a bit less wacky than Gamer.
Projected End of Season RPIs and Charlie Creme Bracketology
RPI Team W-L (Conf W-L) SoS Creme range
#5 Maryland 27-2 (18-0) 18 #13-16
#7 Indiana 27-3 (16-2) 15 #9-12
#11 Missouri State 25-4 (16-2) 43 #17-20
#23 Rutgers 25-4 (15-3) 72 #25-28
#24 Northwestern 23-6 (13-5) 42 #37-40
#33 Michigan State 21-8 (12-6) 47 #25-28
#47 Minnesota 20-9 (10-8) 39 #21-24
#57 Michigan 19-10 (10-8) 40 #21-24
#70 Iowa 16-13 (8-10) 22 #33-36
#77 Ohio State 13-16 (6-12) 7 #41-44
#84 Nebraska 18-11 (8-10) 57 #45-48
#95 Purdue 15-15 (6-12) 25 #29-32
#170 Notre Dame 9-21 (4-14) 19 out
#182 Wisconsin 10-19 (2-16) 36 out
#217 Illinois 9-20 (1-17) 55 out
#219 Penn State 7-22 (1-17) 48 out
Charlie Creme’s currently projected brackets (on the right) are a range, scaled to 64 teams, from his 1-16 in-region ratings. Note that he projects Notre Dame (with projected RPI #170) to be out, probably for the first time ever. Also out are Penn State, Illinois and Wisconsin (but you’d expect that with projected end-of-season RPIs of #219, #217 and #182).
But looking at the mid-range of teams from Minnesota to Purdue, we see that Charlie currently has all those teams in; whereas by their Nolan-projected RPIs you’d expect Minnesota and Michigan at least to be on the bubble, and the set {Iowa, Ohio State, Nebraska, Purdue} to most likely be out. What gives?
Well, Charlie’s predictions are most likely more accurate, so perhaps that means that Warren Nolan’s RPI projections are excessively low-ball.
If you believed those estimated end-of-season RPIs and set a hard cutoff at #64, then {Iowa, Ohio State, Nebraska, Purdue} would miss out on the Big Dance. Warren Nolan would have 7 Big-Ten teams in (a more typical number), whereas Charlie Creme currently thinks that the Big Ten is so good this year that 11 of its 14 teams deserve to go to the tournament.
What do I mean by “hard cutoff”? Well, we know that the NCAA Selection Committee considers other factors besides RPI, but let’s do a thought experiment in which we pretend the committee looks only at RPI, with a fixed dividing line between in and out (just to isolate the impact of RPI, since RPI is indeed a major factor). In that case, the hard cutoff would be closer to #48 than the naive #64 I cited above. That’s due to the effect of automatic bids. Using Charlie’s current bracket as an exemplar, we see by inspection that the bottom 16 “in” spots are occupied by automatic-bid teams that otherwise would not make it. So if a hard cutoff were used, you’d first have to reserve slots 49-64 for automatic-bid teams, and then set the RPI hard cutoff at #48. Note that Warren Nolan projects an end-of-season RPI of #47 for Minnesota. So by an RPI hard-cutoff approach, Nolan projects the Gophers to be on the Bubble (as noted by @thatjanelpick and others). (Yet Charlie calls us an end-of-season Top-25 team. We’ll get back to that below.)
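The hard-cutoff thought experiment is simple enough to write down. A minimal sketch in Python, using the Nolan-projected RPIs from the table above; the 64-team field and the assumption that exactly 16 automatic bids fall outside the at-large range are taken from the discussion above, not from any official selection rule:

```python
# Thought experiment: RPI-only selection with a hard cutoff.
# Assumption (from the post, not the committee): 16 automatic-bid teams
# would not make the field on RPI alone, so they consume the bottom slots.
FIELD_SIZE = 64
AUTO_BIDS_OUTSIDE_CUTOFF = 16

at_large_cutoff = FIELD_SIZE - AUTO_BIDS_OUTSIDE_CUTOFF  # slots 49-64 reserved

# Nolan-projected end-of-season RPIs for the mid-range B1G teams (from the table)
projected_rpi = {
    "Minnesota": 47, "Michigan": 57, "Iowa": 70,
    "Ohio State": 77, "Nebraska": 84, "Purdue": 95,
}

for team, rpi in projected_rpi.items():
    status = "IN" if rpi <= at_large_cutoff else "OUT"
    print(f"{team}: RPI #{rpi} -> {status}")
```

Under this (admittedly crude) rule, Minnesota squeaks in at #47 and everyone from Michigan on down is out, which is exactly the bubble picture described above.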
By Nolan’s projections (and a thought-experiment assumption of a hard cutoff at RPI #48), Michigan would be just out at a projected RPI of #57. Something as simple as Michigan beating Minnesota could flip that, putting Michigan in and Minnesota out (in our thought experiment). So in any RPI-only based selection criterion, we’d have to hit a target RPI of #48 or better to be in (never mind whether or not the AP or Coaches Poll rated us a Top-25 team). This gets to the essence of why RPI is stupid, as I’ll expand on in the next several paragraphs.
So maybe Warren Nolan’s RPI projection algorithm is just inaccurate. But what if Warren Nolan and Charlie Creme are both right? That could actually happen if the Big Ten is super strong this year. In that scenario, Charlie’s numbers (on the right) would accurately reflect the relative quality of the teams, while the end-of-season RPIs for the Big Ten (which are supposed to represent relative team quality, but wouldn’t) would be smeared across the spectrum from #5 to #95 for the teams Charlie thinks should be in. That would be a real dilemma, since it would explicitly demonstrate how bad RPI is as a metric intended to rank the relative quality of teams.
How might the latter come about? Through the following fatal flaw in the RPI metric, combined with the Creme-conjectured situation in which the Big Ten has 11 good teams and 3 bad teams this year. There are a lot of in-conference Big-Ten games in a season, and each game must have both a winner and a loser; integrated over the entire conference schedule, every B1G game is a W for somebody and an L for somebody. But the losses are highly concentrated among the worst 3 teams. That’s why (as Warren Nolan predicts) the 3 worst Big-Ten teams end up with horrible RPIs of #219, #217 and #182. Every team in the B1G has to play these three horrible-RPI teams (some twice), and RPI is mostly a measure of how horrible-versus-good the RPIs of the teams you play are. So those three horrible-RPI teams badly drag down the RPIs of the 11 good Big-Ten teams, on top of whatever other bad-RPI non-conference teams you were unfortunate enough to schedule.
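The drag-down mechanism falls straight out of the RPI weighting cited elsewhere in this post (roughly 25% own won/loss, 75% SoS), where SoS blends opponents' winning percentage (OWP) with opponents' opponents' winning percentage (OOWP) in the standard 2/3-to-1/3 proportion. A toy Python sketch with entirely made-up numbers, holding a team's own record fixed while varying only who it plays:

```python
# Toy illustration of the RPI drag-down mechanism.
# Assumes the standard weighting: RPI = 0.25*WP + 0.75*SoS,
# with SoS = (2*OWP + OOWP) / 3. All numbers below are hypothetical.

def rpi(wp, owp, oowp):
    sos = (2 * owp + oowp) / 3  # standard SoS blend: 2/3 OWP + 1/3 OOWP
    return 0.25 * wp + 0.75 * sos

# The same team (own winning pct .700, same OOWP) under two schedules:
strong_sched = rpi(wp=0.700, owp=0.600, oowp=0.500)  # mostly good opponents
weak_sched   = rpi(wp=0.700, owp=0.350, oowp=0.500)  # heavy dose of bad teams

print(round(strong_sched, 3), round(weak_sched, 3))
```

Same record, very different RPI: the weak-schedule version loses roughly an eighth of a point of rating purely because the 75%-weighted SoS term collapses when the opponents' records are bad. That is the millstone the three bad B1G teams hang on the eleven good ones.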
Think about that for a second. The very fact that the Big Ten is so good this year (in fact, Creme conjectures it has 11 “good teams” out of 14) is responsible for the Big Ten collectively having its RPIs dragged into the gutter. In this oddball scenario (which very well might happen this year), the very excellence of the Big Ten might produce worse RPIs, so that fewer Big-Ten teams than usual make the tournament if a simple RPI cutoff like #48 is used. Unless someone with common sense and basketball knowledge (like Charlie Creme) can prevail on the Committee to accept Big-Ten teams with RPIs ranging from #47 to #95. We all know the Gophers didn’t get in last year with a SoS-dragged-down RPI of #102, so fat chance that Purdue gets in at #95.
This is another example of how RPI is not a very good measure of relative team quality, since in certain scenarios it is an anti-measure, not a measure.
The very strong potential of such an anti-measure RPI effect on the super-strong Big Ten this year, with the very strength of the conference hanging a millstone on the neck of Big-Ten teams’ RPIs, makes it that much more important for Big-Ten teams not to play non-conference teams expected to have very bad RPIs. On the positive side, Whalen has made some huge improvements to our non-conference schedule this year (over the Marlene-generated bad SoS of last year). On the negative side, we’re still playing too many charity teams. See the adjacent post on current RPIs of our opponents thus far. Our gift of a game to American not only nearly resulted in a really bad loss, but ironically may yet end up being the defining factor that boots Minnesota out of NCAA Tournament play (just because we played them and thus received a big kick in the (a)S(o)S). All for the convenience of picking up another non-conference road game on our swing through D.C. Now we have to fight that much harder to keep our RPI better than #48 through the Big-Ten season. Moral of the story: try to schedule zero non-conference games against teams that will end up with an RPI worse than #200.
Note that Notre Dame should end up with an RPI of #170. So as @thatjanelpick notes, we ironically get no benefit just from playing the Irish, although we do get some small benefit from beating them.
For one thing, Northwestern, whose RPI is #10 now (currently 1st in the B1G), will move to the middle of the pack. Not to worry: NW is not going to beat us all. Its current position at the top of the RPI stack is just due to a good record so far plus a really good SoS so far. Ohio State, at #12 RPI right now (2nd in the B1G), is there because of an even better SoS than NW (plus a so-so record so far). Ohio State will move way down in RPI, but its final SoS is projected at 7. That is strong enough to put its projected RPI above Nebraska’s, in spite of a projected losing record for Ohio State and a winning record for Nebraska. The Ohio State coach was shrewd: in a rebuilding year, he not only went out and got a strong rookie class, but also lined up a tough schedule that just might keep them in the playoffs in spite of a losing record (whereas last year the Gophers had a winning record but were booted out of the playoffs simply due to a poor SoS).
Indiana, at #19 RPI now (versus Maryland’s #42), will not be better than Maryland at end of season. Nolan projects Maryland to come in first in the Big Ten at RPI #5, with Indiana right behind at #7. In both cases he projects great records along with fairly strong SoSes, hence the good RPIs.
Across the board, comparing the two tables, the projected end-of-season RPIs look more sensible than the current RPIs. Although who would have thought NW would be so good? And I’m not sure I would have picked Indiana for second place.
Minnesota ends up in 6th place in projected RPI at #47, with a projected SoS of 38. I think (or at least hope) that Minnesota does better than the projected 20-9 (10-8) record, with 10-1 in non-conference. But that requires that we win out the remaining non-conference games, plus win some tough Big-Ten games.
In comparison to the projected #47 RPI ranking, Charlie Creme’s bracketology assessment puts the Gophers at an NCAA rank ranging from about #21 to #24. That’s quite literally half the ranking number that Warren Nolan’s RPI projection would have us at. Charlie essentially has us on the tail of the AP Top-25 poll by end of season. Again, the great discrepancy might well be, as I conjectured above, a simple mathematical result of the superiority of the Big Ten this year in conjunction with the stupidity of the RPI. Could our season end with us on the AP ranked-teams list and on the Big Dance Bubble at the same time?
We can conclude that:
(a) Current RPIs after 8 games are whack.
(b) RealtimeRPI's Gamer end-of-season predictions are whack.
(c) Warren Nolan's end-of-season predictions are less whack, but perhaps not completely accurate (we’ll see). Warren Nolan is also a really nice website for current RPI information - perhaps even better than the official NCAA RPI site.
(d) The end-of-season projections seem way off from a more realistic estimate, Charlie Creme’s bracketology. One possibility is simple inaccuracy in the Warren Nolan projections, but we fear there may be something more evil in play. Specifically, it could be that the very quality of all but 3 of the Big Ten this year will assess a stiff RPI penalty on the 11 quality Big-Ten teams, simply because they are forced to play the whole conference, especially including the 3 bad teams whose really bad RPIs drag down the RPIs of the good B1G teams. If so, this is just another sign that RPI is mathematically invalid and bankrupt as an intended quality metric. The way RPI is currently defined is just wrong (if we want it to be a measure of how good the teams are, which we do). Even a change of emphasis from the current RPI ~ 75% * SoS + 25% * WonLossFactor to an alternate formulation such as (as I’ll call it) RPI5050 ~ 50% * SoS + 50% * WonLossFactor would be a huge step toward righting the ship on the already-sunk-to-the-bottom-of-the-ocean RPI metric (and probably an improvement over NET as well).
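To see what the proposed reweighting would actually change, here is a quick Python sketch comparing the two weightings on two hypothetical team profiles (a Nebraska-like team with a winning record but weak SoS, and an Ohio State-like team with a losing record but strong SoS). The WLF and SoS values are made up, on a 0-1 scale, purely for illustration:

```python
# Compare the current RPI weighting to the proposed "RPI5050" reweighting.
# WLF = won/loss factor, SoS = strength of schedule, both on a 0-1 scale.
# All numbers are hypothetical.

def rpi_current(wlf, sos):
    return 0.25 * wlf + 0.75 * sos   # ~75% SoS + 25% WonLossFactor

def rpi5050(wlf, sos):
    return 0.50 * wlf + 0.50 * sos   # equal weight

good_record_weak_sos  = dict(wlf=0.620, sos=0.480)  # Nebraska-like profile
bad_record_strong_sos = dict(wlf=0.450, sos=0.620)  # Ohio State-like profile

for f in (rpi_current, rpi5050):
    a = f(**good_record_weak_sos)
    b = f(**bad_record_strong_sos)
    print(f.__name__, "prefers", "good-record team" if a > b else "strong-SoS team")
```

With these (invented) inputs, the current weighting ranks the losing team with the tough schedule higher, while RPI5050 flips the order, which is exactly the Ohio State-over-Nebraska oddity in the projection table.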
(e) I will leave this point in, even though it was my initial use of wrong-year Warren Nolan data that triggered the thought, since on further consideration the thought is probably still valid. The differences between Gamer’s EoS projections and Warren Nolan’s EoS projections might turn out to be more severe than one might expect (especially early in the season). There is strong potential for a mathematical instability in such projection algorithms, as follows. A seemingly small error in your projection algorithm can be greatly magnified by compounding: for end-of-season predictions, your RPIs are wrong, which makes your SoSes wrong, and wrong SoSes feed back recursively into even-further-wrong RPIs, which continue to compound to make your EoS projections wronger and wronger, almost without end. That’s how you could conceivably get Warren Nolan projections that disagree with Gamer projections by a lot, in spite of both starting from the same (limited) historical won/loss data. I currently have no evidence that this happens in practice, but it’s easy to see how it could; if it did, it would be somewhat similar to the Butterfly Effect or any similar chaotic process.
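The first step of that compounding is easy to demonstrate with a toy (this is my own sketch, not either site's actual algorithm): when two teams are rated nearly equally, a tiny perturbation in one rating flips a predicted game, which swings both teams' projected records. In a real projection those flipped records would then feed back into SoS and RPI on the next pass, compounding further. All ratings below are invented:

```python
# Toy sensitivity demo: a 0.002 rating nudge flips a predicted game
# and swings two teams' projected records. Hypothetical ratings.

base = {"A": 0.600, "B": 0.599, "C": 0.500, "D": 0.400}
perturbed = dict(base, B=0.601)   # tiny nudge: B now edges out A

# Single round-robin; predicted winner is simply the higher-rated team.
schedule = [("A", "B"), ("A", "C"), ("A", "D"),
            ("B", "C"), ("B", "D"), ("C", "D")]

def projected_wins(ratings):
    wins = {t: 0 for t in ratings}
    for a, b in schedule:
        wins[a if ratings[a] > ratings[b] else b] += 1
    return wins

print(projected_wins(base))       # A projected to sweep
print(projected_wins(perturbed))  # now B sweeps instead
```

A 0.002 change in one input rating moves a whole game from A's column to B's; iterate that through an RPI-SoS feedback loop and two projection algorithms starting from nearly identical data can plausibly land far apart.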