Some "evidence" in support of the star ranking system

GoAUpher
I know the can of worms this is opening, but I found the analysis interesting. The Daily Gopher has a post up that references some work by Matt Hinton, the blogger behind Dr. Saturday. Hinton took a look at star rankings versus NFL draft performance and found that players with higher star rankings tended to be drafted earlier.

Obviously this is not an exact science: 5-stars still flop, while 2-stars like Decker can still shine. And everyone should note the limited data set Matt worked with (the sample size is 3 drafts). All caveats aside, what he found was interesting, and it would seem to back up the general point many posters have made, which is that star rankings are useful enough if you look at them as a general mark of the overall talent level a team has.

Now let's see how long it takes before this thread devolves into an anti/pro Mason or Brewster slugfest.
 

Anyone with half an idea of how to apply statistics already understands the strong link between star rankings and performance. It really requires an incredible amount of ignorance not to understand the relationship. Unfortunately, there is plenty of that to go around, and I'm sure there will be plenty of fine examples of said ignorance in this thread soon.
 

I was expecting some anecdotal evidence to dispute the statistics.
 

At least 75-80% of all BCS-conference scholarships have to go to football players who are 3* or lower. There are a total of 120 teams in D1-A. If each school gives out 20 scholarships, that would mean at least 63-75% of scholarships in D1-A must go to a 2* or unranked player. So the relevant question is how well they can evaluate talent at the 3*, 2*, or unranked level.
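A back-of-envelope sketch of that math, with per-class star counts that are purely illustrative guesses (not Rivals' actual totals):

```python
# Rough check of the scholarship arithmetic above.
# The 4*/5* counts are illustrative guesses, NOT Rivals' actual totals.
TEAMS = 120        # D1-A programs
PER_TEAM = 20      # assumed new scholarships per team per year

pool = TEAMS * PER_TEAM          # ~2,400 scholarship slots per class
five_stars = 35                  # hypothetical 5* recruits in a class
four_stars = 300                 # hypothetical 4* recruits in a class

top_share = (five_stars + four_stars) / pool
print(f"4*/5* share of the pool: {top_share:.1%}")      # ~14%
print(f"3* or lower:             {1 - top_share:.1%}")  # ~86%
```

Under those guesses, the vast majority of scholarship slots do get filled by 3* and lower players, which is exactly the point being made.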

Why did they throw all of the unranked players in with the 2*'s in the second two tables?
 

I was expecting some anecdotal evidence to dispute the statistics.

The psych major in me just had to put the quotes around "evidence", since the sample is pretty limited and my stats profs would kill me if I bought into these numbers without seeing any proven statistical significance. :)
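For anyone curious what that test would look like, here's a minimal sketch of a chi-square test of independence between star rating and draft outcome. The counts below are placeholders, not the numbers from Hinton's tables:

```python
# Sketch of the significance test being asked for: a chi-square test of
# independence between star rating and drafted/not-drafted status.
# The counts are placeholders, NOT the numbers from Hinton's tables.
from scipy.stats import chi2_contingency

#         drafted, not drafted
table = [
    [25,    10],   # 5-star
    [80,   220],   # 4-star
    [150, 1350],   # 3-star
    [60,  1540],   # 2-star / unranked
]

chi2, p, dof, _ = chi2_contingency(table)
print(f"chi2 = {chi2:.1f}, dof = {dof}, p = {p:.3g}")
# A small p-value would mean draft odds genuinely vary by star rating.
```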
 


Why did they throw all of the unranked players in with the 2*'s in the second two tables?

My guess is that it's because Rivals doesn't rank anyone below 2 stars. They may imply that a 1-star is a walk-on-caliber player, but they never actually hand out that rating, so I can't say I disagree with grouping the unranked players with the 2-stars.
 

I have a big problem with throwing everybody into the 2-star category.

First, it lumps players based as much on where they live as on actual talent. Southern players have more scouts in their areas and more 5-star players around them, so they have a better chance of being scouted themselves and therefore receive a more accurate rating, as opposed to the "we did not see you, so here's two stars" treatment. Second, vagaries are what cause statistics to be inaccurate; if you want credible results, be honest and say "we never saw you, so we have to leave you blank." I would also like to see the difference between the first rankings and the last for some of these 5-star players. Were they 5-stars the whole time, or did they move up based on the "Notre Dame" effect?
 

First, it lumps players based as much on where they live as on actual talent. Southern players have more scouts in their areas and more 5-star players around them, so they have a better chance of being scouted themselves and therefore receive a more accurate rating, as opposed to the "we did not see you, so here's two stars" treatment. Second, vagaries are what cause statistics to be inaccurate; if you want credible results, be honest and say "we never saw you, so we have to leave you blank." I would also like to see the difference between the first rankings and the last for some of these 5-star players. Were they 5-stars the whole time, or did they move up based on the "Notre Dame" effect?

I've never seen any real evidence of this being true. I'd like to see someone look into it, though.
 

First, it lumps players based as much on where they live as on actual talent. Southern players have more scouts in their areas and more 5-star players around them, so they have a better chance of being scouted themselves and therefore receive a more accurate rating, as opposed to the "we did not see you, so here's two stars" treatment. Second, vagaries are what cause statistics to be inaccurate; if you want credible results, be honest and say "we never saw you, so we have to leave you blank." I would also like to see the difference between the first rankings and the last for some of these 5-star players. Were they 5-stars the whole time, or did they move up based on the "Notre Dame" effect?

That isn't really true at all. The players who get a closer look from the scouts are the players who attend the recruiting "combines." Every other player has the same opportunity to send their video in to the recruiting services. There may be some bias built into perceptions about competition, so a comparable highlight film from Texas will be perceived as better than the film from Minnesota, but it has nothing to do with the number of scouts in a region, because that isn't how the rating services work (they have national analysts, not regional ones). Another regional factor that may impact ratings is that HS coaches in the South generally have more time to put together film for players and help with recruiting, because they are full-time coaches rather than the full-time teachers and part-time coaches you see in the North.

But regardless, the statistical evidence appears to validate the ratings, so even if there is bias within them, they seem to show a correlation with future performance (I worded that carefully because of the lack of data about statistical significance). Therefore, even if you think there may be some bias in the data, it does not seem like the ratings should be written off entirely. And the bias you describe would, if anything, make the statistical evidence for the importance of ratings weaker, so I can't say that any bias appears to have been a large factor.
 



First, it lumps players based as much on where they live as on actual talent. Southern players have more scouts in their areas and more 5-star players around them, so they have a better chance of being scouted themselves and therefore receive a more accurate rating, as opposed to the "we did not see you, so here's two stars" treatment. Second, vagaries are what cause statistics to be inaccurate; if you want credible results, be honest and say "we never saw you, so we have to leave you blank." I would also like to see the difference between the first rankings and the last for some of these 5-star players. Were they 5-stars the whole time, or did they move up based on the "Notre Dame" effect?

Really, your theory has zero evidence to support it. First off, sites like Rivals have a certain number of talent evaluators who grade prospects, and they send those guys all over the country, so it's not just local guys grading all the Southern talent. Also, in addition to getting out and watching games, they grade guys based on the film they receive on each prospect, so it doesn't matter where they're from, since everyone can send film in. They also grade guys based on camp and combine performances, which are held nationwide all summer. Some guys from areas not known for talent may get overlooked because of the competition, but on the flip side, some guys in areas with heavy talent get overlooked because they don't stand out as much.

As far as the "Notre Dame" effect, that is not real, IMO. That would be too obvious, and it would piss other fan bases off to the point where they'd start writing the sites off. Now, I have heard that sometimes the evaluators do take a guy's offers into account, and that makes sense. If you have a guy as a 2-star player but every major team has offered him, maybe you need to re-evaluate the tape, since the top staffs in the country all think he's worthy of an offer.
 

I would guess that they use the 2* ranking less than half as often as the 3* ranking.

And if that is true, this analysis would suggest that 3*'s are no more likely to get drafted than 2*'s. But by throwing unranked players in with the 2*'s, it makes the 3* rankings appear to be more significant than they really are.

This would confirm that Rivals (and the others) do a great job of singling out the high-end talent (the top 25%), but when it comes to the remaining 75%, their information may as well be random. And I think the remaining 75% is where the games are won and lost for programs like Minnesota.
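The dilution effect being described is easy to show with toy numbers; everything below is invented purely to illustrate the mechanism:

```python
# Toy illustration of the pooling complaint above: even if 2* and 3*
# players get drafted at the SAME rate, lumping a large unranked group
# (with a much lower rate) into the 2* bucket makes 3* look better.
# All counts and rates here are invented for illustration.
two_star, two_drafted = 1000, 30          # assumed 3.0% draft rate
three_star, three_drafted = 1000, 30      # same assumed 3.0% draft rate
unranked, unranked_drafted = 4000, 20     # assumed 0.5% draft rate

pooled_drafted = two_drafted + unranked_drafted
pooled_n = two_star + unranked

print(f"3* draft rate:            {three_drafted / three_star:.1%}")  # 3.0%
print(f"2* + unranked draft rate: {pooled_drafted / pooled_n:.1%}")   # 1.0%
# The pooled bucket makes the identical 2* rate look three times worse.
```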
 

I would guess that they use the 2* ranking less than half as often as the 3* ranking.

And if that is true, this analysis would suggest that 3*'s are no more likely to get drafted than 2*'s. But by throwing unranked players in with the 2*'s, it makes the 3* rankings appear to be more significant than they really are.

This would confirm that Rivals (and the others) do a great job of singling out the high-end talent (the top 25%), but when it comes to the remaining 75%, their information may as well be random. And I think the remaining 75% is where the games are won and lost for programs like Minnesota.

Your initial assumption about the 2* rating being used half as frequently as the 3* is wrong. The 2* rating is far more prevalent than the 3* rating, which makes your conclusion invalid.

Also, the "non-rated" players being added to the 2-stars are just any walk-ons who eventually make it to the NFL, counted as 2-star recruits. In essence, they go into the data set at a 1:1 success ratio (walk-ons who don't make it are ignored, while walk-ons who do make it count as 2-stars, which counts against the experts for ranking them lower). If you put them into the pool as non-rated, it wouldn't speak to the abilities of the rating services.
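That survivorship bias is also easy to show with toy numbers (again, everything here is invented for illustration):

```python
# Toy illustration of the walk-on survivorship bias described above.
# Walk-ons only enter the data set IF they reach the NFL, so they are
# added as pure successes. All numbers below are invented.
two_star, two_drafted = 1000, 30   # assumed true 3.0% draft rate
walkons_drafted = 15               # the failed walk-ons are never counted

biased_rate = (two_drafted + walkons_drafted) / (two_star + walkons_drafted)
print(f"true 2* draft rate:     {two_drafted / two_star:.1%}")  # 3.0%
print(f"with walk-ons mixed in: {biased_rate:.1%}")             # ~4.4%
# The bucket's rate gets inflated, making the experts' low ratings look
# worse than they actually are.
```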
 



