Analysing the Analysts: the ProFootballFocus Discussion Thread

Smiling Joe Hesketh

Throw Momma From the Train
Moderator
SoSH Member
May 20, 2003
36,057
Deep inside Muppet Labs
Ah I see.
 
For me, the similarity between a company like BIS and PFF might lie in baseball defensive statistics, which would also fall under the "slouching toward amateur scouting" portion of the program that PFF specializes in.
 
But there's a fundamental difference between the two sports: baseball is almost entirely driven by an individual matchup of pitcher vs batter. When an event occurs in baseball, there's far less noise taking place around it because a batter's success doesn't depend on 5 or 6 offensive linemen properly blocking for him, or a wide receiver running a route to catch a properly thrown ball. An at bat is a discrete, small event that can be statistic-ed on a pitch-by-pitch basis. A running play in football cannot.
 

williams_482

Member
SoSH Member
Jul 1, 2011
391
The closest comparison to PFF ratings in baseball is probably DRS/UZR. We all know that those metrics are highly unstable in small samples, and some people here insist that this means they can't possibly be worth anything, but for the most part we believe in 3-year samples of UZR giving a fairly accurate representation of defensive results. Like PFF, we are not privy to individual breakdowns of which play was worth what (something that I would be very interested to see but certainly don't expect any time soon), but unlike PFF we know that the actual values produced are grounded in statistical analysis: how often is a ball like this one caught, how often is it a single, a double, etc. Sure, stringers can be biased or inaccurate for other reasons, but I will gladly take the analytically weighted opinions of a group of stringers over my own inevitably biased views.
 
PFF, of course, is an accumulation of unweighted and completely subjective opinions by non-expert individuals attempting to judge situations even the best of the best (i.e., Belichick) consider very difficult and sometimes impossible to properly evaluate. I still think PFF should outperform the eye test simply because they have multiple people contributing to the grading, but that doesn't make their ratings accurate.
 

coremiller

Member
SoSH Member
Jul 14, 2005
5,872
DRS/UZR/BIS have also been much more forthcoming about their methodology.  We don't know all the details about how those numbers are calculated, but we have a fairly good idea of their approach -- tracking batted-ball location data by breaking the field down into lots of smaller zones, looking at average outcomes in each zone based on that batted-ball data, comparing each player to those average outcomes, adjusting for park/pitcher-type/context, etc.  And in theory that method would be replicable if you had access to the underlying data.  We might question how the batted-ball zones are tracked and whether that's subjective, how the contextual adjustments are done, etc., but we can quibble with their methods because we know something about their methods.  It's not perfect, but it's something.  
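The zone-based approach described above can be sketched in a few lines. This is a toy illustration, not BIS/UZR's actual algorithm; the zone names, chance data, and scoring are invented for the example, and the real systems layer park/pitcher/context adjustments on top:

```python
# Toy sketch of a zone-based defensive metric: compare a fielder's outs
# in each zone to the league-average out rate for that zone.
# All zone names and chance data below are made up.
from collections import defaultdict

# (zone, was_out) tuples for every ball hit into the fielder's zones
league_chances = [
    ("shallow_left", True), ("shallow_left", True), ("shallow_left", False),
    ("deep_left", True), ("deep_left", False), ("deep_left", False),
]
player_chances = [
    ("shallow_left", True), ("shallow_left", True),
    ("deep_left", True), ("deep_left", False),
]

def out_rates(chances):
    """League-average out rate per zone."""
    outs, total = defaultdict(int), defaultdict(int)
    for zone, was_out in chances:
        total[zone] += 1
        outs[zone] += was_out
    return {z: outs[z] / total[z] for z in total}

league = out_rates(league_chances)

# Plays made above (or below) what an average fielder converts,
# summed over every chance the player saw
plays_above_avg = sum(was_out - league[zone] for zone, was_out in player_chances)
print(round(plays_above_avg, 2))  # → 1.0
```

The summed difference plays the role of a "plays above average" figure; a real implementation would also weight each zone by the run value of the hits that land there (single vs. double, etc.).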
 
PFF is the opposite.  We can't quibble with their methods because we know nothing about how they actually assign the grades.  From our perspective, their method might as well be, "we watch the film and make something up."  Maybe they're doing something more objective and sophisticated than that, but they haven't disclosed it if they are.
 

bowiac

Caveat: I know nothing about what I speak
Lifetime Member
SoSH Member
Dec 18, 2003
12,945
New York, NY
coremiller said:
PFF is the opposite.  We can't quibble with their methods because we know nothing about how they actually assign the grades.  From our perspective, their method might as well be, "we watch the film and make something up."  Maybe they're doing something more objective and sophisticated than that, but they haven't disclosed it if they are.
I don't understand this. They have an explanation out there that gives you a decent idea of what they're doing.
 
Yes it's subjective, but I don't see that as an inherent flaw. Many things are subjective. MLB scouting is subjective, and yet it has value. The proof with MLB scouting is that, to a large extent, it produces results (i.e., good players going forward). It's opaque, subjective, but we accept that, because it produces.
 
To reiterate, the test is whether it works. I haven't seen a thorough analysis of that. My limited look at it suggests it does work, but I don't expect people to take my word for it. It's a great avenue for research, for questions, for "deeper dives".
 

Devizier

Member
SoSH Member
Jul 3, 2000
19,697
Somewhere
 
 
Abstracting a number from one player's performance functions in the opposite fashion. It takes a collection of data and contracts it. Information is lost, nearly all of it, and the resulting number, the grade, teaches us nothing, allows for no discussion, and virtually ensures corruption in the form of bias, sloppiness and a lack of accountability.
 
Yep.
 
eta: the article unfortunately veers off into bullshit land at the section titled "A Culture Against "Hating"" ...
 

coremiller

Member
SoSH Member
Jul 14, 2005
5,872
bowiac said:
I don't understand this. They have an explanation out there that gives you a decent idea of what they're doing.
 
Yes it's subjective, but I don't see that as an inherent flaw. Many things are subjective. MLB scouting is subjective, and yet it has value. The proof with MLB scouting is that, to a large extent, it produces results (i.e., good players going forward). It's opaque, subjective, but we accept that, because it produces.
 
To reiterate, the test is whether it works. I haven't seen a thorough analysis of that. My limited look at it suggests it does work, but I don't expect people to take my word for it. It's a great avenue for research, for questions, for "deeper dives".
 
That link doesn't tell us what they're doing.  There's nothing in that link that would allow you to look at the film, look at the grades, and see how the grades resulted from the film.  Or, say you were hired as a PFF analyst tomorrow and all they gave you as training was the explanation at that link, and you had to start grading player performance.  Would you have any idea how to do it?  I'm reasonably knowledgeable about football, and I wouldn't have a clue.  Their "explanation" is just a lot of hand-waving that says, basically, "we watch the film and make something up."
 
They may have a more rigorous process than what they actually disclose that makes their analysis more reliable.  And if there are studies showing their numbers have explanatory or predictive power, then that's useful.
 
On a separate note, it's also highly problematic that they say, "we feel strongly about our ability to grade games based on the broadcast footage."  
 

mascho

Kane is Able
SoSH Member
Nov 30, 2007
14,952
Silver Spring, Maryland
coremiller said:
 
That link doesn't tell us what they're doing.  There's nothing in that link that would allow you to look at the film, look at the grades, and see how the grades resulted from the film.  Or, say you were hired as a PFF analyst tomorrow and all they gave you as training was the explanation at that link, and you had to start grading player performance.  Would you have any idea how to do it?  I'm reasonably knowledgeable about football, and I wouldn't have a clue.  Their "explanation" is just a lot of hand-waving that says, basically, "we watch the film and make something up."
 
They may have a more rigorous process than what they actually disclose that makes their analysis more reliable.  And if there are studies showing their numbers have explanatory or predictive power, then that's useful.
 
On a separate note, it's also highly problematic that they say, "we feel strongly about our ability to grade games based on the broadcast footage."  
 
This might be my biggest beef with them. For players like wide receivers it is very difficult to understand exactly what they are doing on a given play without the All-22 tape. Edelman did a lot of things Sunday that were huge positives for the passing game but would not show up on a scoresheet or be visible on the broadcast footage. But his ability to put pressure on a secondary by getting open forced defenders to break on him, creating space for other guys underneath like LaFell and Amendola. If his route, for example, creates room for Amendola to pull in a 14-yard reception and convert a 3rd and long, that's a positive play. And if you can't really see that on the broadcast footage, you'll miss that.
 

Smiling Joe Hesketh

Throw Momma From the Train
Moderator
SoSH Member
May 20, 2003
36,057
Deep inside Muppet Labs
"Subjective" scouting in baseball resulting in a number happens, but it happens when they grade a particular player's skills (i.e., his curve is a 50, his FB a 70, his change a 60). But in baseball they also have multiple cross-checkers because grades based on observation are inherently unreliable, and second opinions are mandatory. And a prospect's tools grades are understood to be squishy figures at best, not concrete ratings to stand the test of time.
 
Similarly, in those UZR ratings, which we understand can be highly misleading in small samples, at least the observers have divided the ballpark space into zones first, and then they're actually collecting data as to balls landing in or hit to those zones. PFF isn't doing that.
 
PFF is making up grades based on uneducated observations, and then assigning numbers to each player's performance and essentially saying "that's it." That's total garbage in, and we're getting garbage out.
 

bowiac

Caveat: I know nothing about what I speak
Lifetime Member
SoSH Member
Dec 18, 2003
12,945
New York, NY
SJH, first, they claim they are using multiple cross-checkers. Second, almost everything you just said about scouting/UZR is also true of PFF numbers. I think it's understood they're squishy figures at best, unreliable in small samples, not concrete ratings to stand the test of time.
 
Mark Schofield said:
This might be my biggest beef with them. For players like wide receivers it is very difficult to understand exactly what they are doing on a given play without the All-22 tape. Edelman did a lot of things Sunday that were huge positives for the passing game but would not show up on a scoresheet or be visible on the broadcast footage. But his ability to put pressure on a secondary by getting open forced defenders to break on him, creating space for other guys underneath like LaFell and Amendola. If his route, for example, creates room for Amendola to pull in a 14-yard reception and convert a 3rd and long, that's a positive play. And if you can't really see that on the broadcast footage, you'll miss that.
They claim they're using the All 22: from the link above:
Using All-22
 
While we feel strongly about our ability to grade games based on the broadcast footage, the All-22 has been an invaluable addition to our processes. The original analyst is instructed to flag any plays from the broadcast footage that need more information or a better view from the coach’s film. The second and third analysts are then able to pinpoint these plays along with others to get a clearer, more decisive look at every play. The use of All-22 has also allowed us to expand our analysis of special teams plays into greater depth and breadth than is possible from broadcast footage.
 

NortheasternPJ

Member
SoSH Member
Nov 16, 2004
19,481
All-22 doesn't come out until midweek, though, so all their initial ratings are off the TV broadcast.

I can't believe anyone would try to judge all players based off a TV broadcast. It's insanity.
 

coremiller

Member
SoSH Member
Jul 14, 2005
5,872
Is UZR unreliable in small sample sizes because of measurement error, or just because defensive performance is wildly variable?  Offensive stats are also unreliable in small sample sizes, but not because there's anything wrong with the stats.  
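One way to see how far sampling noise alone goes, with no measurement error at all (hypothetical numbers, seeded so it's reproducible): simulate a fielder whose true conversion rate is fixed and watch the observed rate swing at small chance counts.

```python
# Sampling noise alone, no measurement error: a fielder with a true 70%
# conversion rate, observed over small vs. large numbers of chances.
# Chance counts are rough illustrative guesses, not real UZR inputs.
import random

random.seed(42)
TRUE_RATE = 0.70

def observed_rate(n_chances):
    """Fraction of chances converted in one simulated sample."""
    return sum(random.random() < TRUE_RATE for _ in range(n_chances)) / n_chances

small = [observed_rate(50) for _ in range(1000)]    # ~partial season of chances
large = [observed_rate(450) for _ in range(1000)]   # ~three seasons of chances

spread = lambda xs: max(xs) - min(xs)
print(f"50 chances:  observed rate spread {spread(small):.2f}")
print(f"450 chances: observed rate spread {spread(large):.2f}")
```

Even with a perfectly measured stat, the small-sample spread dwarfs the multi-season one, which is consistent with treating three-year UZR as much more trustworthy than one-year UZR regardless of any measurement issues.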
 
Their method involves starting from the broadcast footage, and flagging plays that "need more review."  That means they're not reviewing every play from All-22.  I think just about every passing play requires All-22 viewing to understand what happened -- if you can't see the secondary's coverage alignments as the routes develop, you can't judge defensive performance at all.  Maybe you can get away with broadcast footage for running plays, as at least all the relevant action is on screen, although the sideline angle is sub-optimal for judging interior line play on both sides.
 

bowiac

Caveat: I know nothing about what I speak
Lifetime Member
SoSH Member
Dec 18, 2003
12,945
New York, NY
NortheasternPJ said:
All-22 doesn't come out until midweek, though, so all their initial ratings are off the TV broadcast.

I can't believe anyone would try to judge all players based off a TV broadcast. It's insanity.
Okay, but now we're holding it against them that they issue initial ratings in the first place? Use the initial ratings at your own peril (use them all at your peril!).
 
coremiller said:
Is UZR unreliable in small sample sizes because of measurement error, or just because defensive performance is wildly variable?  Offensive stats are also unreliable in small sample sizes, but not because there's anything wrong with the stats. 
It's mostly a ball distribution issue with UZR I would imagine. It takes a long time for those distributions to sort themselves out.
 
Their method involves starting from the broadcast footage, and flagging plays that "need more review."  That means they're not reviewing every play from All-22.  I think just about every passing play requires All-22 viewing to understand what happened -- if you can't see the secondary's coverage alignments as the routes develop, you can't judge defensive performance at all.  Maybe you can get away with broadcast footage for running plays, as at least all the relevant action is on screen, although the sideline angle is sub-optimal for judging interior line play on both sides.
This makes sense to me, although broadcast replays often end up getting even the coverage alignments. But yes, if they're reviewing only 5% of plays on the All-22, then that probably hurts their results.
 

Shelterdog

Well-Known Member
Lifetime Member
SoSH Member
Feb 19, 2002
15,375
New York City
Smiling Joe Hesketh said:
 
 
PFF is making up grades based on uneducated observations, and then assigning numbers to each player's performance and essentially saying "that's it." That's total garbage in, and we're getting garbage out.
 
 
I think it's fairer to say that we don't know what's going in--if they have 20 super nomarios churning tape, I'd love to see their numbers; they're probably great. If they have 20 shelterdogs, well, not so much. But since we don't know more about the process, and no one has shown a reason to trust that their numbers are useful, why bother figuring out whether we're getting garbage or gold out of the process?   As I mentioned, if you could show some PFF success stories (lots of PFF sleepers becoming superstars, or getting shockingly high free agent contracts showing actual teams loved them, or whatnot), I'd look into them. But when they say Ben Hartsock was the number two tight end in football in 2013 and he's more or less out of the league now, well, why bother?
 
If it hasn't been mentioned, they're clearly popular because they give every writer an easy way to support whatever argument they want to make: "X was a bad draft pick because he's the lowest-rated C by PFF, Y is overrated because he's the worst corner in the league," etc.
 

Super Nomario

Member
SoSH Member
Nov 5, 2000
14,031
Mansfield MA
Shelterdog said:
 
 
I think it's fairer to say that we don't know what's going in--if they have 20 super nomarios churning tape, I'd love to see their numbers; they're probably great. If they have 20 shelterdogs, well, not so much. But since we don't know more about the process, and no one has shown a reason to trust that their numbers are useful, why bother figuring out whether we're getting garbage or gold out of the process?   As I mentioned, if you could show some PFF success stories (lots of PFF sleepers becoming superstars, or getting shockingly high free agent contracts showing actual teams loved them, or whatnot), I'd look into them. But when they say Ben Hartsock was the number two tight end in football in 2013 and he's more or less out of the league now, well, why bother?
 
If it hasn't been mentioned, they're clearly popular because they give every writer an easy way to support whatever argument they want to make: "X was a bad draft pick because he's the lowest-rated C by PFF, Y is overrated because he's the worst corner in the league," etc.
I read your line about Hartsock a couple times and I was like, what the shit is Shelterdog talking about? Because I tweeted this in January:
Dave Archibald ‏@SOSH_davearchie  Jan 13
@PFF Cumulatively, TEs are -409.8 in run blocking. Only -53.8 in 2012. Still seems crazy though. Any thoughts?
 
They did reply:
Pro Football Focus ‏@PFF  Jan 13
@davearchie Will work to make all years the same in the offseason
 
I guess they did that, because the 2013 TE page is MASSIVELY different than it was the last time I saw it.
 

Shelterdog

Well-Known Member
Lifetime Member
SoSH Member
Feb 19, 2002
15,375
New York City
Super Nomario said:
I read your line about Hartsock a couple times and I was like, what the shit is Shelterdog talking about? Because I tweeted this in January:
Dave Archibald ‏@SOSH_davearchie  Jan 13
@PFF Cumulatively, TEs are -409.8 in run blocking. Only -53.8 in 2012. Still seems crazy though. Any thoughts?
 
They did reply:
Pro Football Focus ‏@PFF  Jan 13
@davearchie Will work to make all years the same in the offseason
 
I guess they did that, because the 2013 TE page is MASSIVELY different than it was the last time I saw it.
 
 
So what do you think they did? (A) watch every play of 2013 again and re-grade, or (B) do some mathematical transformation (e.g., add 15 PFF points per team, split among all of the team's TEs by rushing downs)?
 

Super Nomario

Member
SoSH Member
Nov 5, 2000
14,031
Mansfield MA
Shelterdog said:
 
 
So what do you think they did? (A) watch every play of 2013 again and re-grade, or (B) do some mathematical transformation (e.g., add 15 PFF points per team, split among all of the team's TEs by rushing downs)?
I asked them on Twitter. I'm guessing B - which, in my opinion, would be the wrong way to do it.
 
I don't have a huge problem with them going back and changing their figures - that's admirable in a lot of ways, though I wish they were more transparent about it - but they've constructed a system predicated on comparing players to "average" despite demonstrably having no idea what "average" really is.
 

EricFeczko

Member
SoSH Member
Apr 26, 2014
4,856
bowiac said:
Why? I'm serious - DVOA isn't very transparent, but I know as much about its predictive capability as I know about SRS (completely transparent and easy to calculate). Transparency is a good thing, but predictive capability isn't one of the things it seems to be important for.
 
Which is why I referenced a system like the EPA model, which has been shown to have some predictive power and to be reliable across a sufficient sample size; are PFF's data actually that predictive? Has anyone done a reliability analysis on the data?
 
bowiac said:
I basically disagree with your entire premise. If we wanted something that confirmed EPA/the eye test, then we could just use EPA/the eye test. There's no reason to look for other sources of data if we just want to confirm what we already know. The value of PFF data comes from the places where it disagrees.
Then you misunderstand my premise: we want something that validates PFF, because the methodology and raw data are not transparent. Since we have no way of verifying the data internally, we are forced to check it against other metrics/the eye test in the cases where those should be accurate. For example, Brady was regarded as the unanimous MVP in 2010. By Brian's EPA model, Brady was ranked 2nd behind Rodgers.
 
PFF ranked Brady behind Philip Rivers, Matt Ryan, Peyton Manning, Aaron Rodgers, and Drew Brees. There is no good explanation for why PFF disagrees:
 
 
Oh brother. This is going to take some reasoning. And let me start by saying I’m not trying to be controversial, nor am I anti-Brady. He’s a great player and he did a tremendous job of what was asked of him. And if this was a look at the 100 most valuable players, he’d be a lot higher up. But it’s not, and I’ll go back to that point about what was asked of him. I heard a Pats fan use a great analogy, that Brady got the highest mark in school test, but the test he took was easier than everyone else’s. It doesn’t mean he wouldn’t or couldn’t score the highest mark, just that the playing field was different. Not buying that reasoning? Okay, it was the dancing from that carnival.
 Obviously this is one example, but it seems to occur over and over again with different players in different seasons.
 
bowiac said:
Now, lets say I've come up with a new system to measure baseball player valuation, and it tells us that Nick Punto is the best player in baseball. We should be rightly skeptical of that model because we know a lot about baseball stats these days, and the rest of them tell us Nick Punto stinks. It's not really plausible that we've been missing Nick Punto's greatness this whole time.
 
It's one thing when a statistic disagrees with conventional wisdom consistently for certain players; then we need to examine whether the statistic may be capturing something real that is non-random. If the statistic disagrees on different players in different years, and the distribution of the disagreement is uniform, then the statistic is probably not capturing something real (i.e., different from random fluctuation). Of course, with a stat like DVOA, WPA, EPA, or SR, we have some knowledge of how it is calculated (though we may not know the precise weights), and can make some inference as to why it may differ from another statistic or from the eye test. We can't really do that with PFF.
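The real-vs-random question has a standard check: year-to-year reliability. A rough sketch with made-up grades (a metric that captures a stable skill should correlate across seasons for the same players; pure noise should hover near zero):

```python
# Year-to-year reliability check: correlate each player's year-N grade
# with his year-N+1 grade. The grades below are invented for illustration.
year1 = [12.3, -4.1, 8.8, 0.5, -9.2, 15.0, 3.3, -1.7]
year2 = [10.1, -2.0, 6.5, 1.8, -7.4, 11.2, 4.0, -3.5]

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

r = pearson(year1, year2)
print(f"year-to-year r = {r:.2f}")
```

With PFF's play-level grades unavailable, this kind of season-over-season correlation on the published totals is about the only reliability analysis an outsider can run.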
 
 
bowiac said:
That's not really the case with football. We don't know nearly as much, so disagreement with our other data sources isn't as big a deal. Further, the sources we do have, like EPA, are often times not even trying to measure the same thing as PFF, comparing the two is especially unhelpful.
 
This is a dangerous statement to make. You are basically arguing that disagreements shouldn't be given any attention, no big deal. Taken together with your previous statement (where you say that disagreements are the only thing we should care about), you seem to be arguing that if a stat disagrees with the eye or other metrics, the stat must be telling us something we're not seeing, so we should ignore the disagreement. I don't want to offend you, but that sounds like willful ignorance to me.
 
How is this unhelpful? I thought PFF was trying to grade individual player skills per position, and have a summary metric that enables comparisons across players in different positions (i.e., is Julian Edelman a better player than Joe Flacco). EPA doesn't measure individual player skills, but it can be used as a reasonable summary metric to make comparisons across players in different positions. If the individual evaluations are valid for PFF, then the summary stat should be relatively consistent with another summary stat using a different system. We know what EPA is measuring because the metric is transparent. If the metric lines up with PFF's numbers, we would know that PFF's valuations of individual plays (via their summation) have some validity, and we can start making interpretations about the numbers.
 
Is PFF's purpose to do something other than evaluate a player's ability based on past history? If so, then put me in the "it's garbage" camp.
 

Deathofthebambino

Drive Carefully
SoSH Member
Apr 12, 2005
42,204
I literally just got a chance to read this thread, and forgive me if I didn't read every word, but I want to say that I am firmly, 100% in the camp that says that not only is PFF not adding value, they are, IMO, absolutely harming the average fan's ability to digest and rate the performance of individual players and teams.
 
You know what I did.  I went through the game last week, and I watched every play by Tom Brady, Edelman, Lafell and Stork, and I came up with this:
 
Brady:  5.0
Edelman:  3.3
Lafell:  2.1
Stork:  442,321
 
I'm just joking on that last number.  Stork was really a 121,144,332.  He had a really good game according to my eyes.
 
I hope it's clear what I'm doing here.  It's complete fucking horseshit.  A couple of guys, whose football acumen we know nothing about, are sitting in a room, watching every play and charting those plays based on what they think the player did.  They admit that they don't know what the assignment was, don't know what the guy next to him was supposed to do, and don't know shit about what the play was, and completely ignore the player and team on the other side of the field.  And yet, they feel like even though they lack all of that information, they can assign a "grade" to each play, add them all up and spit out a number that tells us how that player did that week?  It's got to be the most absurd thing I've ever heard. 
 
Something doesn't add value simply because it's "better than nothing," and that's basically the argument I read here a few times.  We don't really have statistics for interior lineman, so the numbers these guys spit out are an improvement.  UMM NO, it's not.  It's no more reliable than the numbers I puked on my keyboard up above for those four players.  The difference is that these guys are starting to get a following, and people that don't know better are starting to use these "grades" as the actual truth, when they have no idea what they are actually using and as a result, PFF is not adding value.  What they are doing is giving out false information that people who don't know better regurgitate.  It's actually detrimental to any real discussion of football.
 
I'm by no means a statistician, but I love statistics.  If I showed people the statistics I've been keeping on college and even high school players for over 25 years, I'd probably be checked into an insane asylum.  I believe in them in baseball, and I believe there is a place for them in other sports, and maybe even football.  However, without the actual playbook and game film and knowledge of what each player was supposed to do on a play, assigning a "grade" to each player and play for each game is a fool's errand.  It just plain is.  Football is a hard, hard game.  It is chess on a field.  Anyone who played the game understands just how integral every single player is to every single play's success.  The success or failure of almost every play is a result of the play of 1-2 players who, in a lot of cases, most fans wouldn't even realize were involved in the play.  The way the guy played next to you will have a huge effect on how you play, as much as and maybe more than, how the guy across from you played.  PFF doesn't account for any of this, because they can't, and because they don't even know how, and they admit that, so why anyone continues to debate the merits of their methodology is amazing.  I can't believe I allowed myself to do it.
 

bowiac

Caveat: I know nothing about what I speak
Lifetime Member
SoSH Member
Dec 18, 2003
12,945
New York, NY
EricFeczko said:
Which is why I referenced a system like the EPA model, which has been shown to have some predictive power and to be reliable across a sufficient sample size; are PFF's data actually that predictive? Has anyone done a reliability analysis on the data?
Can you point me to something showing EPA for individual players has predictive power? I'm extremely skeptical of that.
 
This is a dangerous statement to make. You are basically arguing that disagreements shouldn't be given any attention, no big deal. Taken together with your previous statement (where you say that disagreements are the only thing we should care about), you seem to be arguing that if a stat disagrees with the eye or other metrics, the stat must be telling us something we're not seeing, so we should ignore the disagreement. I don't want to offend you, but that sounds like willful ignorance to me.
 
 
You are misreading me. I am not arguing that disagreement is proof of "value" - I'm arguing that predictive power is. Disagreements are the source of value, but they're not the proof of it. My (limited) look at PFF has shown it does have predictive power.

 
How is this unhelpful? I thought PFF was trying to grade individual player skills per position, and have a summary metric that enables comparisons across players in different positions (i.e., is Julian Edelman a better player than Joe Flacco). EPA doesn't measure individual player skills, but it can be used as a reasonable summary metric to make comparisons across players in different positions. If the individual evaluations are valid for PFF, then the summary stat should be relatively consistent with another summary stat using a different system. We know what EPA is measuring because the metric is transparent. If the metric lines up with PFF's numbers, we would know that PFF's valuations of individual plays (via their summation) have some validity, and we can start making interpretations about the numbers.
I think you are mistaken about the bolded. To my knowledge, nowhere does PFF say their goal is to do this, nor should it be. If they do say that, then I agree that's not very useful. The value comes from comparing performance within positions, and within assignments really.
 
EPA is trying to assign value, which is very different than what PFF is trying to do (measure ability). That's why I think they make for very poor comparison points. But seriously, does individual EPA have any predictive value?
 

Smiling Joe Hesketh

Throw Momma From the Train
Moderator
SoSH Member
May 20, 2003
36,057
Deep inside Muppet Labs
Deathofthebambino said:
I literally just got a chance to read this thread, and forgive me if I didn't read every word, but I want to say that I am firmly, 100% in the camp that says that not only is PFF not adding value, they are, IMO, absolutely harming the average fan's ability to digest and rate the performance of individual players and teams.
 
You know what I did.  I went through the game last week, and I watched every play by Tom Brady, Edelman, Lafell and Stork, and I came up with this:
 
Brady:  5.0
Edelman:  3.3
Lafell:  2.1
Stork:  442,321
 
I'm just joking on that last number.  Stork was really a 121,144,332.  He had a really good game according to my eyes.
 
I hope it's clear what I'm doing here.  It's complete fucking horseshit.  A couple of guys, whose football acumen we know nothing about, are sitting in a room, watching every play and charting those plays based on what they think the player did.  They admit that they don't know what the assignment was, don't know what the guy next to him was supposed to do, and don't know shit about what the play was, and completely ignore the player and team on the other side of the field.  And yet, they feel like even though they lack all of that information, they can assign a "grade" to each play, add them all up and spit out a number that tells us how that player did that week?  It's got to be the most absurd thing I've ever heard. 
 
Something doesn't add value simply because it's "better than nothing," and that's basically the argument I read here a few times.  We don't really have statistics for interior lineman, so the numbers these guys spit out are an improvement.  UMM NO, it's not.  It's no more reliable than the numbers I puked on my keyboard up above for those four players.  The difference is that these guys are starting to get a following, and people that don't know better are starting to use these "grades" as the actual truth, when they have no idea what they are actually using and as a result, PFF is not adding value.  What they are doing is giving out false information that people who don't know better regurgitate.  It's actually detrimental to any real discussion of football.
 
I'm by no means a statistician, but I love statistics.  If I showed people the statistics I've been keeping on college and even high school players for over 25 years, I'd probably be checked into an insane asylum.  I believe in them in baseball, and I believe there is a place for them in other sports, and maybe even football.  However, without the actual playbook, game film, and knowledge of what each player was supposed to do on a play, assigning a "grade" to each player and play for each game is a fool's errand.  It just plain is.  Football is a hard, hard game.  It is chess on a field.  Anyone who played the game understands just how integral every single player is to every single play's success.  The success or failure of almost every play is a result of the play of 1-2 players who, in a lot of cases, most fans wouldn't even realize were involved in the play.  The way the guy played next to you will have a huge effect on how you play, as much as, and maybe more than, how the guy across from you played.  PFF doesn't account for any of this, because they can't and don't even know how, and they admit as much, so it's amazing that anyone continues to debate the merits of their methodology.  I can't believe I allowed myself to do it.
 
This is extremely well put and more or less what I've been trying to say for months now about PFF.
 
I won't do it, but I admit to being extremely tempted to ban all using of PFF stats here (kinda like the "no Eric Wilbur discussion" rule in the media forum), because they're actively detracting from knowledge. That would be too extreme, but I would love for us to stop using their numbers for, well, pretty much anything. They're garbage.
 

GregHarris

beware my sexy helmet/overall ensemble
SoSH Member
Jun 5, 2008
3,460
Super Nomario said:
I read your line about Hartsock a couple times and I was like, what the shit is Shelterdog talking about? Because I tweeted this in January:
Dave Archibald @SOSH_davearchie  Jan 13
@PFF Cumulatively, TEs are -409.8 in run blocking. Only -53.8 in 2012. Still seems crazy though. Any thoughts?
 
They did reply:
Pro Football Focus @PFF  Jan 13
@davearchie Will work to make all years the same in the offseason
 
I guess they did that, because the 2013 TE page is MASSIVELY different than it was the last time I saw it.
 
I am just catching up on this thread.  Great info here, and I'll never look at PFF the same way again.
 
My question is about the above numbers.  If you took all the TEs in the league, ranked them on their run blocking ability, and simply added/averaged them together, shouldn't the end result be 0 or close to it?  I mean, it seems like they are grading TE blocking based on some sort of "ideal result" rather than ranking them as compared to the best and worst players at their position.  For a while now, the TE position has been moving more toward the full-time pass catcher, very part-time blocker mold.  Teams know this and game plan based on strengths and weaknesses.  PFF should take this into account: TEs are not nearly as good at blocking as they were just 10 years ago (assuming this is the case), and they should be ranked in the current season against their peers.  So instead of an average blocking tight end sucking at blocking, the average tight end would be average at blocking; the position just doesn't block as well as it used to.
 
-409.8 is just laughable and shows that there is virtually no context with these numbers.
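The zero-sum intuition above can be sketched in a few lines. The grades and the helper below are invented for illustration (this is not PFF's actual method or data): re-centering each grade on the positional average makes a league-average blocker score 0, so the league-wide total comes out to roughly zero by construction, which is what a peers-relative scale would look like.

```python
# Invented TE run-blocking grades -- these are NOT real PFF numbers.
raw_grades = {"TE_A": -6.2, "TE_B": 1.5, "TE_C": -3.4, "TE_D": -0.9}

def center_to_peers(grades):
    """Re-express each grade relative to the positional average, so a
    league-average blocker scores 0 and the grades sum to ~0."""
    mean = sum(grades.values()) / len(grades)
    return {name: g - mean for name, g in grades.items()}

centered = center_to_peers(raw_grades)
print(centered)                # differences between players are preserved
print(sum(centered.values()))  # ~0.0, up to floating-point noise
```

Under a scheme like this, a raw league total of -409.8 would be impossible: a position-wide decline in blocking would shift the baseline, not grade every single tight end as a liability against some fixed ideal.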
 

Ed Hillel

Wants to be startin somethin
SoSH Member
Dec 12, 2007
44,853
Here
Smiling Joe Hesketh said:
 
This is extremely well put and more or less what I've been trying to say for months now about PFF.
 
I won't do it, but I admit to being extremely tempted to ban all using of PFF stats here (kinda like the "no Eric Wilbur discussion" rule in the media forum), because they're actively detracting from knowledge. That would be too extreme, but I would love for us to stop using their numbers for, well, pretty much anything. They're garbage.
 
Not picking on you in particular, but it seems strange that we can consider banning PFF on the one hand, and then on the other cite Bill Dahlen's UZR from the 1890s to compare his defense to someone from the modern generation.
 

Smiling Joe Hesketh

Throw Momma From the Train
Moderator
SoSH Member
May 20, 2003
36,057
Deep inside Muppet Labs
Ed Hillel said:
 
Not picking on you in particular, but it seems strange that we can consider banning PFF on the one hand, and then on the other cite Bill Dahlen's UZR from the 1890s to compare his defense to someone from the modern generation.
 
If anyone tried to do the latter we'd laugh them off the board. UZR limitations, hopefully, are well known by now.