These last sentences are the faint smell coming from this report. It seems like they started with the smoke of the texts and then tested to see if science would corroborate the suspicions they raised. Through a skeptical view of all of the variables, they seemed to find what they were looking for. But that is part of the problem: the investigation was looking for something specific. That all variable scenarios were seemingly not included in the report is what frustrates me most, not as a Patriots fan but as someone who wants logical thought to rule the day.

yep said:
They did, but the way they did it was... how to say this... they are playing a game called "stupid or liar," where it doesn't matter what the outcome is, because either way, they win.
Their models are needlessly complicated, but complicated in ways that maximize the importance of over-simplified input assumptions, while forcing the results to err on the side of the minimum pressure difference attributable to ambient conditions. This might be the work of an engineer or scientist with exactly the right expertise who is deliberately trying to obfuscate and/or arrive at particular conclusions. My guess, though, is that it is actually the handiwork of a fairly common approach in sales engineering: have a generically very smart person, who only sort of knows what they are doing on this particular question, run a bunch of different models based on guesswork and vague knowledge of related things, then cherry-pick the approaches that seem to produce the best combination of:
1. Arriving at the desired conclusion, and
2. Giving the appearance of rigorous and objective analytics.
For example, if you wanted to measure, say, a particular part of your body, and wanted the measurement to produce the biggest result possible... I bet you could think of a whole bunch of different ways to use the exact same length of string or measuring tape and genuinely come up with different measurements. If you were really good at expert witnessing in litigation, you could probably even write up a very technical-sounding report describing several different measurement methods, while only actually including the ones that gave results at the favorable end of the spectrum of outcomes. Including a spectrum of results from different methodologies enhances the impression that your report is a disinterested technical assessment of the facts, rather than a biased piece of fact-based spin.
I am reluctant to weigh in with my own analysis, partly because I am not really qualified to do so, and partly because bickering engineers with different results is not really the way you do this stuff. But just for the sake of illustration, here's a pretty good example of a much simpler, but arguably more rigorous modelling approach:
Edit: the approach in the video is not perfect either, and I don't mean to endorse its conclusions. Getting really good, precise answers to very specific real-world questions can be a complex, expensive, and time-consuming process, even when the underlying science is well-established. But there are ways to answer these questions, and solving these problems is certainly within the scope of budget and time that the Wells report had. For whatever reason, instead of hiring a genuinely independent third-party firm with expertise in thermodynamics-type engineering to answer the question decisively and competently, they hired a team of litigation consultants to bumble their way through a spectrum of scenarios that all point toward a particular conclusion, while hand-waving a lot of variables and using mealy-mouthed metrics such as "wet" balls (but not "saturated") and talk of keeping them "fairly dry." The shortest path is simply to look at things like the difference between wet-bulb and dry-bulb temperature on the playing field versus in the test rooms, and so on. They make things that should be straightforward complicated and vague, while relying on stipulated input assumptions for things that are, or should be, actually knowable.
It is possible that the testing actually did everything right, but that the people who wrote the report just didn't know the correct terminology, the right words to use. But that's why, in scientific circles, you describe the methodology, instead of just summarizing your results. Spraying a dry 50-degree football with 70-degree water will produce a different change in pressure than submerging a room-temperature football in 50-degree water, even though both might theoretically produce a wet 60-degree football in a 70-degree room.
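For illustration only, here's a minimal sketch of the basic fixed-volume ideal-gas relationship (Gay-Lussac's law) that the whole pressure argument hinges on: absolute pressure scales with absolute temperature. The inflation pressure and temperatures below are hypothetical round numbers I picked, not measurements from the report, and the sketch deliberately ignores the wetness and measurement-procedure effects discussed above:

```python
# Sketch of Gay-Lussac's law: at fixed volume, P_abs / T_abs is constant.
# All inputs are hypothetical illustration values, not Wells report data.

def gauge_pressure_after_temp_change(p_gauge_psi, t_initial_f, t_final_f,
                                     p_atm_psi=14.7):
    """Return the new gauge pressure after a temperature change at fixed volume.

    Gauge pressure is converted to absolute pressure, and Fahrenheit to
    Rankine (an absolute scale), before applying P2 = P1 * (T2 / T1).
    """
    t_initial_r = t_initial_f + 459.67   # Fahrenheit -> Rankine
    t_final_r = t_final_f + 459.67
    p_abs_initial = p_gauge_psi + p_atm_psi
    p_abs_final = p_abs_initial * (t_final_r / t_initial_r)
    return p_abs_final - p_atm_psi

# A ball inflated to 12.5 psi in a 70 F room, then cooled to 50 F on the field:
print(round(gauge_pressure_after_temp_change(12.5, 70.0, 50.0), 2))  # -> 11.47
```

Even this toy version shows a drop of about a full psi from temperature alone, which is exactly why the starting temperature of the ball (dry and cold, or room-temperature and then chilled by water) matters so much to the result.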
The methodology, to the degree it's described in the report, is not how honest and competent engineers would attempt to answer this question. That doesn't mean the conclusions in the report are wrong, just that its authors are making a case rather than proving the essential facts.
I didn't even think of that, but I love it.