Full Week 5 Profit Report
Back In The Black
Week 5 has come to a close, and we ended it with a perfect MNF. Here is a full accounting of how we did, and if you tailed, how YOU did.
Also, read to the end, where we take a behavior-science lens to analytics and explain why some of the ones you hear about most are faulty…
What’s In Here?
Week 5 Profit Report
2023 Primetime Hit Rate Report
Strategic Resources
Behavior Of Football
Week 5 Profit Report
Record
13-5
72.22% Hit Rate
Return On Investment
+9.62%
Profit
+1.7u
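For readers tracking their own tails, ROI is just profit over total amount risked. Here is a minimal sketch; the total-risked figure below is simply what the reported profit and ROI imply, not a statement of our actual unit sizing:

```python
# ROI = total profit / total amount risked.
def roi(profit_units: float, total_risked_units: float) -> float:
    return profit_units / total_risked_units

# Placeholder example: +1.7u profit on ~17.67u total risked -> ~9.62% ROI.
print(f"{roi(1.7, 17.67):.2%}")
```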
Analysis
As noted in previous newsletters, coming out of the volatility of Weeks 3 and 4, we made strategic changes to reduce risk and improve our chances of profit.
We did this through alternate spreads.
With alternate spreads, we take a more favorable line in exchange for a lower payout, trading upside for reduced risk (see the sketch below).
This conservative approach will continue through Week 8, at which point we will re-examine the state of behavioral stability.
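To make that tradeoff concrete, here is a minimal sketch of the expected-value math. The odds and win probabilities below are hypothetical illustrations, not our actual plays:

```python
# Hypothetical illustration: a standard spread vs. a safer alternate spread.
# All odds and win probabilities here are made-up examples.

def expected_value(win_prob: float, american_odds: int, stake: float = 1.0) -> float:
    """Expected profit in units for a single bet at American odds."""
    if american_odds > 0:
        payout = stake * (american_odds / 100)
    else:
        payout = stake * (100 / -american_odds)
    return win_prob * payout - (1 - win_prob) * stake

# Standard spread: bigger payout, lower hit rate.
print(expected_value(win_prob=0.55, american_odds=-110))  # ~ +0.05u per bet

# Alternate spread (line moved in our favor): smaller payout, higher hit rate.
print(expected_value(win_prob=0.72, american_odds=-200))  # ~ +0.08u per bet
```

The point: moving the line can hold, or even improve, expected value while cutting variance, because more of the bets simply win.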
2023 Primetime Hit Rate Report
Thursday Night Football
Record
14-2
87.5% Hit Rate
Sunday Night Football
Record
10-6
62.5% Hit Rate
Monday Night Football
Record
15-1-2
88.2% Hit Rate
Strategic Resources
We hope you all are not just blindly tailing, but also learning the “why” behind what we do to produce a profit. While not literally true, we think of our subscribers as investors in our little hedge fund. That is why it is important to us that you know not only what we are doing, but why.
Here is the breakdown of the different strategies we use:
Behavior Of Football
Celeration vs. Averages
B.F. Skinner Discovered the S-R-S Contingency In Human Behavior
Most conversations around football analysis rely on averages, whether EPA, DVOA, or box-score metrics like points per game. While they all serve a purpose, sole reliance on them, especially for prediction, can be faulty, and it is not how averages are supposed to be used.
Rather, we focus on celeration: the speed at which behavior is changing over time.
The reality is that the NFL is a multiverse of 32 unique performance environments operating independently of one another. Every week, two of those environments collide and form a temporary sub-environment: one team’s unique offense meets the other team’s unique defense, and vice versa.
Ignoring that leaves us susceptible to “explanatory fictions”: attempts to explain a connection between two events that have no meaningful connection. Another common word for this is superstition. Like a rain dance, or throwing salt over your shoulder, we can fall into the trap of explaining an event by tying it to another event that happened in parallel but never actually intersected with it.
The Rain Dance Fallacy
Dancing And Rainfall Are Not Actually Connected, Yet One Preceded The Other
A group of ancient people once started to dance at the moment it began to rain. The two events happened independently of one another, but because of a lack of understanding of the water cycle, they drew a connection: surely the dance led to the rain they so desperately needed.
At times, certain analytics or traditional statistics can have us believing the same thing: that events are connected when they really aren’t. If this, then that. That is why it is important to remember the S-R-S (Stimulus-Response-Stimulus) contingency in human behavior.
Human behavior isn’t predictable just from what preceded it (the first stimulus), but mostly from what happens after it (the second stimulus, or consequence). Historical averages miss this crucial element.
When deciding if an analytic or stat is actually informing you about a player or team, ask yourself 2 questions:
Is this stat or metric focused on the independent behavior of this player or team, or does it conflate behavior that actually depends on multiple players or on unrelated historical averages?
Is this metric or stat accounting for the role of the unique environment this player or team is performing in?
To help us decide, one easy differentiation to make is to focus on trends instead of averages, as the sketch below shows.
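Here is a minimal sketch using hypothetical weekly numbers, with celeration simplified to a least-squares trend fit on a log scale (the full precision-teaching definition is richer than this):

```python
# Trend vs. average on made-up weekly numbers.
# Celeration is treated here as the multiplicative weekly change in a
# performance rate, estimated by a least-squares fit of log(rate) vs. week.
import math

weeks = [1, 2, 3, 4, 5]
pts_per_poss_min = [0.7, 0.8, 0.95, 1.1, 1.3]  # hypothetical offense, improving

mean_rate = sum(pts_per_poss_min) / len(pts_per_poss_min)  # ~0.97

n = len(weeks)
x_bar = sum(weeks) / n
logs = [math.log(v) for v in pts_per_poss_min]
y_bar = sum(logs) / n
slope = sum((x - x_bar) * (y - y_bar) for x, y in zip(weeks, logs)) \
        / sum((x - x_bar) ** 2 for x in weeks)
celeration = math.exp(slope)  # ~1.17 => the rate is growing ~17% per week

print(f"average: {mean_rate:.2f} pts/min, celeration: x{celeration:.2f} per week")
```

The average says this offense is a 0.97-points-per-minute team. The trend says it is getting roughly 17% better every week, which matters far more for predicting next week.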
The Statistical Error of Using Averages
Averages like points per game, EPA per dropback, or yards per carry attempt to standardize performance. The danger in relying on averages, especially historical ones, is that they are meant to measure individual samples drawn from a similar environment. Think focus groups or polls, or even surveys of a community or state.
That works only if the individuals are actually operating in the same environment. In the NFL, they are not. These statistics and metrics all rest on the common pitfalls of averages. Here are the three that stick out relative to human behavior in the real world:
1. It is a statistical error to apply the average of a group of data points to a single point and assume it holds. EPA takes the historical average of a group of data points from a certain type of play, attributes it to a particular play and/or player in the present, and then attempts to make a prediction about the future.
2. Even assuming the data is normally distributed like a bell curve, any single data point is as likely to land above the historical average as below it. Knowing the average tells you about as much about one specific point as a coin flip.
3. Averages often don’t filter out outliers and are susceptible to being influenced by them. Outliers pull the average in their direction, which skews the very point one is trying to make by referencing the average in the first place.
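A quick sketch with made-up carry data shows the outlier problem:

```python
# Made-up carries showing how one outlier drags an average.
carries = [3, 4, 2, 5, 3, 4, 2, 3, 75]  # one 75-yard breakaway run

mean_ypc = sum(carries) / len(carries)            # ~11.2 yards per carry
median_ypc = sorted(carries)[len(carries) // 2]   # 3 yards

print(mean_ypc, median_ypc)
# Eight of nine carries gained 5 yards or fewer, yet the "average" says 11+.
```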
Also, the way these averages are discussed in mainstream sports media treats them almost as a substitute for “typical”. As if to say: Player A has an EPA per dropback of X, which is below the league average of Y, insinuating that the league average is what is typical and what should be expected.
Again, this ignores the importance of the environment. The league average is often irrelevant because each team operates in a different environment from the others. And because each of the 32 teams contains 2 unique environments (offense and defense), there are actually 64 independent performance environments.
We’ll End It With This Example:
When we assess offense, framing their performance behavior based on how long they actually had the opportunity to score allows us to be much more precise about their performance than using “per game” numbers.
Think of it like this: two teams each score 30 points in a game, so both are averaging “30 points per game.”
Team A scored its 30 points in 22 minutes of possession time.
Team B scored its 30 points in 34 minutes of possession time.
Which Team Made The Most Of Their Opportunity?
Team A scored at a rate of 1.4 points per minute.
Team B scored at a rate of 0.9 points per minute.
That’s a half-point difference for every minute a team possesses the ball. To put that in perspective, over 30 minutes of possession, that’s 15 points, roughly two touchdowns.
Using “per game” averages would have you believe those two teams score at the same rate. In reality, they could be upwards of two touchdowns apart. Their celeration trends would look very different, and those trends are a much more precise predictor of behavior.
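For anyone who wants to reproduce the arithmetic, here it is worked out as a short sketch:

```python
# The Team A / Team B example above, worked out in code.
def pts_per_poss_minute(points: int, possession_minutes: float) -> float:
    return points / possession_minutes

team_a = pts_per_poss_minute(30, 22)  # ~1.36 -> ~1.4 pts per minute
team_b = pts_per_poss_minute(30, 34)  # ~0.88 -> ~0.9 pts per minute

# Using the rounded rates from the text (1.4 and 0.9):
gap_per_min = round(team_a, 1) - round(team_b, 1)  # 0.5 pts per minute
print(f"Gap over 30 possession minutes: {gap_per_min * 30:.0f} points")  # 15
```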