The fixtures for the AFL’s 2020 competitions came out this week, and as usual the discussion revolves around the “fairness” of the draw. Which, of course, isn’t particularly fair, as the League balances commercial imperatives, equalisation objectives, and keeping Collingwood happy by not giving them too many interstate trips. (OK, we’re maybe a little cynical there.)
Of course I am pretty much obliged to put in my two bob's worth, and I posted my initial findings from my established models this morning – findings which could do with a bit more explanation.
Before I get into that, this is a really good opportunity to go over the method behind GRAFT’s seed ratings, since they are central to my simulation work.
Essentially, I start with a “weighted” ladder based on the home-and-away results from the season just past. This involves halving the results and points from games between opponents who met twice during the year. Simple enough.
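To make the weighting step concrete, here is a minimal sketch of how such a ladder could be built. This is my own illustration with made-up scores and a truncated game list, not GRAFT’s actual code:

```python
from collections import Counter, defaultdict

# Hypothetical input: one dict per home-and-away game (scores made up for illustration).
games = [
    {"home": "Richmond", "away": "Carlton", "home_score": 105, "away_score": 72},
    {"home": "Carlton", "away": "Richmond", "home_score": 88, "away_score": 90},
    {"home": "Geelong", "away": "Richmond", "home_score": 67, "away_score": 71},
]

# Count how many times each pair of teams met, so double-up games can be halved.
meetings = Counter(frozenset((g["home"], g["away"])) for g in games)

ladder = defaultdict(lambda: {"weight": 0.0, "for": 0.0, "against": 0.0})

for g in games:
    # Games between sides that met twice count for half; single meetings count in full.
    w = 0.5 if meetings[frozenset((g["home"], g["away"]))] == 2 else 1.0
    for team, pf, pa in ((g["home"], g["home_score"], g["away_score"]),
                         (g["away"], g["away_score"], g["home_score"])):
        ladder[team]["weight"] += w
        ladder[team]["for"] += w * pf
        ladder[team]["against"] += w * pa

# Weighted average points for and against per game: the WAPR/WAPA figures used below.
for row in ladder.values():
    row["WAPR"] = row["for"] / row["weight"]
    row["WAPA"] = row["against"] / row["weight"]
```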
On that basis, the weighted ladder ends up like this:
From this table I get the figures that I use to set up GRAFT’s seed ratings for the following season. Worth noting here that Essendon suffered badly in the re-weighting, albeit to a position perhaps better befitting their actual quality.
The difference between a team’s weighted average Points For (WAPR) and weighted average Points Against (WAPA) becomes their GRAFT Net. The League Par is the weighted average score across the league – 80.59, again pretty low for the modern era but in keeping with recent trends.
The attacking rating is the difference between WAPR and Par, and the defensive rating is the difference between Par and WAPA. Those two figures sum to the Net.
The Gross rating, which is the one that gets headlined, is the Net plus the League Par – it’s more an indication of what a team would score against an average team. For instance, Geelong’s Gross rating will be about 105.
When comparing teams across eras, you would use the Net rating.
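Put as code, the whole derivation is a handful of subtractions. The inputs below are made-up, roughly Geelong-shaped numbers chosen only to line up with that ~105 Gross figure, not values from the real table:

```python
LEAGUE_PAR = 80.59            # weighted average score across the league

# Illustrative inputs only: weighted average points for / against per game.
wapr, wapa = 93.0, 69.0

attack = wapr - LEAGUE_PAR    # scoring relative to an average team
defence = LEAGUE_PAR - wapa   # concession relative to an average team (higher is better)
net = attack + defence        # equivalently wapr - wapa
gross = net + LEAGUE_PAR      # roughly what the team would score against an average side

print(round(net, 2), round(gross, 2))   # 24.0 104.59
```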
(This results in Richmond’s rating being pegged back severely from the over +30 they ended up at after the Grand Final, as I don’t take finals into account for these figures.)
So with our seed ratings established for 2020, I throw them in with the fixtures and set the washing machine going.
I run sims using the GRAFT Gamma model against two fixtures – the first is the actual fixture as released, and the second is what I call the Omnifixture, a hypothetical 34-round monster where every team plays every other team home and away at their regular home-state venues.
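For what it’s worth, the Omnifixture is trivial to generate: every ordered pair of teams is one game with the first team at home, which for 18 clubs gives 18 × 17 = 306 games, or 34 rounds of nine. A sketch, with a placeholder club list and venue lookup rather than the real fixture data:

```python
from itertools import permutations

# Placeholder club list; the real thing has all 18 AFL clubs.
teams = ["Richmond", "Geelong", "Collingwood", "West Coast"]

# Hypothetical home-venue lookup, standing in for each club's regular home-state venue.
home_venues = {
    "Richmond": "MCG",
    "Geelong": "GMHBA Stadium",
    "Collingwood": "MCG",
    "West Coast": "Optus Stadium",
}

# Every ordered (home, away) pair is one game, so each team hosts every other team once
# and visits every other team once: the full home-and-away, everyone-plays-everyone draw.
omnifixture = [
    {"home": home, "away": away, "venue": home_venues[home]}
    for home, away in permutations(teams, 2)
]
```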
(Recap on the Gamma model: each team’s par score for a game is worked out from the two sides’ attack and defence ratings, and each par score is then used to generate a Gamma distribution from which scores are drawn in the Monte Carlo simulation.)
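A rough illustration of how that works per game follows; the Gamma shape parameter here is a guess for the sake of the example, not the model’s actual dispersion setting:

```python
import numpy as np

rng = np.random.default_rng(2020)

LEAGUE_PAR = 80.59
SHAPE = 16.0   # assumed Gamma shape (dispersion); not the model's actual value

def simulate_score(attack, opp_defence, hga=0.0):
    """Draw one team's score: par from the ratings, then a Gamma draw with that mean."""
    par = max(LEAGUE_PAR + attack - opp_defence + hga, 1.0)
    return rng.gamma(SHAPE, par / SHAPE)   # mean of Gamma(shape, scale) is shape * scale

# One simulated game: a strong home side hosting an interstate visitor.
home = simulate_score(attack=12.0, opp_defence=3.0, hga=12.0)
away = simulate_score(attack=5.0, opp_defence=11.0)
print(round(home), round(away), "home win" if home > away else "away win")
```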
Then the two outputs are compared – bearing in mind there is a bit of “fuzz” since we are dealing with random simulations.
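One way to put the two outputs on a comparable scale is to normalise expected wins to a per-game rate, since one fixture gives each team 22 games and the other 34. The numbers here are invented purely to show the arithmetic, and this is not necessarily the exact metric reported:

```python
# Hypothetical per-team expected wins taken from the two sets of sims.
actual_expected_wins = {"Richmond": 15.1}   # from the 22-game fixture
omni_expected_wins = {"Richmond": 24.9}     # from the 34-game Omnifixture

def draw_effect(team):
    """Per-game win rate under the actual draw minus the all-play-all baseline."""
    actual_rate = actual_expected_wins[team] / 22
    omni_rate = omni_expected_wins[team] / 34
    return actual_rate - omni_rate          # negative means a tougher-than-average draw

print(round(draw_effect("Richmond"), 3))
```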
So on this basis, Richmond seems to have quite a tough draw – their double-ups are GWS, Collingwood, West Coast, Adelaide and Carlton, with 6 interstate trips.
In fact, all of the top six have a tougher-than-average draw, but since they each play at least two, and in most cases three, of their fellow top bracket twice under what passes for the AFL’s equalisation criteria, that’s to be expected.
At this point there are a whole lot of caveats to disclaim about the model:
- The Gamma model effectively applies a 75% regression to the mean for the average expected score (see the sketch after this list)
- The bog-standard home ground advantage (HGA) of 12 points is applied to home-state/out-of-state encounters
- Days between games are not taken into account (because the effect is largely noise)
- And of course it pays no heed to off-season list/staff changes, which I think is the most significant factor that is not accounted for here.
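To show how the first two caveats enter the calculation, here is one reading of them in code, extending the par step from the earlier sketch rather than quoting the model verbatim: the raw par is pulled 75% of the way back towards League Par, and the flat 12 points of HGA is added only when a side in its home state hosts an out-of-state visitor.

```python
LEAGUE_PAR = 80.59
HGA = 12.0           # flat home ground advantage, in points
REGRESSION = 0.75    # proportion of the deviation from Par that gets pulled back

def par_score(attack, opp_defence, interstate_visitor=False):
    """Expected (par) score before the Gamma draw, with regression and HGA applied."""
    raw = LEAGUE_PAR + attack - opp_defence
    par = LEAGUE_PAR + (1 - REGRESSION) * (raw - LEAGUE_PAR)
    if interstate_visitor:   # a home-state team hosting an out-of-state side
        par += HGA
    return par

# e.g. a strong home side hosting a strong interstate visitor
print(round(par_score(attack=12.0, opp_defence=11.0, interstate_visitor=True), 1))
```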
While we can have a good idea of who the good and bad teams will be in the following season, consecutive seasons rarely play out the same way, so predictions made at the start of the season will inevitably look a bit bold once the dust settles at the end of it.