Post Season Wrap – 2018

I haven’t posted much here during the AFL season, mainly because I had bedded down the updated systems and was able to continue things as normal, so there wasn’t much to discuss about the back-end. But now that the season has wrapped up and the dust has settled, it is of course time to reappraise and reset priorities.

The Finals

Before I get stuck into the nerdage, it’s worth looking at how the actual footy turned out through the finals. The lead-up to the grand final was a bit anticlimactic, with both preliminary finals virtually sealed by half-time; Collingwood’s win was possibly the more shocking, as they were seen as slight underdogs going up against Richmond. That landed us with a grand final between West Coast and Collingwood, two clubs that both have, how shall we say, polarising fandoms.

Without my own team in the fight, I usually lean towards the team that has gone longer without a flag, although with West Coast last saluting in 2006 and Collingwood in 2010, there wasn’t much of a gap. So basically my view was that, with the Melbourne fairytale snuffed out with napalm after a fairly lacklustre finals series, the best we could hope for was a good close grand final. It delivered on that in spades.

It didn’t look like that early in the first quarter, of course, with the Pies piling on the first five goals in a performance very reminiscent of those prelims, but West Coast eked out two before the first change, and after that it was a contest as the Eagles ground away at Collingwood’s early advantage. Sheed threading that final goal from the flank, with Pies supporters hooting at him, will go down as one of those great clutch acts that decide a premiership.

Nerdage

Model-wise, GRAFT had an OK season compared with the other models on the Squiggle Models Leaderboard. It was doing pretty well early in the season (particularly in BITS) but fell adrift later on, finishing with 142 tips – a significant gap to the leaders on 147 and 146. With most models hovering around 70%, it was a more predictable season than 2017, yet it still served up plenty of interesting results, with up to twelve teams in contention for the finals until the last round or two of the home-and-away season.
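
For the unfamiliar, BITS rewards well-calibrated probabilities rather than raw tip counts. Here is a sketch of the scoring rule as I understand the convention used in probabilistic footy tipping (an assumption on my part – the leaderboard’s exact formula may differ):

```python
# Bits earned for one game, given the probability assigned to the tipped
# team. A confident correct tip approaches +1; a confident miss goes
# heavily negative.
import math

def bits(p_tipped: float, outcome: str) -> float:
    if outcome == "win":
        return 1 + math.log2(p_tipped)
    if outcome == "draw":
        return 1 + 0.5 * math.log2(p_tipped * (1 - p_tipped))
    return 1 + math.log2(1 - p_tipped)  # tipped team lost
```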

While GRAFT has remained essentially unchanged in principle, even from its single-factor days as RAFT, and while I want to keep its simplicity, certain events do come along that make me think about spinning off a hybrid system to deal with particular cases. That’s right, I’m looking at Geelong bullying away in their last two home-and-away games, stealing top spot in the GRAFT ratings, and then punking out in the elimination final.

Looking at the ladder it’s easy to figure out what happened – Geelong only got 13 wins, in the 18-team era just sufficient to be considered fringe finalists (as they were), but they did so with a fairly healthy percentage of 131. That marks a Pythagorean anomaly (a what? – the gap between a team’s actual wins and the wins its scoring ratio would predict) of -3 wins. Not as large as Brisbane’s -3.8 (that is, Brisbane won 5 games but on percentage should have had about 9 wins), but notable nevertheless.
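
To make that concrete, here is a minimal sketch of the Pythagorean calculation. The exponent is an assumption – values around 3.9 are often quoted for AFL – so the anomaly figures above may rest on a slightly different choice:

```python
# Pythagorean expected wins from scoring data alone.
def pythagorean_wins(points_for: float, points_against: float,
                     games: int, exponent: float = 3.9) -> float:
    pf, pa = points_for ** exponent, points_against ** exponent
    return games * pf / (pf + pa)

# A team with a percentage of 131 (for/against ratio of 1.31) over 22 games:
expected = pythagorean_wins(1.31, 1.0, 22)
print(f"expected {expected:.1f} wins; 13 actual gives anomaly {13 - expected:+.1f}")
```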

This draws attention to GRAFT’s main weakness: it doesn’t give any weight to wins and losses – it only cares about scores. And when a team runs the score up, as Geelong did against Fremantle (by 133 points) and Gold Coast (by 102), what is the actual difference between thrashing a team by twenty goals instead of ten? Anyway. That will be part of my homework for the off-season – not really an off-season, as I am about to detail.
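
One possible remedy to explore for that hybrid spin-off – an idea only, not something GRAFT currently does – is to squash blowout margins before they feed the ratings, so that piling on past ten goals earns little extra credit. A minimal sketch:

```python
# Compress blowout margins with tanh: roughly linear for close games,
# asymptoting at +/- cap for thrashings.
import math

def squashed_margin(margin: float, cap: float = 60.0) -> float:
    return cap * math.tanh(margin / cap)

# e.g. a 60-point win registers as ~46, a 133-point win as only ~59
```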

Offseason Training

As far as the AFL-specific work goes, while the Gamma probability model worked really well, there are computational issues in working out the margin likelihoods (and therefore the win probabilities). The model is based on two Gamma curves, one for each team’s potential score; the equations for those curves are well-defined, but the difference between them is not. For each game I have to resort to brute force: to find the likelihood of a team winning by 30 points, for instance, I have to sum the probabilities of 90-60, 91-61, 92-62, and so on.
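
In code, that brute-force pass looks something like the following sketch, with each team’s score treated as an independent Gamma curve sampled at integer points (the shape and scale parameters here are purely illustrative):

```python
from scipy.stats import gamma

def margin_probability(margin, shape_a, scale_a, shape_b, scale_b,
                       max_score=250):
    """Approximate P(team A wins by exactly `margin` points)."""
    total = 0.0
    for s in range(max(margin, 0), max_score + 1):
        p_a = gamma.pdf(s, shape_a, scale=scale_a)           # A scores s
        p_b = gamma.pdf(s - margin, shape_b, scale=scale_b)  # B scores s - margin
        total += p_a * p_b
    return total

# Win probability: sum over every positive margin -- one full loop per
# margin, which is exactly the crank referred to below.
p_win = sum(margin_probability(m, 9.0, 10.0, 8.5, 10.0) for m in range(1, 200))
```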

It works fine once it’s on the front end, but it seems to me that I should be able to figure out an actual equation for the difference curve and refer directly to that for the margin probabilities, thereby saving the computer a lot of crank when I update the tables. So basically: getting out my old calculus and statistics texts and trying to relearn everything I didn’t pay sufficient attention to the first time. (Or just getting Wolfram Alpha to do it, although I still have to figure out the principles first.)
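
Failing a clean closed form (the difference of two Gamma variables drags in special functions), one practical middle ground might be to compute the whole margin curve in a single numerical convolution rather than one brute-force sum per margin. A sketch, again with illustrative parameters:

```python
# P(margin = d) = sum_s f_A(s) * f_B(s - d): a cross-correlation, i.e.
# convolving f_A with the reversal of f_B, done here in one pass.
import numpy as np
from scipy.stats import gamma

scores = np.arange(0, 251)                 # integer score grid
f_a = gamma.pdf(scores, 9.0, scale=10.0)   # team A score density
f_b = gamma.pdf(scores, 8.5, scale=10.0)   # team B score density

margin_pdf = np.convolve(f_a, f_b[::-1])   # index k maps to margin k - 250
margins = np.arange(-250, 251)

p_win = margin_pdf[margins > 0].sum()      # mass on positive margins
p_30 = margin_pdf[margins == 30][0]        # e.g. winning by exactly 30
```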

Along with all of that, showing the actual probability curves on the site is on the list of things to do. I have mocked up a few concepts of how these might look, but I am not satisfied with them just yet, particularly as the risk of misinterpretation is real. The current match tables are basically a soup of numbers, and I will do my usual overhaul of the website to try and make them more comprehensible.

Also, I don’t have any historical tables up here apart from the archives of previous seasons’ sites, so this is also on the agenda. I think it would be good to make that data available, with historical graphs and records so you can compare clubs across seasons. We’ll see how that goes. I will probably use 1987 as the starting point for the public tables, as that was when West Coast and the Brisbane Bears entered the competition.

Meanwhile, there are also the summer sports to look at. I am in the second year of basic Elo tracking for the A-League and intend to do the same for the W-League. I am cutting it fine with the NBL, whose season starts this week; it might actually be a good place to start developing that hybrid system, although for now I am just going to stick with Elo tracking there as well, and likewise for the WNBL.
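
For reference, the basic Elo update behind that tracking fits in a few lines. The K factor and home-ground offset below are placeholders rather than the values actually in use:

```python
K = 32.0          # rating update speed (assumed)
HOME_EDGE = 50.0  # rating points credited to the home side (assumed)

def expected_home_win(r_home: float, r_away: float) -> float:
    """Win expectancy for the home team on the logistic Elo curve."""
    return 1.0 / (1.0 + 10.0 ** ((r_away - r_home - HOME_EDGE) / 400.0))

def update(r_home: float, r_away: float, result_home: float):
    """result_home: 1 for a home win, 0.5 for a draw, 0 for a loss."""
    delta = K * (result_home - expected_home_win(r_home, r_away))
    return r_home + delta, r_away - delta
```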

As far as the cricket goes, specifically the Big Bash League, this is something that could happen. I have a month or so to put something in place, which would have to include an analysis of previous results and all that, so it’s maybe a 50/50 chance at this point.

This will necessarily involve further modularising of the code, so if that is in place, in the new year I can look at certain other competitions that have so far escaped notice. News on those further developments to come.