From Blonde Beer,

Greetings all and sundry!

The ETC season is upon us, and with it the usual debates about the utility (or otherwise) of team data analysis. In addition, the usual rumours about how the project does or doesn't do things (which I can never pinpoint the start of) begin as well.
So we asked the Rules Team to put their heads together, with input from selected other staff members (e.g. members of the Data Analysis team), in order to present to you the project's consensus view on the utility of the ETC data.

We hope that this will make things clearer, as well as prevent rumours and overly animated discussions on the forum.

Project consensus on the utility of ETC data:


  • ETC lists do contain some useful information, so should not be discarded out of hand.
  • Data analysis is best performed on team and singles data separately, as well as combined, since any differences between them are instructive.
  • The frequency with which armies are taken, both overall and by the top teams, gives an indication as to what the tiers might be* (see the sketch below).
  • The relative popularity of units, both overall and by the top teams, gives an indication of internal balance*.
  • The lack of scenarios as part of the pairing process this year increases the meaningfulness of the lists compared to previous years.
  • We will look at the results, both overall and amongst the top teams, but this is of course where we need to be careful due to the matchup process. Mostly we are just looking to check that the teams were reasonably correct in their army and unit choices, and that one of the popular armies or units didn't turn out to be a red herring or a damp squib.
Bottom line: there is information there, but it needs to be treated sensitively and with nuance. It will neither be ignored nor used as the sole arbiter of decisions.

*Note that these statements apply to a snapshot in time. Metas do evolve over time, and what gets played is dependent on what else is being played. This adds a further complication to the analysis and the conclusions that can be drawn from the data.
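For anyone curious what a frequency check like the one in the list above might look like in practice, here is a minimal sketch (not the Data Analysis team's actual tooling); the data layout, the team names, and the "top 8 placings" cutoff are assumptions made up purely for illustration:

```python
# A rough sketch of comparing army pick rates overall vs. among top teams.
# The data layout and the "top 8" cutoff are illustrative assumptions only.

from collections import Counter

# Each entry: (team_name, final_rank, list of armies the team brought)
team_lists = [
    # ("Team A", 1, ["Warriors of the Dark Gods", "Ogre Khans", ...]),
]

TOP_CUTOFF = 8  # treat the best 8 placings as "top teams"

def pick_rates(entries):
    """Fraction of all brought armies that each army book represents."""
    counts = Counter(army for _, _, armies in entries for army in armies)
    total = sum(counts.values())
    return {army: n / total for army, n in counts.items()} if total else {}

overall = pick_rates(team_lists)
top = pick_rates([e for e in team_lists if e[1] <= TOP_CUTOFF])

# Armies noticeably more popular among top teams hint at a higher tier;
# the reverse hints at a lower one. This is a snapshot, not a verdict.
for army in sorted(overall, key=lambda a: top.get(a, 0) - overall[a], reverse=True):
    print(f"{army}: overall {overall[army]:.1%}, top teams {top.get(army, 0):.1%}")
```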

In case anyone is interested further, here are some excerpts from the 9th Scroll article by the Data Analysis team.

Which kinds of tourneys do we analyse?
We analyse all tourneys, but we analyse the different kinds of tourneys separately: singles tourney data is pooled with other singles tourney data and is not mixed with team tourney data. Afterwards we compare the results. You may ask yourself whether that isn't comparing apples with oranges, but to stay in the metaphor, this way we can still find out whether the fruits share some resemblances. That means if both datasets show the same result (e.g. an army overperforming), it is much more likely to be real than if only one of the two kinds of tourney, or neither, shows it.
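As a loose illustration of that "analyse separately, then compare" idea, here is a minimal sketch; the per-game record layout (army, tourney_type, score) and the 10-10 draw baseline are assumptions for the example, not a description of the team's actual pipeline:

```python
# A minimal sketch of the "analyse separately, then compare" idea.
# The field names (army, tourney_type, score) and the 10.0 baseline
# are illustrative assumptions, not the team's actual methodology.

from collections import defaultdict
from statistics import mean

games = [
    # {"army": "Vampire Covenant", "tourney_type": "single", "score": 13}, ...
]

def avg_score_by_army(records, tourney_type):
    """Average battle score per army, restricted to one kind of tourney."""
    per_army = defaultdict(list)
    for g in records:
        if g["tourney_type"] == tourney_type:
            per_army[g["army"]].append(g["score"])
    return {army: mean(scores) for army, scores in per_army.items()}

singles = avg_score_by_army(games, "single")
teams = avg_score_by_army(games, "team")

# Only flag an army when BOTH datasets point the same way, e.g. it averages
# above the 10-10 draw baseline in singles AND in team events.
BASELINE = 10.0
overperforming = [
    army for army in singles
    if army in teams and singles[army] > BASELINE and teams[army] > BASELINE
]
print(overperforming)
```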

What are the reasons some people don't see team tourney results as relevant for single tourneys?
At team tourneys, one can try to influence which opponent one plays, and often which scenario and which table one plays on.
Of course the opposing team can try to do that too. Also, some teams take armies for specific roles, so one army can be taken and designed as a counter to, for example, monster mash.
So if the team is able, in every game, to pair that list against the list it is designed to counter, it should naturally score more than an all-comers list, even if the all-comers list performs better against a mixed field. An army can also be taken as a blocker build, whose only job is to play results between 7-13 and 10-10 and, if paired correctly, to block the opponents' scorer armies. A scorer army is designed to crush its opponent really hard and rack up lots of points, but it usually has some weaknesses which it has to be paired around.
All of that can suggest that the role of an army and the skill of the person doing the pairings have more influence than the actual army strength. But there are other theories too.
One is that a more successful team has a better hand at picking the armies that fulfil the needed roles than a less successful team; that would mean the person who is better at pairing is also better at selecting armies, so better armies for each purpose end up in their team. Another theory is that better armies are less dependent on the pairing than weaker armies, and so end up in higher-placing teams at higher rates.

Why do we still collect and analyse them (team tournaments)?
Diversity, sample size and comparison are the keywords.

Why diversity?
Well, our players like to play different kinds of tourneys, so I believe we should do our best to make each kind of tourney as balanced and as fun as we can. That doesn't necessarily mean that steps to improve team tournaments automatically have to influence single events, and vice versa. Such things could easily be done through the tournament rules themselves, specified separately for team or single events.

Why sample size?
Well, to be honest, most of the time our sample size is far smaller than we would like, so team tourneys provide additional data.
Data which can't simply be thrown together with singles data without thought, but it is still additional data which can be used as a control measure and as a comparison.

Why comparison?
Did I mention our sample sizes are smaller than we would like? Well, the way armies perform in different kinds of tourneys can also tell you something. If, for example, an army absolutely ruled in single tourneys but was weak in team tourneys, that would help to identify where the balance problems with that army lie (the meta, or counters that are too expensive or too specific to include in all-comers lists).


We hope this helps explain a bit of the reasoning behind our study of team tournament events. We have a topic for discussing this, or for further questions, right here.
Special thanks to @DanT for writing this up!