The CMA’s recent “economic working paper” on the use of algorithms to facilitate collusion and personalised pricing follows on the heels of other work in this area (including by the CMA itself), but it is a bit different because it focuses on economic evidence and analysis. While there is nothing in it about the ‘lawfulness’ of a given use of pricing algorithms, it’s clearly not just academic: the CMA hopes it will help prioritise future complaints or calls for intervention.

Key points

Tacit collusion is the most interesting area. The CMA says that, in relation to tacit coordination, simulation models do confirm that some pricing algorithms can lead to collusive outcomes even where firms are each setting prices unilaterally. However, the CMA points out that this “leaves unanswered” the question of whether individual firms would have an incentive to deviate, for example by changing the algorithm to undercut the collusive price. So the CMA does not appear to be convinced that those models tell the whole story.
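
For readers curious what that simulation evidence looks like in practice, the set-up is usually a repeated pricing game in which each firm runs its own, entirely independent learning algorithm. The following is a minimal Python sketch of my own (not the CMA’s model; the price grid, the winner-takes-the-market demand rule and the learning parameters are all invented for illustration):

```python
import random
from collections import defaultdict

PRICES = [1, 2, 3, 4, 5]             # discrete price grid; 1 is the competitive price here
COST = 0                             # marginal cost (invented)
ALPHA, GAMMA, EPS = 0.1, 0.95, 0.05  # learning rate, discount factor, exploration rate

def profits(p1, p2):
    """Homogeneous-goods Bertrand: the lower price takes the (unit) market, ties split it."""
    if p1 < p2:
        return p1 - COST, 0.0
    if p2 < p1:
        return 0.0, p2 - COST
    return (p1 - COST) / 2, (p2 - COST) / 2

def choose(q, state):
    """Epsilon-greedy price choice, conditioning only on last period's observed prices."""
    if random.random() < EPS:
        return random.choice(PRICES)
    return max(PRICES, key=lambda p: q[(state, p)])

q1, q2 = defaultdict(float), defaultdict(float)
state = (random.choice(PRICES), random.choice(PRICES))

for _ in range(200_000):
    a1, a2 = choose(q1, state), choose(q2, state)
    r1, r2 = profits(a1, a2)
    nxt = (a1, a2)
    # Each firm runs a standard, entirely unilateral Q-learning update on its own payoff.
    for q, a, r in ((q1, a1, r1), (q2, a2, r2)):
        best_next = max(q[(nxt, p)] for p in PRICES)
        q[(state, a)] += ALPHA * (r + GAMMA * best_next - q[(state, a)])
    state = nxt

print("Final-period prices:", state)  # inspect whether prices settled above the competitive price of 1
```

Neither agent is told anything about the other, yet in experiments of this kind prices can end up above the competitive level. The CMA’s caveat is that such models do not fully test whether a real firm would simply re-code its algorithm to undercut the higher price.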

However, the CMA does see pricing algorithms as highly relevant to the analysis, as they may exacerbate ‘traditional’ risk factors associated with tacit coordination, such as transparency and the speed of price setting. While the paper does describe some new ways in which algorithmic pricing could have an impact beyond the traditional risk factors (see below), the CMA acknowledges that these are a bit exotic for now.

So the CMA doesn’t seem to regard pricing algorithms as a game-changer.  Rather, its conclusion is that algorithmic pricing is more likely to facilitate collusion in markets which are already susceptible to (human) coordination. For these “marginal” markets, the increasing use of data and algorithmic pricing may be the ‘last piece of the puzzle’ that could allow suppliers to move to a coordinated equilibrium.

The CMA goes further, listing factors which could give competition authorities an indication of whether a price-setting algorithm may result in tacit coordination:

  • The algorithm’s objective function: if this is very short-term (e.g. maximising profit on each and every sale, with no regard for the impact of current actions on future profits), the algorithm is less likely to lead to coordination. Even for the most sophisticated algorithms, there should still be a set objective function that the algorithm uses to measure its success, and which could in principle be audited by a competition authority (see the sketch after this list).
  • The extent to which it leads firms to adopt very simple, transparent and predictable pricing behaviour (such as price matching or price cycles).
  • The prevalence of similar pricing algorithms: the more firms in a market use the same pricing algorithm, the more likely it is that the market will move to an outcome where prices are higher.
  • The data the algorithm is using – e.g. data from multiple competitors, which is a particular risk in markets where an intermediary receives data from several clients that compete with each other.
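
On the first of those factors, a crude way to see why the time horizon matters is to compare the payoff streams an algorithm faces when it weighs an ongoing high-price profit against a one-off gain from undercutting. The figures and the grim-trigger style punishment below are invented purely for illustration; this is my own sketch, not anything in the paper:

```python
# Hypothetical per-period figures: profit from sustaining the high price, the one-off
# profit from undercutting it, and the lower profit earned once rivals respond.
def sticks_to_high_price(gamma, pi_collude=10.0, pi_deviate=18.0, pi_punish=2.0):
    """Compare discounted payoff streams under discount factor gamma."""
    collude_value = pi_collude / (1 - gamma)                      # keep the high price forever
    deviate_value = pi_deviate + gamma * pi_punish / (1 - gamma)  # cheat once, then be punished
    return collude_value >= deviate_value

for gamma in (0.0, 0.3, 0.6, 0.9):
    print(f"gamma={gamma}: sticks to high price? {sticks_to_high_price(gamma)}")
# With these (invented) figures, a purely short-term objective (gamma = 0) always undercuts;
# the high price only becomes self-sustaining once gamma reaches 0.5.
```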

The CMA also investigates the theories of algorithmic tacit collusion put forward by Ezrachi and Stucke (hub and spoke; predictable agent; autonomous machine). It concludes that hub and spoke is likely to present the most immediate risk, highlighting situations where competitors decide that, instead of using their own data and algorithms, it is more effective to delegate their pricing decisions to a common intermediary which provides algorithmic pricing services. The CMA also says that third-party providers of pricing algorithm services may be a natural (and potentially ‘unwitting’) ‘hub’ for hub-and-spoke collusion. It hints, though, that the existing competition rules could catch this (provided certain criteria are met).

As far as personalised pricing is concerned, the CMA did not see much evidence of this in practice but does say that, if there were extensive use of personalised pricing in a market, this might make it significantly less likely that algorithms could lead to tacit coordination.

Implications?

The CMA usually likes to see some return on its investment in these types of reports. One of the big questions (and perhaps a reason for this report) is whether the existing antitrust rulebook is sufficient – or could pricing algorithms dampen competition without infringing competition rules, leaving an enforcement gap?

I got a good sense from the paper that the existing rules are sufficient. For example, while pricing algorithms may make explicit and tacit coordination more stable (or possible in the first place), they do so in a way that can be assessed. Complex self-learning algorithms have been the subject of a lot of debate, but the CMA points to the significance of the algorithm’s objective function, stating that a competition authority might be able to look beyond the complexity of its workings and into the whites of its eyes (my words, not theirs!) to see what it is really focussed on.

The mention of hub and spoke – that it is the most immediate concern as far as tacit coordination goes – was interesting. I think the CMA really has in mind the situation where competitors conclude that, rather than using their own data and algorithms, it is more effective to delegate their pricing decisions to a common intermediary which provides algorithmic pricing services. So it is more than just using the same algorithm. However, the CMA thinks that the existing competition law analysis of hub-and-spoke could be sufficient to address competition concerns if certain criteria can be established. This is a bit cryptic, as the paper steers clear of the law. But maybe the CMA thinks that, in practice, any companies using that common intermediary would have to be aware of what is going on – and the hub may act as a confidence builder linking the rivals together. So it stops being unilateral. What is perhaps less clear is how the simple use of the same algorithm by multiple players (another type of hub and spoke described by Ezrachi and Stucke) would be caught by the law. However, the CMA notes the temptation to cheat, and the fact that the algorithm is only ever going to be one factor in what is always a complex price-setting exercise. So perhaps it is less concerned about the risk of economic impact here.

What about merger control? Given the conclusion that pricing algorithms could be the final piece of the puzzle for tacit coordination, query whether, in the merger control context, competition authorities like the CMA might become more inquisitive and sensitive about the use of pricing algorithms and the significance they may have for coordinated effects. In other words, if a merger affects online markets characterised by heavy use of pricing algorithms (especially ones using data from multiple competitors), might the authority develop concerns about coordinated effects at lower levels of market concentration than would otherwise be the case? To explore this, they might ask questions directed at the factors above: the extent to which the hub-and-spoke model is in play (are rivals using the same algorithm, or is pricing outsourced to a third-party service provider?); what time horizon the algorithm is set to; what the algorithm’s market coverage is; and the nature of the input data (e.g. does it come from an intermediary collecting competitor data?). More likely, perhaps, is that market studies will become a favoured option where algorithms may be affecting the traditional (or even the non-traditional) risk factors.

