I. Introduction
Algorithms may raise competition concerns, particularly in relation to collusive practices.
Very often, the term “algorithm” is used imprecisely. When we imagine the cartels of the future, we are not in fact talking about a simple “algorithm”, which is in itself a neutral instrument.
Rather, we refer to much more specific techniques for implementing artificial intelligence: machine learning algorithms or, more precisely, deep learning.
The OECD has sought to define algorithmic collusion as follows: “Algorithmic collusion consists in any form of anti-competitive agreement or coordination among competing firms that is facilitated or implemented through means of automated systems”.
Algorithms change certain structural characteristics of the market, such as transparency and the frequency of interaction, and may thereby increase the likelihood of collusion.
Indeed, as the OECD has observed, if markets are transparent and companies react instantaneously to any deviation, the gain from deviating is zero; collusion can therefore always be sustained as an equilibrium strategy.
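The underlying intuition can be expressed with the standard repeated-game condition for sustaining collusion. This is a stylized textbook sketch rather than a formula taken from the report, and the symbols (collusive profit π_C, one-period deviation profit π_D, punishment profit π_N, discount factor δ) are introduced here purely for illustration:

```latex
\[
  \underbrace{\frac{\pi_C}{1-\delta}}_{\text{keep colluding}}
  \;\ge\;
  \underbrace{\pi_D + \frac{\delta\,\pi_N}{1-\delta}}_{\text{deviate, then be punished}}
  \qquad\Longleftrightarrow\qquad
  \delta \;\ge\; \frac{\pi_D-\pi_C}{\pi_D-\pi_N}.
\]
```

If rivals detect and punish a deviation instantaneously, the deviation profit π_D is earned for virtually no time, so the condition holds for any positive discount factor and collusion can be sustained.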
At the international level, it is debated whether these risks resemble the classic “oligopoly problem”, that is, tacit collusion between companies, and whether algorithms could extend this problem to non-oligopolistic market structures.
However, identifying a concurrence of wills, the decisive ingredient of a violation of Art. 101 TFEU, is at this stage complicated where sophisticated algorithms are involved.
In November 2019 the Bundeskartellamt and the Autorité de la concurrence published a report on “Algorithms and Competition”.
The report analyses in depth how algorithms could facilitate anticompetitive conduct and what the potential competition law implications are. Moreover, it points out practical challenges that arise when investigating algorithms.
II. Description of the report
a. Algorithms and economic principles of horizontal collusion
The report recalls the economic principles of horizontal collusion. In particular, it points out how algorithms may strengthen factors that facilitate the stability of collusion or the emergence of collusion.
As for the former, algorithms may:
- reduce coordination costs among companies;
- raise entry barriers (e.g. the need to collect big data, acquire analytics tools, or hire data scientists and data engineers);
- lower the obstacles to collusion (for instance, by making it easier for companies to find a collusive pricing strategy);
- increase the frequency of interactions;
- increase market transparency (which is particularly relevant for detecting any deviation by cartel members).
As for the latter, algorithms may facilitate the emergence of a focal point which companies may come to regard as a “natural” collusive equilibrium. Indeed, algorithms may enable companies to coordinate without the need to communicate, and may thus favor tacit collusion.
b. The use of algorithms and collusion
The report identifies three main situations where algorithms could be used to collude.
1) Algorithms may support or facilitate traditional cartels
In this case, the algorithm could be used in order to implement collusive practices such as price-fixing or market segmentation.
Although algorithms are involved, the German and French Authorities underlined that this scenario does not raise novel competition law issues since, prior to the use of the algorithm, the parties entered into an agreement or concerted practice which can be assessed under Art. 101 TFEU.
Moreover, the two Authorities suggest a case-by-case approach to assessing the role of algorithms. Indeed, algorithms may create efficiency gains to be taken into account, or strengthen the negative effects of an established anticompetitive practice. Furthermore, algorithms could help to establish the intentionality of the collusive conduct.
2) Algorithms may facilitate “hub and spoke” scenarios
Such scenarios are characterized by a third party who provides the same algorithm, or coordinated algorithms, to competitors.
By using the same algorithm, the parties can therefore reach a common collusive equilibrium without the need to communicate directly. Indeed, the algorithm of each cartel member will react in the same way to changes in production costs or demand.
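A minimal sketch of this mechanism, assuming a hypothetical deterministic pricing rule supplied by the third party (the rule, signals and figures below are invented for illustration):

```python
# Hypothetical sketch (not from the report): two competitors run the same
# third-party pricing rule. Because the rule is deterministic, identical
# market signals produce identical prices without any direct communication.

def third_party_pricing_rule(unit_cost: float, demand_index: float) -> float:
    """Deliberately simple, invented formula standing in for the algorithm supplied by the 'hub'."""
    target_margin = 0.35 + 0.10 * demand_index      # stronger demand -> higher margin
    return round(unit_cost * (1 + target_margin), 2)

# Both 'spokes' observe the same public market signals (unit cost, demand index)...
market_signals = [(10.0, 0.2), (10.0, 0.8), (12.0, 0.5)]

for cost, demand in market_signals:
    price_firm_a = third_party_pricing_rule(cost, demand)
    price_firm_b = third_party_pricing_rule(cost, demand)
    # ...and end up quoting exactly the same price without ever talking to each other.
    print(cost, demand, price_firm_a, price_firm_b, price_firm_a == price_firm_b)
```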
In this framework, the two Authorities stressed that it is important to understand whether the competitors are aware of using the same algorithm or could reasonably have foreseen it.
For this purpose, they distinguish two scenarios.
In the first scenario, at least two competitors know that they are using the same, or somehow coordinated, algorithm provided by a third party. In the second, at least two competing companies use the same, or somehow coordinated, algorithm provided by a third party, but all of them, or all but one, are unaware of this.
As for the first scenario, the report specifies that alignment of algorithmic decision-making may arise for different reasons, such as alignment at code level (e.g. algorithms with a shared purpose and a similar implemented methodology) or alignment at data level (e.g. the algorithms may enable competitors to access each other’s sensitive information, or the parties may feed the same dataset into a common algorithm that maximizes joint profits).
In assessing the potential competition law aspects of these scenarios, the report observes that they resemble hub-and-spoke cases.
However, in these scenarios the third party does not merely pass sensitive information from one party to the other, but plays a more active role by developing the shared algorithm.
The legal assessment of such situations may therefore have to rely solely on the vertical contacts between each competitor and the third party. Moreover, the competition concerns raised by such indirect contacts may depend on the type of algorithmic alignment observed.
With regard to the second scenario, the report underlines that, in order to establish an antitrust infringement by the competitors, they should at least be aware of, or able to foresee, the anticompetitive conduct of the third party. Where this is not the case, the conduct may be considered mere parallel behavior on the part of the competitors.
3) Collusion induced by the (parallel) use of individual algorithms
In this case, collusion among competitors would be implemented by algorithms without any communication between the parties.
The complexity of such a scenario may vary according to the type of algorithm involved.
Indeed, when the algorithm is “descriptive”, it may be possible to identify the collusive strategy by studying the algorithm’s code.
By contrast, in the case of a “black box” algorithm, the strategy can hardly be interpreted by accessing its code.
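By way of illustration only (both functions below are invented, not taken from the report), a simple rule-based pricing algorithm can be contrasted with a toy “black box” as follows:

```python
import math

# A "descriptive" rule-based algorithm: the pricing strategy is readable from the code itself.
def descriptive_pricing(cost: float, rival_price: float) -> float:
    return max(rival_price - 0.01, cost * 1.05)   # undercut by one cent, keep at least a 5% margin

# A "black box" algorithm: behavior is encoded in learned parameters rather than explicit rules,
# so inspecting the code (or the numbers below) reveals little about the underlying strategy.
LEARNED_WEIGHTS = [0.73, -1.12, 0.41, 2.05]       # stand-in for thousands of trained parameters

def black_box_pricing(cost: float, rival_price: float) -> float:
    w0, w1, w2, w3 = LEARNED_WEIGHTS
    hidden = math.tanh(w0 * cost + w1 * rival_price + w2)
    return max(cost, cost + w3 * (hidden + 1))    # opaque mapping from inputs to a price

print(descriptive_pricing(10.0, 12.0), black_box_pricing(10.0, 12.0))
```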
Moreover, the report also raises the question of whether algorithms could engage in explicit forms of collusion.
The report suggests that it is too early to tell whether algorithms could develop some form of communication. Most likely, “algorithmic communications” would develop through black box algorithms, which makes these kinds of interactions even harder to study.
Nevertheless, echoing the OECD, the report notes that signaling practices could constitute a specific form of communication.
As regards the plausibility of purely algorithmic collusion, the report points to a growing body of research. In particular, experiments carried out on learning algorithms tend to show evidence of collusion between algorithms (a stylized sketch of such an experiment is given after the list below).
However, these experiments are based on strong assumptions such as:
- a certain degree of common knowledge between competitors;
- time horizon: algorithms may need long periods of interaction before colluding;
- the stability of the competitive environment: significant market changes may destabilize algorithmic interactions;
- degrees of freedom and complexity of algorithms: increasing the degrees of freedom available for sophisticated strategies may have ambiguous effects on the likelihood of collusion;
- initialization: the behavior of the algorithm at the very beginning usually depends both on the company’s knowledge of the environment and on the know-how of the developer who translates that knowledge into the algorithm’s parameters;
- symmetry in terms of algorithms and companies: some algorithms may achieve collusive equilibria more easily than others.
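The following is a deliberately stylized sketch of the kind of learning experiment referenced above. All parameters, the toy demand function and the state representation are assumptions made here for illustration, not taken from the report: two Q-learning pricing agents interact repeatedly, each observing only the rival’s previous price and never communicating.

```python
import random
from collections import defaultdict

PRICES = [1, 2, 3, 4]                   # admissible price levels (assumed)
COST = 1                                # marginal cost (assumed)
ALPHA, GAMMA, EPS = 0.1, 0.95, 0.05     # learning rate, discount factor, exploration rate

def profits(p1, p2):
    """Toy demand: the cheaper firm serves the whole market, ties are split."""
    demand = 10 - min(p1, p2)
    if p1 < p2:
        return (p1 - COST) * demand, 0.0
    if p2 < p1:
        return 0.0, (p2 - COST) * demand
    return (p1 - COST) * demand / 2, (p2 - COST) * demand / 2

# One Q-table per firm: Q[firm][rival_last_price][own_price] -> value estimate
Q = [defaultdict(lambda: defaultdict(float)) for _ in range(2)]

def choose(firm, rival_last_price):
    if random.random() < EPS:                       # occasional exploration
        return random.choice(PRICES)
    return max(PRICES, key=lambda p: Q[firm][rival_last_price][p])

last_prices = (random.choice(PRICES), random.choice(PRICES))
for _ in range(200_000):
    p1 = choose(0, last_prices[1])                  # firm 0 reacts to firm 1's last price
    p2 = choose(1, last_prices[0])                  # firm 1 reacts to firm 0's last price
    r1, r2 = profits(p1, p2)
    for firm, (state, action, reward, next_state) in enumerate(
        [(last_prices[1], p1, r1, p2), (last_prices[0], p2, r2, p1)]
    ):
        best_next = max(Q[firm][next_state][p] for p in PRICES)
        Q[firm][state][action] += ALPHA * (reward + GAMMA * best_next - Q[firm][state][action])
    last_prices = (p1, p2)

print("final prices:", last_prices)     # with enough interaction, prices may settle above cost
```

In experiments of this kind, researchers have reported that prices can settle above the most competitive levels even though the agents never exchange any information directly; as the list above indicates, the outcome is sensitive to the time horizon, the stability of the environment and the algorithms’ initialization.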
As for potential legal aspects, algorithmic collusion may be categorized as “intelligent adaptations” to the market rather than coordination. Another legal issue is whether the behavior of a self-learning algorithm can be attributed to a company.
c. Practical challenges when investigating algorithms
The report addresses the practical challenges that arise when assessing anticompetitive conduct involving algorithms.
Among these challenges, the report underlines that a more in-depth analysis of the algorithms themselves may be beneficial. For this kind of analysis, it identifies several approaches: (i) analysis of the source code (in particular for descriptive algorithms); (ii) comparison of real past input/output pairs; (iii) simulation of the algorithm’s behavior on generated inputs; and (iv) comparison of the algorithm with other, more easily interpretable algorithms or methods. Moreover, when analyzing algorithms, the report stresses the importance of appropriately defining the time span under review, since, for instance, parametrized algorithms can adjust their parameters automatically over time. Other considerations concern data storage, the data cleaning process and the processing methods used.
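As a purely illustrative sketch of approaches (iii) and (iv), one could replay a firm’s pricing algorithm on generated inputs and compare its outputs against a simpler, interpretable benchmark. The function names, parameters and the benchmark rule below are hypothetical and not drawn from the report:

```python
# Hypothetical sketch: simulate the algorithm on generated inputs and compare it
# with a more easily interpretable benchmark rule.
import random

def algorithm_under_investigation(cost: float, rival_price: float) -> float:
    """Stand-in for the (possibly opaque) pricing algorithm obtained from the company."""
    # Invented behavior: match the rival's price, but never sell below cost + 10%.
    return max(rival_price, cost * 1.10)

def benchmark_undercutting_rule(cost: float, rival_price: float) -> float:
    """Interpretable reference rule: slightly undercut the rival, never below cost."""
    return max(rival_price * 0.98, cost)

random.seed(0)
generated_inputs = [(random.uniform(5, 15), random.uniform(5, 20)) for _ in range(1_000)]

gaps = [
    algorithm_under_investigation(cost, rival) - benchmark_undercutting_rule(cost, rival)
    for cost, rival in generated_inputs
]
# A persistent markup over a plausible competitive benchmark across many simulated
# inputs is the kind of pattern such an analysis is meant to surface.
print(f"average markup over the benchmark: {sum(gaps) / len(gaps):.2f}")
```

Such a gap would not in itself prove an infringement, but it illustrates how simulating an algorithm and comparing it to an interpretable reference can guide further investigation.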
For these purposes, the report points out that some scholars suggest strengthening the authorities’ information-gathering powers by requiring companies to preserve an auditable record of the development and use of their algorithms.
d. Conclusion
The report concludes that there is no need to reconsider the current legal regime and the antitrust toolbox. At this stage, the existing antitrust tools seem applicable to cases involving algorithmic behavior (e.g. algorithms facilitating traditional cartels or algorithm-based “hub and spoke” scenarios). Moreover, algorithmic collusion does not seem to pose imminent or significant threats.
However, algorithms are rapidly evolving. Therefore, competition authorities should constantly monitor further developments. For this purpose, competition authorities should deepen their knowledge about algorithms and should collaborate with academics and regulators.