Europe has been a frontrunner in the regulation of artificial intelligence on a global scale. The adoption of the Artificial Intelligence Act (AI Act) marks one – albeit important – piece of the puzzle of European policy on AI. Even after the Council's adoption last week, such an ambitious approach is still surrounded by scepticism, particularly concerning its potential impact on the competitiveness of the European technological ecosystem from a global perspective.
Regulation is often implemented with the intention of protecting fundamental rights and achieving public interest goals, including consumer protection, fair market practices and stability, and the AI Act is no exception to this rule. However, the AI Act has a limited scope when it comes to the protection of fundamental rights. Despite the focus on “European values” recalling Article 2 TEU, and the important step of introducing the fundamental rights impact assessment, its approach remains far from more human-centric legislation such as the Digital Services Act or the General Data Protection Regulation, as particularly underlined by the lack of judicial remedies for infringements of this Regulation.
Within this framework, the limited regulatory choices to protect fundamental rights in the AI Act may not counterbalance its potential unintended negative outcomes for competition, one of the pillars of the EU system. Indeed, the main issues relate not only to the much-debated risk of over-regulation in the EU but also to potential (legal) barriers to competition, particularly barriers to entry reinforcing market consolidation, which would not only affect the internal market but would also contribute to creating concentrations of market and political power affecting constitutional democracies.
The duality of European values
The expansion of European regulation in the field of AI is part of a broader trend which is not connected merely to the boom and spread of artificial intelligence applications. Since the launch of the Digital Single Market Strategy in 2015, the European Union has changed its approach, moving from a framework mainly dominated by a narrative of digital liberalism to one of digital constitutionalism, characterised by greater attention to the protection of rights and freedoms, or what we know today in the European AI Act as European values.
These values are primarily focused on respect for human dignity, freedom, equality, democracy and the rule of law, as well as Union fundamental rights, including the rights to non-discrimination, data protection and privacy and the rights of the child, as enshrined in Art. 2 TEU and the European Charter of Fundamental Rights. The objective is to ensure that overriding reasons of public interest, such as a high level of protection of health, safety, and fundamental rights, are not left behind. At the same time, we cannot underestimate how the EU has also built its identity on the need to ensure fundamental freedoms and competition, which have played a foundational role in the EU economic integration process since the beginning and will still play an important role in creating a market for AI in Europe. The regular reliance on Article 114 TFEU for the purposes of harmonising the internal market in areas which are primarily related to democracy, as in the case of the European Media Freedom Act, demonstrates an increasing convergence between market and democracy in Europe (Cseres explores this point here).
This European regulatory brutality (a concept coined by Papakonstantinou and De Hert, see their paper here) is not the only source of trouble once one looks ahead to the AI Act’s application: competing European values driven by the rise of European digital constitutionalism are progressively permeating the legal discourse and narrative around the soul of antitrust (if it still exists) and competition regulation. In the absence of any immediate benchmark to set out as the main objective of EU competition law, the narrative around competition regulation’s purpose is constantly being tested by the European Commission as somewhat more expansive and all-encompassing than it initially was.
Recently, both the revised Market Definition Notice and the policy brief acknowledging the Commission’s priorities in the enforcement of Article 102 TFEU demonstrate as much. The former establishes that competition policy can “contribute to preventing excessive dependency and increasing the resilience of the Union economy by enabling strong and diversified supply chains and can complement the Union’s regulatory framework on environmental sustainability” (para 3 of the Market Definition Notice). The policy brief issued by the Commission to document the state of play of the application of the prohibition of abuse explicitly reproduced Vestager’s words, recognising that “EU competition policy is able to pursue multiple goals, such as fairness and level-playing field, market integration, preserving competitive processes, consumer welfare, efficiency and innovation, and ultimately plurality and democracy”. In fact, the authors of the policy brief went on to establish that the case law (i.e., the EU courts) has also confirmed that competition law can achieve broader objectives, since ensuring consumer choice is a means to ultimately guarantee plurality in a democratic society, by referencing the General Court’s ruling in Google Android (Case T-604/18; Ezrachi and Robertson also discuss the role of antitrust in safeguarding the democratic ideal in a recent working paper).
One could argue, however, on this last point, that the policy brief pays too much attention to the necessary preservation of plurality in a democratic society, and too little to the particular context in which the General Court delivered that pronouncement, i.e., the broader analysis of Google’s abusive practices – not an overinclusive statement expanding the objectives of competition regulation. Even though we can agree that the wider EU regime seeks to secure democratic values and societies, the increasing overlap of each of these values under the common denomination of ‘European values’ may well turn into an increasingly expeditious manner of doing away with legal standards, thresholds and procedural safeguards.
In this context, the European Commission’s recent decision fining Apple for its conduct in the market for music streaming services prompts the question whether the European values of today are those that a competition enforcer would like to uphold in the future. On one side, the exploitative theory of harm points to the fact that informed choice (albeit not in the sense of the GDPR) is key to understanding commercial relationships within digital ecosystems. On the other side, the Commission rushed to detail the direct harm caused to consumers not only in monetary terms but also in non-monetary terms, including the user’s frustration when lacking sufficient information about how to conclude a transaction online. One would be right to assert that frustration is not a parameter of competition, just as wasted time and consumer inconvenience are not either. By the same token, none of these elements is a European value, so we can only go back to the ulterior concept of an informed choice in the sense of the GDPR, in its approximation to informational self-determination (on the complexity of interpreting user consent across regulations, see Botta and Borges’ recent working paper).
The enforcement of risk
Such complexity in the conflation of applicable regulations and European values can also make enforcement more unpredictable. The shift towards European values will require competent authorities to interpret the regulatory framework and to strike a balance between competing constitutional interests. In particular, the questions around enforcement will primarily concern the rules on risk assessment. The different layers of risk, as specified in Annex III for high-risk applications, raise critical interpretative issues with reference to the evolution of different technological applications. Risk is indeed a notion open to possibilities. Unlike traditional legal approaches based on a black-and-white logic shaped by interpretation, risk provides multiple possibilities which could lead to a certain legal consequence. The complexity of enforcing a risk-based approach will be a critical challenge for enforcement authorities, also considering the different approaches to risk followed by different legal instruments and their interpretation by the Court of Justice of the European Union.
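To see why a risk-based approach is harder to enforce than a black-and-white rule, it may help to sketch the AI Act's tiered structure in schematic form. The following Python fragment is purely illustrative: the tier labels track the Act's general architecture, but the `classify` function and its example use cases are hypothetical simplifications, not a reproduction of the legal test.

```python
from enum import Enum

class RiskTier(Enum):
    """Simplified, illustrative sketch of the AI Act's risk tiers."""
    PROHIBITED = "unacceptable risk: banned practices"
    HIGH = "high risk: Annex III use cases, subject to ex ante obligations"
    LIMITED = "limited risk: transparency obligations"
    MINIMAL = "minimal risk: no specific obligations"

def classify(use_case: str) -> RiskTier:
    """Hypothetical classifier: real classification requires legal
    interpretation of Annex III, not a simple lookup."""
    high_risk_examples = {"recruitment screening", "credit scoring"}
    if use_case in high_risk_examples:
        return RiskTier.HIGH
    return RiskTier.MINIMAL

print(classify("credit scoring").name)   # HIGH
print(classify("spam filtering").name)   # MINIMAL
```

The point of the sketch is precisely its inadequacy: a lookup table can encode a bright-line rule, but deciding whether an evolving application falls within an Annex III category is an interpretative exercise left to enforcement authorities and, ultimately, the Court of Justice.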
Furthermore, the complex vertical and horizontal relationship of power between the European Commission, competition authorities and other public authorities (for one, the European AI Office embedded within DG Connect) raises primary questions for the coordination and collaboration between EU institutions, the Member States and their specialised organisations and institutions. The interplay of competition law with other pieces of regulation may be useful in more than one respect. It is true that EU competition law cannot be directly influenced by regulation in terms of outcomes. That is to say, a regulatory breach is not the same as an antitrust violation, or vice versa. However, the Court of Justice recognised in Meta Platforms and Others (Case C-252/21, see a comment on the ruling here) that Article 102 TFEU may well regard (lack of) compliance with regulation – be that EU data protection regulation or, for instance, rules applying to operators integrating AI applications into their own services – as a vital clue in assessing whether a dominant undertaking’s conduct departs from competition on the merits. On the other hand, the principle of sincere cooperation contained in Article 4(3) TEU, which must cut across the efforts of public authorities in enforcing their corresponding rules and competences, illuminates this sense of enforcement with regard to risk (not only on the merits of one’s own jurisdiction but also regarding its interplay with distinct pieces of legislation). Similar overlaps are likely to make the enforcement of the AI Act increasingly challenging, as its horizontal nature introduces increasing uncertainty in the internal market.
The enforcement of the AI Act will still see national authorities designated by each Member State as protagonists. Even if the AI Act will not be enforceable until 2026 – with some exceptions for specific provisions, such as the prohibited AI systems and the provisions relating to generative AI, which will be applicable after 6 and 12 months respectively – the Member States are still called to build an enforcement infrastructure of their own. In this case, the European Artificial Intelligence Office will play a critical role in coordinating enforcement efforts. Unlike in other areas, including the GDPR or the DSA, the AI Act provides a coordinating authority for enforcement. It is also important to consider that part of the enforcement of the AI Act will be in the hands of certifying organisations for high-risk systems, as well as private enforcement which, albeit limited, allows groups to lodge complaints with the supervisory authority in order to trigger sanctioning mechanisms which, in the case of the AI Act, can reach EUR 35,000,000 or, if the offender is a company, up to 7% of its total worldwide annual turnover for the preceding financial year, whichever is higher.
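The "whichever is higher" formula means the effective ceiling scales with firm size, which is worth making concrete. A minimal sketch of the arithmetic for the top penalty band (the function name and example turnover figures are ours, for illustration only):

```python
def max_fine_eur(worldwide_annual_turnover_eur: float) -> float:
    """Ceiling for the most serious AI Act infringements:
    EUR 35 million or 7% of total worldwide annual turnover
    for the preceding financial year, whichever is higher."""
    FIXED_CAP = 35_000_000
    turnover_cap = 0.07 * worldwide_annual_turnover_eur
    return max(FIXED_CAP, turnover_cap)

# For a company with EUR 1 billion turnover, 7% (EUR 70m) exceeds EUR 35m:
print(max_fine_eur(1_000_000_000))  # 70000000.0
# For a company with EUR 100 million turnover, the fixed cap binds:
print(max_fine_eur(100_000_000))    # 35000000
```

The crossover sits at EUR 500 million in turnover: below that, the EUR 35 million figure dominates; above it, the 7% turnover-based ceiling takes over, which is why large players face materially higher exposure than the headline number suggests.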
Perspectives for the internal market
The adoption of the AI Act marks a significant step in Europe’s proactive stance towards regulating artificial intelligence. While aimed at protecting fundamental rights and achieving public interest goals, the AI Act also raises concerns about potential unintended consequences, particularly in terms of market competitiveness (the buzzword in European fora over the last months) and the rule of law (see Neves’ paper on the freedom to conduct business for further reading). The AI Act introduces layers of risk assessment and procedural safeguards for AI systems, reflecting Europe’s commitment to upholding European values such as human dignity, freedom, equality, and democracy. However, the rigid framework established by the AI Act may affect internal market dynamics, potentially reinforcing market consolidation.
The pressure on competition thriving in the internal market would result from a broader stretch of resources which only some players can dedicate to understanding the complex web of risk regulation in Europe. Considering the horizontal application of AI across different sectors and the deep connection with the processing of (personal) data, the AI Act has already raised questions about its coordination with other legal measures, particularly the GDPR. It suffices to mention how the introduction of the fundamental rights impact assessment in the AI Act raises questions about other risk obligations, notably the Data Protection Impact Assessment under the GDPR and the obligation to assess risks for very large online platforms under the Digital Services Act. Furthermore, the enforcement of the AI Act will require navigating the concept of risk, which is inherently subjective and open to interpretation, and the complex relationship between the supranational and national systems of coordination. This poses a significant challenge for enforcement authorities and raises questions about the consistency of enforcement actions.
The introduction of the AI Act is likely to bring consequences for the functioning of the internal market which, however, are not yet measurable. Even if it could produce anti-competitive effects in the internal market, the AI Act aims to reposition the rule of law in the digital age by limiting the reliance on self-regulation, including ethical narratives, related to the spread of these technologies. In this sense, it is a central piece of European digital constitutionalism. As a result, the complex framework for competition raises the bar of standards to protect European values which are not merely related to the protection of fundamental freedoms but also to the protection of fundamental rights and democratic values.
_________
If you, like us, are still wondering about how EU competition law and digital constitutionalism intersect, we are co-chairing a panel including Marco Botta (EUI and University of Vienna), Kati Cseres (University of Amsterdam), Katarzyna Sadrak (DG Competition) and Inês Neves (University of Porto/Morais Leitao) to be held online on 12 June at 5.00 pm CEST discussing the topic. We’d be glad if you would join us! To do so, just click here to register for the event.
________________________
To make sure you do not miss out on regular updates from the Kluwer Competition Law Blog, please subscribe here.