Dr Kathrin Hamenstädt - Lecturer in Law, Brunel University London
Simona Demková - University of Luxembourg, Faculty of Law, Economics and Finance
Delia Lucía Martínez Lorenzo - Ph.D. candidate at Hasselt University and Maastricht University
Matteo Pressi - PhD student in Administrative Law, University of Verona, Department of Legal Sciences
Alberto Nicòtina - Ph.D. Candidate in Constitutional Law, University of Antwerp
Filipe Brito Bastos - Guest Assistant Professor at NOVA School of Law, Lisbon
Andriani Kalintiri
The Decisional Value of Information in European Semi-automated Decision-making
This article argues that the automated processing of information, such as via large-scale information systems in the Area of Freedom, Security and Justice (AFSJ), alters the ‘value’ of information from a means of assistance to a key decisional asset. Information in its different forms, whether paper-based or digital, has always been fundamental to decision-making: decisions must be based on correct, full, and adequate knowledge and reasoning. Technological innovation has, however, magnified both the capacities and the significance of information. The article therefore maintains that the ‘decisional value’ of automatically-processed information also alters the nature of the respective decision-making – from a conventional type in which the agent exercises discretion to a ‘semi-automated’ conduct in which automation inhibits the agent’s decision-making capacity. Recognising this transformation is necessary for the law to keep pace with technological progress and to safeguard the rights of individuals who are subject to such semi-automated decisions.
Information in its different forms, whether paper-based or digital, has always been fundamental to decision-making conduct. Technological innovation has, however, magnified information capacities and significance. Cleveland describes information as ‘the dominant resource in “post-industrial society”’: H Cleveland, ‘The Twilight of Hierarchy: Speculations on the Global Information Society’ (1985) 45 Public Administration Review 185, 185. See also ‘Advanced Research Projects Agency Network’ in D Ince (ed), A Dictionary of the Internet (Oxford University Press 2019) <www.oxfordreference.com/view/10.1093/acref/9780191884276.001.0001/acref-9780191884276-e-65> accessed 28 February 2021. In the European Union (EU) multilevel legal order, technological innovation primarily seeks to boost efficiency in integrated decision-making, which relies on cooperation among actors from different jurisdictions. HCH Hofmann, ‘Composite Decision Making Procedures in EU Administrative Law’ in HCH Hofmann and AH Türk (eds), Legal Challenges in EU Administrative Law: Towards an Integrated Administration (Edward Elgar 2009); HCH Hofmann and AH Türk, ‘Legal Challenges in EU Administrative Law by the Move to an Integrated Administration’ in HCH Hofmann and AH Türk (eds), Legal Challenges in EU Administrative Law: Towards an Integrated Administration (Edward Elgar 2009) and O Jansen and B Schöndorf-Haubold (eds), The European Composite Administration (Intersentia 2011). The efficiency-driven innovation gradually also changes the nature of the respective decision-making. This article asserts that the automated processing of information, such as via large-scale information systems in the Area of Freedom, Security and Justice (AFSJ), alters the ‘value’ of information from a means of assistance to a key decisional asset (2). This contribution focuses on individual decision-making, i.e. on actions taken by competent, primarily Member State, authorities in the implementation of Union law vis-à-vis private persons. These can include decisions on the issuance of entry or stay permits for third-country nationals; on the arrest or surrender of individuals for the purposes of criminal proceedings; and on the placing of individuals, including children, in protective schemes. The automatically-processed information consequently also alters the nature of the respective decision-making – from a conventional type in which the agent exercises discretion to a ‘semi-automated’ conduct in which automation inhibits the agent’s decision-making capacity (3). Recognising this transformation is necessary for the law to keep pace with technological progress and to safeguard the rights of individuals who are subject to such semi-automated decisions.
The reliance on information in public decision-making rests on the requirement that decisions shall be based on correct, full, and adequate knowledge and reasoning.Article 41(2)(c) of the Charter of Fundamental Rights of the European Union [2016] OJ C202/389 [CFR] enshrines ‘the obligation of the administration to give reasons for its decisions’. See J Mendes, ‘Good Administration in EU Law and the European Code of Good Administrative Behaviour’ (2009) EUI Law Working Papers 2009/09, 13 <https://cadmus.eui.eu/handle/1814/12101> accessed 9 June 2021; HP Nehl, ‘Good Administration as Procedural Right and/or General Principle?’ in HCH Hofmann and AH Türk, Legal Challenges in EU Administrative Law: Towards an Integrated Administration (Edward Elgar 2009); HCH Hofmann and BC Mihaescu-Evans, ‘The Relation between the Charter’s Fundamental Rights and the Unwritten General Principles of EU Law: Good Administration as the Test Case’ (2013) 9 European Constitutional Law Review 73 and P Craig, ‘Article 41 – Right to Good Administration’ in S Peers and others (eds), The EU Charter of Fundamental Rights (Hart-Nomos 2014). In the move from a paper-based to a digital society, the nature of information and its effects on decision-making have however changed fundamentally.Some authors refer to ‘hyperhistory’ as a new era in human development characterised by unthinkable quantities of data: See e.g. L Floridi, The Fourth Revolution: How the Infosphere Is Reshaping Human Reality (1st edn, Oxford University Press 2014) 24.
Decision-making implementing Union law often relies on either vertical or horizontal cooperation between EU and Member State actors through ‘multi-step’ composite procedures. For typologies of composite procedures, see HCH Hofmann, ‘Multi-Jurisdictional Composite Procedures - the Backbone to the EU’s Single Regulatory Space’ (2019) University of Luxembourg Law Working Paper No. 2019-003, 26 <https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3399042> accessed 9 June 2021 and M Eliantonio, ‘Judicial Review in an Integrated Administration: The Case of “Composite Procedures”’ (2015) 7 Review of European Administrative Law 65. The key feature of composite decision-making is the interdependence between the input and the output, i.e. between the assistance given by an authority of one jurisdiction in the preparation of a final decision and that decision as adopted by an authority of another jurisdiction. Informational cooperation involving the collection, processing, and automatic generation of ‘suggestions’ for the purposes of decision-making increasingly lies at the core of such procedures.
Nevertheless, composite decision-making is also characterised by a degree of separation of the preparatory (factual) input from the final legally-binding output, both in the legislative design of the decision-making procedures as well as in the European arrangement of judicial review.S Alonso de León, ‘Composite Administrative Procedures in the European Union’ (Doctoral Thesis, Universidad Carlos III de Madrid 2016) <https://e-archivo.uc3m.es/handle/10016/23445> accessed 28 February 2021; F Brito Bastos, ‘Derivative Illegality in European Composite Administrative Procedures’ (2018) 55 Common Market Law Review 101 and HCH Hofmann, ‘Composite Decision Making Procedures in EU Administrative Law’ in HCH Hofmann and AH Türk (eds), Legal Challenges in EU Administrative Law: Towards an Integrated Administration (Edward Elgar 2009). See also the review of the respective CJEU case-law in Case C-219/17 Berlusconi (Fininvest), EU:C:2018:502, Opinion of AG Campos Sánchez-Bordona, points 57 et seq. The latter considers composite decision-making as a single procedure,At least in situations where the final decision-maker is an EU body, the CJEU finds that it holds ‘exclusive competence’ to review the procedure as a whole: Case C-219/17 Berlusconi, EU:C:2018:502. Critical questions, however, remain concerning the practice of review which would extend to the preparatory conduct of authorities from a different jurisdiction: HCH Hofmann, ‘Multi-Jurisdictional Composite Procedures - the Backbone to the EU’s Single Regulatory Space’ (2019) University of Luxembourg Law Working Paper No. 2019-003, 26 <https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3399042> accessed 9 June 2021. yet it is limited, except in exceptional circumstances,For a discussion on the circumstances under which factual conduct is reviewable incidentally, see HCH Hofmann, GC Rowe and AH Türk, Administrative Law and Policy of the European Union (1st edn, Oxford University Press 2011) 667 et seq. to the conduct of the final decision-maker under the competent court’s jurisdiction. In other words, the preparatory conduct often remains outside the scope of judicial review.
Generally, informational cooperation as a preparatory conduct is deemed a ‘means of assistance’ with no or only indirect legal effects on the final act taken vis-à-vis an individual. In that respect, Filipe Brito Bastos suggests nuances regarding the notion of composite administration, in that not all forms of composite cooperation encompass joint decision-making. Brito Bastos classifies three kinds of composite cooperation: (a) informational (the exchange of information); (b) institutional (the representation of different levels of governance in the same bodies); and (c) procedural (through joint decision-making processes). F Brito Bastos, ‘Beyond Executive Federalism: The Judicial Crafting of the Law of Composite Administrative Decision-Making’ (Doctoral Thesis, European University Institute 2018) ch 3 <http://cadmus.eui.eu//handle/1814/55824> accessed 28 February 2021. With increasing automation, however, further nuance is also needed regarding the classification of informational cooperation alone, especially from the perspective of its role within (i.e. its effects on) the decision-making.
Initially, informational cooperation mushroomed into the EU through mutual assistance obligations grounded in a territorially limited concept of public authority.HCH Hofmann, ‘Composite Decision Making Procedures in EU Administrative Law’ in HCH Hofmann and AH Türk (eds), Legal Challenges in EU Administrative Law: Towards an Integrated Administration (Edward Elgar 2009) 5. Accordingly, for instance, one Member State authority’s investigatory powers were limited in the territory of another Member State.HCH Hofmann, GC Rowe and AH Türk, Administrative Law and Policy of the European Union (1st edn, Oxford University Press 2011) 5. Mutual informational assistance became necessary due to the limited capacities and resources of national authorities to ascertain all necessary information for the fulfilment of the responsibilities under EU law. The Union institutions were entrusted with facilitating information exchanges by providing appropriate platforms.D-U Galetta and others, ‘Book V – Mutual Assistance’ in ReNEUAL Model Rules on EU Administrative Procedure (ReNEUAL SC 2014) 200 <www.reneual.eu/index.php/projects-and-publications/reneual-1-0> accessed 9 June 2021. Such platforms would be purely procedural in terms of designing processes enabling requests for information from one authority to another. Increasingly, however, where information exchanges demanded not only ad hoc forwarding of information, but also large-scale collection and storing of information useful for more authorities at the same time, more advanced information exchanges began to evolve with the EU’s construction of shared databases and information systems.
In that context, the authors of the ReNEUAL Model Rulescf ibid and D-U Galetta and others, ‘Book VI – Administrative Information Management’ in ReNEUAL Model Rules on EU Administrative Procedure (ReNEUAL SC 2014) <www.reneual.eu/index.php/projects-and-publications/reneual-1-0> accessed 9 June 2021. emphasise distinguishing the degrees of integration in informational cooperation.See categorisations of EU information management in J-P Schneider, ‘Information Exchange and Its Problems’ in C Harlow, P Leino and G della Cananea (eds), Research Handbook on EU Administrative Law (Edward Elgar 2017); D-U Galetta, HCH Hofmann and J-P Schneider, ‘Information Exchange in the European Administrative Union: An Introduction’ (2014) 20 European Public Law 65; J Sommer, ‘Information Cooperation Procedures – with European Environmental Law Serving as an Illustration’ in O Jansen and B Schöndorf-Haubold (eds), The European Composite Administration (Intersentia 2011); HCH Hofmann, GC Rowe and AH Türk, ‘Information and Administration’ in HCH Hofmann, GC Rowe and AH Türk, Administrative Law and Policy of the European Union (1st edn, Oxford University Press 2011) and HCH Hofmann, ‘Composite Decision Making Procedures in EU Administrative Law’ in HCH Hofmann and AH Türk (eds), Legal Challenges in EU Administrative Law: Towards an Integrated Administration (Edward Elgar 2009). Accordingly, the varied nature of informational cooperation – one which is based on exact steps in the cooperation, in contrast to vastly integrated cooperation – renders a one-size-fits-all approach of legal standards inadequate, or ‘(at-best) ill-suited’.D-U Galetta and others, ‘Book V – Mutual Assistance’ in ReNEUAL Model Rules on EU Administrative Procedure (ReNEUAL SC 2014) 200 <www.reneual.eu/index.php/projects-and-publications/reneual-1-0> accessed 9 June 2021. Jens-Peter Schneider highlights that ‘less integrated forms’ demand fewer legal guarantees than the highly integrated ones.J-P Schneider, ‘Information Exchange and Its Problems’ in C Harlow, P Leino and G della Cananea (eds), Research Handbook on EU Administrative Law (Edward Elgar 2017) 86. This is because the legal issues which arise from the two opposite ends of the spectrum of informational integration vary in their complexity.The legal issues in the informational mutual assistance cooperation include: compliance with correct and non-arbitrary requests for assistance; the grounds for refusal to comply with a request; and the relevant safeguards for the individuals concerned by such requests. More integrated forms of informational cooperation that enable direct access to the information by authorities of either national or EU agencies and that do not rely on prior requests pose more multifaceted challenges, including: compliance with complex access conditions; concerns with direct availability; retrieval and long-term storing of information initially collected for different purposes; and the emerging questions concerning linking of the data from various sources through interoperability between the independent information systems. This (expanded) list is borrowed from ibid, 86, 92–93.
In the large-scale form of informational cooperation, variations in the degree of integration exist. First, some information exchanges are operationalised through the web as a simple mutual assistance tool. This is the case of the Internal Market Information System (IMI), ‘About IMI-Net’ (Internal Market Information System, — —) <https://ec.europa.eu/internal_market/imi-net/about/index_en.htm> accessed 28 February 2021. which unlike the systems of the AFSJ does not store data itself. Instead, IMI is an online tool enabling large-scale mutual assistance cooperation among national authorities, in terms of the number of information exchanges for all internal market policies being facilitated through this platform. See e.g. M Lottini, ‘An Instrument of Intensified Informal Mutual Assistance: The Internal Market Information System (IMI) and the Protection of Personal Data’ (2014) 20 European Public Law 107. Second, there are information-sharing platforms, which serve as large-scale storage of information, and which operate as a sort of ‘warning system’. This is the case in risk regulation of, for instance, food and non-food product safety (RASFF and RAPEX, respectively). European Commission, ‘Safety Gate: The Rapid Alert System for Dangerous Non-Food Products’ (European Commission, — —) <https://ec.europa.eu/consumers/consumers_safety/safety_products/rapex/alerts/repository/content/pages/rapex/index_en.htm> accessed 28 February 2021 and European Commission, ‘RASFF - Food and Feed Safety Alerts’ (European Commission, — —) <https://ec.europa.eu/food/safety/rasff_en> accessed 28 February 2021. RAPEX functions as an online database of information provided by national authorities, thus facilitating warnings for both consumers and EU legislators about the products that need to be removed from the market. Lastly, there are the AFSJ information systems, which, like the alert mechanisms in risk regulation, facilitate for national authorities the identification of persons who are sought for different visa, asylum, customs, or criminal and judicial proceedings purposes. The latter two types of informational cooperation, due to the nature of the processing of information therein (i.e. through automated means), are capable of ‘affecting’ the final decisions in a distinct – though seemingly indirect – manner. Novel technologies which enable algorithmic evaluation of the risks of hazards in food, feed, or medicinal products can directly determine the decisions adopted regarding the placing of a product within the single market. Interoperability reforms are also projected in the field of risk regulation: Commission Implementing Regulation (EU) 2019/1715 of 30 September 2019 laying down rules for the functioning of the information management system for official controls and its system components (the IMSOC Regulation) [2019] OJ L261/37. This Regulation sets up a computerised information management system for official controls to manage and automatically exchange relevant data, information, and documents. According to point 2 of the Preamble, the IMSOC is an ‘interoperability schema connecting [certain information systems managed by the Commission and certain national systems of the Member States and information systems of third countries and international organisations]’.
The Commission-managed information systems to be thus integrated include, in addition to RASFF: the system for notifying and reporting information on animal diseases (ADIS) – to be established pursuant to Regulation (EU) 2016/429; the system for notifying and reporting the presence of pests in plants and plant products (EUROPHYT) – to be established pursuant to Regulation (EU) 2016/2031; the technical tools for administrative assistance and cooperation (AAC); and the TRACES system referred to in Regulation (EU) 2017/625. See Regulation (EU) 2016/429 of the European Parliament and of the Council of 9 March 2016 on transmissible animal diseases and amending and repealing certain acts in the area of animal health (‘Animal Health Law’) [2016] OJ L84/1; Regulation (EU) 2016/2031 of the European Parliament and of the Council of 26 October 2016 on protective measures against pests of plants, amending Regulations (EU) No 228/2013, (EU) No 652/2014 and (EU) No 1143/2014 of the European Parliament and of the Council and repealing Council Directives 69/464/EEC, 74/647/EEC, 93/85/EEC, 98/57/EC, 2000/29/EC, 2006/91/EC and 2007/33/EC [2016] OJ L317/4 and Regulation (EU) 2017/625 of the European Parliament and of the Council of 15 March 2017 on official controls and other official activities performed to ensure the application of food and feed law, rules on animal health and welfare, plant health and plant protection products, amending Regulations (EC) No 999/2001, (EC) No 396/2005, (EC) No 1069/2009, (EC) No 1107/2009, (EU) No 1151/2012, (EU) No 652/2014, (EU) 2016/429 and (EU) 2016/2031 of the European Parliament and of the Council, Council Regulations (EC) No 1/2005 and (EC) No 1099/2009 and Council Directives 98/58/EC, 1999/74/EC, 2007/43/EC, 2008/119/EC and 2008/120/EC, and repealing Regulations (EC) No 854/2004 and (EC) No 882/2004 of the European Parliament and of the Council, Council Directives 89/608/EEC, 89/662/EEC, 90/425/EEC, 91/496/EEC, 96/23/EC, 96/93/EC and 97/78/EC and Council Decision 92/438/EEC (Official Controls Regulation) [2017] OJ L95/1. Similarly, such decisive effects can arise in algorithmic matching of biometric data as the key – if not sole – means to verify a person’s identity. Although not yet fully materialised, M Hildebrandt and K O’Hara, ‘Introduction’ in M Hildebrandt and K O’Hara (eds), Life and the Law in the Era of Data-Driven Agency (Edward Elgar 2020) 3. automated informational output should be viewed as part and parcel of the decision-making cycle. Indeed, finding a hit or a match in a system instructs the authorities to act in a specific way. A hit occurs in SIS II where a search of certain information by the competent authority reveals an alert, i.e. the alert in SIS matches the searched data. According to the legal provisions, further actions are to be taken as a result of the hit. eu-LISA, Report on the Technical Functioning of Central SIS II and the Communication Infrastructure, Including the Security Thereof and the Bilateral and Multilateral Exchange of Supplementary Information between Member States (eu-LISA 2015) 25 <www.eulisa.europa.eu/Publications/Reports/SIS%20II%20Technical%20Report%202015.pdf> accessed 9 June 2021. In that context, Brouwer warns of the risk of creating ‘digital entry bans’ against which no effective remedies would exist. E Brouwer, ‘Large-Scale Databases and Interoperability in Migration and Border Policies: The Non-Discriminatory Approach of Data Protection’ (2020) 26 European Public Law 71, 72.
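By way of illustration only, the following sketch models in very simplified form how a ‘hit’ against a stored alert is translated into an instruction to the end-user. The data structure, the similarity function, the threshold, and the follow-up actions are hypothetical and do not reproduce any actual SIS, EES, or sBMS specification; the point is merely that the automated output reaches the competent authority as an instruction to act, not as reasoning to be weighed.

```python
# Hypothetical, simplified illustration of a biometric 'hit' producing an
# instruction to act. All names, thresholds and actions are invented; real
# systems (SIS, EES, sBMS) are governed by their own technical specifications.
from dataclasses import dataclass


@dataclass
class Alert:
    alert_id: str
    template: bytes        # stored biometric template (illustrative)
    follow_up_action: str  # e.g. "refuse entry", "discreet check" (illustrative)


def similarity(probe: bytes, template: bytes) -> float:
    """Trivial stand-in for a proprietary biometric matcher (illustration only)."""
    if not probe or not template:
        return 0.0
    overlap = sum(a == b for a, b in zip(probe, template))
    return overlap / max(len(probe), len(template))


def check_for_hit(probe: bytes, alerts: list[Alert], threshold: float = 0.9) -> str | None:
    """Return the prescribed follow-up action if the probe matches a stored alert.

    Once the score exceeds the threshold, the end-user receives the action to be
    taken; the reasoning behind the match remains inside the matcher.
    """
    for alert in alerts:
        if similarity(probe, alert.template) >= threshold:
            return alert.follow_up_action
    return None
```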
Before elaborating further on the meaning of the ‘decisional’ effects of automated informational cooperation, it is worth recalling the rationales for classifying informational cooperation as a means of assistance rather than as a part of the decision-making.
There are at least three characteristics of the Union legal order which inhibit the recognition of the ‘decisional value’ of information. First, the governing legislation defines informational cooperation as a means of assistance. For instance, information sharing in the AFSJ is conceived as a compensatory measure ‘supporting operational cooperation between national competent authorities’,See point 1 of the Preamble to the SIS-recast. Regulation (EU) 2018/1861 of the European Parliament and of the Council of 28 November 2018 on the establishment, operation and use of the Schengen Information System (SIS) [SIS II] in the field of border checks, and amending the Convention implementing the Schengen Agreement, and amending and repealing Regulation (EC) No 1987/2006 [2018] OJ L312/14; Regulation (EU) 2018/1862 of the European Parliament and of the Council of 28 November 2018 on the establishment, operation and use of the Schengen Information System (SIS) in the field of police cooperation and judicial cooperation in criminal matters, amending and repealing Council Decision 2007/533/JHA, and repealing Regulation (EC) No 1986/2006 of the European Parliament and of the Council and Commission Decision 2010/261/EU [2018] OJ L312/56 and Regulation (EU) 2018/1860 of the European Parliament and of the Council of 28 November 2018 on the use of the Schengen Information System for the return of illegally staying third-country nationals [2018] OJ L312/1. The SIS-recast is in force, replacing the former SIS II legislation (Regulation (EC) No 1987/2006 of the European Parliament and of the Council of 20 December 2006 on the establishment, operation and use of the second generation Schengen Information System (SIS II) [2006] OJ L381/4; Council Decision 2007/533/JHA of 12 June 2007 on the establishment, operation and use of the second generation Schengen Information System (SIS II) [2007] OJ L205/63 and Regulation (EC) No 1986/2006 of the European Parliament and of the Council of 20 December 2006 regarding access to the Second Generation Schengen Information System (SIS II) by the services in the Member States responsible for issuing vehicle registration certificates [2006] OJ L381/1). The reforms shall be implemented gradually, and in their entirety by no later than 28 December 2021. hence only indirectly obliging the authorities to rely on existing information systems to identify or verify an individual or an object that is being searched for specific purposes.e.g. authorities are ‘authorised to search directly the data contained in the [system, such as the SIS]’, rather than obliged: Notices from Member States, ‘List of competent authorities which are authorised to search directly the data contained in the [SIS II] pursuant to Article 31(8) of Regulation (EC) No 1987/2006 and Article 46(8) of Council Decision 2007/533/JHA. Only under the Schengen Border Code do requirements exist obliging competent authorities to perform systematic checks against existing databases concerning decisions for entry or stay of third-country nationals.Regulation (EU) 2017/458 of the European Parliament and of the Council of 15 March 2017 amending Regulation (EU) 2016/399 as regards the reinforcement of checks against relevant databases at external borders, [2017] OJ L74/1. 
Border guards must now perform systematic identity checks against the existing databases, including the [SIS], the Interpol Stolen and Lost Travel Documents database, as well as against the national databases, concerning everyone, not only third-country nationals, entering and exiting the Schengen zone. Otherwise, the legislation mostly sets out binding rules in relation to the use of a concrete system. These apply only once the authorities decide and succeed (considering the common implementation difficulties) See the latest Commission evaluation: Commission, ‘Report from the Commission to the Council and the European Parliament on the Functioning of the Schengen Evaluation and Monitoring Mechanism pursuant to Article 22 of Council Regulation (EU) No 1053/2013, First Multiannual Evaluation Programme (2015-2019)’ (2020) COM(2020)779 final. to make use of the system in the first place. The fact is that the reliance on information systems is indeed a popular practice across all Member States. In 2019 alone, SIS II was accessed 6,666,377,199 times by all Member States; it stored over 91 million alerts, of which almost 1 million concerned persons. A total of 283,713 hits on foreign alerts (issued by another State) were reported in the same year: eu-LISA, SIS II - 2019 Statistics (eu-LISA 2020) <www.eulisa.europa.eu/Publications/Reports/SIS%20II%20-%202019%20-%20Statistics.pdf> accessed 28 February 2021. This separation of preparatory conduct of a general nature (informational cooperation) from concrete decision-making (e.g. a decision withdrawing a residence permit) is the most salient aspect undermining the recognition that informational cooperation is acquiring ‘decisional value’ for the purposes of – although procedurally separated from – a factually dependent act of specific decision-making.
Second, Union legislation governing decision-making ‘supported’ by informational cooperation is highly fragmented, due to the variety of policy objectives covered.See the collections in HCH Hofmann, GC Rowe and AH Türk (eds), Specialized Administrative Law of the European Union: A Sectoral Review (Oxford University Press 2018). The all-encompassing hybrid nature of many of the currently existing information systems in the AFSJ means that the legislation governing informational cooperation ‘supports’ decision-making on matters ranging from visa applications, to granting or rejecting asylum statuses, to issuing arrest warrants or decisions on placing vulnerable persons in protective schemes. Legislation thus remains flexible, general, and of rather wide-ranging scope (see Table 1 below).See also the breakdown of the access rules in Table 1 in N Vavoula, ‘Consultation of EU Immigration Databases for Law Enforcement Purposes: A Privacy and Data Protection Assessment’ (2020) 22 European Journal of Migration and Law 139, 151–2.
Table 1 Summary of the authorities with access rights with respect to the AFSJ Information Systems
* Source: European Commission. European Commission, ‘EU Information Systems: Security and Borders’ (European Commission, April 2019) <https://ec.europa.eu/home-affairs/sites/homeaffairs/files/what-we-do/policies/european-agenda-security/20190416_agenda_security-factsheet-eu-information-systems-security-borders_en.pdf> accessed 28 February 2021. See also the outline of the authorities in N Vavoula, ‘Consultation of EU Immigration Databases for Law Enforcement Purposes: A Privacy and Data Protection Assessment’ (2020) 22 European Journal of Migration and Law 139.
Lastly, the need to maintain this comprehensive nature of informational cooperation arises from the fact that the related decision-making procedures are structured differently under the laws of the Member States. With respect to national procedural autonomy, Member States are free to arrange the relevant procedures for the implementation of Union legislation, as long as these accomplish the prescribed objectives.D-U Galetta (ed), Procedural Autonomy of EU Member States: Paradise Lost? A Study on the ‘Functionalized Procedural Competence’ of EU Member States (Springer 2010) and P Craig, ‘Competence and Member State Autonomy: Causality, Consequence and Legitimacy’ in H-W Micklitz and B de Witte (eds), The European Court of Justice and the Autonomy of the Member States (Intersentia 2012). Operationalisation of procedural autonomy thus plays against the recognition of the decisional importance of information for the final action taken concerning the individual, as this would require further harmonisation in the domestic law in order to allow for effective cross-jurisdictional control.On the idea of cross-jurisdictional review, see e.g. the recent discussion in C Warin, ‘A Dialectic of Effective Judicial Protection and Mutual Trust in the European Administrative Space: Towards the Transnational Judicial Review of Manifest Error?’ (2020) 13 Review of European Administrative Law 7.
These deeply ingrained characteristics of the structuring of European composite conduct in practice impede the adjustment of the current legal system to account for the changing nature of decision-making. However, there are important factors supporting the claim that the effects of informational cooperation on decision-making go beyond mere support and assistance.
Conceptually, as well as practically, portraying informational cooperation as a decisional asset necessitates clarifying its potential legal effects on the final decision. This clarification is required from the perspective of the reviewability of the respective conducts.
The justiciability of European public conduct depends on the definition of a reviewable act. The notion essentially distinguishes ‘acts’Acts are ‘a category of events that occur in the factual world’: N Xanthoulis, ‘Administrative Factual Conduct: Legal Effects and Judicial Control in EU Law’ (2019) 12 Review of European Administrative Law 39, 46. producing legal effects (i.e. binding acts) from non-binding acts. Concretely, the distinction is between legal acts that are capable of changing the legal position of an individual and physical or purely ‘factual acts’, which merely produce ‘some change in the physical world’ without directly or indirectly altering the original ‘legal relation’, consisting of rights and/or duties.AH Türk and N Xanthoulis, ‘Legal Accountability of European Central Bank in Bank Supervision: A Case Study in Conceptualizing the Legal Effects of Union Acts’ (2019) 26 Maastricht Journal of European and Comparative Law 151, s B: the conceptual framework builds on the proposition that ‘a person’s legal position and any legal relation between two persons [or a person and a public authority] consists of rights and obligations created within a legal system’. See further elaboration in section C in N Xanthoulis, ‘Administrative Factual Conduct: Legal Effects and Judicial Control in EU Law’ (2019) 12 Review of European Administrative Law 39. See also C Warin, ‘Individual Rights under Union Law: A Study on the Relation between Rights, Obligations and Interests in the Case Law of the Court of Justice’ (Doctoral Thesis, University of Luxembourg 2019) <https://orbilu.uni.lu/handle/10993/34222> accessed 28 February 2021. In composite decision-making that relies on automated-processing of personal information, the seemingly consecutive ‘acts’ of information processing and decision-taking sit uneasily within the legal/factual dichotomy. In this respect, Türk and Xanthoulis offer a useful conceptual framework for the classification of legal effects. The authors define legal effects as primary and secondary, depending on how a specific act ‘is linked to and brings about a change in the legal position of a person’.AH Türk and N Xanthoulis, ‘Legal Accountability of European Central Bank in Bank Supervision: A Case Study in Conceptualizing the Legal Effects of Union Acts’ (2019) 26 Maastricht Journal of European and Comparative Law 151, 154. Primary legal effects can arise both directly (where the act itself alters the person’s legal position) or indirectly (where the act imposes certain obligations or determines the rights or duties vis-à-vis the person concerned).The acts producing primary effects, directly or indirectly, are reviewable both through direct actions under Articles 263 or 265 of the Consolidated Version of the Treaty on the Functioning of the European Union [2016] OJ C202/49 [TFEU] and through the indirect preliminary reference procedure under Article 267 TFEU, or incidentally under Article 277 TFEU. Instead, secondary legal effects arise from acts which by themselves do not alter the person’s legal position, but which are nonetheless ‘determinative for the adoption or content of a subsequent act, which produces binding legal effects’.AH Türk and N Xanthoulis, ‘Legal Accountability of European Central Bank in Bank Supervision: A Case Study in Conceptualizing the Legal Effects of Union Acts’ (2019) 26 Maastricht Journal of European and Comparative Law 151, 154. These acts can be reviewed only indirectly through a preliminary reference from a national court. 
This conceptual framework also reflects the substantive approach of the Court of Justice (CJEU) to identifying reviewable acts. Indeed, pursuant to the CJEU case-law, the mere form or label of an act does not determine whether it produces legal effects.Pursuant to the formula developed by the Court in Case 60/81 IBM v Commission, EU:C:1981:264, para 9: a reviewable act is ‘any measure the legal effects of which are binding on, and capable of affecting the legal interests of, the applicant by bringing about a change in his legal position’.
While the Court’s approach provides some consolidation, it is far from reflecting the novel forms of effects generated by automated factual conduct. A quasi-autonomous output generated through algorithmic processing of information embodies a distinct form of ‘factual conduct’ which on the face of it produces only secondary legal effects (in determining the action to be taken concerning an individual), but which increasingly also alters the legal position of the individual. Independently from the decision-making that follows the processing of information, the European Data Protection Framework also recognises the primary legal effects arising from data processing conduct. This is evident from the fact that individuals whose personal data are processed for the purposes of decision-making enjoy substantive rights as data subjects. These rights can be directly enforced before the competent supervisory authorities, including courts. Regulation (EU) 2016/679 of the European Parliament and of the Council of 27 April 2016 on the protection of natural persons with regard to the processing of personal data and on the free movement of such data, and repealing Directive 95/46/EC (General Data Protection Regulation) [2016] OJ L119/1 [GDPR], arts 77–79; Regulation (EU) 2018/1725 of the European Parliament and of the Council of 23 October 2018 on the protection of natural persons with regard to the processing of personal data by the Union institutions, bodies, offices and agencies and on the free movement of such data, and repealing Regulation (EC) No 45/2001 and Decision No 1247/2002/EC [2018] OJ L 295, 21.11.2018, arts 63–64 and Directive (EU) 2016/680 of the European Parliament and of the Council of 27 April 2016 on the protection of natural persons with regard to the processing of personal data by competent authorities for the purposes of the prevention, investigation, detection or prosecution of criminal offences or the execution of criminal penalties, and on the free movement of such data, and repealing Council Framework Decision 2008/977/JHA [2016] OJ L119/89 [Law Enforcement Directive (EU) 2016/680], arts 52–53. For instance, in cooperation based on the SIS II, a ‘concentration of judicial remedies’ M Böse, M Bröcker and A Schneider, ‘Introduction’ in M Böse, M Bröcker and A Schneider (eds), Judicial Protection in Transnational Criminal Proceedings (Springer International Publishing 2021) 11 and ‘Report on the Exercise of the Rights of the Data Subject in the Schengen Information System (SIS)’ (European Data Protection Supervisor, October 2014) <https://edps.europa.eu/sites/edp/files/publication/14-10-28_report_on_the_exercise_of_the_rights_of_the_data_subject_in_sis_ii_en.pdf> accessed 28 February 2021. exists, in principle, for direct enforceability of data subjects’ rights also before a court of any of the Member States involved. Article 54(1) of the SIS-recast Regulation (EU) 2018/1861 on border checks, and Article 68(1) of the SIS-recast Regulation (EU) 2018/1862 on police and judicial cooperation. Without discussing the likelihood and tendencies of data subjects exercising their rights, European Union Agency for Fundamental Rights, ‘Access to Data Protection Remedies in EU Member States’ (European Union Agency for Fundamental Rights 2013) <https://fra.europa.
eu/sites/default/files/fra-2014-access-data-protection-remedies_en_0.pdf> accessed 9 June 2021; P Vogiatzoglou and others, ‘From Theory To Practice: Exercising The Right Of Access Under The Law Enforcement And PNR Directives’ (2021) 11 Journal of Intellectual Property, Information Technology and E-Commerce Law 274 <www.jipitec.eu/issues/jipitec-11-3-2020/5191> accessed 28 February 2021 and F Brito Bastos and D Curtin, ‘Interoperable Information Sharing and the Five Novel Frontiers of EU Governance: A Special Issue’ (2020) 26 European Public Law 59, 64. The authors explain that ‘(…) even if it were possible to clearly ascribe the entering of erroneous data to one particular national jurisdiction, it is much less clear whether the person affected will – or could – ever know of the wrong information or the causal link with a decision that affects their rights and interests. If a ‘hit’ in terms of personal data for law enforcement cannot be conventionally described as a binding decisional measure, this means that it would rather constitute what is termed a ‘purely factual’ conduct. It is unclear whether the latter could as such be justiciable and that in turn will vary according to different national rules or potentially according to the remedial rules established for proceedings before the [CJEU].’ a legal relation clearly exists in the context of personal data processing – one entailing persons’ rights as well as duties falling upon the competent authorities.
What remains troubling from the perspective of justiciability of the decision-making that relies on such informational cooperation, however, is acknowledging the interdependence between the two: in the terminology of Türk and Xanthoulis, acknowledging the primary indirect legal effects of the processing on decision-taking that follows a retrieval of an automated informational output.AH Türk and N Xanthoulis, ‘Legal Accountability of European Central Bank in Bank Supervision: A Case Study in Conceptualizing the Legal Effects of Union Acts’ (2019) 26 Maastricht Journal of European and Comparative Law 151, 154. Accordingly, an ‘automated factual conduct’ which precedes the act of decision-making requires a distinct appreciation of its potential legal, i.e. ‘decisional’ effects.
A study by eu-LISA declares that ‘the areas of border management, internal security and migration management have been going through a major transformation, moving from the physical to the virtual world’.eu-LISA, ‘Elaboration of a Future Architecture for Interoperable IT Systems at Eu-LISA - Summary of the Feasibility Study’ (eu-LISA 2019) 4 <www.eulisa.europa.eu/Publications/Reports/eu-LISA%20Feasibility%20Study%20-%20Interoperability.pdf> accessed 28 February 2021. The transformation undoubtedly also includes novel processing capacities such as automation, introduced in the AFSJ cooperation. However, not all automation is equally transformative.For early research on the effects of technology (including automation) on public administration, see Council of State Governments, Automated Data Processing in State Government: Status, Problems, and Prospects; A Study by the Council of State Governments and Public Administration Service. (National Association of State Budget Officers Public Administration Service 1965) <https://catalog.hathitrust.org/Record/001143642> accessed 28 February 2021. See also F Bannister, ‘Plus Ça Change? ICT and Structural Change in Government’ in IThM Snellen, M Thaens and WBHJ van de Donk (eds), Public Administration in the Information Age: Revisited (IOS Press 2012) 137–8: the author cautions against ‘utopian rhetoric’ of ‘cyber-exceptionalism’ surrounding the use of Big Data by outlining some common reasons for misjudging the impact of technology on public administration. Accordingly, automation is also not a ‘unitary concept’.M Brkan, ‘Do Algorithms Rule the World? Algorithmic Decision-Making and Data Protection in the Framework of the GDPR and Beyond’ (2019) 27 International Journal of Law and Information Technology 91, 94–5. Instead, automation comprises a spectrum of technological applications ranging from weaker forms of computation to those resembling human intelligence levels.M Robles Carrillo, ‘Artificial Intelligence: From Ethics to Law’ (2020) 44 Telecommunications Policy 101937, 10 and M Brkan, ‘Do Algorithms Rule the World? Algorithmic Decision-Making and Data Protection in the Framework of the GDPR and Beyond’ (2019) 27 International Journal of Law and Information Technology 91, 94–5. Brkan differentiates between procedural and substantive automated decision-making; algorithmic and non-algorithmic automated decision-making; and rule-based as opposed to law-based decisions. What is currently in place in the informational cooperation in AFSJ mostly comprises a ‘medium-level’ automation, which relies on pre-programmed algorithms (meaning ‘rules followed by a computer, as programmed by humans, which translate input data into outputs’),European Union Agency for Fundamental Rights, ‘Data Quality and Artificial Intelligence – Mitigating Bias and Error to Protect Fundamental Rights’ (European Union Agency for Fundamental Rights 2019) 2 <https://fra.europa.eu/sites/default/files/fra_uploads/fra-2019-data-quality-and-ai_en.pdf> accessed 9 June 2021. rather than self-learning capabilities of the system.A full exploration of different technologies and their uses in public decision-making is beyond the scope of this article. For lengthy accounts, see e.g. D Veit and J Huntgeburth, Foundations of Digital Government Leading and Managing in the Digital Era (1st edn, Springer 2014); Y-C Chen and MJ Ahn, Routledge Handbook on Information Technology in Government (Routledge 2017) and L Reins (ed), Regulating New Technologies in Uncertain Times (Springer 2019).
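The distinction drawn here between pre-programmed rules and self-learning systems can be illustrated with a deliberately trivial sketch. The rule below is hypothetical and has no basis in any actual AFSJ system; it simply shows automation in which every branch of the ‘decision’ was written in advance by a human, so that the system only translates input data into a pre-determined output.

```python
# Hypothetical example of 'medium-level' automation: a human-written rule that
# deterministically maps input data to an output. Nothing is learned from data.
def pre_programmed_rule(document_valid: bool, alert_present: bool) -> str:
    """Every branch was specified in advance by the system's designers."""
    if alert_present:
        return "refer to second-line check"   # illustrative output, not a real SIS action
    if not document_valid:
        return "refuse automated clearance"
    return "no action required"


# A self-learning (AI) system would instead derive such a mapping from training
# data, so its decision logic is no longer fully specified in advance.
print(pre_programmed_rule(document_valid=True, alert_present=False))
```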
As a recent example, the Interoperability FrameworkInteroperability means ‘the ability of information systems to exchange data and to enable the sharing of information. It is about a targeted and intelligent way of using existing data to best effect, without creating new databases or changing the access rights to the existing information systems’: see European Commission, ‘Security Union: Closing the information Gap’ (European Commission, 12 December 2017) <https://ec.europa.eu/home-affairs/sites/default/files/what-we-do/policies/european-agenda-security/20171212_security_union_closing_the_information_gap_en.pdf> accessed 28 February 2021. See also Regulation (EU) 2019/818 of the European Parliament and of the Council of 20 May 2019 on establishing a framework for interoperability between EU information systems in the field of police and judicial cooperation, asylum and migration and amending Regulations (EU) 2018/1726, (EU) 2018/1862 and (EU) 2019/816 [2019] OJ L135/85 and Regulation (EU) 2019/817 of the European Parliament and of the Council of 20 May 2019 on establishing a framework for interoperability between EU information systems in the field of borders and visa and amending Regulations (EC) No 767/2008, (EU) 2016/399, (EU) 2017/2226, (EU) 2018/1240, (EU) 2018/1726 and (EU) 2018/1861 of the European Parliament and of the Council and Council Decisions 2004/512/EC and 2008/633/JHA [2019] OJ L135/27 [hereafter the ‘Interoperability Regulations’]. for the AFSJ IT systems introduces means of automation, especially concerning the detection capacities of existing and future IT systems on the basis of biometric data. Interoperability aims at enhancing national and EU authorities’ decision-making by increasing the accuracy and availability of the stored data in the existing information systems.See, however, EUobserver, ‘Inaccurate data in Schengen system “threatens rights”’ (EUobserver, 8 January 2018) <https://euobserver.com/tickers/140468> accessed 28 February 2021. To that end, interoperability shall be established through tools that further integrate informational cooperation, from the perspective of both its cross-sectoral objectives and the degree of discretion enjoyed by its end-users. Four interoperability components – the European Search Portal (ESP), the shared biometric matching service (sBMS), the multiple-identity detector (MID), and the common identity repository (CIR) – present different degrees of automation.It is beyond the scope of this paper to discuss the details of these tools. See T Bunyan, ‘Analysis: The “Point of No Return” Interoperability Morphs into the Creation of a Big Brother Centralised EU State Database Including All Existing and Future Justice and Home Affairs Databases’ (statewatch, July 2018) <www.statewatch.org/analyses/no-332-eu-interop-morphs-into-central-database-revised.pdf> accessed 28 February 2021 and T Quintel, ‘Connecting Personal Data of Third Country Nationals: Interoperability of EU Databases in the Light of the CJEU’s Case Law on Data Retention’ (2018) University of Luxembourg Law Working Paper No. 002-2018 <https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3132506> accessed 28 February 2021. Yet, such automation thus far differs from the applications embodying self-learning capacities, known as artificial intelligence (AI).eu-LISA, Artificial Intelligence in the Operational Management of Large-Scale IT Systems: Research and Technology Monitoring Report: Perspectives for Eu LISA (eu-LISA 2020) 7 <https://op.europa. 
eu/en/publication-detail/-/publication/ba7d760f-d857-11ea-adf7-01aa75ed71a1/language-en> accessed 9 June 2021: AI ‘has the ability to function as an autonomous actor capable of engaging in activity that has not been explicitly pre-programmed’. While some of these tools merely encompass a search engine which enables faster access to the systems (namely the ESP), others are designed with new algorithmic processing capacities (especially the sBMS). Concerning the latter, the new information systems – the Entry/Exit System (EES), the European Travel Information and Authorisation System (ETIAS), and the updated European Criminal Records Information System on Third-Country Nationals (ECRIS-TCN) – shall enable the processing of biometric information on all persons entering the EU. The EES, for instance, allows, for the first time under EU law, facial recognition for verification purposes. Automating Society Report 2020 (AlgorithmWatch 2020) 27 <https://automatingsociety.algorithmwatch.org> accessed 28 February 2021. The EES is also the first of the systems that became interoperable through its own BMS tool, with the Visa Information System serving as ‘the first stone in the building of the future’ sBMS. Ms Ruginis Andrei (the Head of Architecture Sector) in eu-LISA, The New Information Architecture as a Driver for Efficiency and Effectiveness in Internal Security: 16 October 2019 Tallinn, Estonia: Annual Conference Report (eu-LISA 2019) 17 <https://data.europa.eu/doi/10.2857/477811> accessed 28 February 2021. Similarly, biometric processing tools are introduced to the existing systems, such as the SIS, which started operation of the Automated Fingerprint Identification System (AFIS SIS) ‘as a new digital tool’ which ‘include[s] fingerprints, latent prints, and palm prints, making those available for search by Member States.’ibid, 27.
The report by eu-LISA concerning the opportunities for the application of artificial intelligence (AI) in EU informational cooperation also reminds us to distinguish AI ‘from simple automation based on pre-programmed algorithms, which has existed for a long time’.eu-LISA, Artificial Intelligence in the Operational Management of Large-Scale IT Systems: Research and Technology Monitoring Report: Perspectives for Eu LISA (eu-LISA 2020) 7 <https://op.europa. eu/en/publication-detail/-/publication/ba7d760f-d857-11ea-adf7-01aa75ed71a1/language-en> accessed 9 June 2021. The main areas where intelligent computation is projected to be implemented in the work of the agency so far concern its business services rather than decision-making systems. Nevertheless, while interoperability also entails simple forms of automation (the search tools), other applications such as biometric searches involve algorithmic matching of information; while this relies on a set of pre-determined rules, it will eventually also involve machine learning techniques.ibid, 5: as the authors recall, ‘[w]ith the entry into force of the [EES Regulation], eu-LISA has been mandated to develop the new system, which incorporates a component for automated biometric matching, which will rely on machine learning techniques for biometric matching’. Furthermore, behind such ‘support systems’ there is an array of actors involved in the design and implementation of the application, including private parties.M Smith, M Noorman and A Martin, ‘Automating the Public Sector and Organizing Accountabilities’ (2010) 26 Communications of the Association for Information Systems, 6 <https:// aisel.aisnet.org/cais/vol26/iss1/1> accessed 28 February 2021. The resulting cooperation thus embodies an accountability ‘gap between the designer’s control and algorithm’s behaviour (…) wherein blame can potentially be assigned to several moral agents simultaneously’.BD Mittelstadt and others, ‘The Ethics of Algorithms: Mapping the Debate’ [2016] Big Data & Society 1, 11. It is thus necessary ‘to take better account of the human scenes where algorithms, code, and platforms intersect’.M Ananny and K Crawford, ‘Seeing without Knowing: Limitations of the Transparency Ideal and Its Application to Algorithmic Accountability’ (2018) 20 New Media & Society 973, 983.
The ‘medium-level’ automation that characterises informational cooperation in AFSJ, as far as it relies on pre-programmed algorithms, thus means that a human agent remains responsible for any action taken on the basis of the automatically-generated informational output. Consequently, such decision-making could best be characterised as ‘semi-automated.’ This notion was proposed inter alia by AlgorithmWatch, as meaning ‘[a]lgorithmically controlled, automated decision-making or decision support systems’ that are ‘procedures in which decisions are initially – partially or completely – delegated to another person or corporate entity, who then in turn use automatically executed decision-making models to perform an action’.AlgorithmWatch and Bertelsmann Stiftung, Automating Society: Taking Stock of Automated Decision Making in the EU (1st edn, AlgorithmWatch 2019) 9 <www.ivir.nl/publicaties/download/Automating_Society_Report_2019.pdf> accessed 28 February 2021. To rephrase this definition in light of the European composite decision-making: semi-automated could be understood as algorithmically underpinned or otherwise automated decision-making support systems, in which decisions are initially – partially or completely – delegated to different competent authorities that rely on automatically executed information to fulfil their tasks in implementing the Union legislation.
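Reduced to a minimal, purely hypothetical sketch, semi-automated decision-making in this sense can be pictured as follows: the informational processing is executed automatically, while the formal decision is left to a human agent, whose ‘intervention’ may in practice consist of little more than accepting the suggested outcome. All names and outcomes in the sketch are invented for illustration.

```python
# Purely illustrative sketch of semi-automated decision-making: the informational
# output is generated automatically; a human agent formally takes the decision.
def automated_output(record: dict) -> str:
    """Pre-programmed suggestion derived from stored information (hypothetical)."""
    return "refuse entry" if record.get("alert") else "grant entry"


def semi_automated_decision(record: dict, officer_accepts) -> str:
    suggestion = automated_output(record)
    # The decision is formally the officer's, but it is framed by the suggestion.
    return suggestion if officer_accepts(suggestion) else "escalate for full review"


# An officer who simply trusts the system reproduces its output (automation bias,
# discussed in the following sections).
decision = semi_automated_decision({"alert": True}, officer_accepts=lambda s: True)
print(decision)  # -> "refuse entry"
```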
Emphasizing the ‘semi-automated’ nature of such decision-making avoids making a misleading implication of fully-automated decision-making in the sense of the GDPR prohibition (Article 22(1) GDPR, as it was under its predecessors and its equivalents).Article 22 GDPR (ex-Article 15 Regulation 95/46) and the equivalent Article 24 Regulation 2018/1725 of the European Parliament and of the Council of 23 October 2018 on the protection of natural persons with regard to the processing of personal data by the Union institutions, bodies, offices and agencies and on the free movement of such data, and repealing Regulation (EC) No 45/2001 and Decision No 1247/2002/EC [2018] OJ L295/30 (ex-Article 19 Regulation (EC) 45/2001) applying to EU institutions and bodies, and Article 11 Law Enforcement Directive (EU) 2016/680. See also Article 29 Data Protection Working Party, ‘Guidelines on Automated individual decision-making and Profiling for the purposes of Regulation 2016/679’ (Guidelines) WP251rev.01. Automated decision-making (including profiling) in the GDPR sense concerns ‘processes that aim to divide groups of individuals into different categories based on common characteristics in order to base decisions on their belonging to a specific group.’G De Gregorio and S Ranchordas, ‘Breaking down Information Silos with Big Data: A Legal Analysis of Data Sharing’ in Joe Cannataci, Valeria Falce and Oresto Pollicino (eds), Legal Challenges of Big Data (Edward Elgar 2020) 226. This prohibition is, however, limited. For instance, it can be evaded where automated decision-making is authorised by EU or national law (Article 23 GDPR) where required for matters of national security, given that the law provides the necessary safeguards for the protection of individual rights and legitimate interests. Furthermore, the application of this prohibition is narrowly construed since it only concerns automated processing ‘which produces legal effects’ or ‘serious impactful effects’. This means a decision shall ‘significantly affect the circumstances, behaviour or choices of the individuals concerned; have a prolonged or permanent impact on the data subject; or at its most extreme, lead to the exclusion or discrimination of individuals’.Article 29 Data Protection Working Party, ‘Guidelines on Automated individual decision-making and Profiling for the purposes of Regulation 2016/679’ (Guidelines) WP251rev.01.
The minimum safeguards required in automated processing of personal data, especially the guarantee of ‘at least the right to a human intervention’, Article 22(3) GDPR requires that: ‘(…) the data controller shall implement suitable measures to safeguard the data subject's rights and freedoms and legitimate interests, at least the right to obtain human intervention on the part of the controller, to express his or her point of view and to contest the decision’. These are considered minimum safeguards, i.e. Member States can offer additional guarantees where necessary. See M Brkan, ‘Do Algorithms Rule the World? Algorithmic Decision-Making and Data Protection in the Framework of the GDPR and Beyond’ (2019) 27 International Journal of Law and Information Technology 91, 107. have been the subject of much concern. LA Bygrave, ‘Minding the Machine v2.0: The EU General Data Protection Regulation and Automated Decision-Making’ in K Yeung and M Lodge (eds), Algorithmic Regulation (Oxford University Press 2019) 249. The author recalls that at the roots of the safeguard of human intervention was the ‘fear for the future of human dignity in the face of machine determinism’. Accordingly, the rationale of the data protection provisions ‘was grounded in a concern to ensure that humans maintain ultimate control of, and responsibility for decisional processes that significantly affect other humans, and that they thereby maintain the primary role in “constituting” themselves’. Jurisprudence concerning the existing safeguards is still in the early phases of development, as challenges to the automated processing of personal data are only beginning to be brought by plaintiffs or pursued in the enforcement actions of national authorities. M Brkan, ‘Do Algorithms Rule the World? Algorithmic Decision-Making and Data Protection in the Framework of the GDPR and Beyond’ (2019) 27 International Journal of Law and Information Technology 91, 115. The author highlights that ‘[d]rafting legislation in such a way also demonstrates that the legislator seemed to intentionally leave the final decision on the existence of this right to the CJEU, which is, as it has been repeatedly demonstrated in the recent case law, rather purposeful and activist when interpreting data protection legislation’. Ultimately, asserting that human intervention would guarantee a meaningful exercise of decision-making discretion could be misleading. As the study by the Council of Europe also claims, ‘[i]n fact, boundaries between human and automated decision-making are often blurred.’ Instead, it suggests the notion of ‘quasi- or semi-automated decision-making’: The Committee of Experts on Internet Intermediaries (MSI-NET), Algorithms and Human Rights: Study on the Human Rights Dimensions of Automated Data Processing Techniques and Possible Regulatory Implications (Council of Europe 2018) 7 <https://edoc.coe.int/en/internet/7589-algorithms-and-human-rights-study-on-the-human-rights-dimensions-of-automated-data-processing-techniques-and-possible-regulatory-implications.html> accessed 28 February 2021. To demonstrate this, the following sub-sections outline two spheres of interrelated effects of automation: on the nature of information retrieved (3.1) and on the decision-making conduct (3.2).
There are several effects of automation underpinning the decisional importance of the informational output generated. At least two interrelated effects are particularly prevalent: output obsessionM Smith, M Noorman and A Martin, ‘Automating the Public Sector and Organizing Accountabilities’ (2010) 26 Communications of the Association for Information Systems, 4 <https://aisel.aisnet.org/cais/vol26/iss1/1> accessed 28 February 2021. (or automation bias)LJ Skitka, KL Mosier and M Burdick, ‘Does Automation Bias Decision-Making?’ (1999) 51 International Journal of Human-Computer Studies 991. and algorithmic opacity.M Veale, ‘Logics and Practices of Transparency and Opacity in Real-World Applications of Public Sector Machine Learning’ (arXiv.org, 19 June 2017) <http://arxiv.org/abs/1706.09249> accessed 28 February 2021. The two effects are transformative, as they undermine (or even prevent) a proper exercise of human agency.B Wagner, ‘Liable, but Not in Control? Ensuring Meaningful Human Agency in Automated Decision-Making Systems’ (2019) 11 Policy & Internet 104.
Output obsession, also known as automation bias, refers to the authorities’ tendency to trust a computationally obtained output as correct and objective.M Smith, M Noorman and A Martin, ‘Automating the Public Sector and Organizing Accountabilities’ (2010) 26 Communications of the Association for Information Systems, 4 <https://aisel.aisnet.org/cais/vol26/iss1/1> accessed 28 February 2021. As a result of this tendency,C Hall, ‘Challenging Automated Decision-Making by Public Bodies: Selected Case Studies from Other Jurisdictions’ (2020) 25 Judicial Review 8, 14. a human agent entrusted with the decision-making responsibility is less likely to question the obtained output, and thus less likely to effectively review the correctness of the obtained information.Kuziemski and Misuraca, for instance, report on a study which found that fewer than 1 in 100 decisions made by the algorithm were questioned by the responsible clerks: M Kuziemski and G Misuraca, ‘AI Governance in the Public Sector: Three Tales from the Frontiers of Automated Decision-Making in Democratic Settings’ (2020) 44 Telecommunications Policy 101976, 8. Other reasons for not effectively questioning the obtained information include, in particular, the need for timely action by the responsible agent. Indeed, the key incentive for introducing automation in decision-making processes rests on enhancing efficiency and effectiveness by minimising the bureaucratic workload.C Coglianese and D Lehr, ‘Regulating by Robot: Administrative Decision Making in the Machine-Learning Era’ (2016) 105 Georgetown Law Journal 1147, 1150. The authors call for a ‘measured optimism about the potential benefits’ of novel technologies for government functions.
Closely related to the tendency to trust computational output is the fact that the automated ‘advice’ lies beyond the understanding of the human agent relying on it in their decision-making. This is due to algorithmic opacity, i.e. the limited explainability of the automated output.S Wachter, B Mittelstadt and C Russell, ‘Counterfactual Explanations without Opening the Black Box: Automated Decisions and the GDPR’ (2017) 31 Harvard Journal of Law & Technology 841 and B Waltl and R Vogl, ‘Increasing Transparency in Algorithmic-Decision-Making with Explainable AI’ (2018) 42 Datenschutz und Datensicherheit 613. Algorithmic opacity, however, not only undermines transparency and the exercise of individual rights; it also effectively deprives the decision-maker of the ability to exercise their discretion.M Veale, ‘Logics and Practices of Transparency and Opacity in Real-World Applications of Public Sector Machine Learning’ (arXiv.org, 19 June 2017) <http://arxiv.org/abs/1706.09249> accessed 28 February 2021; M Ananny and K Crawford, ‘Seeing without Knowing: Limitations of the Transparency Ideal and Its Application to Algorithmic Accountability’ (2018) 20 New Media & Society 973 and R Binns, ‘Human Judgement in Algorithmic Loops; Individual Justice and Automated Decision-Making’ (2019) <https://papers.ssrn.com/abstract=3452030> accessed 28 February 2021 (under review).
The two effects of automation on the informational output consequently also alter the nature of the decision-making. Automation bias and algorithmic opacity make it more difficult for the responsible agent to understand, consult, or otherwise verify the output, i.e. to meaningfully exercise their decision-making discretion. Such ‘rubber-stamping’M Lodge and A Mennicken, ‘Reflecting on Public Service Regulation by Algorithm’ in K Yeung and M Lodge (eds), Algorithmic Regulation (1st edn, Oxford University Press 2019) 193. of ‘advice’ is not unique to semi-automated decision-making. Indeed, it is common in decision-making that relies on technical or scientific ‘advice’,M Busuioc, European Agencies: Law and Practices of Accountability (Oxford University Press 2013) 210–12. for instance in the context of risk regulation or banking supervision.Such individual decision-making or quasi-regulatory powers are vested in scientific/technical standards agencies, such as the Community Plant Variety Office, the European Aviation Safety Agency, the European Medicines Agency, or the European Food Safety Authority, and in the banking and internal market agencies, the European Banking Authority, the European Securities and Markets Authority, and the European Insurance and Occupational Pensions Authority. However, the concern for meaningful human intervention where the ‘advice’ consists of computerised or automatically processed information differs from that raised by reliance on expert or scientific advice. The latter is subject to concrete procedural requirements, ranging from the occupational qualifications of the experts to the duty of care and the related reasoning obligations,The duty of the competent authority ‘to examine carefully and impartially all the relevant elements of the case,’ seminally elaborated in Case C-269/90 TUM, EU:C:1991:438, para 14. See also Case T-410/03 Hoechst, EU:T:2008:211, para 129; Case T-326/07 Cheminova a.o., EU:T:2009:299, para 228 and Case C-505/09 P Estonia v Commission, EU:C:2012:179, para 95. compliance with which is, albeit to a narrow extent, subject to effective review.It is beyond the scope of this paper to discuss the limits of such review: see A Fritzsche, ‘Discretion, Scope of Judicial Review and Institutional Balance in European Law’ (2010) 47 Common Market Law Review 361, 371. As a standard formula, the review verifies that the institution respects (a) the rules governing the procedure, including the statement of reasons, (b) that it states the law and the facts correctly, and (c) that it does not commit any manifest error of assessment of those facts or a misuse of powers: Case T-589/08 Evropaïki Dynamiki, EU:T:2011:73, para 24, with further references.
By contrast, in semi-automated decision-making, the effects of automation bias combined with algorithmic opacity render the agent all but unable to contradict the algorithmic output. Some authors report a tendency of human agents to reverse their own opinion or judgment of a particular situation as a result of the algorithmic output.LJ Skitka, KL Mosier and M Burdick, ‘Does Automation Bias Decision-Making?’ (1999) 51 International Journal of Human-Computer Studies 991 and K Goddard, A Roudsari and JC Wyatt, ‘Automation Bias: A Systematic Review of Frequency, Effect Mediators, and Mitigators’ (2012) 19 Journal of the American Medical Informatics Association 121. Such behavioural impacts of automation ‘elevat[e] the [technology’s] level of autonomy beyond the supporting role’.European Parliament Directorate General for Parliamentary Research Services, Artificial Intelligence: How Does It Work, Why Does It Matter, and What We Can Do about It? (European Union 2020) 28 <https://data.europa.eu/doi/10.2861/44572> accessed 28 February 2021. This is because automation as ‘part of system design’ ‘effectively confer[s] “authority” to the ADM system’, ‘reduces human re-evaluation as a final safeguard, and thus increases the scope for agency loss’.TD Krafft, KA Zweig and PD König, ‘How to Regulate Algorithmic Decision-Making: A Framework of Regulatory Requirements for Different Applications’ [2020] Regulation & Governance 18, 12. The ‘human in the loop’ challenge should therefore be addressed through a set of procedural guarantees,B Wagner, ‘Liable, but Not in Control? Ensuring Meaningful Human Agency in Automated Decision-Making Systems’ (2019) 11 Policy & Internet 104, 114–16. The author provides a rather ambitious set of criteria to be taken into account in ‘cases of quasi-automation’, including: the amount of time the agent has for the task; the agent’s degree of qualification to fulfil the specific task; the degree of liability to be assigned to the agent for non-compliance; the level of support the agent receives; the agent’s capability to adapt to system changes; the agent’s access to all relevant information required for the decision; and, lastly, their agency (authority to change the decision). which would reflect the ‘decisional’ value of information in light of the effects of automation on decision-making agency.
Beyond constraining the possibilities for a meaningful exercise of human agency, decision-making also ‘automatises’ as a result of the wider automatisation of public administration. Already two decades ago, Bovens and Zouridis identified a structural shift from ‘street-level bureaucrats’ to ‘system-level bureaucrats’, suggesting that ‘[t]he members of the organization are no longer involved in handling individual cases, but direct their focus toward system development and maintenance, toward optimizing information processes, and toward creating links between systems in various organizations’.M Bovens and S Zouridis, ‘From Street-Level to System-Level Bureaucracies: How Information and Communication Technology Is Transforming Administrative Discretion and Constitutional Control’ (2002) 62 Public Administration Review 174, 178. The authors provocatively observe that ‘[i]nstead of noisy, disorganized decision-making factories populated by fickle officials, many of these executive agencies are fast becoming quiet information refineries, in which nearly all decisions are pre-programmed by algorithms and digital decision trees. Today, a more true-to-life vision of the term “bureaucracy” would be a room filled with softly humming servers, dotted here and there with a system manager behind a screen’. The nature of the conduct of all sections of government – including law enforcement, visa and customs authorities – is shifting as well, becoming intrinsically interdependent with information processes. In that respect, Snellen, Thaens and van de Donk suggest the concept of i-Government, i.e. an information government rather than an electronic government, as better suited to the degree of ‘informatization’ currently ingrained in public conduct. The authors define i-Government as ‘the awareness of necessary dependencies between forms of digitalization and informatization across borders of (sub) sectors in society and across layers of government’.IThM Snellen, M Thaens and WBHJ van de Donk, Public Administration in the Information Age: Revisited (IOS Press 2012) 5.
Lastly, automation also alters decision-making where the latter operates on mutual trust.E Keymolen, C Prins and C Raab, ‘Trust and ICT: New Challenges for Public Administration’ in IThM Snellen, M Thaens and WBHJ van de Donk (eds), Public Administration in the Information Age: Revisited (IOS Press 2012). While the ideal of mutual trust is a founding principle of much European cooperation, the ‘black box’The expression ‘black box’ is used in reference to the uncertainty stemming from algorithmic opacity. I borrow the expression to highlight the transfer of such uncertainty also to the notion of trust in European integrated conduct, which, when reinforced by the technological black box, extends to uncertainty among the authorities concerning each other’s conduct. K Kwong, ‘The Algorithm Says You Did It: The Use of Black Box Algorithms to Analyze Complex DNA Evidence’ (2017) 31 Harvard Journal of Law & Technology 275. nature of the trust underpinning ‘i-Government’ relations differs in its operational logic from other forms of interaction, which essentially allow, yet limit, the possibility of mutual verification.E Brouwer, ‘Mutual Trust and Judicial Control in the Area of Freedom, Security, and Justice: An Anatomy of Trust’ (2016) EUI Law Working Papers 2016/13, 59 <https://cadmus.eui.eu/handle/1814/41486> accessed 9 June 2021; S Prechal, ‘Mutual Trust Before the Court of Justice of the European Union’ (2017) 2 European Papers 75 and E Xanthopoulou, ‘Mutual Trust and Rights in EU Criminal and Asylum Law: Three Phases of Evolution and the Uncharted Territory beyond Blind Trust’ (2018) 55 Common Market Law Review 489. The outlined transformation resulting from the wider automatisation of public conduct thus also affects the dynamics along the accountability axis in specific cases of individual decision-making.
This article conceptualises the ongoing transformation resulting from the introduction of automation techniques into European composite decision-making. In traditional forms of composite cooperation, ‘information’ would be requested by one authority to assist another authority in acting in a specific case. Here, the authority receiving the information remains capable of making its own assessment of it (however unlikely in practice) before taking the final measure, i.e. the exercise of its human agency (discretion) remains possible. In mutual informational assistance, the decision-maker’s discretion is reduced by the demands of mutual trust and recognition. The fact that the competent authority is ultimately capable of carrying out the verification nonetheless justifies placing the locus of responsibility with that authority.
Increasingly, however, informational ‘assistance’ is delivered through computerised applications promising a highly reliable, unbiased, and fully accurate output that can be trusted by the decision-maker. This is not to claim that comprehensive procedures for verifying the correctness, availability, lawfulness, and accuracy of the processed information do not exist. Still, the complex technical architecture underlying the opaque trajectory of the automated informational output – starting as a set of data entered by a competent authority of one Member State, linked through algorithmic processing against numerous other inputs from authorities of other areas of competence or jurisdictions, and finally retrieved through further processing and, soon, through searches against information stored in other existing IT systems – restricts the space and capacity for a meaningful exercise of human agency. The ‘decisional value’ of information in semi-automated decision-making is not yet fully reflected in the recognition of the ‘legal effects’ of the automated factual conduct, either in the relevant legislation or in the case-law of the Court of Justice to date.
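By way of illustration only, the following minimal sketch (written in Python, with entirely fictional system names, fields, scores and thresholds, none of which are drawn from the AFSJ systems or legislation discussed in this article) indicates how such a trajectory can compress multi-source, algorithmically processed inputs into a single pre-filled recommendation presented to the responsible agent, leaving little visible basis on which to verify the underlying information.

```python
# Purely hypothetical sketch: how an automated 'hit' from interlinked
# information systems might reach a case officer. All system names,
# fields and thresholds are invented for illustration only.

from dataclasses import dataclass


@dataclass
class Hit:
    source_system: str   # e.g. a fictional national alert database
    score: float         # opaque similarity score produced upstream
    reason_code: str     # coded output; the underlying logic is not exposed


def query_linked_systems(applicant_id: str) -> list[Hit]:
    """Stand-in for queries dispatched to several interoperable systems.

    In reality each system applies its own (undisclosed) matching logic;
    here we simply return canned results to illustrate the information flow.
    """
    return [
        Hit(source_system="SYSTEM_A", score=0.91, reason_code="R-17"),
        Hit(source_system="SYSTEM_B", score=0.34, reason_code="R-02"),
    ]


def recommend(hits: list[Hit], threshold: float = 0.8) -> str:
    """Reduce all upstream processing to a single pre-filled recommendation.

    The case officer sees only this output, not the data or logic behind
    the individual scores - the opacity problem described in the text.
    """
    flagged = [h for h in hits if h.score >= threshold]
    if flagged:
        codes = ", ".join(f"{h.source_system}:{h.reason_code}" for h in flagged)
        return f"REFUSE (automated hit: {codes})"
    return "PROCEED (no automated hit)"


if __name__ == "__main__":
    hits = query_linked_systems("applicant-0001")
    print(recommend(hits))  # the officer is asked to confirm or override this line
```

The point of the sketch is simply that the responsible agent is asked to confirm or override a conclusion whose constituent data and logic remain upstream and undisclosed, which is what the preceding analysis describes as the shrinking space for a meaningful exercise of human agency.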