Imagine a computer deciding whether you get a loan, or a program determining whether or not you are allowed to work. It may sound like science fiction, but such algorithms already exist today and have an enormous impact on our daily lives.
What is at stake?

Algorithms are increasingly used to make important decisions: from job applications and credit requests to the allocation of social benefits. They are fast, efficient and promise to be objective. Yet they make mistakes, and those mistakes are sometimes hard to understand or to correct.
A classic example is an algorithm that automatically screens benefit recipients for fraud. If the system gets it wrong, someone can be wrongly branded a fraudster. In the Netherlands, the notorious childcare benefits scandal grew into a national crisis: tens of thousands of families were wrongly labelled as fraudsters, fell into deep debt and in some cases even lost their children. Algorithms that drew up risk profiles turned out to play a key role.
The big question, then, is: who is really holding the reins?
What did I investigate in my thesis?
In my master's thesis I examined how European legislation tries to tackle this problem. I took a close look at two key pieces of European legislation: the General Data Protection Regulation (GDPR) and the Artificial Intelligence Act (AI Act).
My central research question was: how does the AI Act reinforce, or diverge from, the GDPR framework for human oversight?
To answer that question, I analysed European legislation and case law in depth, supplemented with guidance from supervisory authorities and the academic debate. This gave me a comprehensive picture of how the law currently deals with algorithmic decisions and human oversight.
The limits of the GDPR
The GDPR gives people the right not to be subjected to a decision that is reached entirely automatically and that has a significant impact on their lives. In theory that sounds strong, but in practice the net has many holes.
First of all, the prohibition only applies to fully automated decisions. As soon as a human formally "ticks a box" somewhere in the process, a company or public authority can sidestep the rules. Research shows that such human involvement is often purely symbolic.
Moreover, the criterion of "significant impact" is very vague. When does a decision really change someone's life? Does it cover only jobs and loans, or also subtler forms of discrimination, such as algorithms that subject people from vulnerable neighbourhoods to more frequent checks?
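To make those two conditions tangible, here is a minimal, purely illustrative Python sketch. The class, field names and booleans are my own simplification and come neither from the GDPR nor from my thesis; the point is only to show how the test combines "solely automated" with "legal or similarly significant effect", and why even a token human review formally takes a decision outside the prohibition:

```python
from dataclasses import dataclass

@dataclass
class Decision:
    """Hypothetical, heavily simplified description of one algorithmic decision."""
    human_reviewed: bool         # did a person substantively review the outcome?
    legal_effect: bool           # e.g. a benefit or contract is refused
    similarly_significant: bool  # e.g. exclusion from an essential service

def prohibition_applies(d: Decision) -> bool:
    # Two cumulative conditions: the decision is based *solely* on automated
    # processing AND it produces legal or similarly significant effects.
    solely_automated = not d.human_reviewed
    significant_effect = d.legal_effect or d.similarly_significant
    return solely_automated and significant_effect

# A purely formal "tick in the box" by a case worker already flips the outcome:
print(prohibition_applies(
    Decision(human_reviewed=True, legal_effect=True, similarly_significant=False)))   # False
print(prohibition_applies(
    Decision(human_reviewed=False, legal_effect=True, similarly_significant=False)))  # True
```

The sketch obviously flattens a legal assessment into two booleans, but it shows how much weight the "solely automated" criterion carries, and how little it takes, on paper, to escape it.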
Courts have meanwhile clarified that this right should rather be read as a prohibition, so individuals do not have to invoke compliance themselves. Even so, it remains difficult in practice to prove that an algorithm got it wrong.
What changes with the AI Act?
The AI Act tackles the problem in a different way. Instead of looking at the effect of an individual decision, the law works with a risk-based approach.
Certain applications, such as AI used in social security, education or the justice system, are designated as high-risk. For such systems the AI Act imposes strict requirements. One of the most important: effective human oversight must be possible.
That means a human must be able to understand the algorithm, monitor it and intervene when necessary. In practice this comes down to things like an "emergency stop button", clear explanations for the user and avoiding "blind trust" in the system.
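To make "effective human oversight" more concrete, here is a small, purely hypothetical Python sketch. The threshold, function names and reviewer logic are my own illustration, not taken from the AI Act or from any real system: the model only proposes an outcome, while a human reviewer sees the score and the proposal and can confirm, overrule or stop the process entirely.

```python
import random

RISK_THRESHOLD = 0.7  # hypothetical cut-off, not derived from the AI Act

def model_score(application: dict) -> float:
    """Stand-in for an opaque risk model; here simply a random number."""
    return random.random()

def decide(application: dict, human_review) -> str:
    """Sketch of a human-oversight gate: the system proposes, a person decides."""
    score = model_score(application)
    proposal = "reject" if score >= RISK_THRESHOLD else "approve"
    # The reviewer is shown the score and the proposal (not just an OK button)
    # and may return "confirm", "approve", "reject" or "stop".
    verdict = human_review(application, score, proposal)
    return proposal if verdict == "confirm" else verdict

def cautious_reviewer(application, score, proposal) -> str:
    # Guard against automation bias: borderline scores are halted
    # (the "emergency stop") instead of being waved through.
    if 0.6 <= score <= 0.8:
        return "stop"
    return "confirm"

print(decide({"applicant": "A"}, cautious_reviewer))
```

Real oversight under the AI Act is of course an organisational and legal arrangement rather than a function call, but the structure mirrors the three elements above: understanding what the system proposes, monitoring it, and being able to intervene.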
This is a step forward. The law does indeed require that the broader consequences for society be kept in view. But that obligation applies only in specific cases, which means many risks to groups of people can still stay under the radar.
Concrete examples
During my research I analysed several concrete cases. Time and again, they show that legislation and practice often clash, and that it remains difficult to hold algorithms genuinely accountable.
My contribution
My thesis seeks to make clear where the GDPR falls short in this context and how the AI Act partly compensates for that. I show that the GDPR is too vague and too narrow, while the AI Act is stricter but still leaves gaps of its own.
I also argue that human oversight is not merely a legal obligation but a fundamental question of democratic control: who bears responsibility when an algorithm causes harm?
Why this matters to you too
You might think this is far removed from your daily life. But algorithms already decide today about jobs, loans, insurance and even the likelihood of a police visit to your home. If the rules are not clear enough, citizens can fall victim to invisible errors or biases in code.
The stakes are therefore not merely technical: they concern human rights such as privacy, equality and human dignity.
Conclusion
Algorithms can be useful and efficient, but they should never decide about our lives without human oversight. The European Union is taking steps forward, but the debate is far from over.
My research shows that the core of the challenge is not only about legal texts or systems, but about something more fundamental: the tension between human control and non-human decision-making. Is it not, in essence, an inescapable paradox to require a human element inside systems that are defined precisely by being automated?
By asking these fundamental questions, the research keeps sight of the bigger picture: how we, as a society, want to deal with the power of algorithms.
Consolidated version of the Treaty on European Union [2012] OJ C326/13
Consolidated version of the Treaty on the Functioning of the European Union [2012] OJ C326/47
Charter of Fundamental Rights of the European Union [2012] OJ C326/391
Directive 95/46/EC of the European Parliament and of the Council of 24 October 1995 on the protection of individuals with regard to the processing of personal data and on the free movement of such data [1995] OJ L281/31 (‘DPD’)
Loi n° 78-17 du 6 janvier 1978 relative à l’informatique, aux fichiers et aux libertés
Regulation (EU) 2016/679 of the European Parliament and of the Council of 27 April 2016 on the protection of natural persons with regard to the processing of personal data and on the free movement of such data, and repealing Directive 95/46/EC [2016] OJ L119/1 (‘GDPR’)
Directive (EU) 2016/680 of the European Parliament and of the Council of 27 April 2016 on the protection of natural persons with regard to the processing of personal data by competent authorities for the purposes of the prevention, investigation, detection or prosecution of criminal offences or the execution of criminal penalties, and on the free movement of such data, and repealing Council Framework Decision 2008/977/JHA [2016] OJ L119/89 (‘LED’)
Regulation (EU) 2022/2065 of the European Parliament and of the Council of 19 October 2022 on a Single Market For Digital Services and amending Directive 2000/31/EC [2022] OJ L277/1 (‘DSA’)
Regulation (EU) 2024/1689 of the European Parliament and of the Council of 13 June 2024 laying down harmonised rules on artificial intelligence and amending Regulations (EC) No 300/2008, (EU) No 167/2013, (EU) No 168/2013, (EU) 2018/858, (EU) 2018/1139 and (EU) 2019/2144 and Directives 2014/90/EU, (EU) 2016/797 and (EU) 2020/1828 [2024] OJ L2024/1689 (‘AI Act’)
Wet van 8 december 1992 tot bescherming van de persoonlijke levenssfeer ten opzichte van de verwerking van persoonsgegevens, BS 3 februari 1999
Case C-203/22 Dun & Bradstreet Austria GmbH ECLI:EU:C:2024:745, Opinion of AG De La Tour
Case C-203/22 Dun & Bradstreet Austria GmbH ECLI:EU:C:2025:117
Case C-634/21 SCHUFA Holding [2023] ECLI:EU:C:2023:957
Bundesgerichtshof, VI ZR 156/13 (28 January 2014) (Germany).
Rechtbank Amsterdam, C/13/687315 / HA RK 20-207, ECLI:NL:RBAMS:2021:1020 (11 March 2021) (Netherlands)
Rechtbank Amsterdam, C/13/689705 / HA RK 20-258, ECLI:NL:RBAMS:2021:1019 (11 March 2021) (Netherlands)
Rechtbank Amsterdam, C/13/692003 / HA RK 20-302, ECLI:NL:RBAMS:2021:1018 (11 March 2021) (Netherlands)
Rechtbank Amsterdam, 742407 / HA RK 23-366, ECLI:NL:RBAMS:2024:4019 (4 July 2024) (Netherlands)
Rechtbank Den Haag, C/09/550982 / HA ZA 18-388, ECLI:NL:RBDHA:2020:1878 (5 February 2020) (Netherlands)
Rechtbank Den Haag, C/09/585239 / HA ZA 19-1221, ECLI:NL:RBDHA:2020:865 (11 February 2020) (Netherlands)
Tribunal Administratif de Marseille, La Quadrature du Net, No 1901249 (27 November 2020) (France)
Ústavný súd Slovenskej republiky (Constitutional Court of Slovakia), Case 492/2021 Z. z. (10 November 2021) (Slovakia)
Hamilton and others v Post Office Limited [2021] EWCA Crim 577 (United Kingdom)
State v Loomis 881 NW 2d 749 (Wis 2016), Supreme Court of Wisconsin (United States)
Alexander V, Blinder C and Zak PJ, ‘Why Trust an Algorithm? Performance, Cognition, and Neurophysiology’ (2018) 89 Computers in Human Behavior 279 <https://www.sciencedirect.com/science/article/pii/S0747563218303480> accessed 5 August 2025
Almada M, ‘Automated Decision-Making as a Data Protection Issue’ [2021] Social Science Research Network <https://www.researchgate.net/publication/350759609_Automated_decision-m…> accessed 8 August 2025
Anderson B, ‘Tesla Says It’s Driverless But Someone’s Always Watching’ (Carscoops, 23 June 2025) <https://www.carscoops.com/2025/06/tesla-robotaxis-arrive-in-austin-but-…; accessed 22 July 2025
Bainbridge L, ‘Ironies of Automation’ (1983) 19 Automatica 775 <https://www.sciencedirect.com/science/article/pii/0005109883900468> accessed 3 August 2025
Bauer K and others, ‘Expl(AI)n It to Me – Explainable AI and Information Systems Research’ (2021) 63 Business & Information Systems Engineering 79 <https://doi.org/10.1007/s12599-021-00683-2> accessed 8 August 2025
Bertolini A, ‘Artificial Intelligence and Civil Liability’ (Policy Department for Justice, Civil Liberties and Institutional Affairs 2025) <https://www.europarl.europa.eu/RegData/etudes/STUD/2025/776426/IUST_STU…;
Bhabendu Kumar Mohanta, Soumyashree S Panda, and Debasish Jena, ‘An Overview of Smart Contract and Use Cases in Blockchain Technology’, ResearchGate (2025) <https://www.researchgate.net/publication/328581609_An_Overview_of_Smart…; accessed 22 July 2025
Binns R and Veale M, ‘Is That Your Final Decision? Multi-Stage Profiling, Selective Effects, and Article 22 of the GDPR’ (2021) 11 International Data Privacy Law 319 <https://doi.org/10.1093/idpl/ipab020> accessed 26 July 2025
Brkan M, ‘Do Algorithms Rule the World? Algorithmic Decision-Making in the Framework of the GDPR and Beyond’ (Social Science Research Network, 1 August 2017) <https://papers.ssrn.com/abstract=3124901> accessed 29 June 2025
Burrell J, ‘How the Machine “Thinks”: Understanding Opacity in Machine Learning Algorithms’ (2016) 3 Big Data & Society 2053951715622512 <https://doi.org/10.1177/2053951715622512> accessed 28 June 2025
Busuioc M, Curtin D and Almada M, ‘Reclaiming Transparency: Contesting the Logics of Secrecy within the AI Act’ (2023) 2 European Law Open 79 <https://www.cambridge.org/core/journals/european-law-open/article/recla…; accessed 29 July 2025
Cobbe J, Lee MSA and Singh J, ‘Reviewable Automated Decision-Making: A Framework for Accountable Algorithmic Systems’, Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency (Association for Computing Machinery 2021) <https://dl.acm.org/doi/10.1145/3442188.3445921> accessed 8 August 2025
Crootof R, Kaminski M and W. Price II, ‘Humans in the Loop’ (2023) 76 Vanderbilt Law Review 429 <https://scholarship.law.vanderbilt.edu/vlr/vol76/iss2/2>
Ebers M, ‘Truly Risk-Based Regulation of Artificial Intelligence - How to Implement the EU’s AI Act’ (Social Science Research Network, 19 June 2024) <https://papers.ssrn.com/abstract=4870387> accessed 24 July 2025
——, ‘AI Robotics in Healthcare Between the EU Medical Device Regulation and the Artificial Intelligence Act’ (2024) 11 Oslo Law Review 1 <https://www.scup.com/doi/10.18261/olr.11.1.2> accessed 25 July 2025
Edwards L, ‘Regulating AI in Europe: Four Problems and Four Solutions (Ada Lovelace Institute Expert Opinion)’ (Social Science Research Network, 1 March 2022) <https://papers.ssrn.com/abstract=5026691> accessed 21 July 2025
Elish MC, ‘Moral Crumple Zones: Cautionary Tales in Human-Robot Interaction (Pre-Print)’ (Social Science Research Network, 1 March 2019) <https://papers.ssrn.com/abstract=2757236> accessed 3 August 2025
Enarsson T, Enqvist L and Naarttijärvi M, ‘Approaching the Human in the Loop – Legal Perspectives on Hybrid Human/Algorithmic Decision-Making in Three Contexts’ (2022) 31 Information & Communications Technology Law 123 <https://doi.org/10.1080/13600834.2021.1958860> accessed 28 April 2024
Enqvist L, ‘“Human Oversight” in the EU Artificial Intelligence Act: What, When and by Whom?’ (2023) 15 Law, Innovation and Technology 508 <https://doi.org/10.1080/17579961.2023.2245683> accessed 22 December 2024
Ferraris V and others, ‘Defining Profiling’ (Social Science Research Network, 11 December 2013) <https://papers.ssrn.com/abstract=2366564> accessed 22 July 2025
Fink M, ‘Human Oversight under Article 14 of the EU AI Act’ (Social Science Research Network, 14 February 2025) <https://papers.ssrn.com/abstract=5147196> accessed 25 June 2025
Fischhoff B, Watson SR and Hope C, ‘Defining Risk’ (1984) 17 Policy Sciences 123 <https://doi.org/10.1007/BF00146924> accessed 25 July 2025
Frank Knight, Risk, Uncertainty, and Profit (1921) <https://fraser.stlouisfed.org/files/docs/publications/books/risk/riskun…; accessed 11 August 2025
Gil González E and De Hert P, ‘Understanding the Legal Provisions That Allow Processing and Profiling of Personal Data—an Analysis of GDPR Provisions and Principles’ (2019) 19 ERA Forum 597 <http://link.springer.com/10.1007/s12027-018-0546-z> accessed 23 June 2025
Green B, ‘The Flaws of Policies Requiring Human Oversight of Government Algorithms’ (2022) 45 Computer Law & Security Review 105681 <https://www.sciencedirect.com/science/article/pii/S0267364922000292> accessed 28 April 2024
Green B and Chen Y, ‘Algorithmic Risk Assessments Can Alter Human Decision-Making Processes in High-Stakes Government Contexts’ (2021) 5 Proc. ACM Hum.-Comput. Interact. 418:1 <https://dl.acm.org/doi/10.1145/3479562> accessed 3 August 2025
Gültekin-Várkonyi DG, ‘AI Literacy for Legal AI Systems: A Practical Approach’ (Social Science Research Network, 20 May 2025) <https://papers.ssrn.com/abstract=5309725> accessed 21 July 2025
Harasimiuk D and Braun T, Regulating Artificial Intelligence: Binary Ethics and the Law (Routledge 2021)
Heather Roff and Richard Moyes, ‘Meaningful Human Control, Artificial Intelligence and Autonomous Weapons’ (2016) <https://article36.org/wp-content/uploads/2016/04/MHC-AI-and-AWS-FINAL.p…;
Hildebrandt M, ‘The Disconnect Between “Upstream” Automation and Legal Protection Against Automated Decision Making’ (Technology Law, 7 April 2022) <https://cyber.jotwell.com/the-disconnect-between-upstream-automation-an…; accessed 26 July 2025
Holzinger A, Zatloukal K and Müller H, ‘Is Human Oversight to AI Systems Still Possible?’ (2025) 85 New Biotechnology 59 <https://www.sciencedirect.com/science/article/pii/S1871678424005636> accessed 25 June 2025
Kaminski ME and Malgieri G, ‘The Right to Explanation in the AI Act’ (Social Science Research Network, 8 March 2025) <https://papers.ssrn.com/abstract=5194301> accessed 7 July 2025
Koen Smit and Martijn Zoet, ‘A Governance Framework for (Semi) Automated Decision-Making’, Proceedings of the Tenth International Conference on Information, Process, and Knowledge Management (2018) <https://www.researchgate.net/publication/325975696_A_Governance_Framewo…; accessed 5 August 2025
Kuner C and others, ‘The EU General Data Protection Regulation: A Commentary/Update of Selected Articles’ [2021] SSRN Electronic Journal <https://www.ssrn.com/abstract=3839645> accessed 27 April 2024
Laux J, ‘Institutionalised Distrust and Human Oversight of Artificial Intelligence: Towards a Democratic Design of AI Governance under the European Union AI Act’ (2024) 39 AI & SOCIETY 2853 <https://doi.org/10.1007/s00146-023-01777-z> accessed 25 June 2025
Lazcoz G and de Hert P, ‘Humans in the GDPR and AIA Governance of Automated and Algorithmic Systems. Essential Pre-Requisites against Abdicating Responsibilities’ (2023) 50 Computer Law & Security Review 105833 <https://www.sciencedirect.com/science/article/pii/S0267364923000432> accessed 15 March 2025
Levitina A, ‘Humans in Automated Decision-Making under the GDPR and AI Act’ [2024] Revista CIDOB d’Afers Internacionals 121 <https://cidob.org/en/publications/humans-automated-decision-making-unde…; accessed 26 July 2025
Mahieu RLP and Ausloos J, ‘Recognising and Enabling the Collective Dimension of the GDPR and the Right of Access’
Malgieri G, ‘“Just” Algorithms: Justification (Beyond Explanation) of Automated Decisions Under the General Data Protection Regulation’ (2021) 1 Law and Business 16 <https://sciendo.com/article/10.2478/law-2021-0003> accessed 8 August 2025
Malgieri G and Comandé G, ‘Why a Right to Legibility of Automated Decision-Making Exists in the General Data Protection Regulation’ (Social Science Research Network, 13 November 2017) <https://papers.ssrn.com/abstract=3088976> accessed 24 June 2025
Malgieri G and Pasquale F, ‘Licensing High-Risk Artificial Intelligence: Toward Ex Ante Justification for a Disruptive Technology’ (2024) 52 Computer Law & Security Review 105899 <https://www.sciencedirect.com/science/article/pii/S0267364923001097> accessed 4 July 2025
Mantelero A, Beyond Data: Human Rights, Ethical and Social Impact Assessment in AI (Springer Nature 2022) <https://library.oapen.org/handle/20.500.12657/57009> accessed 7 August 2025
——, ‘The Fundamental Rights Impact Assessment (FRIA) in the AI Act: Roots, Legal Obligations and Key Elements for a Model Template’ (2024) 54 Computer Law & Security Review 106020 <http://arxiv.org/abs/2411.15149> accessed 1 August 2025
McCorduck P and Cfe C, Machines Who Think: A Personal Inquiry into the History and Prospects of Artificial Intelligence (2nd edn, A K Peters/CRC Press 2004)
Mendoza I and Bygrave LA, ‘The Right Not to Be Subject to Automated Decisions Based on Profiling’ (Social Science Research Network, 8 May 2017) <https://papers.ssrn.com/abstract=2964855> accessed 26 June 2025
Nathalie A. Smuha, ‘Beyond the Individual: Governing AI’s Societal Harm’ (2021) 10 Internet Policy Review
Novelli C and others, ‘Taking AI Risks Seriously: A New Assessment Model for the AI Act’ (2024) 39 AI & SOCIETY 2493 <https://doi.org/10.1007/s00146-023-01723-z> accessed 24 July 2025
Okpala I, Golgoon A and Kannan AR, ‘Agentic AI Systems Applied to Tasks in Financial Services: Modeling and Model Risk Management Crews’ (arXiv, 29 April 2025) <http://arxiv.org/abs/2502.05439> accessed 22 July 2025
Parasuraman R and Manzey DH, ‘Complacency and Bias in Human Use of Automation: An Attentional Integration’ (2010) 52 Human Factors 381 <https://doi.org/10.1177/0018720810376055> accessed 24 June 2025
Pasquale F, The Black Box Society: The Secret Algorithms That Control Money and Information (Harvard University Press 2015) <https://www.jstor.org/stable/j.ctt13x0hch> accessed 7 August 2025
Rai A, Constantinides P and Sarker S, ‘Next Generation Digital Platforms: Toward Human-AI Hybrids’ (2019) 43 MIS Quarterly iii <https://misq.org/misq/downloads/> accessed 21 July 2025
Riikka Koulu, ‘Human Control over Automation: EU Policy and AI Ethics’ (2020) 12 European Journal of Legal Studies <https://hdl.handle.net/1814/66992>
Ruschemeier H, ‘AI as a Challenge for Legal Regulation – the Scope of Application of the Artificial Intelligence Act Proposal’ (2023) 23 ERA Forum 361 <https://doi.org/10.1007/s12027-022-00725-6> accessed 26 July 2025
Schmidt P, Biessmann F and Teubner T, ‘Transparency and Trust in Artificial Intelligence Systems’ (2020) 29 Journal of Decision Systems 260 <https://doi.org/10.1080/12460125.2020.1819094> accessed 8 August 2025
Schwemer SF, Tomada L and Pasini T, ‘Legal AI Systems in the EU’s Proposed Artificial Intelligence Act’ (Social Science Research Network, 21 June 2021) <https://papers.ssrn.com/abstract=3871099> accessed 1 August 2025
Selbst AD and Barocas S, ‘The Intuitive Appeal of Explainable Machines’ (Social Science Research Network, 2 March 2018) <https://papers.ssrn.com/abstract=3126971> accessed 24 June 2025
Skitka LJ, Mosier KL and Burdick M, ‘Does Automation Bias Decision-Making?’ (1999) 51 International Journal of Human-Computer Studies 991 <https://linkinghub.elsevier.com/retrieve/pii/S1071581999902525> accessed 11 December 2024
Stephan Dreyer and Wolfgang Schulz, ‘The General Data Protection Regulation and Automated Decision-Making: Will It Deliver?’ [2019] Bertelsmann Foundation
Sterz S and others, ‘On the Quest for Effectiveness in Human Oversight: Interdisciplinary Perspectives’, The 2024 ACM Conference on Fairness, Accountability, and Transparency (ACM 2024) <https://dl.acm.org/doi/10.1145/3630106.3659051> accessed 22 December 2024
Stuart Russell and Peter Norvig, Artificial Intelligence: A Modern Approach (4th edn, Global Edition, Pearson 2021)
Sweeney L, ‘Discrimination in Online Ad Delivery’ (2013) 56 Commun. ACM 44 <https://dl.acm.org/doi/10.1145/2447976.2447990> accessed 2 August 2025
Teresa Rodríguez de las Heras Ballell, ‘Guiding Principles for Automated Decision-Making in the EU’ (2022) ELI Innovation Paper
Tiago Sérgio Cabral, ‘AI and the Right to Explanation: Three Legal Bases under the GDPR’, Data Protection and Privacy (2021) <https://www.researchgate.net/publication/349496743_AI_and_the_Right_to_…> accessed 30 June 2025
Tosoni L, ‘The Right To Object to Automated Individual Decisions: Resolving the Ambiguity of Article 22(1) of the General Data Protection Regulation’ (Social Science Research Network, 14 May 2021) <https://papers.ssrn.com/abstract=3845913> accessed 2 July 2025
Turing AM, ‘Computing Machinery and Intelligence’ (1950) LIX Mind 433 <https://doi.org/10.1093/mind/LIX.236.433> accessed 21 July 2025
Veale M and Borgesius FZ, ‘Demystifying the Draft EU Artificial Intelligence Act’ (2021) 22 Computer Law Review International 97 <http://arxiv.org/abs/2107.03721> accessed 24 December 2024
Veale M and Edwards L, ‘Clarity, Surprises, and Further Questions in the Article 29 Working Party Draft Guidance on Automated Decision-Making and Profiling’ (2018) 34 Computer Law & Security Review 398 <https://www.sciencedirect.com/science/article/pii/S026736491730376X> accessed 1 July 2025
Wachter S, Mittelstadt B and Floridi L, ‘Why a Right to Explanation of Automated Decision-Making Does Not Exist in the General Data Protection Regulation’ (2017) 7 International Data Privacy Law 76 <https://doi.org/10.1093/idpl/ipx005> accessed 28 April 2024
‘Xenophobic Machines: Discrimination through Unregulated Use of Algorithms in the Dutch Childcare Benefits Scandal’ (Amnesty International, 25 October 2021) <https://www.amnesty.org/en/documents/eur35/4686/2021/en/> accessed 24 June 2025
Zuiderveen Borgesius F and Poort J, ‘Online Price Discrimination and EU Data Privacy Law’ (2017) 40 Journal of Consumer Policy 347 <https://doi.org/10.1007/s10603-017-9354-z> accessed 2 August 2025
Agencia Española de Protección de Datos (AEPD), Procedimiento Nº: E/03624/2021(2021) <https://www.aepd.es/es/documento/e-03624-2021.pdf>
‘AI Act | Shaping Europe’s Digital Future’ (24 July 2025) <https://digital-strategy.ec.europa.eu/en/policies/regulatory-framework-…; accessed 24 July 2025
‘AI Watch: Global Regulatory Tracker - China | White & Case LLP’ (29 May 2025) <https://www.whitecase.com/insight-our-thinking/ai-watch-global-regulato…; accessed 21 September 2025
Amended proposal for a Council Directive on the protection of individuals with regard to the processing of personal data and on the free movement of such data 1992
Article 29 Data Protection Working Party, ‘Opinion 02/2013 on Apps on Smart Devices’ <https://ec.europa.eu/justice/article-29/documentation/opinion-recommend…;
Autoriteit Persoonsgegevens, ‘Boete Belastingdienst kinderopvangtoeslag’ <https://www.autoriteitpersoonsgegevens.nl/documenten/boete-belastingdie…; accessed 4 July 2025
CNPD, Deliberação n.º 2021/622, May 11, 2021; Rechtbank Den Haag, Case C/09/585239 / HA ZA 19-1221, ECLI:NL:RBDHA:2020:865 (11 February 2020)
Data Protection Authority of Belgium, ‘Artificial Intelligence Systems and the GDPR: A Data Protection Perspective’ (2024) <https://www.autoriteprotectiondonnees.be/publications/artificial-intell…;
‘Dutch Scandal Serves as a Warning for Europe over Risks of Using Algorithms’ POLITICO (29 March 2022) <https://www.politico.eu/article/dutch-scandal-serves-as-a-warning-for-e…; accessed 24 June 2025
EDPB, ‘Guidelines on Data Protection Impact Assessment (DPIA) and Determining Whether Processing Is “Likely to Result in a High Risk” for the Purposes of Regulation 2016/679’ (2017) <https://ec.europa.eu/newsroom/article29/items/611236> accessed 6 July 2025
——, ‘Guidelines 01/2022 on Data Subject Rights - Right of Access’ (2023) <https://www.edpb.europa.eu/our-work-tools/our-documents/guidelines/guid…; accessed 6 July 2025
——, ‘Guidelines 4/2019 on Article 25 Data Protection by Design and by Default’ <https://www.edpb.europa.eu/our-work-tools/our-documents/guidelines/guid…; accessed 2 August 2025
——, ‘Guidelines 05/2020 on Consent under Regulation 2016/679’ <https://www.edpb.europa.eu/our-work-tools/our-documents/guidelines/guid…; accessed 28 June 2025
——, ‘Guidelines 07/2020 on the Concepts of Controller and Processor in the GDPR’
‘EDPB-EDPS Joint Opinion 5/2021 on the Proposal for a Regulation of the European Parliament and of the Council Laying down Harmonised Rules on Artificial Intelligence (Artificial Intelligence Act)’
EDPB/WP29, ‘Article 29 Working Party Guidelines on Transparency under Regulation 2016/679’ (2018) <https://www.edpb.europa.eu/system/files/2023-09/wp260rev01_en.pdf>
——, ‘Article 29 Data Protection Working Party, Guidelines on Automated Individual Decision-Making and Profiling for the Purposes of Regulation 2016/679, WP251rev.01’ (2018) <https://ec.europa.eu/newsroom/article29/items/612053/en> accessed 13 March 2025
European Commission, ‘Impact Assessment of the Regulation on Artificial Intelligence’ (2021)
European Commission, Proposal for a Regulation laying down harmonised rules on artificial intelligence (Artificial Intelligence Act) COM(2021) 206 final
European Commission, ‘White Paper on Artificial Intelligence – A European Approach to Excellence and Trust’ COM(2020) 65 final
Garante per la protezione dei dati personali, Ordinanza ingiunzione nei confronti di Deliveroo Italy S.r.l. (22 July 2021) [9685994] <https://www.garanteprivacy.it/web/guest/home/docweb/-/docweb-display/do…;
Gegevensbeschermingsautoriteit (GBA), Decision on the Merits No. 109/2024 (29 August 2024) <https://www.gegevensbeschermingsautoriteit.be/publications/beslissing-t…;
Gianmarco Gori, Figure ‘Software, Data and AI Value Chain’ in Legal Professionals in the Digital Age (Regulatory Framework and Capita Selecta) (Vrije Universiteit Brussel, 2024)
‘Guidance on the AI Auditing Framework’ (UK Information Commissioner’s Office 2020)
High-Level Expert Group on AI, ‘Ethics Guidelines for Trustworthy AI | Shaping Europe’s Digital Future’ (2019) <https://digital-strategy.ec.europa.eu/en/library/ethics-guidelines-trus…; accessed 22 December 2024
Hunt T, ‘How Artificial Intelligence Will Change Administrative Law: The Government of Canada’s Directive on automated Decision-Making | DLA Piper’ <https://www.dlapiper.com/en-us/insights/publications/2023/05/how-artifi…; accessed 3 August 2025
IMY, Decision in case DI-2019-4062 concerning Klarna Bank AB (28 March 2022) <https://www.imy.se/globalassets/dokument/beslut/klarna/beslut-tillsyn-k…> accessed 5 July 2025, 18
Kostiantyn Ponomarov, ‘Global AI Regulations Tracker: Europe, Americas & Asia-Pacific Overview’ (25 August 2025) <https://legalnodes.com/article/global-ai-regulations-tracker> accessed 21 September 2025
Lisa McClory, ‘Legal Requirements for Automated Decision-Making in the EU & UK’ (GLI, 19 April 2024) <https://www.globallegalinsights.com/practice-areas/ai-machine-learning-…; accessed 5 August 2025
‘Model Rules on Impact Assessment of Algorithmic Decision-Making Systems Used by Public Administration’ (European Law Institute)
‘MPNE Patient Consensus on Data and AI 2.0’ (Issuu, 18 October 2024) <https://issuu.com/mpne/docs/mpne_consensus_2.0_long> accessed 29 July 2025
Sebastião Barros Vale, ‘GDPR and the AI Act Interplay: Lessons from FPF’s ADM Case-Law Report - Future of Privacy Forum’ (Future of Privacy Forum, 2022) <https://fpf.org/blog/gdpr-and-the-ai-act-interplay-lessons-from-fpfs-ad…; accessed 5 August 2025
Sebastião Barros Vale and Gabriela Zanfir-Fortuna, ‘Automated Decision-Making Under the GDPR: Practical Cases from Courts and Data Protection Authorities’ (Future of Privacy Forum 2022) <https://fpf.org/wp-content/uploads/2022/05/FPF-ADM-Report-R2-singles.pd…; accessed 3 July 2025
Tambiama Madiega, Artificial Intelligence Act (EU Legislation in Progress Briefing, European Parliamentary Research Service, PE 698.792, September 2024)