Do you know what's inside your AI? You wouldn't buy a can of soup without an ingredient list, so why would we trust AI blindly? Yet that is exactly what we do today. That is why I decided to develop a solution: digital labels that make the properties of AI systems visible and trustworthy.

The rise of AI systems seems unstoppable today. Even Coca-Cola claims to create products with AI. But this breakneck adoption brings problems with it: AI systems are rarely transparent, secure, or compliant with new regulations. In essence, they function as black boxes.
We hardly know which data they were trained on or which choices programmers made along the way. And that can be dangerous: researchers at Stanford University discovered that a popular dataset, LAION-5B, even contained images of child abuse. Yet that dataset was used worldwide to train models.

I believed AI needed far more transparency. So I developed a verifiable AI Bill of Materials (AIBOM): a kind of ingredient list for every AI model.
The idea of a Bill of Materials comes from the software world, where so-called SBOMs have existed for some time. Under the guidance of my supervisors, I built on this concept and applied it to AI models.
My tool, AIBoMGen, creates such a label automatically. No fuss: you immediately see which data and software sit behind a model. Just as a package shows how much sugar and salt a product contains, an AIBOM shows exactly what goes into an algorithm.
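To make this concrete, here is a minimal sketch of what assembling such a label could look like with the CycloneDX Python library, which supports Machine Learning BOMs. The component names are purely illustrative and this is not AIBoMGen's actual code; a real AIBOM records many more fields, such as hashes, training parameters, and metrics.

```python
# Minimal sketch: building an AI Bill of Materials with the CycloneDX
# Python library. Names are illustrative, not real AIBoMGen output.
from cyclonedx.model.bom import Bom
from cyclonedx.model.component import Component, ComponentType
from cyclonedx.output import make_outputter
from cyclonedx.schema import OutputFormat, SchemaVersion

bom = Bom()

# The trained model itself, typed as a machine-learning-model component.
bom.components.add(Component(
    name="wine-quality-classifier",  # hypothetical model name
    version="1.0",
    type=ComponentType.MACHINE_LEARNING_MODEL,
))

# The dataset it was trained on, typed as a data component.
bom.components.add(Component(
    name="winequality-red.csv",
    type=ComponentType.DATA,
))

# Serialize to CycloneDX 1.6 JSON: the machine-readable "label".
outputter = make_outputter(bom, OutputFormat.JSON, SchemaVersion.V1_6)
print(outputter.output_as_string())
```

The resulting JSON document is the "ingredient list": standardized, machine-readable, and easy to publish alongside the model.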
What makes AIBoMGen special is that the system observes independently. Instead of relying on developers to neatly write down what they used, which in practice often happens inaccurately or incompletely, the platform automatically generates a digitally signed label.
Thanks to cryptographic protection, that label cannot be forged. Every dataset, parameter, and metric receives a unique digital fingerprint, and even the smallest change afterwards is detected. The result is a verifiable checklist that anyone can inspect.
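As a rough illustration of that fingerprinting-and-signing step (a sketch under assumptions, not AIBoMGen's actual code): each artifact is hashed with SHA-256, and the digest is then signed, here with an Ed25519 key from the pyca/cryptography library. The file name is hypothetical.

```python
# Sketch: fingerprint a training artifact and sign the digest so the
# label cannot be forged or silently altered (hypothetical file name).
import hashlib
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

def fingerprint(path: str) -> bytes:
    """Stream a file through SHA-256 and return its hex digest."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest().encode()

digest = fingerprint("training_data.csv")

# Sign the fingerprint with a private key only the platform holds.
private_key = Ed25519PrivateKey.generate()
signature = private_key.sign(digest)

# Anyone with the public key can check the label later; verify()
# raises InvalidSignature if digest or signature was tampered with.
private_key.public_key().verify(signature, digest)
```

If even one byte of the file changes afterwards, the digest no longer matches and verification fails; in the same way, every ingredient in the label gets its own verifiable fingerprint.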
That transparency is not just convenient; it will soon be mandatory. The European AI Act imposes stricter rules on companies that develop and deploy AI, especially in critical settings. Anyone who cannot account for their models will risk fines.
With systems like this, developers can more easily demonstrate that they respect the rules. The solution thus offers not only scientific value but also a practical way forward for industry. Useful for the regulator, then, but also for developers themselves: the system moreover proves efficient and secure in use.
An important point: none of this noticeably slows down AI training. The evaluation shows that the extra computation time is negligible and that the storage impact also remains minimal.
The system was also tested against possible manipulation attempts. The conclusion: the digital labels remained trustworthy, even under attack.
With AIBoMGen, I want to show that AI does not have to be a black box. Just as we expect food producers to openly disclose their ingredients, we can expect AI developers to do the same.
AI needs a label just as much as food does. Only then can you know whether you can trust what you consume.
Aguirre, A., & Millet, R. (2024, July 23). Verifiable Training of AI Models. Future of Life Institute. https://futureoflife.org/ai/verifiable-training-of-ai-models/
Amazon Web Services. (2025). Firecracker. https://firecracker-microvm.github.io/
Aqua Security. (2025). Trivy. Trivy. https://trivy.dev/v0.62/
Ask Solem & contributors. (2023). Celery 5.5.2 documentation: Periodic Tasks. Celery 5.5.2. https://docs.celeryq.dev/en/stable/userguide/periodic-tasks.html
AssemblyAI (Director). (2022, April 3). Getting Started With Hugging Face in 15 Minutes | Transformers, Pipeline, Tokenizer, Models [Video recording]. https://www.youtube.com/watch?v=QEaBAZQCtwE
Bardenstein, D. (2023a). Manifest: AIBOMs, Transparency into AI. https://www.manifestcyber.com/aibom
Bardenstein, D. (Director). (2023b). Webinar: AIBoM (Manifest) [Video recording].
Bardenstein, D. (2023c). Whitepaper: Driving AI Transparency: The AI Bill of Materials (Manifest). manifestcyber.com
Bardenstein, D. (2024). Manifest-cyber/aibom [En-US]. Manifest. https://github.com/manifest-cyber/aibom (Original work published 2023)
Bell, J. (2025). Jasebell/ai-bill-of-materials [TeX]. https://github.com/jasebell/ai-bill-of-materials (Original work published 2023)
Courtois, S. (2024, July 5). JSON vs YAML: What’s the Difference, and Which One Is Right for Your Enterprise? SnapLogic. https://www.snaplogic.com/blog/json-vs-yaml-whats-the-difference-and-which-one-is-right-for-your-enterprise
Creese, S. (2023, December 5). Why we need to reflect on the need for cybersecurity of AI [Weforum]. World Economic Forum. https://www.weforum.org/stories/2023/12/cybersecurity-ai-ethics-responsible-innovation/
Docker. (2023). Docker SDK for Python 7.1.0 documentation. Docker SDK. https://docker-py.readthedocs.io/en/stable/
Docker. (2025a). Docker Compose [Documentation]. Docker Documentation. https://docs.docker.com/compose/
Docker. (2025b). Isolate containers with a user namespace. Docker Documentation. https://docs.docker.com/engine/security/userns-remap/
Docker. (2025c). Running containers. Docker Documentation. https://docs.docker.com/engine/containers/run/
Docker. (2025d). Services top-level elements. Docker Documentation. https://docs.docker.com/reference/compose-file/services/
Docker. (2025e). Tmpfs mounts. Docker Documentation. https://docs.docker.com/engine/storage/tmpfs/
Docker. (2025f, April 9). Docker. Accelerated Container Application Development. https://www.docker.com/
Docker & TensorFlow. (2024). Image Layer Details—Tensorflow/tensorflow:2.16.1-gpu. Docker Hub. https://hub.docker.com/layers/tensorflow/tensorflow/2.16.1-gpu/images/sha256-4ab9ffddd6ffacc9251ac6439f431eb38d66200d3f52397b5d77f9bc3298c4e9
Dremio. (2025). Adversarial Attacks in AI. Dremio. https://www.dremio.com/wiki/adversarial-attacks-in-ai/
Ecma International. (2021). ECMA-424 CycloneDX - Bill of Materials (BOM) specification (No. ECMA-424). https://ecma-international.org/wp-content/uploads/ECMA-424_1st_edition_june_2024.pdf
EU. (2022, September 15). Cyber Resilience Act | Shaping Europe’s digital future. https://digital-strategy.ec.europa.eu/en/library/cyber-resilience-act
EU. (2024a). EU AI Act Compliance Checker | EU Artificial Intelligence Act. https://artificialintelligenceact.eu/assessment/eu-ai-act-compliance-checker/
EU. (2024b). EU Artificial Intelligence Act | Up-to-date developments and analyses of the EU AI Act. https://artificialintelligenceact.eu/
EU. (2024c). Regulation—EU - 2024/1689—EN - EUR-Lex [Regulation]. Eur-Lex. https://eur-lex.europa.eu/eli/reg/2024/1689/oj/eng
EU. (2024d). The AI Act Explorer | EU Artificial Intelligence Act. https://artificialintelligenceact.eu/ai-act-explorer/
EU. (2024e, February 27). High-level summary of the AI Act | EU Artificial Intelligence Act. https://artificialintelligenceact.eu/high-level-summary/
EU. (2024f, September 30). AI Act—Shaping Europe’s digital future. https://digital-strategy.ec.europa.eu/en/policies/regulatory-framework-ai
EU. (2025). Governance and enforcement of the AI Act | Shaping Europe’s digital future. Shaping Europe’s Digital Future. https://digital-strategy.ec.europa.eu/en/policies/ai-act-governance-and-enforcement
EU Made Simple (Director). (2024, September 21). The EU’s AI Act Explained [Video recording]. https://www.youtube.com/watch?v=s_rxOnCt3HQ
FastAPI. (2025). FastAPI. FastAPI. https://fastapi.tiangolo.com/
Fireship (Director). (2021, October 7). Go in 100 Seconds [Video recording]. https://www.youtube.com/watch?v=446E-r0rXHI
Gaynor, A., & Kehrer, P. (2025). Cryptography/hazmat [Software]. https://github.com/pyca/cryptography/blob/main/src/cryptography/hazmat/primitives/ciphers/algorithms.py
Ghosh, B. (2024, July 7). AI Bill of Materials (AI BOM). Medium. https://medium.com/@bijit211987/ai-bill-of-materials-ai-bom-80d48f9d75e0
Gracias, S. (2024). Comparing the EU AI Act to Proposed AI-Related Legislation in the US [Review]. The University of Chicago Business Law Review. https://businesslawreview.uchicago.edu/print-archive/comparing-eu-ai-act-proposed-ai-related-legislation-us
Hart, R. (2024). Clearview AI—Controversial Facial Recognition Firm—Fined $33 Million For ‘Illegal Database’. Forbes. https://www.forbes.com/sites/roberthart/2024/09/03/clearview-ai-controversial-facial-recognition-firm-fined-33-million-for-illegal-database/
Hassan, N. (2024). How and why to create an AI bill of materials. TechTarget. https://www.techtarget.com/searchenterpriseai/tip/How-and-why-to-create-an-AI-bill-of-materials
Hugging Face. (2025, mei 16). Hugging Face. The AI community building the future. https://huggingface.co/
imec. (n.d.-a). ‘GPULab and JupyterHub introduction’ slidedeck. Retrieved November 18, 2024, from https://doc.ilabt.imec.be/ilabt/_downloads/a94a14498e29a054885db6019351534d/GPULab%20and%20JupyterHub%20introduction.pdf
imec. (n.d.-b). GPULab—Imec iLab.t documentation. Retrieved November 18, 2024, from https://doc.ilabt.imec.be/ilabt/gpulab/
Intel Technology (Director). (2021, August 6). Confidential Computing with Intel® SGX: Multi-Party Analytics | Intel Technology [Video recording]. https://www.youtube.com/watch?v=8w32MZptbPc
International Organization for Standardization and International Electrotechnical Commission. (2021). ISO/IEC 5962:2021 Software package data exchange (SPDX) (No. ISO/IEC 5962:2021). https://www.iso.org/standard/81870.html
in-toto. (2025a). In-toto. In-Toto. https://in-toto.io/
in-toto. (2025b). In-toto 3.0.0 documentation. in-toto docs. https://in-toto.readthedocs.io/en/latest/
Jasserand, C. (2022, april 28). Clearview AI: Illegally collecting and selling our faces in total impunity? (Part I). CiTiP blog. https://www.law.kuleuven.be/citip/blog/clearview-ai-illegally-collecting-and-selling-our-faces-in-total-impunity-part-i/
Kennedy, C. (2025). Comparing in-toto and Sigstore: Two Approaches to Software Supply Chain Security. https://testifysec.com/blog/sigstore-vs-in-toto/
Keras Team. (1994). Keras documentation: MNIST digits classification dataset [Dataset]. https://keras.io/api/datasets/mnist/
Keras Team. (2009). Keras documentation: CIFAR10 small images classification dataset [Dataset]. https://keras.io/api/datasets/cifar10/
Khan, S. (2023, October 24). Unlocking SBOM Formats: SPDX vs. CycloneDX — Know the Key Differences. Medium. https://medium.com/@meer-khan/unlocking-sbom-formats-spdx-vs-cyclonedx-know-the-key-differences-39015a8cbc5f
Kjell, J., & Meadows, T. (Directors). (2024, April 26). A Step Closer to in-Toto’lly Secure: Using in-Toto and OPA Gatekeeper [Video recording]. https://www.youtube.com/watch?v=b_ImE70Vhd8
Krantz, T. (2024, December 10). What Is Data Poisoning? IBM. https://www.ibm.com/think/topics/data-poisoning
Kubeflow. (2025). Kubeflow. Kubeflow. https://www.kubeflow.org/
Linux Foundation. (2023a). AI – SPDX. AI – SPDX. https://spdx.dev/learn/areas-of-interest/ai/
Linux Foundation. (2023b). SPDX. SPDX. https://spdx.dev/
Linux Foundation. (2024a). Implementing AI Bill of Materials (AI BOM) with SPDX 3.0. https://www.linuxfoundation.org/research/ai-bom
Linux Foundation. (2024b). SPDX AI Github Specification 3.0.1. AI - SPDX Specification 3.0.1. https://spdx.github.io/spdx-spec/v3.0.1/model/AI/AI/
Linux Foundation. (2024c). SPDX Github Specification 3.0.1. SPDX Specification 3.0.1. https://spdx.github.io/spdx-spec/v3.0.1/
Microsoft. (2025a). Microsoft Entra ID (previously Azure Active Directory). Microsoft Security. https://www.microsoft.com/nl-nl/security/business/identity-access/microsoft-entra-id
Microsoft. (2025b). AzureAD/microsoft-authentication-library-for-js [TypeScript]. Azure Active Directory. https://github.com/AzureAD/microsoft-authentication-library-for-js (Original work published 2017)
MinIO. (2025). MinIO - S3 Compatible Storage for AI. MinIO. https://min.io
Mithril Security (Director). (2024, June 28). AICert fine-tuning demo [Video recording]. https://www.youtube.com/watch?v=-GQeXQyunYc
Mithril Security. (2025a). AICert. AICert Docs. https://aicert.mithrilsecurity.io/en/latest/
Mithril Security. (2025b). Mithril-security/aicert [Python]. Mithril Security. https://github.com/mithril-security/aicert (Original work published 2023)
MLflow. (2025). MLflow. https://mlflow.org/
Movsisyan, M. & contributors. (2023). Flower 2.0.0 documentation. Flower 2.0.0. https://flower.readthedocs.io/en/latest/
NTIAGov (Director). (2021). NTIAGov: SBOM Explainer [YouTube]. https://www.youtube.com/playlist?list=PLO2lqCK7WyTDpVmcHsy6R2HWftFkUp6zG
NTIAGov. (2023). NTIAGov: Software Bill of Materials. Software Bill of Materials. https://www.ntia.gov/page/software-bill-materials
Oakley, H. (Director). (2024, May 8). AIBOM Workshop @ RSAC 2024 [Video recording]. https://www.youtube.com/watch?v=C-m2IHzBVBE
Oakley, H. (2024). Aibom-squad/rsa-2024 [En-US]. AIBOM Squad. https://github.com/aibom-squad/rsa-2024 (Original work published 2024)
Open Source Initiative. (2024). Open Source AI Deep Dive. https://opensource.org/deepdive
OpenInfra Foundation. (2025). Kata Containers—Open Source Container Runtime Software. Kata Containers. https://katacontainers.io/
Oracle. (2025). MySQL. MySQL. https://www.mysql.com/
OWASP Foundation. (2024a). Authoritative Guide to SBOM. OWASP Foundation. https://cyclonedx.org/guides/OWASP_CycloneDX-Authoritative-Guide-to-SBOM-en.pdf
OWASP Foundation. (2024b). CycloneDX - Machine Learning Bill of Materials (ML-BOM). Machine Learning Bill of Materials (ML-BOM). https://cyclonedx.org/capabilities/mlbom/
OWASP Foundation. (2024c, July 1). CycloneDX - Software Bill of Materials (SBOM). CycloneDX. https://cyclonedx.org/
OWASP Foundation. (2025a). CycloneDX Python Library 10.0.1 documentation [OWASP Foundation]. CycloneDX Python Library 10.0.1 documentation. https://cyclonedx-python-library.readthedocs.io/en/latest/
OWASP Foundation. (2025b). CycloneDX Python Library 10.0.1 documentation: V1.6 schema Support. CycloneDX Python Library 10.0.1 documentation: v1.6 schema Support. https://cyclonedx-python-library.readthedocs.io/en/latest/schema-support.html
Ozoani, E., GerChick, M., & Mitchell, M. (2022). Model Cards. Hugging Face Articles. https://huggingface.co/blog/model-cards
Pierce Portfolio (Director). (2024, January 28). MLFlow: A Quickstart Guide [Video recording]. https://www.youtube.com/watch?v=cjeCAoW83_U
Pühringer, L. (2025). Secure-systems-lab/securesystemslib [Python]. Secure Systems Lab at NYU. https://github.com/secure-systems-lab/securesystemslib (Original work published 2016)
Python. (2022). Python Release Python 3.11.0. Python.Org. https://www.python.org/downloads/release/python-3110/
RabbitMQ. (2025). RabbitMQ: One broker to queue them all. https://www.rabbitmq.com/
Ray. (2025). Ray by Anyscale. Ray by Anyscale: Scale Machine Learning & AI Computing. https://ray.io
Righi, T. (2022, oktober 5). TensorFlow Remote Code Execution with Malicious Model [TensorFlow Remote Code Execution with Malicious Model]. CyberBlog. https://splint.gitbook.io/cyberblog/security-research/tensorflow-remote-code-execution-with-malicious-model
Savaete, L. (2025). Slowapi [Python]. https://github.com/laurentS/slowapi (Original work published 2020)
Scarcella, M. (2025, March 21). US judge approves ‘novel’ Clearview AI class action settlement. Reuters. https://www.reuters.com/legal/litigation/us-judge-approves-novel-clearview-ai-class-action-settlement-2025-03-21/
Sebrechts, M. (2024). Open Source AI and the risks of “Openwashing”.
Securin. (2024a, June 11). Bringing in the BoM Squad: Defining and Generating AI Bill of Materials. https://www.securin.io/articles/bringing-in-the-bom-squad-defining-and-generating-ai-bill-of-materials/
Securin. (2024b, July 30). How Evolving AI Regulations Impact Cybersecurity—Securin. https://www.securin.io/articles/how-evolving-ai-regulations-impact-cybersecurity/
shadcn. (2025). Shadcn/ui. Build Your Component Library - Shadcn/Ui. https://ui.shadcn.com/
Sigstore Community. (2025). Sigstore. Sigstore. https://www.sigstore.dev/
Solem, A. & contributors. (2023). Celery 5.5.2 documentation. Celery 5.5.2. https://docs.celeryq.dev/en/stable/
Sonatype. (2025). Complete Guide to AIBOMs. Sonatype. https://www.sonatype.com/resources/articles/aiboms
Svensson, J. K. (2025). FastAPI-Azure-Auth. https://intility.github.io/fastapi-azure-auth/
Tailwind Labs. (2025). Tailwind CSS. Tailwind CSS - Rapidly Build Modern Websites without Ever Leaving Your HTML. https://tailwindcss.com/
Tambiama, M. (2023). General-purpose artificial intelligence. EU. https://www.europarl.europa.eu/RegData/etudes/ATAG/2023/745708/EPRS_ATA(2023)745708_EN.pdf
TechWorks. (2024, June 13). Compliance with the EU AI Act: The TechWorks Trusted AI Bill of Materials (TAIBOM) project. TechWorks. https://www.techworks.org.uk/ai-taibom/compliance-with-the-eu-ai-act-the-techworks-trusted-ai-bill-of-materials-taibom-project
TechWorks. (2025a). SDK for Creating & Verifying TAIBOMs. TAIBOM. https://taibom.nqminds.com/sdk/sdk/
TechWorks. (2025b). TAIBOM. TechWorks. https://taibom.nqminds.com/docs/
TechWorks & NquiringMinds. (2023). Trustworthy AI - Practical Collaborative Engineering. https://www.techworks.org.uk/wp-content/uploads/2024/01/Engineering-Trustworthy-AI.pdf
TensorFlow. (2024a). TensorFlow v2.16.1: Tf.data.TFRecordDataset. https://www.tensorflow.org/api_docs/python/tf/data/TFRecordDataset
TensorFlow. (2024b, juni 7). Tf.keras.models.load_model. TensorFlow v2.16.1 API. https://www.tensorflow.org/api_docs/python/tf/keras/models/load_model
Thiel, D. (2023, December 20). Investigation Finds AI Image Generation Models Trained on Child Abuse. Stanford University. https://cyber.fsi.stanford.edu/news/investigation-finds-ai-image-generation-models-trained-child-abuse
UC Irvine. (2009). Winequality-red.csv [Dataset]. https://archive.ics.uci.edu/ml/machine-learning-databases/wine-quality/winequality-red.csv
Uslaner, J. D., & Coquin, A. (2025, May 30). ‘AI washing’: Regulatory and private actions to stop overstating claims. Reuters. https://www.reuters.com/legal/legalindustry/ai-washing-regulatory-private-actions-stop-overstating-claims-2025-05-30/
Vandendriessche, W. (2025a). Wievdndr/AIBoMGen [Software]. https://github.ugent.be/wievdndr/AIBoMGen
Vandendriessche, W. (2025b). AIBoMGen-experiments Scripts & Results (Version v1.2) [Software]. Zenodo. https://doi.org/10.5281/zenodo.15505280
Vandendriessche, W. (2025c). AIBoMGen (Version v1.0) [Software]. Zenodo. https://doi.org/10.5281/zenodo.15536533
Vercel. (2025). Next.js by Vercel. Next.Js by Vercel - The React Framework. https://nextjs.org/
vinaypamnani-msft. (2024, July 10). Trusted Platform Module Technology Overview. Microsoft. https://learn.microsoft.com/en-us/windows/security/hardware-security/tpm/trusted-platform-module-overview
Webb, E. (n.d.). CEO of Clearview AI, the startup that scraped billions of online face images, resigns. Business Insider. Retrieved May 30, 2025, from https://www.businessinsider.com/clearview-ai-ceo-resigns-hal-lambert-richard-schwartz-2025-2
Wikipedia. (2025). Large language model. In Wikipedia. https://en.wikipedia.org/w/index.php?title=Large_language_model&oldid=1288064259
Wiz Experts Team. (2025, januari 31). AI-BOM: Building an AI-Bill of Materials. Wiz.Io. https://www.wiz.io/academy/ai-bom-ai-bill-of-materials
World Wide Web Consortium. (2025). W3C. W3C. https://www.w3.org/