“Do computers dream of neural trees?”, a new technique for understanding smart algorithms and how they think.

Senne Deproost

Introduction

“AI is the last invention humanity will ever need to make.” A statement by Oxford philosopher Nick Bostrom, in which the future potential of smart computers and self-learning algorithms (also known as Artificial Intelligence) is presented as the solution to all the problems we face today. It is a beautiful example of optimism about progress, but it does assume that the AI we create is “obedient” to our will. Skynet from The Terminator, HAL 9000 from 2001: A Space Odyssey and Ultron from The Avengers are examples of bloodthirsty AIs whose goal is to wipe out humanity, or at least to make life very unpleasant for it. Hollywood has many more examples of these terrible machines, but for every malicious Ultron there is also a friendly WALL-E who ends up saving humanity, and for every murderous Terminator a helpful CHAPPiE who learns what it means to have feelings like ours. If we were ever on the verge of developing the next revolutionary AI, we would also have to be able to teach it the concept of human morality: what is good and what is bad. For a developer of such systems it is indispensable to understand what exactly goes on inside the AI's digital brain and how it thinks. However, this is easier said than done.

 

A glimpse into the black box

AI has already found its way into many of the products and services we use today, and it will potentially enter many more applications in the near future: from predicting tumours in the body to driving a car, from the personal voice assistant on your phone to beating human opponents at games such as chess, Go and StarCraft. One of the most advanced ways to teach a computer something, also known as Machine Learning, is to simulate a computational model of the human brain. This digital brain is taught skills in much the same way as a pet is trained: it is rewarded when it does something correctly (performing the requested task) and punished when it does not obey. By repeating this process many times, the AI eventually learns what is right and what is wrong, resulting in the desired behaviour. But just as we do not yet fully understand how the human brain works, we cannot understand how its digital counterpart thinks. This is often called the black-box problem.

Within the field of Artificial Intelligence there has recently been a strong focus on interpreting these digital brain models. The aim is to make these black boxes at least partly understandable to an experienced user, or to generate meaningful explanations from them. When the AI can motivate why it acted in a certain way, the user can steer and improve it. This gives a higher level of control over the AI, which considerably increases trust in it. The generated explanations can also help a developer adjust the learning process so that the AI learns faster.

A proven method to make AI more understandable is to train a complex model and then let it teach something to a smaller, simpler model. This is analogous to the way a teacher passes new knowledge on to a student. The "teacher" model is a version of the digital brain, while the simpler "student" model can be represented as a small set of rules describing what the AI should do. These rules, expressed as simple formulas, are much easier for the user to interpret.
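
To make this concrete, here is a minimal sketch in Python (using the scikit-learn library; the data, model sizes and names are purely illustrative and are not the code from the thesis): a small neural network plays the role of the teacher, and a shallow decision tree learns to imitate its answers.

```python
# Illustrative teacher-student (distillation) sketch with scikit-learn.
from sklearn.datasets import make_classification
from sklearn.neural_network import MLPClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

# Toy data standing in for game observations and the actions to take.
X, y = make_classification(n_samples=2000, n_features=8, random_state=0)

# "Teacher": a small neural network trained on the task itself.
teacher = MLPClassifier(hidden_layer_sizes=(64, 64), max_iter=500, random_state=0)
teacher.fit(X, y)

# "Student": a shallow decision tree that imitates the teacher's answers.
student = DecisionTreeClassifier(max_depth=3, random_state=0)
student.fit(X, teacher.predict(X))

# The student can be printed as a readable set of rules.
print(export_text(student))
```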


Fig. 1: The learning process of a digital brain in a given state. It performs an action in an environment, after which it ends up in a new state and receives a reward indicating how good its action was in the previous state.
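
The loop in Fig. 1 can be written down in a few lines of code. The sketch below is illustrative only: it assumes the classic OpenAI Gym interface (newer versions of the library use slightly different reset/step signatures) and uses a random choice as a stand-in for the digital brain's decision.

```python
# Minimal sketch of the state-action-reward loop from Fig. 1.
import gym  # assumes the classic OpenAI Gym API

env = gym.make("CartPole-v1")
state = env.reset()
total_reward, done = 0.0, False
while not done:
    action = env.action_space.sample()            # placeholder for the AI's decision
    state, reward, done, info = env.step(action)  # new state plus a reward for that action
    total_reward += reward
print("Total reward for this episode:", total_reward)
```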

 

Smart brain trees

Recent academic work used decision trees as student models, which yielded surprising results. A decision tree is a structure comparable to a family tree. In each parent node of the tree a decision is made that determines which underlying child node to proceed to. This procedure is carried out from the top node, also called the root node, all the way down to the leaf nodes, which have no children of their own. These final nodes then determine the behaviour of the model.
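
As an illustration (a hypothetical sketch, not the implementation from the thesis), walking from the root node down to a leaf node looks like this in Python:

```python
# Hypothetical sketch of walking a decision tree from root to leaf.
class Node:
    def __init__(self, test=None, left=None, right=None, action=None):
        self.test = test      # decision made in an internal (parent) node
        self.left = left      # child followed when the test is False
        self.right = right    # child followed when the test is True
        self.action = action  # behaviour stored in a leaf node

def decide(node, observation):
    # Leaf nodes have no children and directly determine the behaviour.
    if node.test is None:
        return node.action
    child = node.right if node.test(observation) else node.left
    return decide(child, observation)

# Example: a yes/no question in the root, two leaves below it.
tree = Node(test=lambda obs: obs["enemy_distance"] < 2,
            left=Node(action="keep eating dots"),
            right=Node(action="run away"))
print(decide(tree, {"enemy_distance": 1}))  # -> "run away"
```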

The decisions made in the nodes can be simple formulas or rules (such as a yes/no question), but when we replace them with small brain models we can make more sophisticated decisions. These mini-brains are far less complex than the original teacher models, and with a simple visualisation technique they can show what they focus on when making a decision. Trees that use these simplified brains are also called neural trees.
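
A strongly simplified numerical sketch of one such node (using NumPy; the mini-brains in the thesis are larger than this): a tiny network computes the probability of continuing to the right child, and the two leaves hold preferences over the possible actions.

```python
# Illustrative sketch of a single neural tree node with two leaves.
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# One internal node: a tiny one-layer "mini-brain" that decides the routing.
W = rng.normal(size=(4,))   # weights over a 4-dimensional observation
b = 0.0

def route_right(observation):
    # Probability of continuing to the right child instead of the left one.
    return sigmoid(W @ observation + b)

# Two leaves holding preferences over two possible actions.
leaf_left = np.array([0.9, 0.1])   # mostly action 0
leaf_right = np.array([0.2, 0.8])  # mostly action 1

def neural_tree(observation):
    p = route_right(observation)
    # Soft decision: mix the leaves according to the routing probability.
    return (1 - p) * leaf_left + p * leaf_right

print(neural_tree(np.array([0.5, -1.0, 0.3, 2.0])))
```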


Fig. 2: Left, an example of a decision tree. Right, a neural tree.

 

But trees grow, don't they?

The goal of the thesis was to start from the teacher-student method, combined with neural trees, and use it to teach an AI to play simple Atari games. These video games are a widely used way of testing how well an AI can learn on its own.

The problem with the original neural trees is that their structure is fixed in the computer before they are trained. The user does not know in advance what the optimal structure (height, complexity and number of decisions) is, which may cause the AI to perform worse. A possible solution is to learn the composition of the tree while we train the AI itself. From the thesis we could conclude that using these adaptive trees resulted in smaller models, allowing the AI to learn faster and better. Some behaviours in games with enemies (such as Pac-Man with the ghosts) are a clear indication that the AI is aware of dangers in the game.
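
As a purely hypothetical sketch of the growth idea (not the algorithm used in the thesis): a leaf that keeps performing poorly is replaced by a new decision node with two fresh leaves, so the tree only becomes as large as the task requires.

```python
# Hypothetical sketch: only grow the tree where a leaf keeps performing poorly.
class Leaf:
    def __init__(self):
        self.losses = []  # how badly this leaf predicted recent situations

class Decision:
    def __init__(self):
        self.left, self.right = Leaf(), Leaf()

def maybe_split(leaf, threshold=0.5, min_samples=100):
    """Replace a struggling leaf by a new decision node with two child leaves."""
    enough_data = len(leaf.losses) >= min_samples
    still_bad = enough_data and sum(leaf.losses) / len(leaf.losses) > threshold
    return Decision() if still_bad else leaf

# Example: a leaf that stays inaccurate for 100 situations gets split.
leaf = Leaf()
leaf.losses = [0.9] * 100
print(type(maybe_split(leaf)).__name__)  # -> Decision
```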

We point out that the technique can also be applied to robotics and mechatronic systems, so that future AI could be seen as an understandable companion instead of the unknown menace some media would have us believe.


Fig. 3: Our adaptive trees in action.

 


University or college: Vrije Universiteit Brussel
Thesis year: 2021
Supervisor(s): Prof. Dr. Ann Nowé, Youri Coppens