COMPARATIVE ANALYSIS OF NEURAL-SYMBOLIC INTEGRATION TECHNIQUES IN ENHANCING THE INTERPRETABILITY OF ARTIFICIAL INTELLIGENCE DECISION SYSTEMS IN HIGH-STAKES DOMAINS

Authors

  • Sarah Sami Mustapha, Embedded Systems Engineer, Author
  • Latoya Kassim, Data Scientist, Author

Keywords

Neural-symbolic systems, AI interpretability, explainable AI (XAI), high-stakes decision-making, hybrid intelligence

Abstract

In high-stakes domains such as healthcare, law, and finance, the need for interpretable artificial intelligence (AI) systems has become increasingly critical. Neural-symbolic integration, which combines the learning capabilities of neural networks with the reasoning strengths of symbolic systems, has emerged as a promising approach to the interpretability challenge. This paper provides a comparative analysis of current neural-symbolic integration techniques, evaluating their effectiveness in enhancing transparency and trust in decision-making processes. Key methods, historical developments, and empirical performance are reviewed. Findings suggest that while significant progress has been made, further refinement is necessary to fully operationalize neural-symbolic methods for deployment in critical applications.
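To make the hybrid idea concrete, the following is a minimal illustrative sketch (not drawn from the paper itself) of a neural-symbolic decision pipeline: a stand-in neural scorer produces a risk probability, and an explicit symbolic rule layer converts that score into a decision with a human-readable rationale. All weights, thresholds, feature values, and rule names here are hypothetical.

```python
# Illustrative neural-symbolic sketch: a "neural" scorer feeds a
# symbolic rule layer that yields an auditable decision trace.
import math

def neural_risk_score(features):
    """Stand-in for a trained network: logistic over a weighted sum."""
    weights = [0.8, -0.5, 1.2]  # hypothetical learned weights
    z = sum(w * x for w, x in zip(weights, features))
    return 1.0 / (1.0 + math.exp(-z))

def symbolic_decision(score, age):
    """Symbolic layer: explicit rules produce a decision plus rationale."""
    trace = []
    if age < 18:  # hard constraint expressed as a readable rule
        trace.append("RULE minor_applicant -> refer to human reviewer")
        return "refer", trace
    if score >= 0.7:  # hypothetical risk threshold
        trace.append(f"RULE high_risk(score={score:.2f} >= 0.70) -> deny")
        return "deny", trace
    trace.append(f"RULE default(score={score:.2f} < 0.70) -> approve")
    return "approve", trace

score = neural_risk_score([0.2, 0.9, 0.1])   # hypothetical applicant features
decision, trace = symbolic_decision(score, age=34)
```

The interpretability gain in this toy setup is that the final decision is justified by named rules over the neural output, rather than by the opaque score alone.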

References

Besold, Tarek R., Artur d’Avila Garcez, Sebastian Bader, Howard Bowman, Pedro Domingos, Pascal Hitzler, and David L. Silver. "Neural-symbolic learning and reasoning: A survey and interpretation." arXiv preprint arXiv:1711.03902, 2017.

Maddukuri, N. "Real-Time Fraud Detection Using IoT and AI: Securing the Digital Wallet." Journal of Computer Engineering and Technology, vol. 5, no. 1, 2022, pp. 81–96. https://doi.org/10.34218/JCET_5_01_008

Garcez, Artur S. d'Avila, Krysia Broda, and Dov M. Gabbay. Neural-Symbolic Learning Systems: Foundations and Applications. Springer, 2002.

Manhaeve, Robin, Sebastijan Dumancic, Angelika Kimmig, Thomas Demeester, and Luc De Raedt. "DeepProbLog: Neural probabilistic logic programming." Advances in Neural Information Processing Systems, vol. 31, 2018.

Riegel, Ryan, et al. "Logical Neural Networks." arXiv preprint arXiv:2006.13155, 2020.

Raedt, Luc De, and Kristian Kersting. Probabilistic Inductive Logic Programming: Theory and Applications. Springer, 2008.

Pearl, Judea. Causality: Models, Reasoning, and Inference. 2nd ed., Cambridge University Press, 2009.

Maddukuri, N. "Modernizing Governance with RPA: The Future of Public Sector Automation." Frontiers in Computer Science and Information Technology, vol. 3, no. 1, 2022, pp. 20–36. https://doi.org/10.34218/FCSIT_03_01_002

Marcus, Gary. "The next decade in AI: Four steps towards robust artificial intelligence." AI Magazine, vol. 40, no. 3, 2019, pp. 54-62.

Rudin, Cynthia. "Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead." Nature Machine Intelligence, vol. 1, 2019, pp. 206-215.

Holzinger, Andreas, et al. "What do we need to build explainable AI systems for the medical domain?" Review Journal of Biomedical and Health Informatics, vol. 24, no. 2, 2020, pp. 1358-1368.

Gunning, David. "Explainable Artificial Intelligence (XAI)." Defense Advanced Research Projects Agency (DARPA), 2017.

Lipton, Zachary C. "The mythos of model interpretability." Queue, vol. 16, no. 3, 2018, pp. 30-57.

Maddukuri, N. "Trust in the Cloud: Ensuring Data Integrity and Auditability in BPM Systems." International Journal of Information Technology and Management Information Systems, vol. 12, no. 1, 2021, pp. 144–160. https://doi.org/10.34218/IJITMIS_12_01_012

Lake, Brenden M., Tomer D. Ullman, Joshua B. Tenenbaum, and Samuel J. Gershman. "Building machines that learn and think like people." Behavioral and Brain Sciences, vol. 40, 2017.

Marcus, Gary, and Ernest Davis. Rebooting AI: Building Artificial Intelligence We Can Trust. Pantheon Books, 2019.

LeCun, Yann, Yoshua Bengio, and Geoffrey Hinton. "Deep learning." Nature, vol. 521, no. 7553, 2015, pp. 436-444.

Russell, Stuart J., and Peter Norvig. Artificial Intelligence: A Modern Approach. 4th ed., Pearson, 2020.

Published

2024-05-23

How to Cite

Sarah Sami Mustapha, & Latoya Kassim. (2024). COMPARATIVE ANALYSIS OF NEURAL-SYMBOLIC INTEGRATION TECHNIQUES IN ENHANCING THE INTERPRETABILITY OF ARTIFICIAL INTELLIGENCE DECISION SYSTEMS IN HIGH-STAKES DOMAINS. International Journal of Information Technology and Electrical Engineering (IJITEE), 13(3), 35-40. https://ijitee.com/index.php/home/article/view/IJITEE_1303004