Logic programming can play a central role in the quest to build interpretable, explainable, and trustworthy AI systems. We discuss how default theories expressed as logic programs represent inductive generalizations that, in turn, can serve as interpretable and explainable machine learning models. Rule-based machine learning algorithms can then be designed that are competitive with mainstream state-of-the-art machine learning systems. We discuss the application of these algorithms to making convolutional neural networks---used for image recognition---explainable. We give an overview of the s(CASP) goal-directed predicate answer set programming system and show how it can be used in flexible ways to generate explanations for predictions made by machine learning models, as well as to perform counterfactual reasoning. We also discuss how s(CASP) and large language models can be used together to develop trustworthy (domain-specific) natural language understanding systems.
Gopal Gupta has been a logic programming researcher since the late 1980s. His research interests are in logic programming, predicate answer set programming, and explainable machine learning. He is currently focused on applying logic programming to automating commonsense reasoning. His group has developed several logic programming systems, including some that are publicly available and some that have been developed commercially. His research group's paper on Coinductive Logic Programming (CoLP) received the 10-year test-of-time award at ICLP'16. CoLP is the basis of the s(ASP) and s(CASP) predicate answer set programming systems that his group is developing. His research group was selected to compete in the 4th Amazon Alexa Prize Socialbot Challenge (2020-2021). Gopal obtained his B.Tech. in Computer Science from IIT Kanpur, India, and his MS & PhD degrees from UNC Chapel Hill. Subsequently, he worked in David H.D. Warren's group as a research associate, then as a faculty member at New Mexico State University. Currently, he is a professor of computer science at the University of Texas at Dallas, where he also co-directs its Center for Applied AI and Machine Learning. From 2009 to 2013, he served as the President of the Association for Logic Programming. He co-founded the PADL series of conferences in 1999. His research in logic programming and its applications is currently supported by the US National Science Foundation, DARPA, and industry.
This talk addresses computational cognitive vision at the interface of (spatial) language, (spatial) logic, (spatial) cognition, and artificial intelligence. Summarizing recent work, I present general methods for the semantic interpretation of dynamic visuospatial imagery with an emphasis on the ability to perform abstraction, reasoning, and learning with cognitively rooted structured characterizations of commonsense knowledge pertaining to space and motion. I will particularly highlight:
• explainable models of computational visuospatial commonsense at the interface of symbolic and neural techniques;
• deep semantics, entailing systematically formalised declarative (neurosymbolic) reasoning and learning with aspects pertaining to space, space-time, motion, actions & events, spatio-linguistic conceptual knowledge; and
• general foundational commonsense abstractions of space, time, and motion needed for representation-mediated (grounded) reasoning and learning with dynamic visuospatial stimuli.
The presented works – demonstrated against the backdrop of applications in autonomous driving, visuoauditory media, cognitive robotics, and cognitive psychology – are intended to serve as a systematic model and general methodology integrating diverse, multi-faceted AI methods pertaining to knowledge representation and reasoning, computer vision, and machine learning towards realising practical, human-centred, computational visual intelligence. I will conclude by highlighting a bottom-up interdisciplinary approach – at the confluence of Cognition, AI, Interaction, and Design Science – necessary to better appreciate the complexity and spectrum of varied human-centred challenges for the design and (usable) implementation of (explainable) artificial visual intelligence solutions in diverse human-system interaction contexts.
Mehul Bhatt is Professor of Computer Science within the School of Science and Technology at Örebro University (Sweden). His basic research focusses on the formal, cognitive, and computational foundations for AI technologies with a principal emphasis on knowledge representation, semantics, integration of commonsense reasoning & learning, explainability, and (declarative) spatial representation and reasoning. Mehul Bhatt steers CoDesign Lab (www.codesign-lab.org), an initiative aimed at addressing the confluence of Cognition, Artificial Intelligence, Interaction, and Design Science for the development of human-centred cognitive assistive technologies and interaction systems. Since 2014, he has directed the research and consulting group DesignSpace (www.design-space.org) and pursues ongoing research in Cognitive Vision (www.codesign-lab.org/cognitive-vision) and Spatial Reasoning (www.spatial-reasoning.com).
Mehul Bhatt obtained a bachelor's degree in economics (India), a master's in information technology (Australia), and a PhD in computer science (Australia). He has been a recipient of an Alexander von Humboldt Fellowship, a German Academic Exchange Service award (DAAD), and an Australian Post-graduate Award (APA). He was the University of Bremen nominee for the German Research Foundation (DFG) Award: Heinz Maier-Leibnitz-Preis 2014. Previously, Mehul Bhatt was Professor at the University of Bremen (Germany). Further details are available via: www.mehulbhatt.org
Joe is a Principal Researcher for the Trusted AI Project at Fujitsu Research of Europe, based in Slough, Berkshire, UK. Joe obtained a BSc in Computer Science and an MSc in Applied Artificial Intelligence at the University of Exeter before spending 8 months teaching English in Japan in 2010. Joe then returned to Exeter to write his PhD thesis, “Artificial Development of Neural-Symbolic Networks”, which he completed in 2014. Joe joined Fujitsu Services as a software developer that same year; the following year he moved to Fujitsu Research of Europe (then Fujitsu Laboratories of Europe) as a researcher. Joe’s research at Fujitsu has concerned applications of AI in various fields, including but not limited to non-destructive testing, cyber security, and medical imaging. With the surge in popularity of neurosymbolic methods, Joe returned to research in this field, which, together with contributions from his colleagues, led to the development of neurosymbolic tools for training convolutional neural networks and extracting interpretable rules from them.
Google Scholar: https://scholar.google.com/citations?user=ofg9dv0AAAAJ&hl=en&oi=sra