Speakers
Keynote Speakers
In this talk I will present an overview of what has happened over the 50 years of Prolog and what our position for adding another 50 years might be. Last year we celebrated that Alain Colmerauer, together with Bob Kowalski, "invented" Prolog 50 years ago. Prolog became reality. Prolog provides a novel and unconventional way to look at programming. It has inspired many people and influenced a lot of technology. Widespread usage and commercial success are limited, though. After a short hype, fuelled by the Japanese "fifth generation" project, commercial interest faded away. Only a few people make a living by developing Prolog systems. A larger, but still small, group makes a living applying Prolog in commercial applications. It plays a role in (academic) research and prototyping. Nowadays, "AI" is synonymous with statistical machine learning. This affects Prolog as an "AI language", making the language seem irrelevant to most IT professionals. On the other hand, we see a demand for "explainable systems" as well as obvious shortcomings in systems based purely on statistical methods. Our unconventional language may play a role here.
A wide range of combinatorial search problems can be modelled and solved with Answer Set Programming (ASP). While modern ASP solvers can enumerate solutions quickly, the user faces the problem of dealing with a possibly exponential number of solutions, which may easily run into the millions and beyond. To still be able to reach an understanding of the answer set space, we propose navigation approaches for reaching subspaces that fulfil desirable criteria. We start with an iterative approach to computing a diverse collection of answer sets, which allows some answer sets to be exchanged to improve the size and diversity of the whole collection. Then, we will discuss the concept of weighted faceted answer set navigation, which allows for a quantitative understanding of the answer set space. Weights can be assigned to atoms depending on how much they restrict the remaining solution space, either by counting the number of answer sets (resp. supported models) or by counting the number of atoms still available to choose. Finally, we will present a visual approach to explore solution spaces and apply it to the domain of abstract argumentation.
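To make the counting-based weighting concrete, here is a minimal sketch (not the navigation tools from the talk) that uses the clingo Python API to count how many answer sets of a toy program survive when a given atom is enforced; the fewer that remain, the more that facet restricts the remaining solution space. The example program and atom names are invented purely for illustration.

    import clingo

    def count_answer_sets(program, assumptions=()):
        """Count the answer sets of `program`, optionally forcing some atoms to be true."""
        ctl = clingo.Control(["0"])                  # "0" = enumerate all answer sets
        ctl.add("base", [], program)
        ctl.ground([("base", [])])
        count = 0
        def on_model(_model):
            nonlocal count
            count += 1
        ctl.solve(assumptions=list(assumptions), on_model=on_model)
        return count

    # Toy program: pick at most two of three options, but a and b exclude each other.
    PROGRAM = """
    { a; b; c } 2.
    :- a, b.
    """

    total = count_answer_sets(PROGRAM)
    for name in ("a", "b", "c"):
        remaining = count_answer_sets(PROGRAM, [(clingo.Function(name), True)])
        print(f"facet {name}: {remaining} of {total} answer sets remain")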
Answer Set Programming (ASP) has already been demonstrated to be an efficient knowledge representation and reasoning tool for modeling and solving a number of real-life applications. In this talk, I will give an overview of the successful application of ASP to scheduling problems in the healthcare domain. Specific problems include nurse scheduling, operating room scheduling, chemotherapy treatment, and rehabilitation planning.
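As a flavour of how such scheduling problems are typically encoded in ASP, the following is a deliberately tiny nurse-rostering sketch run through the clingo Python API. The predicates, staffing numbers, and constraints are invented for illustration and are not the encodings used in the talk.

    import clingo

    # Toy nurse-rostering encoding: 3 nurses, 3 days, 2 shifts per day.
    ENCODING = """
    nurse(n1;n2;n3).   day(1..3).   shift(early;late).

    % every (day, shift) slot is covered by exactly one nurse
    1 { assign(N,D,S) : nurse(N) } 1 :- day(D), shift(S).

    % no nurse works both shifts of the same day
    :- assign(N,D,early), assign(N,D,late).

    % workload balance: no nurse covers three or more slots
    :- nurse(N), 3 { assign(N,D,S) : day(D), shift(S) }.
    """

    ctl = clingo.Control(["1"])          # ask for one feasible roster
    ctl.add("base", [], ENCODING)
    ctl.ground([("base", [])])
    ctl.solve(on_model=lambda model: print(model))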
Is human indicative reasoning homogeneous or heterogeneous? The fundamental question here is whether the same inferences are valid. On that criterion, human reasoning is heterogeneous. Narrative interpretation is reasoning in a non-monotonic logic to an interpretation, whereas classical logical reasoning is monotonic and within a single interpretation. The former proposes and the latter disposes (of interpretations). Stenning and van Lambalgen (2008) focus on Logic Programming as a medium for analysing reasoning to interpretations. Here we consider further the relation of our students to classical logic. A simple experiment takes a large class of ‘naive logicians’ in the first session of their first logic course and randomly assigns them either to the task that logic has studied at great length as ‘classical logical reasoning’, or to a contrasting task of counterexample reasoning in a situation strongly signalled as a dispute. The results show that the task psychologists have assumed elicits classical logical reasoning is in fact treated as narrative reasoning by the participants, whereas counterexample reasoning in a dispute brings out classical logical reasoning from the same students. We explore some of the impacts of this finding within the psychology of reasoning, but also raise some questions about what it means for the teaching of logic.
Reference: Stenning, K. and van Lambalgen, M. (2008). Human Reasoning and Cognitive Science. MIT Press, Cambridge, MA.
BIOGRAPHY
Keith Stenning is a cognitive scientist and Honorary Professor at the University of Edinburgh in Scotland, UK.
Stenning received a bachelor's degree in philosophy and psychology at the University of Oxford in 1969, and a PhD in discourse semantics as a basis for a theory of memory in New York in 1975, supervised by George Armitage Miller. Between 1975 and 1983 he taught at Liverpool University before moving to Edinburgh to the Centre for Cognitive Science in 1983. Between 1989 and 1999 he was the director of the Human Communication Research Centre. He is a Distinguished Fellow of the Cognitive Science Society and a Foreign Fellow of the Royal Netherlands National Academy. He was chairman of an Expert Group gathered by the European Commission Directorate-General for Research which proposed some lines of evolutionary cognitive research under the title "What it Means to be Human". His main research interest is integrating logical and psychological accounts of reasoning. Recent work includes investigations of interpretative processes in reasoning and, with Michiel van Lambalgen at the Institute for Logic, Language and Computation in Amsterdam, the use of non-monotonic logic and neural network implementations to model reasoning and the study of the relevance of modern mathematical logic to the study of human reasoning.
Logic programming can play a central role in the quest for building interpretable, explainable, and trustworthy AI systems. We discuss how default theories expressed as logic programs represent inductive generalizations that, in turn, can represent interpretable and explainable machine learning models. Rule-based machine learning algorithms can subsequently be designed that are competitive with mainstream state-of-the-art machine learning systems. We discuss the application of these algorithms to making convolutional neural networks (used for image recognition) explainable. We give an overview of the s(CASP) goal-directed predicate answer set programming system and show how it can be used in flexible ways to generate explanations for predictions made by machine learning models as well as perform counterfactual reasoning. We will also discuss how s(CASP) and large language models together can be used to develop trustworthy (domain-specific) natural language understanding systems.
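To illustrate the kind of default theory referred to above, here is a minimal sketch of the classic "birds fly" default written as a logic program with negation as failure. For convenience it is run with the clingo Python API rather than s(CASP); s(CASP) would additionally evaluate a query in a goal-directed way and return a justification for the answer. The predicates and facts are invented for illustration.

    import clingo

    PROGRAM = """
    bird(tweety).  bird(sam).  penguin(sam).

    % default: a bird flies unless it is abnormal
    flies(X) :- bird(X), not ab(X).
    % exception: penguins are abnormal with respect to flying
    ab(X) :- penguin(X).

    #show flies/1.
    """

    ctl = clingo.Control(["0"])
    ctl.add("base", [], PROGRAM)
    ctl.ground([("base", [])])
    ctl.solve(on_model=lambda model: print(model))   # prints: flies(tweety)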
BIOGRAPHY
Gopal Gupta has been a logic programming researcher since the late 1980s. His research interests are in logic programming, predicate answer set programming, and explainable machine learning. He is currently focused on applying logic programming to automating commonsense reasoning. His group has developed several logic programming systems including some that are publicly available and some that have been developed commercially. His research group's paper on Coinductive Logic Programming (CoLP) received the 10-year test-of-time award at ICLP'16. CoLP is the basis of the s(ASP) and s(CASP) predicate answer set programming systems that his group is developing. His research group was selected to compete in the 4th Amazon Alexa Prize Socialbot Challenge (2020-2021). Gopal obtained his B.Tech. in Computer Science from IIT Kanpur, India, and his MS & PhD degrees from UNC Chapel Hill. Subsequently, he worked in David H.D. Warren's group as a research associate, then as a faculty member at New Mexico State University. Currently, he is a professor of computer science at the University of Texas at Dallas where he also co-directs its Center for Applied AI and Machine Learning. From 2009 to 2013, he served as the President of the Association for Logic Programming. He co-founded the PADL series of conferences in 1999. His research in logic programming and its applications is currently supported by the US National Science Foundation, DARPA, and industry.
Tutorials
This talk addresses computational cognitive vision at the interface of (spatial) language, (spatial) logic, (spatial) cognition, and artificial intelligence. Summarizing recent works, I present general methods for the semantic interpretation of dynamic visuospatial imagery with an emphasis on the ability to perform abstraction, reasoning, and learning with cognitively rooted structured characterizations of commonsense knowledge pertaining to space and motion. I will particularly highlight:
• explainable models of computational visuospatial commonsense at the interface of symbolic and neural techniques;
• deep semantics, entailing systematically formalised declarative (neurosymbolic) reasoning and learning with aspects pertaining to space, space-time, motion, actions & events, and spatio-linguistic conceptual knowledge; and
• general foundational commonsense abstractions of space, time, and motion needed for representation-mediated (grounded) reasoning and learning with dynamic visuospatial stimuli.
The presented works, demonstrated against the backdrop of applications in autonomous driving, visuoauditory media, cognitive robotics, and cognitive psychology, are intended to serve as a systematic model and general methodology integrating diverse, multi-faceted AI methods pertaining to knowledge representation and reasoning, computer vision, and machine learning towards realising practical, human-centred computational visual intelligence. I will conclude by highlighting a bottom-up interdisciplinary approach, at the confluence of Cognition, AI, Interaction, and Design Science, necessary to better appreciate the complexity and spectrum of varied human-centred challenges for the design and (usable) implementation of (explainable) artificial visual intelligence solutions in diverse human-system interaction contexts.
BIOGRAPHY
Mehul Bhatt is Professor of Computer Science within the School of Science and Technology at Örebro University (Sweden). His basic research focusses on the formal, cognitive, and computational foundations for AI technologies with a principal emphasis on knowledge representation, semantics, integration of commonsense reasoning & learning, explainability, and (declarative) spatial representation and reasoning. Mehul Bhatt steers CoDesign Lab (www.codesign-lab.org), an initiative aimed at addressing the confluence of Cognition, Artificial Intelligence, Interaction, and Design Science for the development of human-centred cognitive assistive technologies and interaction systems. Since 2014, he directs the research and consulting group DesignSpace (www.design-space.org) and pursues ongoing research in Cognitive Vision (www.codesign-lab.org/cognitive-vision) and Spatial Reasoning (www.spatial-reasoning.com).
Mehul Bhatt obtained a bachelor's degree in economics (India), a master's in information technology (Australia), and a PhD in computer science (Australia). He has been a recipient of an Alexander von Humboldt Fellowship, a German Academic Exchange Service award (DAAD), and an Australian Post-graduate Award (APA). He was the University of Bremen nominee for the German Research Foundation (DFG) Award: Heinz Maier-Leibnitz-Preis 2014. Previously, Mehul Bhatt was Professor at the University of Bremen (Germany). Further details are available via: www.mehulbhatt.org
Convolutional Neural Networks (CNNs) have been widely used for complex image recognition tasks. Due to the highly entangled correlations learned by the latent features in the convolutional kernels, deriving explanations and human-comprehensible knowledge from CNNs has proven difficult. This tutorial provides an overview of progress with respect to one proposed solution: ERIC (Extracting Relations Inferred from Convolutions) and its related technology EBP (Elite BackPropagation). ERIC provides decompositional, layer-wise explanations for CNNs by reducing the behaviour of one or more layers to a discrete logic program over a set of logical atoms, each corresponding to an individual convolutional kernel. EBP trains CNNs so that each class is associated with a small set of (elite) disentangled kernels, enabling ERIC to produce more compact rules. ERIC and EBP are independent of each other, and so even when EBP has not been applied during training, ERIC is able to yield high-fidelity logic programs. These logic programs yield performance comparable to that of the original CNN, with some information loss to be expected when approximations of multiple layers are chained together. When the logic rules are analysed alongside the data as a visual concept learner, ERIC has been shown to discover relevant concepts when applied to classification tasks, including in the case of specialised knowledge such as in radiology. ERIC rules achieved high fidelity to the CNN on the MNIST data set and a traffic sign classification task with up to 43 classes. Concepts captured by ERIC in the extracted logic program can be transferred to a different CNN that has been trained on a related but different problem in the same domain. For example, concepts identified for the respiratory condition of pleural effusion can be transferred to a COVID-19 classification task. In a radiology application, ERIC has been shown capable of identifying concepts that are not justified anatomically or used by medical experts in their decision making. Current work has been investigating how CNNs can be modified to avoid the use of such undesirable concepts and to improve performance in ways that can increase accountability.
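The following sketch illustrates only the basic idea behind kernel-level extraction described above: each convolutional kernel is treated as a propositional atom that holds when its activation map is sufficiently active, and class decisions are read off rules over those atoms. The activation data, thresholds, and the example rule are invented; this is not ERIC's actual extraction or quantization procedure.

    import numpy as np

    def kernel_atoms(feature_maps, thresholds):
        """Map each kernel's activation map to a boolean atom by thresholding its mean activation.
        feature_maps: array of shape (num_kernels, H, W) for a single input image."""
        means = feature_maps.reshape(feature_maps.shape[0], -1).mean(axis=1)
        return {k: bool(means[k] > thresholds[k]) for k in range(len(means))}

    # A hand-written rule in the style of an extracted logic program:
    #   stop_sign :- k3, k7, not k12.
    def stop_sign(atoms):
        return atoms[3] and atoms[7] and not atoms[12]

    rng = np.random.default_rng(0)
    feature_maps = rng.random((16, 8, 8))     # stand-in for real convolutional feature maps
    thresholds = np.full(16, 0.5)
    atoms = kernel_atoms(feature_maps, thresholds)
    print("stop_sign fires:", stop_sign(atoms))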
BIOGRAPHY
Joe is a Principal Researcher for the Trusted AI Project at Fujitsu Research of Europe, based in Slough, Berkshire, UK. Joe obtained a BSc in Computer Science and an MSc in Applied Artificial Intelligence at the University of Exeter before spending 8 months teaching English in Japan in 2010. Joe then returned to Exeter to write his PhD thesis, “Artificial Development of Neural-Symbolic Networks”, which he completed in 2014. Joe joined Fujitsu Services as a software developer that same year, but the following year moved to Fujitsu Research of Europe (then Fujitsu Laboratories of Europe) as a researcher. Joe’s research at Fujitsu has concerned applications of AI in various fields including, but not limited to, non-destructive testing, cyber security, and medical imaging. With the surge in popularity of neurosymbolic methods, Joe returned to research in this field, which, together with contributions from his colleagues, led to the development of neurosymbolic tools for training convolutional neural networks and extracting interpretable rules from them.
Google Scholar: https://scholar.google.com/citations?user=ofg9dv0AAAAJ&hl=en&oi=sra