Causal Neuro-symbolic Artificial Intelligence: Synergy between Neuro-symbolic and Causal Artificial Intelligence

Thursday, June 19, 2025 - 09:00 am
AI Institute Seminar room
Author : Utkarshani Jaimini
Advisor: Dr. Amit Sheth
Date: June 19th, 2025
Time: 09:00 am
Place: AI Institute Seminar room
 

Abstract

Understanding and reasoning about cause and effect is innate to human cognition. In everyday life, humans continuously engage in causal reasoning and hypothetical retrospection to make decisions, plan actions, and interpret events. This cognitive ability allows us to ask questions such as: “What caused this situation?”, “What will happen if I take this action?”, or “What would have happened had I chosen differently?” This intuitive capacity to form mental models of the world, infer causal relationships, and reason about alternative scenarios, particularly counterfactuals, is central to our intelligence and adaptability.

In contrast, current machine learning (ML) and artificial intelligence (AI) systems, despite significant advances in learning from large-scale data and representing knowledge across time and space, lack a fundamental understanding of causality and counterfactual reasoning. This limitation poses challenges in high-stakes domains such as healthcare, autonomous systems, and manufacturing, where causal reasoning is indispensable for explanation, decision-making, and generalization. As argued by researchers such as Judea Pearl and Gary Marcus, endowing AI systems with causal reasoning capabilities is critical for building robust, generalizable, and human-aligned intelligence.
This dissertation proposes a novel framework: Causal Neuro-Symbolic (Causal NeSy) Artificial Intelligence, an integration of causal modeling with neuro-symbolic (NeSy) AI. The goal of Causal NeSy AI is to bridge the gap between statistical learning and causal reasoning, enabling machines to model, understand, and reason about the underlying causal structure of the world while leveraging the strengths of both neural and symbolic representations. At its core, the framework leverages causal Bayesian networks, encoded through a series of ontologies, to represent and propagate structured causal knowledge. By unifying structured symbolic causal knowledge with neural inference, the framework introduces a scalable and explainable causal reasoning pipeline grounded in knowledge graphs.

The proposed Causal NeSy framework has been validated on the CLEVRER-Humans benchmark dataset, which involves video-based event causality annotated by human experts, and in several real-world domains, including smart manufacturing and autonomous driving, areas that require high levels of robustness, interpretability, and causal understanding. Empirical results demonstrate that integrating causal modeling into NeSy architectures significantly enhances both performance and explainability, particularly in settings with limited data or complex counterfactual scenarios.

This dissertation advances the field of AI by proposing a unified framework that imbues NeSy systems with causal reasoning capabilities. By enabling machines to model, infer, and reason about causal structures, this work takes a crucial step toward building more human-aligned, trustworthy, and generalizable AI systems. It introduces scalable, explainable, and bias-aware methodologies for causal reasoning, moving AI closer to human-like understanding. The contributions pave the way for future intelligent systems capable of meaningful intervention, retrospective explanation, and counterfactual reasoning. The Causal NeSy AI paradigm opens promising avenues for future research at the intersection of causality, learning, and reasoning, a necessary convergence on the path to truly intelligent systems.
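To make the core idea concrete, the minimal sketch below (purely illustrative, not the dissertation's implementation) shows how causal relations expressed as knowledge-graph-style triples can support a simple interventional "what if" query by traversing a causal graph; all entity and relation names are hypothetical examples.

```python
# Illustrative sketch only: causal relations as KG-style triples plus a
# simple interventional query answered by traversing the causal graph.
# Entity and relation names are hypothetical, not from the dissertation.

from collections import defaultdict

# Hypothetical causal triples for a driving scenario: (cause, relation, effect)
causal_triples = [
    ("heavy_rain", "causes", "wet_road"),
    ("wet_road", "causes", "reduced_traction"),
    ("reduced_traction", "causes", "longer_braking_distance"),
    ("longer_braking_distance", "causes", "rear_end_collision"),
]

# Adjacency list over the "causes" relation.
children = defaultdict(list)
for cause, rel, effect in causal_triples:
    if rel == "causes":
        children[cause].append(effect)

def downstream_effects(event):
    """Return all events reachable from `event` along causal edges."""
    seen, stack = set(), [event]
    while stack:
        node = stack.pop()
        for effect in children[node]:
            if effect not in seen:
                seen.add(effect)
                stack.append(effect)
    return seen

# Interventional query: if "wet_road" is made true, what may follow?
print(downstream_effects("wet_road"))
# e.g. {'reduced_traction', 'longer_braking_distance', 'rear_end_collision'}
```

A full causal Bayesian network would additionally attach probabilities to these edges; the traversal above only illustrates how symbolic causal structure can be queried.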

A Neuro-Symbolic AI Approach to Scene Understanding in Autonomous Systems

Monday, June 23, 2025 - 10:00 am
Online

DISSERTATION DEFENSE
 

Author : Ruwan Tharanga Wickramarachchige Don
Advisor: Dr. Amit Sheth
Date: June 23rd, 2025
Time: 10:00 am
Place: AI Institute Seminar room
Zoom Link / Online Access: Join Zoom Meeting
https://sc-edu.zoom.us/j/89344836465?pwd=5sm3lb06ESCU8kcFmNhBWKLL8MnwhF…

 

Meeting ID: 893 4483 6465

Passcode: 180289


Abstract

 

Scene understanding remains a central challenge in the machine perception of autonomous systems. It requires the integration of multiple sources of information, background knowledge, and heterogeneous sensor data to perceive, interpret, and reason about both physical and semantic aspects of dynamic environments. Current approaches to scene understanding primarily rely on computer vision and deep learning models that operate directly on raw sensor data to perform tasks such as object detection, recognition, and localization. However, in real-world domains such as autonomous driving and smart manufacturing (Industry 4.0), this sole reliance on raw perceptual data exposes limitations in safety, robustness, generalization, and explainability. To address these challenges, this dissertation proposes a novel perspective on scene understanding using a neurosymbolic AI approach that combines knowledge representation, representation learning, and reasoning to advance cognitive and visual reasoning in autonomous systems.

Our approach involves several key contributions. First, we introduce methods for constructing unified knowledge representations that integrate scene data with background knowledge. This includes the development of a dataset-agnostic scene ontology and the construction of knowledge graphs (KGs) to represent multimodal data from autonomous systems. Specifically, we introduce DSceneKG, a suite of large-scale KGs representing real-world driving scenes across multiple autonomous driving datasets. DSceneKG has already been utilized in several emerging neurosymbolic AI tasks, including explainable scene clustering and causal reasoning, and has been adopted for an industrial cross-modal retrieval task. Second, we propose methods to enhance the expressiveness of scene knowledge in sub-symbolic representations to support downstream learning tasks that rely on high-quality translation of KGs into embedding space. Our investigation identifies effective KG patterns and structures that enhance the semantic richness of KG embeddings, thereby improving model reasoning capabilities.

Third, we introduce knowledge-based entity prediction (KEP), a novel cognitive visual reasoning task that leverages relational knowledge in KGs to predict entities that are not directly observed but are likely to exist given the scene context; we evaluate this approach on two high-quality autonomous driving datasets. Fourth, we present CLUE, a context-based method for labeling unobserved entities, designed to improve annotation quality in existing multimodal datasets by incorporating contextual knowledge of entities that may be missing due to perceptual failures. Finally, by integrating these contributions, we introduce CUEBench, a benchmark for contextual entity prediction that systematically evaluates both neurosymbolic and foundation model-based approaches (i.e., large language models and multimodal language models). CUEBench fills a critical gap in current benchmarking by targeting high-level cognitive reasoning under perceptual incompleteness, reflecting real-world challenges faced by autonomous systems.
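As a rough intuition for how entity prediction over a scene KG can work, the sketch below scores candidate unobserved entities with a TransE-style embedding model; the embeddings, entity names, and the "includes" relation are all hypothetical, and this is a toy stand-in rather than the KEP method itself.

```python
# Illustrative sketch only: ranking candidate unobserved entities for a scene
# with TransE-style KG embeddings. All names and vectors are hypothetical.

import numpy as np

rng = np.random.default_rng(0)
dim = 8

# Toy embeddings for a scene node, a relation, and candidate entities.
entities = ["scene_42", "pedestrian", "crosswalk", "traffic_light"]
emb = {name: rng.normal(size=dim) for name in entities}
rel_includes = rng.normal(size=dim)

def transe_score(head, relation, tail):
    """Lower is better: TransE assumes head + relation ≈ tail for true triples."""
    return float(np.linalg.norm(emb[head] + relation - emb[tail]))

# Rank entities the scene is likely to include but that perception may have missed.
candidates = ["pedestrian", "crosswalk", "traffic_light"]
ranked = sorted(candidates, key=lambda e: transe_score("scene_42", rel_includes, e))
print(ranked)  # candidates ordered from most to least plausible under this toy model
```

In a trained system the embeddings would be learned from the scene KG rather than sampled at random; the point here is only the shape of the scoring and ranking step.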