Enhancing Cybersecurity Threat Detection with Counterfactual Reasoning: A 'What-If' Ontology Approach Using Large Language Models
Hossein Arshadi Soufiani, Henry Kim
This paper proposes a novel approach to enhancing cybersecurity threat detection by integrating counterfactual reasoning with large language models (LLMs) through a structured "what-if" ontology. Traditional AI-based systems often function as black boxes, identifying threats without offering causal explanations or scenario-based reasoning. Our framework enables LLMs to simulate hypothetical attack scenarios and assess alternative outcomes, thereby improving detection accuracy and interpretability. Grounded in the TOVE ontology engineering methodology, the system formalizes key cybersecurity entities, causal relations, and counterfactual conditions using languages such as OWL and SWRL. We evaluate the framework on metrics such as detection accuracy, narrative quality, and reasoning robustness. By unifying theoretical foundations from causal reasoning, scenario planning, and explainable AI, the ontology serves as a semantic backbone for LLM-guided analysis. This work contributes a proactive, explainable, and extensible model for anticipating cyber threats and guiding defensive strategies, with implications for future research and implementation in intelligent threat detection systems.
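To make the "what-if" idea concrete, the sketch below is a minimal, hypothetical illustration (in plain Python, not the paper's OWL/SWRL formalization) of how causal relations between cybersecurity entities might be stored and rendered into a counterfactual prompt for an LLM; all class, entity, and relation names are assumptions for illustration only.

```python
# Minimal sketch, assuming a toy in-memory "what-if" ontology rather than the
# authors' OWL/SWRL model. Entity and relation names are hypothetical.
from dataclasses import dataclass, field

@dataclass
class CausalRelation:
    cause: str      # observed event or attacker action
    effect: str     # resulting system state
    condition: str  # precondition under which the causal link holds

@dataclass
class WhatIfOntology:
    entities: set = field(default_factory=set)
    relations: list = field(default_factory=list)

    def add_relation(self, cause: str, effect: str, condition: str) -> None:
        # Register both endpoints as entities and record the causal link.
        self.entities.update({cause, effect})
        self.relations.append(CausalRelation(cause, effect, condition))

    def counterfactual_prompt(self, observed: str, intervention: str) -> str:
        """Render the causal context and a 'what-if' question as an LLM prompt."""
        facts = "\n".join(
            f"- {r.cause} causes {r.effect} when {r.condition}"
            for r in self.relations
        )
        return (
            "Known causal relations:\n"
            f"{facts}\n\n"
            f"Observed: {observed}\n"
            f"Counterfactual intervention: {intervention}\n"
            "Question: Would the observed outcome still occur under this "
            "intervention? Explain the causal chain step by step."
        )

if __name__ == "__main__":
    onto = WhatIfOntology()
    onto.add_relation("phishing email opened", "credential theft",
                      "multi-factor authentication is not enforced")
    onto.add_relation("credential theft", "lateral movement",
                      "the stolen account has administrative rights")
    print(onto.counterfactual_prompt(
        observed="lateral movement detected on host H1",
        intervention="multi-factor authentication had been enforced",
    ))
```

In the paper's framing, the generated prompt would be answered by an LLM whose scenario reasoning is constrained by the ontology; the structured causal facts are what make the resulting explanation auditable rather than a free-form black-box judgment.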