These articles provide a conceptual overview of my research. You can also find my articles on my Google Scholar profile.

Neurosymbolic AI for Enhancing Instructability in Generative AI

Published in IEEE Intelligent Systems, 2024

In this article, we explore the use of a symbolic task planner to decompose high-level instructions into structured tasks, a neural semantic parser to ground these tasks into executable actions, and a neuro-symbolic executor to implement these actions while dynamically maintaining an explicit representation of state. We also seek to show that this neurosymbolic approach enhances the reliability and context-awareness of task execution, enabling LLMs to dynamically interpret and respond to a wider range of instructional contexts with greater precision and flexibility.

Recommended citation: Sheth, A., Pallagani, V., & Roy, K. (2024). Neurosymbolic AI for Enhancing Instructability in Generative AI. IEEE Intelligent Systems.
Download Paper

Neurosymbolic Value-Inspired Artificial Intelligence (Why, What, and How)

Published in IEEE Intelligent Systems, 2024

The rapid progression of artificial intelligence (AI) systems, facilitated by the advent of large language models (LLMs), has resulted in their widespread application to provide human assistance across diverse industries. This trend has sparked significant discourse centered around the ever-increasing need for LLM-based AI systems to function among humans as a part of human society. Toward this end, neurosymbolic AI systems are attractive because of their potential to enable interpretable interfaces for facilitating value-based decision making by leveraging explicit representations of shared values. In this article, we introduce substantial extensions to Kahneman’s System 1 and System 2 framework and propose a neurosymbolic computational framework called value-inspired AI (VAI). The framework outlines the crucial components essential for the robust and practical implementation of VAI systems, representing and integrating various dimensions of human values. Finally, we offer insights into the current progress made in this direction and outline potential future directions for the field.

Recommended citation: Sheth, A., & Roy, K. (2024). Neurosymbolic Value-Inspired Artificial Intelligence (Why, What, and How). IEEE Intelligent Systems, 39(1), 5-11.
Download Paper

Neurosymbolic Artificial Intelligence (Why, What, and How)

Published in IEEE Intelligent Systems, 2023

Humans interact with the environment using a combination of perception—transforming sensory inputs from their environment into symbols, and cognition—mapping symbols to knowledge about the environment for supporting abstraction, reasoning by analogy, and long-term planning. Human perception-inspired machine perception, in the context of artificial intelligence (AI), refers to large-scale pattern recognition from raw data using neural networks trained using self-supervised learning objectives such as next-word prediction or object recognition. On the other hand, machine cognition encompasses more complex computations, such as using knowledge of the environment to guide reasoning, analogy, and long-term planning. Humans can also control and explain their cognitive functions. This seems to require the retention of symbolic mappings from perception outputs to knowledge about their environment. For example, humans can follow and explain the guidelines and safety constraints driving their decision making in safety-critical applications such as health care, criminal justice, and autonomous driving.

Recommended citation: Sheth, A., Roy, K., & Gaur, M. (2023). Neurosymbolic artificial intelligence (why, what, and how). IEEE Intelligent Systems, 38(3), 56-62.
Download Paper

Process Knowledge-Infused AI: Toward User-Level Explainability, Interpretability, and Safety

Published in IEEE Internet Computing, 2022

Using the examples of mental health and cooking recipes for diabetic patients, we show why, what, and how to incorporate process knowledge along with domain knowledge in machine learning.

Recommended citation: Sheth, A., Gaur, M., Roy, K., Venkataraman, R., & Khandelwal, V. (2022). Process knowledge-infused AI: Toward user-level explainability, interpretability, and safety. IEEE Internet Computing, 26(5), 76-84.
Download Paper

Knowledge-intensive Language Understanding for Explainable AI

Published in IEEE Internet Computing, 2021

This article covers how the inclusion of explicit knowledge helps explainable AI systems provide human-understandable explanations and supports effective decision making.

Recommended citation: Sheth, A., Gaur, M., Roy, K., & Faldu, K. (2021). Knowledge-intensive language understanding for explainable AI. IEEE Internet Computing, 25(5), 19-24.
Download Paper