Introduction to our Lab

The current research focus of our lab is to build automated systems that can understand text, images, and videos, and to develop the underlying methodologies and address the associated challenges. We then apply that understanding to various Artificial Intelligence (AI), Robotics, and Human-Centered AI domains such as
  • teaching robots through demonstrations and language instructions,
  • assisting in health care,
  • enabling scientific discovery through automated literature processing,
  • human-machine collaboration on difficult tasks such as software vulnerability detection, and
  • human-robot collaboration.

Our lab's focus and unique strength is to augment machine learning (including deep learning approaches) with knowledge and reasoning for the above tasks: in most cases there is accumulated task-relevant knowledge, and it is often important to apply commonsense reasoning. In pursuing the combination of knowledge and reasoning with machine learning, we face several questions and challenges, such as:
  • how to incorporate knowledge and reasoning into machine learning methods;
  • how to acquire knowledge, especially commonsense knowledge;
  • how to identify key aspects of commonsense knowledge;
  • how to figure out what knowledge is missing;
  • how to obtain knowledge from text;
  • how to figure out appropriate knowledge representation formalisms to use;
  • how to determine the appropriate knowledge learning approach to use;
  • how to use question answering datasets to acquire knowledge;
  • how to use crowdsourcing for knowledge acquisition; and
  • how to reason in the face of error-prone knowledge extraction methods and in the absence of a unified knowledge representation formalism.
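As a toy illustration of the kind of defeasible commonsense inference mentioned above (a hypothetical sketch for this page, not code from any of our projects), a default rule such as "birds normally fly" can be handled by assuming the default holds unless an exception is provable, emulated here in plain Python:

```python
# Toy illustration of nonmonotonic (default) commonsense reasoning.
# All names (flies, facts, "tweety", "opus") are hypothetical examples.

def flies(animal, facts):
    """Default rule: birds fly, unless known to be an exception."""
    props = facts.get(animal, set())
    if "bird" not in props:
        return False
    # Negation as failure: conclude flight unless an exception is derivable.
    return "penguin" not in props

facts = {
    "tweety": {"bird"},
    "opus": {"bird", "penguin"},
}
print(flies("tweety", facts))  # True: the default applies
print(flies("opus", facts))    # False: the exception defeats the default
```

In answer set programming, the formalism much of our work builds on, the same default is written declaratively, e.g. `flies(X) :- bird(X), not ab(X).`; the `not in` check above plays the role of negation as failure.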

Our research falls under the general area of AI but currently has a special focus on Cognition; hence the name of our lab. We work closely with several other labs in CIDSE at ASU. In particular, we have joint projects and/or joint publications with the Active Perception Group, the Yochan Lab, the SEFCOM Lab, and the Interactive Robotics Lab. Our external collaborators include the HRI Lab at Tufts, the KLAP Lab at NMSU, and the Knowledge-Based Systems Group at TU Wien.

    Cognition (NLP)

    • Self-Supervised Knowledge Triplet Learning. Banerjee, Baral. EMNLP 2020.
    • Improving Natural Language Inference. Mitra, Shrivastava, Baral. AAAI 2020 (to appear).
    • Knowledge Hunting & Neural Language Models for WSC. Prakash, Sharma, Mitra, Baral. ACL 2019.
    • Open-Book QA. Banerjee, Pal, Mitra, Baral. ACL 2019.
    • QA using ASP and NLI. Mitra, Clark, Tafjord, Baral. AAAI 2019.
    • Solving simple word arithmetic problems. Mitra, Baral. AAAI 2016.
    • ASP-based ILP to solve bAbI. Mitra, Baral. AAAI 2016.
    • A Platform to build NL to KR translation systems. Nguyen, Mitra, Baral. ACL 2015.



    Cognition (Vision)

    • MUTANT: OOD generalization in VQA. Gokhale, Banerjee, Baral, Yang. EMNLP 2020.
    • Video2Commonsense: Commonsense-Enriched Captions. Fang, Gokhale, Banerjee, Baral, Yang. EMNLP 2020.
    • Diverse Visuo-Linguistic Question Answering (DVLQA) Challenge. Sampat, Yang, Baral. Findings of EMNLP 2020.
    • VQA-LOL: VQA under the Lens of Logic. Gokhale, Banerjee, Baral, Yang. ECCV 2020.
    • Event-Sequences from Image Pairs. Gokhale, Sampat, Fang, Yang, Baral. CVPR 2019 Workshop.
    • Integrating Knowledge and Reasoning in Image Understanding. Aditya, Yang, Baral. IJCAI 2019.
    • Spatial Knowledge Distillation to aid Visual Reasoning. Aditya, Saha, Yang, Baral. WACV 2019.
    • Reasoning using Neural architectures for VQA. Aditya, Yang, Baral. AAAI 2018.
    • Image and multi-modal document Understanding and Visual QA. Aditya, Yang, Baral, Aloimonos. UAI 2018.
    • Image Understanding using Scene Description Graphs. Aditya, Yang, Baral, Aloimonos, Fermüller. CVIU Journal, Dec 2017.
    • DeepIU: An architecture for image understanding. Aditya, Baral, Yang, Aloimonos, Fermüller. Advances in Cognitive Systems 2016.

    Human-Centered AI

    • Imitation Learning: Combining Language, Vision and Demonstration. Stepputtis, Campbell, Phielipp, Lee, Baral, Ben-Amor. NeurIPS 2020.
    • Cohort Selection from Clinical Notes. Rawal, Prakash, Adhya, Kulkarni, Baral, Devarakonda.
    • Identifying novel drug indications. Tari, Vo, Liang, Patel, Baral, Cai. PLoS ONE.
    • Hypothesis Formation in Biochemical Networks. Tran, Baral, Nagaraj, Joshi. ECCB 2005.
    • Representing and reasoning about cell signaling networks. Baral, Chancellor, Tran, Tran, Joy, Berens. ISMB/ECCB 2004.
    • High Level Language for Human-Robot Interaction. Baral, Lumpkin, Scheutz. Advances in Cognitive Systems, 2017.

    AI Foundations

    • Book: Knowledge representation, reasoning and declarative problem solving. Baral. Cambridge University Press.
    • Probabilistic Reasoning with Answer Sets. Baral, Gelfond, Rushton. TPLP 2009.
    • Using P-log for Causal and Counterfactual Reasoning and Non-Naive Conditioning. Baral, Hunsaker. IJCAI 2007.
    • Combining Multiple Knowledge Bases. Baral, Minker, Kraus. IEEE Transactions on Knowledge and Data Engineering, June 1991.
    • Formalizing sensing actions. Baral, Son. AI Journal, Jan 2001.
    • Maintenance goals of agents. Baral, Eiter, Bjaereland, Nakamura.
    • Planning in Non-deterministic Domains. Baral, Eiter, Zhao. AAAI 2005.
    • Elaboration Tolerant Revision of Goals. Baral, Zhao. AAAI 2008.
    • Modeling multi-agent scenarios involving agents' knowledge about other's knowledge. Baral, Gelfond, Son, Pontelli. AAMAS 2010.
    • Multi-Agent Action Modeling using Perspective Fluents. Baral, Gelfond, Pontelli, Son. CommonSense 2015, AAAI Spring Symposium 2015.
    • Incremental and Iterative Learning of Answer Set Programs. Mitra, Baral. TPLP 2018.