Introduction to our Lab


The current research focus of our lab is to build automated systems that can understand text, images, and videos, and to develop the underlying methodologies and address the associated challenges. We then apply that understanding to various Artificial Intelligence (AI), Robotics, and Human-Centered AI domains such as
  • teaching robots through demonstrations and language instructions,
  • assisting in health care,
  • enabling scientific discovery through automated literature processing,
  • human-machine collaboration on difficult tasks such as software vulnerability detection, and
  • human-robot collaboration.

  Our lab's focus and unique selling point (USP) is to augment machine learning (including deep learning approaches) with knowledge and reasoning for the above tasks, since in most cases there is accumulated task-relevant knowledge and commonsense reasoning is often important. In pursuing the use of knowledge and reasoning together with machine learning, we face several questions and challenges, such as:
  • how to incorporate knowledge and reasoning into machine learning methods;
  • how to acquire knowledge, especially commonsense knowledge;
  • how to identify key aspects of commonsense knowledge;
  • how to figure out what knowledge is missing;
  • how to obtain knowledge from text;
  • how to figure out appropriate knowledge representation formalisms to use;
  • how to determine the appropriate knowledge learning approach to use;
  • how to use question answering datasets to acquire knowledge;
  • how to use crowdsourcing for knowledge acquisition; and
  • how to reason in the face of error-prone knowledge extraction methods and in the absence of a unified knowledge representation formalism.

  Our research falls under the general area of AI but currently has a special focus on Cognition; hence the name of our lab.
    We work closely with several other labs in CIDSE at ASU. In particular, we have joint projects and/or joint publications with the Active Perception Group, the Yochan Lab, the Safecom Lab, and the Interactive Robotics Lab. Our external collaborators include the HRI Lab at Tufts, the KLAP Lab at NMSU, and the Knowledge-Based Systems Group at TU Wien.

    Cognition (NLP)

    Cognition (Vision)

    Human-Centered AI

    AI Foundations

    Improving Natural Language Inference

    Mitra, Shrivastava, Baral
    AAAI 2020 (to appear)
     
    Event-Sequences from Image Pairs

    Gokhale, Sampat, Fang, Yang, Baral.
    Preprint
    (CVPR'19 Vision Meets Cognition Workshop)
     
    Imitation Learning: Combining Language, Vision and Demonstration

    Stepputtis, Campbell, Phielipp, Baral, Ben-Amor.
    NeurIPS 2019 Workshop on Robot Learning
     
    Book: Knowledge representation, reasoning and declarative problem solving

    Baral
    Cambridge University Press
     
    Knowledge Hunting & Neural Language Models for WSC

    Prakash, Sharma, Mitra, Baral.
    ACL 2019
     
    Integrating Knowledge and Reasoning in Image Understanding.

    Aditya, Yang, Baral.
    IJCAI 2019
     
    Cohort Selection from Clinical Notes

    Rawal, Prakash, Adhya, Kulkarni, Baral, Devarakonda
    Preprint
     
    Probabilistic Reasoning with Answer Sets

    Baral, Gelfond, Rushton
    TPLP 2009
     
    Open-Book QA

    Banerjee, Pal, Mitra, Baral.
    ACL 2019
     
    Spatial Knowledge Distillation to aid Visual Reasoning.

    Aditya, Saha, Yang, Baral
    WACV 2019
     
    Discovering drug-drug interactions

    Tari, Anwar, Liang, Cai and Baral.
    Bioinformatics 26(18), 2010
    (special issue of ECCB 2010)
     
    Using P-log for Causal and Counterfactual Reasoning and Non-Naive Conditioning

    Baral, Hunsaker
    IJCAI 2007
     
    QA using ASP and NLI

    Mitra, Clark, Tafjord, Baral.
    AAAI 2019
     
    Reasoning using Neural architectures for VQA

    Aditya, Yang, Baral
    AAAI 2018
     
    Identifying novel drug indications

    Tari, Vo, Liang, Patel, Baral, Cai
    PLoS ONE
     
    Combining Multiple Knowledge Bases

    Baral, Minker, Kraus
    IEEE Transactions on Knowledge and Data Engineering, June 1991
     
    Solving simple word arithmetic problems

    Mitra, Baral.
    AAAI 2016
     
    Image and multi-modal document Understanding and Visual QA

    Aditya, Yang, Baral, Aloimonos
    UAI 2018
     
    Hypothesis Formation in Biochemical Networks

    Tran, Baral, Nagaraj, Joshi
    ECCB 2005
     
    Formalizing sensing actions

    Baral, Son
    AI Journal, Jan 2001
     
    ASP-based ILP to solve bAbI

    Mitra, Baral.
    AAAI 2016
     
    Image Understanding using Scene Description Graph.

    Aditya, Yang, Baral, Aloimonos, Fermuller
    CVIU Journal. Dec 2017
     
    Representing and reasoning about cell signaling networks

    Baral, Chancellor, Tran, Tran, Joy, Berens
    ISMB/ECCB 2004
     
    Maintenance goals of agents

    Baral, Eiter, Bjaereland, Nakamura
     
    A Platform to build NL to KR translation systems

    Nguyen, Mitra, Baral.
    ACL 2015
     
    DeepIU: An architecture for image understanding.

    Aditya, Baral, Yang, Aloimonos, Fermuller
    Advances in Cognitive Systems 2016.
     
    High Level Language for Human-Robot Interaction

    Baral, Lumpkin, Scheutz
    Advances in Cognitive Systems, 2017.
     
    Planning in Non-deterministic Domains

    Baral, Eiter, Zhao
    AAAI 2005
     
    Elaboration Tolerant Revision of Goals

    Baral, Zhao
    AAAI 2008
     
    Modeling multi-agent scenarios involving agents' knowledge about other's knowledge

    Baral, Gelfond, Son, Pontelli
    AAMAS 2010
     
    Multi-Agent Action Modeling using Perspective Fluents

    Baral, Gelfond, Pontelli, Son
    CommonSense 2015,
    AAAI Spring Symposium 2015.
     
    Incremental and Iterative Learning of Answer Set Programs

    Mitra, Baral
    TPLP 2018.