Poster in Workshop on neuro Causal and Symbolic AI (nCSI)
Learning Neuro-symbolic Programs for Language-Guided Robotic Manipulation
Namasivayam Kalithasan · Himanshu Singh · Vishal Bindal · Arnav Tuli · Vishwajeet Agrawal · Rahul Jain · Parag Singla · Rohan Paul
Given a natural language instruction together with an input and an output scene, our goal is to train a neuro-symbolic model that outputs a manipulation program which, when executed by the robot on the input scene, produces the desired output scene. Prior approaches to this task have one of the following limitations: (i) they rely on hand-coded symbols for concepts, limiting generalization beyond those seen during training (R. Paul et al., 2016); (ii) they infer action sequences from instructions but require dense sub-goal supervision (C. Paxton et al., 2019); or (iii) they lack the semantics required for the deeper object-centric reasoning inherent in interpreting complex instructions (M. Shridhar et al., 2022). In contrast, our approach is neuro-symbolic and handles linguistic as well as perceptual variations, is end-to-end differentiable and requires no intermediate supervision, and makes use of symbolic reasoning constructs that operate on a latent neural object-centric representation, allowing deeper reasoning over the input scene. Our experiments in a simulated environment with a 7-DOF manipulator, covering instructions with varying numbers of steps, scenes with varying numbers of objects, and objects with unseen attribute combinations, demonstrate that our model is robust to such variations and significantly outperforms existing baselines, particularly in generalization settings.
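To make the idea of a manipulation program operating on an object-centric scene representation concrete, here is a minimal, hypothetical sketch (not the authors' code): concept grounding is mocked with a hard attribute lookup, whereas in the actual model these scores would come from learned neural modules and be combined differentiably.

```python
# Hypothetical sketch of executing a symbolic manipulation program
# over an object-centric scene representation. All names here
# (SceneObject, concept, filter_objects, move) are illustrative,
# not from the paper's implementation.

from dataclasses import dataclass
from typing import Callable, List, Tuple


@dataclass
class SceneObject:
    features: dict           # stand-in for a latent object embedding
    position: Tuple[float, float]


def concept(name: str) -> Callable[[SceneObject], float]:
    # Returns a soft score in [0, 1]; a hard lookup stands in for a
    # learned, differentiable concept-grounding network.
    return lambda obj: 1.0 if name in obj.features.values() else 0.0


def filter_objects(scene: List[SceneObject],
                   score: Callable[[SceneObject], float]):
    # Soft filter: attach a concept score to every object in the scene.
    return [(obj, score(obj)) for obj in scene]


def argmax_object(scored) -> SceneObject:
    # Pick the highest-scoring object (a differentiable model would
    # instead keep the soft attention weights).
    return max(scored, key=lambda t: t[1])[0]


def move(obj: SceneObject, target: Tuple[float, float]) -> SceneObject:
    # Action primitive: the object relocated to the target position.
    return SceneObject(obj.features, target)


# Program for an instruction like "move the red cube onto the blue cube":
scene = [
    SceneObject({"color": "red", "shape": "cube"}, (0.0, 0.0)),
    SceneObject({"color": "blue", "shape": "cube"}, (0.5, 0.5)),
]
red_cube = argmax_object(filter_objects(scene, concept("red")))
blue_cube = argmax_object(filter_objects(scene, concept("blue")))
result = move(red_cube, blue_cube.position)
print(result.position)  # (0.5, 0.5)
```

Because every step is a composable operator over scored objects, replacing the hard lookup and argmax with neural scoring and soft attention yields the end-to-end differentiable pipeline the abstract describes, with no intermediate supervision needed for the filtering steps.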