Talk in Expo Workshop: Perspectives on Neurosymbolic Artificial Intelligence Research
Decision procedures for real-valued reasoning
Ryan Riegel
We introduce Logical Neural Networks, a new neuro-symbolic framework that creates a one-to-one correspondence between a modified form of the standard differentiable neuron and a logic gate in a weighted form of real-valued logic. The key modifications of the neuron model are a) the ability to perform inference in the reverse direction, so that the equivalent of logical inference rules such as modus ponens can be carried out within the message-passing paradigm of neural networks, and b) learning with constraints on the weights to enforce logical behavior, plus a new kind of loss term, contradiction loss, which maximizes logical consistency in the face of imperfect and inconsistent knowledge. The result differs significantly from other neuro-symbolic ideas in that 1) the model is fully disentangled and interpretable, since every neuron has a meaning; 2) the model can perform both classical logical deduction and its real-valued generalization (which allows for the representation and propagation of uncertainty) exactly, as special cases, rather than approximately as in nearly all other approaches; and 3) the model is compositional and modular, allowing knowledge to be fully reused across tasks.
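
To make the two modifications concrete, below is a minimal NumPy sketch of one such gate, assuming a Łukasiewicz-style weighted AND with truth values in [0, 1]. The class name `WeightedAnd`, the clamping function, and the bound-based contradiction penalty are illustrative choices for exposition, not the framework's actual API.

```python
import numpy as np

def clamp(x):
    """Clamp to [0, 1], the truth-value range of the real-valued logic."""
    return np.clip(x, 0.0, 1.0)

class WeightedAnd:
    """Illustrative weighted real-valued AND gate (Lukasiewicz-style).

    upward:   y = clamp(beta - sum_i w_i * (1 - x_i))
    downward: the same relation solved for one input given a bound on
              the gate's truth, i.e. inference run in reverse through
              the network, analogous to rules such as modus ponens.
    """
    def __init__(self, weights, beta=1.0):
        self.w = np.asarray(weights, dtype=float)
        self.beta = beta

    def upward(self, x):
        # Forward pass: truth value of the conjunction from its inputs.
        x = np.asarray(x, dtype=float)
        return clamp(self.beta - np.sum(self.w * (1.0 - x)))

    def downward(self, y_lower, x, i):
        # Reverse pass: a lower bound on input i implied by a lower
        # bound y_lower on the gate, given the other inputs' truths.
        x = np.asarray(x, dtype=float)
        others = np.sum(np.delete(self.w * (1.0 - x), i))
        if self.w[i] == 0.0:
            return 0.0  # an unweighted input is unconstrained
        return clamp(1.0 - (self.beta - y_lower - others) / self.w[i])

def contradiction_loss(lower, upper):
    """Penalty that is nonzero exactly when a neuron's lower truth
    bound exceeds its upper bound, i.e. when its bounds contradict."""
    return np.sum(np.maximum(0.0, np.asarray(lower) - np.asarray(upper)))

gate = WeightedAnd(weights=[1.0, 1.0], beta=1.0)
print(gate.upward([0.9, 0.8]))            # 0.7: forward truth of A AND B
print(gate.downward(0.7, [0.9, 0.8], 1))  # 0.8: bound on B from gate and A
print(contradiction_loss([0.8], [0.6]))   # 0.2: inconsistent bounds
```

Running the gate downward from a bound on its output is what lets inference-rule behavior live inside ordinary message passing, and summing the contradiction penalty over all neurons gives a differentiable training signal that pushes the learned weights toward logically consistent behavior even when the supplied knowledge is imperfect.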