Towards Stable Symbol Grounding with Zero-Suppressed State AutoEncoder

by Masataro Asai, et al.

While classical planning has been an active branch of AI, its applicability is limited to tasks precisely modeled by humans. Fully automated high-level agents should instead be able to find a symbolic representation of an unknown environment without supervision; otherwise they exhibit the knowledge acquisition bottleneck. Latplan (Asai and Fukunaga 2018) partially resolves this bottleneck with a neural network called the State AutoEncoder (SAE). The SAE obtains a propositional representation of image-based puzzle domains with unsupervised learning, generates a state space, and performs classical planning. In this paper, we identify the problematic, stochastic behavior of SAE-produced propositions as a new sub-problem of the symbol grounding problem: the symbol stability problem. Informally, symbols are stable when their referents (e.g., propositional values) do not change under small perturbations of the observation; unstable symbols are harmful for symbolic reasoning. We analyze the problem in Latplan both formally and empirically, and propose the "Zero-Suppressed SAE", an enhancement that stabilizes the propositions using the closed-world assumption as a prior for neural network optimization. We show that it finds more stable propositions and more compact representations, resulting in an improved success rate for Latplan. It is robust to various hyperparameters, eases the tuning effort, and also provides a weight pruning capability as a side effect.
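To make the "closed-world assumption as a prior" idea concrete, here is a minimal, hypothetical sketch (not the authors' code): a binary latent layer is produced by the Gumbel-Softmax / binary-concrete relaxation, and a zero-suppression term penalizes the mean latent activation, biasing each learned proposition toward "false" unless the reconstruction genuinely needs it. All function names and the weight `alpha` are illustrative assumptions.

```python
import numpy as np

def binary_concrete(logits, temperature=1.0, rng=None):
    """Sample relaxed binary latent units (binary Gumbel-Softmax trick).

    Logistic noise added to the logits, then squashed by a tempered
    sigmoid, yields values in (0, 1) that anneal toward {0, 1}.
    """
    rng = rng or np.random.default_rng(0)
    u = rng.uniform(1e-9, 1 - 1e-9, size=np.shape(logits))
    logistic_noise = np.log(u) - np.log(1 - u)
    return 1.0 / (1.0 + np.exp(-(logits + logistic_noise) / temperature))

def zsae_loss(x, x_recon, z, alpha=0.1):
    """Reconstruction error plus a zero-suppression penalty.

    The extra `alpha * mean(z)` term implements the closed-world prior:
    propositions default to 0 (false) and pay a cost for being true.
    """
    recon = np.mean((x - x_recon) ** 2)   # standard SAE objective
    zero_suppress = alpha * np.mean(z)    # penalize "true" propositions
    return recon + zero_suppress
```

With identical reconstructions, a latent vector with fewer active bits yields a strictly lower loss, which is exactly the pressure that drives the representation toward stability and compactness.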


