Neural-Symbolic Descriptive Action Model from Images: The Search for STRIPS

12/11/2019
by Masataro Asai, et al.

Recent work on Neural-Symbolic systems that learn a discrete planning model from images has opened a promising direction for expanding the scope of Automated Planning and Scheduling to raw, noisy data. However, previous work addressed this problem only partially, using a black-box neural model as the successor generator. In this work, we propose Double-Stage Action Model Acquisition (DSAMA), a system that obtains a descriptive PDDL action model with explicit preconditions and effects over propositional variables learned from images without supervision. DSAMA trains a set of Random Forest rule-based classifiers and compiles them into logical formulae in PDDL. While the resulting PDDL model is competitive in accuracy with a black-box model, we observed that it is too large and complex for state-of-the-art standard planners such as Fast Downward, primarily due to the PDDL-to-SAS+ translator bottleneck. From this negative result, we argue that the translator bottleneck cannot be addressed simply by switching to a different existing rule-based learning method, and we point to potential future directions.
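The core DSAMA step described above, training rule-based classifiers over learned propositions and compiling them into PDDL formulae, can be illustrated with a minimal Python/scikit-learn sketch. This is an assumption-laden illustration, not the authors' implementation: it fits a small random forest that predicts a single successor bit from a boolean state vector (assumed to come from a discrete autoencoder over images) and flattens each tree's positive root-to-leaf paths into a DNF condition, the kind of large disjunctive formula the abstract reports as overwhelming the PDDL-to-SAS+ translator. All function names and the toy data are hypothetical.

# A minimal sketch, assuming boolean propositional states (e.g. from a discrete
# autoencoder over images) are already available as 0/1 vectors. Names and the
# toy data are hypothetical; this is not the authors' DSAMA implementation.
import numpy as np
from sklearn.ensemble import RandomForestClassifier


def tree_to_dnf(tree, feature_names):
    """Collect the root-to-leaf paths that predict class 1 as conjunctions."""
    t = tree.tree_
    clauses = []

    def walk(node, literals):
        if t.children_left[node] == -1:           # leaf node
            if np.argmax(t.value[node][0]) == 1:  # leaf predicts "bit is true"
                clauses.append(list(literals))
            return
        var = feature_names[t.feature[node]]
        # Boolean features: the split at 0.5 sends false left and true right.
        walk(t.children_left[node], literals + [f"(not ({var}))"])
        walk(t.children_right[node], literals + [f"({var})"])

    walk(0, [])
    return clauses


def compile_bit_condition(states, next_bit, feature_names, n_trees=5):
    """Fit a small random forest for one successor bit; emit a PDDL-style DNF."""
    rf = RandomForestClassifier(n_estimators=n_trees, max_depth=4, random_state=0)
    rf.fit(states, next_bit)
    clauses = [c for est in rf.estimators_ for c in tree_to_dnf(est, feature_names)]
    # The size of this (or (and ...) ...) formula, repeated for every bit and
    # every action, is the blow-up behind the translator bottleneck.
    return "(or " + " ".join("(and " + " ".join(c) + ")" for c in clauses) + ")"


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    names = [f"z{i}" for i in range(8)]
    states = rng.integers(0, 2, size=(200, 8))
    # Toy ground truth: the bit becomes true exactly when z0 holds and z3 does not.
    next_bit = ((states[:, 0] == 1) & (states[:, 3] == 0)).astype(int)
    print(compile_bit_condition(states, next_bit, names))

Each tree path becomes one conjunctive clause, so even a modest forest per bit yields disjunctions with many clauses; multiplied over all propositional variables and actions, this is consistent with the paper's observation that the resulting PDDL is too large for the PDDL-to-SAS+ translation stage.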

