The typical assumption when building a robot is that the system has to act autonomously. This raises the question of how to store domain-specific knowledge in the robot. In the literature, the missing connection between expert knowledge and the AI software is called the grounding problem. It can be addressed with annotated game logs: if the recorded game log of a human demonstrator is annotated, the expert knowledge is grounded in concrete demonstrations.

A first example of plan recognition is a context-free grammar whose terminals are primitive actions such as load, drivetolocation and unload.[1] The idea is to restrict the plan notation to a formalized grammar, which makes it possible to parse existing plans.
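As a minimal sketch of this idea, the following recognizer accepts exactly the sequences generated by an assumed toy grammar Plan → Trip+, Trip → load drivetolocation unload. The grammar and function name are illustrative, not taken from [1]:

```python
def parse_plan(actions):
    """Return True if the action sequence matches (load drivetolocation unload)+."""
    trip = ["load", "drivetolocation", "unload"]
    # A valid plan is a non-empty sequence of complete trips.
    if not actions or len(actions) % 3 != 0:
        return False
    return all(actions[i:i + 3] == trip for i in range(0, len(actions), 3))

print(parse_plan(["load", "drivetolocation", "unload"]))  # True
print(parse_plan(["load", "unload"]))                     # False
```

A real plan parser would of course handle richer grammars with arguments and recursion; the point here is only that restricting plans to a grammar turns plan recognition into parsing.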

From a more abstract point of view, the task is to take a dataset (typically generated by human demonstration) and identify motion primitives in it.[2] The ordinary context-free grammar can be extended to a probabilistic one, which allows a demonstration to be parsed with an associated likelihood.

In contrast to parsers for programming languages, which are developed under the heading of compiler construction, a parser for a robotic plan takes as input a list of natural-language statements, for example “push object” or “move to”. Especially for complex domains, the only practical option for annotating a dataset is an abstract notation similar to structured English. Plan recognition therefore has much in common with natural language parsing.
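One simple way to bridge the structured-English annotations and the grammar terminals is a lexicon that normalizes each statement into a primitive action. The mapping below is hypothetical and only illustrates the translation step that would precede parsing:

```python
# Hypothetical lexicon mapping structured-English statements to plan primitives.
LEXICON = {
    "push object": "push",
    "move to": "drivetolocation",
    "pick up": "load",
    "put down": "unload",
}

def annotate(statements):
    """Translate natural-language statements into grammar terminals."""
    return [LEXICON.get(s.strip().lower(), "unknown") for s in statements]

print(annotate(["Move to", "push object"]))  # ['drivetolocation', 'push']
```

Unlike a compiler's tokenizer, this step must tolerate paraphrase and ambiguity, which is why plan annotation in practice restricts the annotator to a controlled vocabulary.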


  1. Geib, Christopher W., and Pavan Kantharaju. "Learning combinatory categorial grammars for plan recognition." Thirty-Second AAAI Conference on Artificial Intelligence. 2018.
  2. Ramirez-Amaro, Karinne, Yezhou Yang, and Gordon Cheng. "A survey on semantic-based methods for the understanding of human movements." Robotics and Autonomous Systems (2019).