Online Goal-Directed Planning with Torobo

This is a demonstration of goal-directed planning using the humanoid robot Torobo, where the goal is to place the red cylinder at a user-given position. The robot is controlled by an online version of the GLean planner.
As seen in the video on the right, the robot acquires the desired goal position by being shown the object at the goal location. The object is then placed at a random position in the workspace, and the planner is allowed to run. The plots on the left show joint positions over time (the plan) at the top, followed by neural network activity. The activity changes as the robot moves and the plan is refined, and differs between goal positions.
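The online planning loop described above can be sketched as receding-horizon refinement of an action sequence against a learned forward model: refine a plan toward the goal, execute only its first action, observe, and replan. This is a minimal toy sketch under stated assumptions, not the GLean algorithm itself — the linear `forward_model`, the gradient step, and all constants are illustrative stand-ins for the trained network.

```python
import numpy as np

# Toy stand-in for the learned forward model: next state = state + action.
def forward_model(state, action):
    return state + action

def plan(state, goal, horizon=10, iters=100, lr=0.5):
    """Refine a sequence of actions so the rollout ends at the goal."""
    actions = np.zeros((horizon, state.size))
    for _ in range(iters):
        # Roll out the current plan and measure the final-state error.
        s = state.copy()
        for a in actions:
            s = forward_model(s, a)
        error = s - goal
        # With this linear toy model, the gradient of the squared final-state
        # error w.r.t. every action is the error itself; spread the correction
        # across the horizon so the update stays stable.
        actions -= lr * error / horizon
    return actions

# Online loop: execute only the first planned action, observe, replan.
state = np.array([0.0, 0.0])
goal = np.array([1.0, -0.5])
for step in range(20):
    actions = plan(state, goal)
    state = forward_model(state, actions[0])  # execute first action only
print(np.round(state, 3))
```

Because the plan is recomputed after every executed action, the trajectory adapts if the state (or, in the real system, the observed scene) changes mid-run — the behavior seen in the video as the plan is refined online.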

Emergence of Content-Agnostic Information Processing by a Robot Using Active Inference, Visual Attention, Working Memory, and Planning

The video shows results of vision-based goal-directed planning for object manipulation tasks. Planning was performed given an initial state (e.g. at t=0) and a visual goal state at the final time step. The presented study explores how content-agnostic information processing, i.e. separate representations of content and content manipulation, can be developed in the course of end-to-end learning. Visualizations of the internal representations in two visual working memories, the attended visual feature space, and the generated signals of the RNN model in relation to the visual prediction are shown.

Collecting robot training data from human movement & Autonomous robot control by a neural network

How we set up and take control of the Torobo humanoid robot using the Xsens MVN motion capture system in order to collect human movement data for training neural networks, followed by a demonstration of a neural-network (PV-RNN) controlled Torobo, where the robot reacts to the position of a red block placed in front of it, touching it with either its left or right hand. The neural network generates the position of each joint in real time, producing smooth, human-like trajectories.
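A real-time control loop of this shape can be sketched as follows. The `policy_step` function here is a hypothetical stand-in for the trained PV-RNN, and the joint count, gain, and target postures are illustrative assumptions — the point is only that emitting a small joint-position increment at every control tick yields a smooth trajectory.

```python
import numpy as np

N_JOINTS = 7  # joints per arm in this sketch; the real robot may differ

def policy_step(block_x, joints):
    """Toy stand-in for the trained PV-RNN: choose an arm posture from the
    block's lateral position and nudge the joints toward it."""
    target = np.full(N_JOINTS, 0.8 if block_x < 0 else -0.8)
    # Small per-tick increments keep the trajectory smooth, analogous to the
    # network emitting joint positions at every control cycle.
    return joints + 0.1 * (target - joints)

joints = np.zeros(N_JOINTS)
for tick in range(50):                   # a 50-tick control loop
    joints = policy_step(-0.3, joints)   # block placed to the robot's left
print(np.round(joints, 2))
```

In the actual system the network's output would be streamed to the robot's joint-position controller each cycle rather than accumulated in a loop like this.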

Controlling the Sense of Agency in Dyadic Robot Interaction: An Active Inference Approach

Simulation study on dyadic imitative interactions of humanoid robots using a variational recurrent neural network model by Nadine Wirkuttis and Jun Tani.

Vision-based goal-directed planning of a robot with development of visual attention and visual working memory

Vision-based goal-directed planning of a robot is studied using predictive coding and active inference. The video shows how a visual plan for achieving a specific goal state is generated by using mechanisms developed for visual attention and visual working memory.


On-line direct physical interaction with Torobo

A study on the relation between cognitive and motor compliance, behavior emergence, and intentionality, from the perspective of the free energy principle. A PV-RNN model represents proprioceptive information, whereas motor behavior is based on joint space control from torque feedback. This experiment has been conducted by Hendry Ferreira Chame and Jun Tani.

Predicting future and reflecting past in terms of visuo-proprioceptive patterns

A simulated humanoid robot experiment using a predictive coding and active inference model for hierarchical and associative learning of visuo-proprioceptive sequential patterns. This experiment has been conducted by Jungsik Hwang.


<Old Movies>