Controlling the Torobo humanoid using a motion capture suit

Vision-based goal-directed planning of a robot with development of visual attention and visual working memory

Vision-based goal-directed planning of a robot is studied using predictive coding and active inference. The video shows how a visual plan for achieving a specific goal state is generated using mechanisms developed for visual attention and visual working memory.


On-line direct physical interaction with Torobo

A study of the relation between cognitive and motor compliance, behavior emergence, and intentionality, from the perspective of the free-energy principle. A PV-RNN model represents proprioceptive information, whereas motor behavior is based on joint-space control from torque feedback. This experiment was conducted by Hendry Ferreira Chame and Jun Tani.

Spontaneous interactions between two robots

Two humanoid robots (OP2) perform imitative interaction through learning, showing spontaneous turn-taking and switching of movement patterns. This experiment was conducted by Jungsik Hwang, Nadine Wirkuttis, and Jun Tani.

Predicting future and reflecting past in terms of visuo-proprioceptive patterns

A simulated humanoid robot experiment using a predictive-coding and active-inference model for hierarchical and associative learning of visuo-proprioceptive sequential patterns. This experiment was conducted by Jungsik Hwang.


A humanoid robot using an MTRNN develops a functional hierarchy through learning: a set of primitive behaviors is learned in the lower level, characterized by fast dynamics, and their sequential combinations in the higher level, characterized by slow dynamics. See (Yamashita & Tani, 2008) for details.
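As a minimal sketch (not the trained model from the paper), the fast/slow two-level structure of an MTRNN can be illustrated with leaky-integrator units whose time constants differ between levels; the sizes, weights, and time constants below are toy values chosen for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy sizes and time constants; the actual MTRNN (Yamashita & Tani, 2008) is larger.
n_fast, n_slow = 8, 4
tau_fast, tau_slow = 2.0, 20.0   # slow units integrate over a longer time window

# Random recurrent weights within and between the two levels (illustration only)
W = rng.normal(0.0, 0.5, size=(n_fast + n_slow, n_fast + n_slow))
tau = np.concatenate([np.full(n_fast, tau_fast), np.full(n_slow, tau_slow)])

u = np.zeros(n_fast + n_slow)    # internal (membrane) potentials

def step(u):
    """One Euler step of leaky-integrator dynamics: du = (-u + W y) / tau."""
    y = np.tanh(u)
    return u + (-u + W @ y) / tau

for _ in range(100):
    u = step(u)
```

Because each unit's update is scaled by 1/tau, the slow (higher-level) units change state more gradually than the fast (lower-level) ones, which is the mechanism behind the emergent hierarchy of primitives and their sequencing.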

Video Paper

iCub robot controlled by MTRNN

The iCub robot is controlled by an MTRNN. This experiment was done by Martin Peniak at the University of Plymouth.


Spontaneous generation of actions by developing deterministic chaos

A humanoid robot spontaneously generates sequences of learned primitives. Chaos self-organized in the higher level of the artificial brain (an MTRNN model) generates pseudo-stochastic sequences of moving an object among the left, middle, and right positions on a table.
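A toy illustration of the idea (not the MTRNN itself): a deterministic chaotic map, coarse-grained into three regions, already yields a pseudo-stochastic sequence of the three position symbols:

```python
# Logistic map in its chaotic regime; coarse-graining the orbit into three
# regions produces a pseudo-stochastic sequence of left/middle/right symbols,
# even though the dynamics are fully deterministic.
x = 0.2
symbols = []
for _ in range(30):
    x = 4.0 * x * (1.0 - x)
    symbols.append("left" if x < 1/3 else "middle" if x < 2/3 else "right")
```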

Video Paper

Goal-directed planning using a framework analogous to active inference

Goal-directed planning based on a robot's visual imagery is performed using the predictive-coding and active-inference frameworks. The model is implemented as an MTRNN. See (Arie et al., 2009) for details.

Video Paper

Pathology of schizophrenia reconstructed in a humanoid robot


The pathology of schizophrenia (delusion of control) is reconstructed in a humanoid robot. The delusion of control manifests under malfunction of the top-down and bottom-up interactions in an MTRNN.

Video Paper

Embodied language

A mobile robot with a hand learns to associate primitive sentences with corresponding behaviors, with a certain level of generalization. In the video, the robot, upon recognizing the sentence “hit red”, generates the corresponding behavior. The robot was implemented with the RNNPB model.

Video Paper

Online modification of motor plan of a robot by using a framework analogous to active inference

The robot performs online modification of its motor plan, and its execution, using a framework analogous to active inference (Friston et al., 2010) implemented in a hierarchically organized RNN. The robot was trained on two types of behavior patterns associated with two positions of a visual object. The video shows the moment the environmental situation changes, when the visual object is moved from the habituated position to another. The representation in the immediate past window, as well as the future movement plan, is modified online by backpropagating the prediction error through time in the past window. See (Tani, 2003) for details.
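The error-regression idea can be sketched in a deliberately simplified form, assuming a toy linear predictive model in place of the hierarchical RNN: the internal state is adapted by gradient descent on the prediction error accumulated over the past window, and the future plan rolled out from that state changes as a consequence. All matrices and sizes below are illustrative assumptions:

```python
import numpy as np

# Toy linear predictive model (stand-in for the hierarchical RNN):
# latent state rolls forward as x_{t+1} = A x_t, prediction o_t = C x_t.
A = np.array([[0.9, -0.2], [0.2, 0.9]])
C = np.eye(2)

def rollout(x0, T):
    """Generate T predicted observations from initial state x0."""
    xs, x = [], x0
    for _ in range(T):
        xs.append(C @ x)
        x = A @ x
    return np.array(xs)

# "Environmental change": past-window observations come from a shifted state.
true_x0 = np.array([1.0, -1.0])
obs = rollout(true_x0, 10)

# Error regression: adapt the internal state estimate by gradient descent on
# the prediction error over the past window (analytic gradient here, standing
# in for backpropagation through time).
x0 = np.zeros(2)
for _ in range(200):
    grad = np.zeros(2)
    M = np.eye(2)                    # tracks A^t
    for t in range(10):
        err = (C @ M @ x0) - obs[t]
        grad += (C @ M).T @ err
        M = A @ M
    x0 -= 0.05 * grad

# Once the past-window error is minimized, the future plan changes as well.
plan = rollout(x0, 20)
```

The key point mirrored from the experiment is that a single state update serves both reflection on the immediate past (fitting the observed window) and revision of the future plan (the subsequent rollout).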