Vision-based goal-directed planning of a robot with development of visual attention and visual working memory
Vision-based goal-directed planning of a robot is studied using predictive coding and active inference. The video shows how a visual plan for achieving a specific goal state is generated by using mechanisms developed for visual attention and visual working memory.
On-line direct physical interaction with Torobo
A study of the relation between cognitive and motor compliance, behavior emergence, and intentionality from the perspective of the free energy principle. A PV-RNN model represents proprioceptive information, whereas motor behavior is based on joint-space control with torque feedback. This experiment was conducted by Hendry Ferreira Chame and Jun Tani.
Spontaneous interactions between two robots
Two humanoid OP2 robots perform imitative interaction through learning, showing spontaneous turn-taking and switching of movement patterns. This experiment was conducted by Jungsik Hwang, Nadine Wirkuttis, and Jun Tani.
Predicting future and reflecting past in terms of visuo-proprioceptive patterns
A simulated humanoid robot experiment using a predictive coding and active inference model for hierarchical and associative learning of visuo-proprioceptive sequential patterns. This experiment was conducted by Jungsik Hwang.
A humanoid robot using an MTRNN develops a functional hierarchy through learning: a set of primitive behaviors is learned in the lower level, characterized by fast dynamics, while their sequential combinations are learned in the higher level, characterized by slow dynamics. See (Yamashita & Tani, 2008) for details.
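The fast/slow separation above can be sketched with a minimal leaky-integrator network. This is an illustrative toy, not the trained network of (Yamashita & Tani, 2008): unit counts, time constants, and random weights are assumptions chosen only to show how different time constants split the dynamics into fast and slow groups.

```python
import numpy as np

class MTRNNSketch:
    """Toy multiple-timescale RNN: one fast group, one slow group."""

    def __init__(self, n_fast=20, n_slow=5, tau_fast=2.0, tau_slow=50.0, seed=0):
        rng = np.random.default_rng(seed)
        n = n_fast + n_slow
        # Per-unit time constants: small tau -> fast dynamics, large tau -> slow.
        self.tau = np.concatenate([np.full(n_fast, tau_fast),
                                   np.full(n_slow, tau_slow)])
        self.W = rng.normal(0.0, 1.0 / np.sqrt(n), size=(n, n))  # recurrent weights
        self.b = np.zeros(n)
        self.u = 0.1 * rng.normal(size=n)  # internal potentials (nonzero start)

    def step(self):
        y = np.tanh(self.u)  # firing rates
        # Leaky integration: slow units change state only gradually, so after
        # learning they come to encode the sequencing of primitives, while the
        # fast units carry each primitive's detailed dynamics.
        self.u = (1 - 1 / self.tau) * self.u + (1 / self.tau) * (self.W @ y + self.b)
        return y

net = MTRNNSketch()
trajectory = np.stack([net.step() for _ in range(100)])
print(trajectory.shape)  # (100, 25)
```

Running it, the per-step change of the first 20 (fast) units is much larger than that of the last 5 (slow) units, which is the timescale separation the functional hierarchy relies on.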
iCub robot controlled by MTRNN
The iCub robot is controlled by an MTRNN. This work was done by Martin Peniak at the University of Plymouth.
Spontaneous generation of actions by developing deterministic chaos
Pathology of schizophrenia reconstructed in a humanoid robot
Online modification of motor plan of a robot by using a framework analogous to active inference
The robot performed online modification and execution of its motor program using a framework analogous to active inference (Friston et al., 2010), implemented in a hierarchically organized RNN. The robot was trained on two types of behavior patterns associated with two positions of a visual object. The video shows the moment the environmental situation changes, when the visual object is moved from its habituated position to another. It can be seen that both the representation in the immediate past window and the future movement plan are modified online by backpropagating the prediction error through time within the past window. See (Tani, 2003) for details.
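The error-regression idea above can be sketched in a few lines. This is a hypothetical toy, not the RNN of (Tani, 2003): the generative model, its parameters, and the finite-difference gradient (standing in for backpropagation through time) are all illustrative assumptions. An internal state z is updated by gradient descent on the prediction error over a past window, which simultaneously reinterprets the past and changes the plan generated for the future.

```python
import numpy as np

def predict(z, T):
    # Toy generative model (assumption): internal state z = (phase, amplitude)
    # selects a sinusoidal movement pattern over T time steps.
    t = np.arange(T)
    return z[1] * np.sin(0.3 * t + z[0])

def error_regression(observed, z, lr=0.05, iters=200, eps=1e-4):
    # Descend the windowed prediction error w.r.t. z; finite differences
    # stand in for backpropagating the error through time in the past window.
    T = len(observed)
    for _ in range(iters):
        base = np.mean((predict(z, T) - observed) ** 2)
        grad = np.zeros_like(z)
        for i in range(len(z)):
            zp = z.copy()
            zp[i] += eps
            grad[i] = (np.mean((predict(zp, T) - observed) ** 2) - base) / eps
        z = z - lr * grad
    return z

true_z = np.array([1.2, 0.8])      # environment has switched to this pattern
observed = predict(true_z, 30)     # observations in the immediate past window
z = np.array([0.0, 0.5])           # robot's habituated internal state
z = error_regression(observed, z)  # reinterpret the past...
future_plan = predict(z, 60)       # ...and regenerate the future plan
```

After regression, the prediction error over the past window drops well below its initial value, and the regenerated plan reflects the changed situation rather than the habituated one.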