Symposia

To explore the main theme “AI X ART X AESTHETICS - Will AI have its own aesthetic autonomy? -”, we will organize six symposia at OIST. Each symposium is devoted to a special sub-theme, on which three panelists give talks and then discuss together. Some of the panelists are also exhibitors.

Moderators: Kenji Doya (OIST, all days except 12 November) and Hideki Nakazawa (AIAARG, all days), with simultaneous interpretation.

01 AI Aesthetics and the Art

The 9th AI Art & Aesthetics Research Meeting Symposium
November 12 (Sun), 2017
14:00-19:00 (Door Open 13:30)
Auditorium / Admission Free
Japanese, with simultaneous interpretation to English

Panelists

  • Tatsuo Unemi
    Professor, Dr. of Eng., Dean, Faculty of Science and Engineering, Soka University [Exhibitor]
  • Mike Tyka
    Artist working with Molecular Sculpture and Neural Networks; Google Engineer and Machine Learning Researcher [Exhibitor]
  • Minoru Tsukada
    PhD, MD. Visiting Professor, Tamagawa University Brain Science Institute; Professor Emeritus; Artist (Nihongafu, Executive) [Exhibitor]

Moderator

  • Hideki Nakazawa (AIAARG)

Background and Notes

Tatsuo Unemi

Aesthetics, Creativity, and the Arts, for the Computer by the Computer -- toward Evolutionary Art Theory

We often encounter people who believe that beauty is grounded in human feelings. Others say that God created beauty and that humans merely struggle to understand it. We also hear the opinion that creativity and artistic activity are possible only for humans, and impossible for apes and machines. Are these claims true? According to scientific findings, humans are one species that appeared in the long stream of biological evolution, and intelligence and emotion are functionalities that emerged from this process. The author by no means negates the existence of subjective feelings and minds inside each person, but he thinks it is possible to talk about beauty, creativity, and the arts as something sharable among machines, from the viewpoint of Evolutionary Computation and Artificial Life. This talk introduces some thoughts on these issues based on the author's work toward an Evolutionary Art Theory.
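
For readers unfamiliar with the evolutionary-computation viewpoint mentioned above, the following is a minimal, self-contained sketch of how a population of candidate "artworks" (here just parameter vectors) can be evolved toward a machine-defined fitness function. The smoothness-based fitness and all parameters are arbitrary placeholders for illustration only; this is not Unemi's actual evolutionary art system.

```python
# Toy evolutionary loop: selection plus mutation toward a placeholder "aesthetic measure".
import random

GENOME_LEN, POP_SIZE, GENERATIONS = 32, 40, 200

def fitness(genome):
    # placeholder aesthetic measure: reward smooth variation between neighboring genes
    return -sum(abs(a - b) for a, b in zip(genome, genome[1:]))

def mutate(genome, rate=0.1):
    return [g + random.gauss(0, 0.2) if random.random() < rate else g for g in genome]

population = [[random.uniform(0, 1) for _ in range(GENOME_LEN)] for _ in range(POP_SIZE)]
for gen in range(GENERATIONS):
    population.sort(key=fitness, reverse=True)
    parents = population[: POP_SIZE // 4]                       # truncation selection
    population = parents + [mutate(random.choice(parents)) for _ in range(POP_SIZE - len(parents))]

print(max(fitness(g) for g in population))   # best fitness approaches 0 over generations
```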

Mike Tyka

Neural networks: a new opportunity for art

Throughout history, technological breakthroughs have often deeply influenced the world of art. We are currently in the midst of such a technological change: artificial neural networks are beginning to perform well on difficult tasks, from image and speech recognition to playing complex games such as Go, and these technologies are rapidly becoming part of our daily lives. Likewise, artists are experimenting with new ways to create art. After a short introduction to neural networks and how they work, I will present some of my experiments over the last two years, including making large-scale art with Google's DeepDream algorithm, currently exhibited at OIST. I will also cover experiments with Generative Adversarial Networks to generate a series of imaginary portraits, which explore the strangeness of the "uncanny valley". Finally, I will present a collaboration with Refik Anadol in which we invite the viewer to consider alternate histories by generating imaginary items based on a real historical archive.
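
For readers curious about the technique behind DeepDream-style imagery, here is a minimal sketch of its core idea: adjusting an image by gradient ascent so that a chosen layer of a pretrained network responds more strongly. It omits the octaves, jitter, and input normalization used in the original algorithm, and the network, layer index, and step size are arbitrary illustrative choices, not Tyka's actual pipeline.

```python
# Minimal "dreaming from noise": gradient ascent on an image to amplify one layer's activations.
import torch
import torchvision.models as models

torch.manual_seed(0)
model = models.vgg16(pretrained=True).features.eval()
for p in model.parameters():
    p.requires_grad_(False)

x = torch.rand(1, 3, 224, 224, requires_grad=True)   # start from random noise

LAYER, STEPS, LR = 20, 100, 0.05                      # arbitrary illustrative choices
for _ in range(STEPS):
    h = x
    for i, layer in enumerate(model):
        h = layer(h)
        if i == LAYER:
            break
    loss = h.norm()              # "dream" objective: make this layer respond more strongly
    loss.backward()
    with torch.no_grad():
        x += LR * x.grad / (x.grad.abs().mean() + 1e-8)   # normalized gradient ascent
        x.clamp_(0, 1)
        x.grad.zero_()

dream = x.detach().squeeze(0)    # 3 x 224 x 224 tensor; save or display as an image
```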

Minoru Tsukada

Brain, Artificial Intelligence and Human Art

Art is a creation by humans and also a presentation of human life. From this viewpoint, I have been painting the pictures that crossed my mind for sixty long years. Artificial intelligence (A.I.) art will become extremely attractive if the dynamic function of learning and memory in the human brain is embedded in A.I. It is now possible to bind symbol processing and pattern dynamics. The situation will develop rapidly through mutual communication and cooperation among brain science, A.I., and art.

02 Meaning/Meaningless and the Language

The 10th AI Art & Aesthetics Research Meeting Symposium
November 25 (Sat), 2017
14:00-19:00 (Door Open 13:30)
Seminar Room B250 / Admission Free
Japanese, with simultaneous interpretation to English

Panelists

  • Naoyuki Sato
    Professor, Department of Complex and Intelligent Systems, Future University Hakodate
  • Michael Spranger
    Artist and Researcher at Sony Computer Science Laboratories Inc. [Exhibitor]
  • Hitoshi Matsubara
    Vice-president and Professor, Department of Complex and Intelligent Systems, Future University Hakodate; Former President of the Japanese Society for Artificial Intelligence [Exhibitor]

Moderators

  • Kenji Doya (OIST)
  • Hideki Nakazawa (AIAARG)

Topics

"Neural mechanisms of context and memory for generating 'meanings'"

Naoyuki Sato (Professor, Department of Complex and Intelligent Systems, Future University Hakodate)

The neocortex is thought to act as a classifier of environmental features, and this provides the basis of our semantic processing. More importantly, however, semantic processing is drastically modulated by context. To understand the neural mechanisms of such context-based processing, the hippocampus is regarded as a key brain region, known to maintain context and memory for environments, events, and their sequences. In this talk, the neural representation and computation of contexts in the hippocampus are discussed, and our recent studies on language-based contexts, which are essential for understanding our semantic processing, are introduced.

"Autonomous meaning creation. Can robots create their own language?"

Michael Spranger (Artist and Researcher at Sony Computer Science Laboratories Inc.)

The talk will review and discuss recent research that tries to identify computational mechanisms and representations that allow embodied agents (robots) to autonomously develop meaning and communication systems. The autonomously developed communication systems share important properties of human language such as compositionality, open-endedness, and the need for inference. Through experiments with robots in the real world and in simulation, we explore the role of embodiment in communication. We are particularly interested in mechanisms that allow agents not only to develop communication systems but also to choose and develop the conceptualization strategies underlying those systems - a key feature of natural language evolution. The talk will discuss recent research trends as well as attempts at artistic exploration of the subject of autonomous meaning creation.

* (IV) "Language Games" (19) Level B, Center Bldg.
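
As a pointer for the talk above, the following is a minimal simulation of a classic "naming game", a toy model in which agents converge on a shared lexicon through repeated pairwise interactions. It is a generic textbook-style sketch with made-up parameters, not Spranger's actual robot experiments.

```python
# Minimal naming game: agents invent and adopt words until they share one name per object.
import random
import string

def random_word(length=4):
    return "".join(random.choices(string.ascii_lowercase, k=length))

def naming_game(num_agents=20, num_objects=5, rounds=20000, seed=0):
    random.seed(seed)
    # each agent keeps a set of candidate words for each object
    agents = [{obj: set() for obj in range(num_objects)} for _ in range(num_agents)]
    for _ in range(rounds):
        speaker, hearer = random.sample(range(num_agents), 2)
        obj = random.randrange(num_objects)
        if not agents[speaker][obj]:
            agents[speaker][obj].add(random_word())          # invent a new word
        word = random.choice(sorted(agents[speaker][obj]))
        if word in agents[hearer][obj]:
            # success: both collapse their inventories to the winning word
            agents[speaker][obj] = {word}
            agents[hearer][obj] = {word}
        else:
            agents[hearer][obj].add(word)                    # failure: hearer adopts the word
    return agents

agents = naming_game()
# after enough rounds, agents typically share a single word per object
print([sorted(agents[0][obj]) for obj in range(5)])
```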

"What is 'meaning' for computers?"

Hitoshi Matsubara (Vice-president and Professor, Department of Complex and Intelligent Systems, Future University Hakodate; Former President of the Japanese Society for Artificial Intelligence)

Natural language processing technology, one of the key areas of artificial intelligence, has advanced considerably through machine learning techniques. Computers can now write some novels and solve some entrance-examination problems. Humans seem to understand meaning when writing novels and solving problems, but computers today do not understand meaning "for humans." In order to understand meaning, computers need to solve the "symbol grounding problem." The grounding of entities and symbols is different for human beings and for computers, so the meaning for human beings and the meaning for computers should be considered different. Perhaps computers already understand their own meaning; it may just be impossible for humans to understand the meaning for computers.

* (III) Sato-Matsuzaki Lab. Nagoya University "Kimagure AI project I am a writer" (21) Level B, Center Bldg.

03 Future AI

The 11th AI Art & Aesthetics Research Meeting Symposium
November 26 (Sun), 2017
14:00-19:00 (Door Open 13:30)
Seminar Room B250 / Admission Free
Japanese, with simultaneous interpretation to English

Panelists

  • Koichi Takahashi
    Principal Investigator, RIKEN. Vice-chair, The Whole Brain Architecture Initiative
  • Hideki Nakazawa
    Artist, Founder and Representative of Artificial Intelligence Art and Aesthetics Research Group (AIAARG) [Exhibitor]
  • Rolf Pfeifer
    Dr. sc. techn. ETH; Prof. em., University of Zurich, Switzerland; Dept. of Automation, Shanghai Jiao Tong University, China; Scientific Consultant, “Living with Robots” Ltd.

Moderators

  • Kenji Doya (OIST)
  • Hideki Nakazawa (AIAARG)

Topics

"How we shall reinvent ourselves"

Koichi Takahashi (Principal Investigator, RIKEN. Vice-chair, The Whole Brain Architecture Initiative)

I will discuss the technological singularity and post-singularity civilization, with particular emphasis on what we could learn from the history and nature of humanity. The Eastern view of the universe, most prominently exemplified by esoteric Buddhists' mandalas, can play a role in developing a robust ecosystem involving machines, humanity, and post-humans.

"The Route to 'Machine Aesthetics / Machine Art'"

Hideki Nakazawa (Artist, Founder and Representative of Artificial Intelligence Art and Aesthetics Research Group)

If a human gives it a goal, an artificial intelligence program works well. A swing robot achieves more than human beings in just over a dozen minutes, and a match between Go AIs is a fight of gods beyond human understanding. So if a human gives human aesthetics as an explicit goal, artificial intelligence can create art. This is "Human Aesthetics / Machine Art." However, human aesthetics is not self-evident: for example, the Eiffel Tower, produced by machine calculation, was initially criticized by artists as unaesthetic. Such "Machine Aesthetics / Human Art" will arrive at the "art for art's sake" that aims at art itself. Today, artificial intelligence cannot find its own goal. But if this premise breaks down, artificial intelligence will be able to make art aimed at machine aesthetics. If this arrives at "art for art's sake," then "Machine Aesthetics / Machine Art" beyond human understanding will emerge.

* (IV) "Manifesto of Artificial Intelligence Art and Aesthetics" (8) Tunnel Gallery.
* (II) "Go Stone Arrangement Painting No. 1: 35 by 35," "No. 2," "No. 3" (15) Level B, Center Bldg.

"Living with robots - Coping with the 'Robot/AI Hype'"

Rolf Pfeifer (Dr. sc. techn. ETH; Prof. em., University of Zurich, Switzerland; Dept. of Automation, Shanghai Jiao Tong University, China; Scientific Consultant, "Living with Robots" Ltd.)

Artificial Intelligence, or AI, has a history of hypes. I will argue that for a number of years there has been, and still is, a huge robotics/AI hype, and that we face a big danger that the bubble will burst if we - engineers, scientists, entrepreneurs - don't manage to deliver on the promises. We must design and build robots that have useful sensory-motor functionality going beyond merely talking and smiling. Although robots have been around for more than half a century, the term has acquired an entirely new quality since robots, roughly 25 years ago, started leaving the factory floors and moving into our own living space.

04 AI Aesthetics and the Polytheism

The 14th AI Art & Aesthetics Research Meeting Symposium
January 6 (Sat), 2018
14:00-19:00 (Door Open 13:30) / Auditorium / Admission Free
Japanese, with simultaneous interpretation to English

Panelists

  • Hiroyuki Okada
    Head, Advanced Intelligence and Robotics Research Center (AIBot) Research Institute, Tamagawa University
    Executive Director, The RoboCup Japanese National Committee
  • Satoshi Kurihara
    Professor, Graduate School of Informatics and Engineering, The University of Electro-Communications; Director of Artificial Intelligence eXploration Research Center (AIX) [Exhibitor]
  • Kenji Doya
    Professor, Neural Computation Unit, Okinawa Institute of Science and Technology Graduate University (OIST) [Exhibitor]

Moderators

  • Kenji Doya (OIST)
  • Hideki Nakazawa (AIAARG)

Topics

"Disembodied cognition as a new concept of intelligence from a viwepoint of 'Projection Science'"

Hiroyuki Okada (Head, Advanced Intelligence and Robotics Research Center (AIBot) Research Institute, Tamagawa University. Executive Director, The RoboCup Japanese National Committee)

"Projection Science" is a completely new methodology for elucidating human cognition, and it is argued that the process of projecting the representation into the outside world is the source of the subjective experience from the stimulation and information of the physical world.

In my talk, I discuss the possibility of a new cognitive mechanism of disembodied cognition by rethinking the existing concept of physical (embodied) cognition from the viewpoint of Projection Science.

In particular, I want to consider, from the viewpoint of Projection Science, phenomena such as ghosts or gods that are believed for some reason to exist, even though there is no external object directly related to the generation of such symbols, or that object is not recognized.

"Influence happen to R&D of AI through Western or Oriental perspective"

Satoshi Kurihara (Professor, Graduate School of Informatics and Engineering, The University of Electro-Communications; Director of Artificial Intelligence eXploration Research Center (AIX))

Current AI R&D based on machine learning should be seen as intelligent information processing technology. As a second step, R&D on "general" and "autonomous" AI, which is true AI, is about to accelerate. AI systems are becoming large and complicated, and it will be difficult to understand the whole system 100%. Conventional science and technology has mainly been designed in a top-down manner, but for large-scale complex systems it is necessary to adopt a bottom-up method as well. The top-down method has a high affinity with the Western perspective, while the bottom-up method has a high affinity with the Oriental perspective. In this presentation, let us think about how Western and Oriental perspectives influence the R&D of AI.

* (III) "Traffic Signal Control by DQN" "Multi Layered Emergent Architecture" (2) Auditorium.

"What will adaptive autonomous robots dream of?"

Kenji Doya (Professor, Neural Computation Unit, Okinawa Institute of Science and Technology Graduate University (OIST))

Until just a few years ago, robots and AI agents simply applied rules designed by humans to given inputs. By a happy marriage with deep learning, reinforcement learning can now be applied to real-world problems, so that robots and AI agents can acquire their own action policies. Reinforcement learning is a framework for maximizing externally defined “rewards,” which could be running as fast as possible or earning as high a game score as possible. Can robots and AI agents find and select their rewards by themselves? If they can, what would those rewards be? I will introduce our research trying to answer these questions.

* (IV) Kenji Doya and the Smartphone Robot Development Team "Can Robots Find Their Own Goals?" (20) Level B, Center Bldg.
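
To make the reward-maximization framework mentioned above concrete, here is a minimal tabular Q-learning sketch on a toy chain task: the agent learns, from an externally defined reward at one end of the chain, which actions to prefer in each state. The environment, reward, and parameters are illustrative inventions, not the smartphone-robot research itself.

```python
# Tabular Q-learning on a 6-state chain; reward is given only at the right end.
import random

random.seed(0)

N_STATES, GOAL = 6, 5          # states 0..5; reaching state 5 yields the reward
ACTIONS = [-1, +1]             # move left or move right
ALPHA, GAMMA, EPSILON = 0.1, 0.95, 0.1

Q = [[0.0, 0.0] for _ in range(N_STATES)]      # Q[state][action]

def step(state, action_idx):
    nxt = min(max(state + ACTIONS[action_idx], 0), N_STATES - 1)
    reward = 1.0 if nxt == GOAL else 0.0
    return nxt, reward, nxt == GOAL

for episode in range(500):
    s, done = 0, False
    while not done:
        # epsilon-greedy action selection with random tie-breaking
        if random.random() < EPSILON or Q[s][0] == Q[s][1]:
            a = random.randrange(2)
        else:
            a = 0 if Q[s][0] > Q[s][1] else 1
        s2, r, done = step(s, a)
        # Q-learning update: move Q(s,a) toward r + gamma * max_a' Q(s',a')
        Q[s][a] += ALPHA * (r + GAMMA * max(Q[s2]) - Q[s][a])
        s = s2

print(Q)   # values for "move right" grow toward the rewarded end of the chain
```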

05 Artificial Consciousness/Artificial Life

The 15th AI Art & Aesthetics Research Meeting Symposium
January 7 (Sun), 2018
14:00-19:00 (Door Open 13:30) / Auditorium / Admission Free
Japanese, with simultaneous interpretation to English

Panelists

  • Takashi Ikegami
    Professor, Department of General Systems Studies, Graduate School of Arts and Sciences, The University of Tokyo
  • Ryota Kanai
    Neuroscientist, Founder & CEO of Araya, Inc.
  • Toshiyuki Nakagaki
    Director, Research Institute for Electronic Science, Hokkaido University; Professor, Mathematical and Physical Ethology Lab., Research Center of Mathematics for Social Creativity

Moderators

  • Kenji Doya (OIST)
  • Hideki Nakazawa (AIAARG)

Topics

"Artificial Consciousness and the Android 'Alter'"

Takashi Ikegami (Professor, Department of General Systems Studies, Graduate School of Arts and Sciences, The University of Tokyo)

When do we feel that we have a mind, and how could we install a mind on a machine? We take two approaches, the experimental and the constructivist, to answer these questions. In the experimental approach, we draw on a cognitive experiment, called the perceptual crossing experiment, in which subjects perceive the existence of their partner from tactile interaction in a virtual space. In the constructivist approach, we create an android named "Alter" and let it interact with people to foster humanity in Alter. The first approach shows that passive touch is essential to perceiving others (Kojima, H. et al., Front. Psych., 2017). The second demonstrates the advantages of the principle of stimulus avoidance found in a model neural network (Doi, I. et al., ECAL 2017). From these experiments, I will discuss the innovative and complex dynamical aspects of man-AI/ALife interaction (Takashi Ikegami and Hiroshi Ishiguro, In Between Man and Machine: Where is Mind?, Kodansha, 2016).

"Conscious AI, General AI, and Living AI"

Ryota Kanai (Neuroscientist, Founder & CEO of Araya, Inc.)

We have two naive intuitions about consciousness.

First, life forms have conscious experience. We tend to acknowledge inner conscious experiences in biological organisms, but not in machines.

Second, highly advanced AI has consciousness. In Sci-Fi films, advanced AIs gain experiences of emotion and intention.

While it is possible to analytically discern the differences among consciousness, intelligence, and life, I will illustrate the close relationships among the three from the perspective of generative models of the self and the philosophical stance called biological naturalism. In the same vein, I will argue that general AI as we conceive it could possess phenomenal consciousness, and propose a method to test phenomenal consciousness in machines using theories such as integrated information theory.

"Imagining the interface between behavior and intelligence in slime mold of single celled organism"

Toshiyuki Nakagaki (Director, Research Institute for Electronic Science, Hokkaido University; Professor, Mathematical and Physical Ethology Lab., Research Center of Mathematics for Social Creativity)

The slime mold Physarum, an amoeboid organism, moves around on the dark and humid forest floor, trying to avoid predators and exposure to sunlight while moving closer to food. It often faces a difficult decision: whether to go or not to go. Life is not always easy even for a slime mold, although its lifestyle is totally different from that of humans. It is not easy for us to imagine how well its behaviors work in wild environments. After all, I would like to throw away the stupid opinion that a single-celled organism is stupid.

06 AI Aesthetics and the Machinery

The 16th AI Art & Aesthetics Research Meeting Symposium
January 8 (Mon/Holiday), 2018
14:00-19:00 (Door Open 13:30) / Auditorium / Admission Free
Japanese, with simultaneous interpretation to English

Panelists

  • Fuminori Akiba
    Associate Professor, Graduate School of Informatics, Nagoya University (aesthetics and art theory)
  • Elena Knox
    Performing and media artist. JSPS Postdoctoral Research Fellow, Department of Intermedia Art and Science, Waseda University, Tokyo, Japan [Exhibitor]
  • Akihiro Kubota
    Professor, Art & Media Course, Department of Information Design, Tama Art University
    (Unfortunately, Inke Arns cannot attend)

Moderators

  • Kenji Doya (OIST)
  • Hideki Nakazawa (AIAARG)

Topics

"AI Aesthetics and the Machinery"

Fuminori Akiba (Associate Professor, Graduate School of Informatics, Nagoya University (aesthetics and art theory))

There are two things I want to talk about, both concerning the title of this exhibition: Artificial Intelligence Art and Aesthetics. First, what is the purpose of aesthetics, and who is its subject? After confirming these, I would like to think about what we need to prepare in order to think of an aesthetics for artificial intelligence. The other topic is art. However, I will not argue about what art is. Instead, I would like to rethink why artificial objects that artificial intelligence might create in the future need to be considered from the perspective of art.

"Alter versus Deep Belief"

Elena Knox (Performing and media artist. JSPS Postdoctoral Research Fellow, Department of Intermedia Art and Science, Waseda University, Tokyo, Japan)

Elena's talk will unpack and evaluate her very recent art experiment (December 2017), Omikuji, part of a new series of AI art experiments, Alter versus Deep Belief.

Alter the robot (Ikegami Lab/Ishiguro Lab) was live-streamed by Watanabe Lab between Tokyo and Seoul. Alter has experimental AI. It uses a self-organising neural network to make sense of its world. Such AI strategies include deep belief networks, through which machines determine certain inputs to be believable.

In our world today, understanding belief systems is important to the inter-harmony and the preservation of culture. Omikuji is a participatory artwork exploring the way machines, and humans, learn to believe things - and how, via robotics both hard and soft, they may embody those beliefs. We want to uncover and express how Alter's learning is mirroring our own.

People may be prompted to ask: How sure are we in our beliefs, or in AI? How soft are they, and how hard?

* (III) "Canny" "Occupation" (17) Level B, Center Bldg.

"A New Kind of Aesthetics, which can be shared with AI"

Akihiro Kubota (Professor, Art & Media Course, Department of Information Design, Tama Art University)

Is it possible to share aesthetics with others other than human beings? Human natural language has rich poetic expressiveness, but we cannot rid it of ambiguity such as polysemy and uncertainty. In order for humans to discuss art and aesthetics with others such as AI, it is indispensable to set a foundation for discussion using a common language for both. The speakers have shown that it is possible to explain the aesthetic and its underlying dynamic relations with an axiomatic structure based on contemporary mathematics. Aesthetics is a kind of function; its consistency and simplicity create the aesthetic. With this new kind of aesthetics, which is not based on humans, the border between humans and machines in art and aesthetics will disappear.

cf. Akihiro Kubota, Hirokazu Hori, Makoto Naruse and Fuminori Akiba, A New Kind of Aesthetics - The Mathematical Structure of the Aesthetic, Philosophies 2017, 2(3), 14; doi:10.3390/philosophies2030014