Seminar "Artificial Relevance" by Dr. Julian Kiverstein

Date

Tuesday, October 15, 2019, 10:00 to 11:00

Location

C016, Lab1

Description

Speaker:

Dr. Julian Kiverstein, Amsterdam University Medical Centre

Abstract:

Computers increasingly outperform humans in many domains. They do so in part through advances made possible by machine learning algorithms. Think, for instance, of how DeepMind’s AlphaGo learned to play the ancient Chinese game of Go at such a high level that it was able to beat the world champion. These impressive developments notwithstanding, the very same programs that excel within a narrow domain often struggle to generalise their performance outside of the domains in which they have been trained. They have been shown to fail when it comes to determining the relevance of their past training to new and unfamiliar contexts. It is one thing for a machine to perform well in a particular, well-circumscribed environment, and perhaps even to outperform what people can do in a narrow domain. It is another challenge entirely for machines to adapt their behaviour flexibly to a dynamic environment in which many unexpected and novel things can happen.

To generalise their performance to novel and unexpected cases, computers have to determine, out of everything that could possibly be relevant, what is actually relevant. They face an explosion of possibilities that they somehow need to narrow down before they can decide how to respond. They must then determine which of the many potentially relevant possibilities are actually relevant to acting appropriately in a given situation.

In this talk I will consider whether neural networks that make use of what I will call prediction-driven learning might be able to escape the relevance problem as I’ve just described it. Networks that do prediction-driven learning don’t only look for statistical patterns in their training data so as to map inputs onto desired outputs. They learn models that capture the relevant statistical structure in the data: the statistical patterns that prove useful for the performance of a task. In building self-teaching machines, we are attempting to build into machines something that has the functional effects of affective states in biological agents. Affective states steer us towards relevant, or affectively significant, affordances. The thesis I will advance in this talk is that machines that make use of prediction-driven learning must rely on states that play a functional role similar to that of affective states in biological agents if they are to succeed in the tasks they are given, such as playing Go. Thus, instead of thinking of these neural networks as encoding information in their weights and connections, I will suggest that machines that make use of prediction-driven learning can be thought of as finding statistical patterns that allow them to act sensitively and appropriately in a domain. They can be thought of as attuning to affordances of artificial relevance.
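
To make the contrast between learning labelled input-output mappings and prediction-driven learning more concrete, here is a minimal sketch in Python. It is not the speaker's method or any system discussed in the talk; the sine-wave "sensory stream", the window size, and the linear predictor are all illustrative assumptions. The only point it illustrates is that the training signal comes from the model's own prediction error over its input stream, so the statistical structure that gets retained is whatever helps reduce that error, a rough stand-in for what is "relevant" to the task of anticipating what comes next.

```python
# Minimal sketch (illustrative only): prediction-driven learning on a toy
# sensory stream. The model is trained to predict the next observation from
# a short window of past observations; its learning signal is its own
# prediction error rather than an externally supplied label.

import numpy as np

rng = np.random.default_rng(0)

# Toy "sensory stream": a noisy sine wave observed over time (an assumption,
# standing in for whatever data an agent receives).
t = np.linspace(0, 8 * np.pi, 2000)
stream = np.sin(t) + 0.1 * rng.standard_normal(t.size)

# Build (past window -> next observation) pairs directly from the stream.
window = 20
X = np.stack([stream[i:i + window] for i in range(stream.size - window)])
y = stream[window:]

# A linear predictor trained by gradient descent on squared prediction error.
w = np.zeros(window)
lr = 0.01
for _ in range(200):
    pred = X @ w
    grad = X.T @ (pred - y) / len(y)  # gradient of mean squared prediction error
    w -= lr * grad

# After training, the weights reflect the statistical structure of the stream
# that proved useful for anticipating the next observation.
print("mean squared prediction error:", np.mean((X @ w - y) ** 2))
```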
