WCCI 2024 Open Forum on AI Governance

Date

Sunday, June 30, 2024 (All day)

Location

Pacifico Yokohama, Japan

Description


How to harness evolving AI 
– Dialogue of developers, users, and policy makers –

The World Congress on Computational Intelligence (WCCI) 2024 in Yokohama will be the largest AI conference in Asia in 2024. On this occasion, given strong public interest in the risks of AI and rapid legislative developments, we are holding an open forum to bring together AI researchers, lay users, and policy makers.

The forum will be held in a hybrid on-site/online format and is open to the public with pre-registration. Talks will be in English, and we plan to provide AI-generated Japanese translation. We expect participation not only from AI experts but also from the general public and students.

Date & Time:  9:20 – 18:00 JST (UTC+9), Sunday, 30 June 2024
Place: Pacifico Yokohama 5th floor, B503

Tentative Program

 9:00     Room and Connections Open
 9:20     Opening Remarks
             Session 1: Future prospects and dangers of AI
 9:30     Yoshua Bengio, U Montreal (online)
10:00     Koichi Takahashi, RIKEN, AI Alignment Network
10:30     Stuart Russell, UC Berkeley (online)
11:00     break
11:10     Yi Zeng, Chinese Academy of Science
11:40     Satoshi Kurihara, Keio U, JSAI
12:10     Discussion (coordinator: Kenji Doya)
12:40     Lunch break
              Session 2: How to align AI research and applications to societal values
14:10     Vanessa Nurock, U Côte d'Azur, UNESCO
14:40     Hiroaki Kitano, SONY CSL, OIST
15:10     Natasha Crampton, Microsoft (online)
15:40     break
15:50     Akiko Murakami, Sompo Japan, Japan AI Safety Institute
16:20     Yoichi Iida, Ministry of Internal Affairs, Japan
16:50     Jaan Tallinn, Future of Life Institute (online)
17:20     Discussion (coordinator: Arisa Ema, Satoshi Kurihara)
18:00     Closing

Registration page (required for participants not registered for WCCI 2024)
Registration deadline: 20th June, 2024

Speakers

Yoshua Bengio

  Professor, Department of Computer Science, University of Montreal
  Scientific Director, Mila
  Canada CIFAR AI Chair
  Scientific advisor, UK AI Safety Institute


Catastrophic AI risks and the governance of AGI projects
We are on a path towards human-level AI, also called AGI, with uncertain timelines and uncertain risks, ranging from threats to democracy and national security to existential risks from loss of control to a rogue AI. In spite of these risks, corporations are competing fiercely to build AGI and racing ahead. To ensure they do not cut dangerous corners, and that the power of future AIs is neither abused nor allowed to destabilize the geopolitical order, we will need significant effort in managing these projects, from organization-level governance to national regulation and international treaties, including measures to avoid dangerous proliferation, to harmonize policies, and to move towards AI for the public good at a global level.
Biography: Yoshua Bengio is Full Professor in the Department of Computer Science and Operations Research at U. Montreal, as well as the Founder and Scientific Director of Mila. Considered one of the world’s leaders in artificial intelligence and deep learning, he is a recipient of the 2018 A.M. Turing Award (shared with Geoffrey Hinton and Yann LeCun), known as the Nobel Prize of computing, and holds a Canada CIFAR AI Chair. He is also a Fellow of both the Royal Society of London and the Royal Society of Canada, an ACM Fellow, an Officer of the Order of Canada, a recipient of the Gerhard Herzberg Canada Gold Medal for Science and Engineering, and a scientific advisor to the UK AI Safety Institute.

Stuart Russell

  Professor, Department of Computer Science, University of California Berkeley
  Director, Center for Human-Compatible AI
  Director, Kavli Center for Ethics, Science, and the Public

AI: What if we succeed?
The media are agog with claims that recent advances in AI put artificial general intelligence (AGI) within reach. Is this true? If so, is that a good thing? Alan Turing predicted that AGI would result in the machines taking control. I will argue that Turing was right to express concern but wrong to think that doom is inevitable. Instead, we need to develop a new kind of AI that is provably beneficial to humans. Unfortunately, we are heading in the opposite direction. Regulation may be required to correct this mistake.
Biography: Stuart Russell is a Professor of Computer Science at the University of California at Berkeley, holder of the Smith-Zadeh Chair in Engineering, and Director of the Center for Human-Compatible AI and the Kavli Center for Ethics, Science, and the Public. He is a recipient of the IJCAI Computers and Thought Award, the IJCAI Research Excellence Award, and the ACM Allen Newell Award. From 2012-14 he held the Chaire Blaise Pascal in Paris. In 2021 he received the OBE from Her Majesty Queen Elizabeth and gave the BBC Reith Lectures. He is an Honorary Fellow of Wadham College, Oxford, an Andrew Carnegie Fellow, an AI2050 Senior Fellow, and a Fellow of AAAI, ACM, and AAAS. His book "Artificial Intelligence: A Modern Approach" (with Peter Norvig) is the standard text in AI, used in over 1500 universities in 135 countries. His research covers a wide range of topics in artificial intelligence, with a current emphasis on the long-term future of artificial intelligence and its relation to humanity. He has developed a new global seismic monitoring system for the nuclear-test-ban treaty and is currently working to ban lethal autonomous weapons.

  Satoshi Kurihara

  Professor, Faculty of Science and Technology, Keio University
  President, Japanese Society for Artificial Intelligence (JSAI)


Alignment Difficulty of Scaling Swarm AI
Large-scale LLMs such as ChatGPT have achieved high performance through the qualitative changes that emerge when massive resources are deployed. A typical scaling problem is the so-called flash crash, and there are similar concerns about unexpected behavior in large formations of autonomous drones. I will discuss whether scaling AI can be kept under control.
Biography: Professor, Faculty of Science and Technology, Keio University. Director of the Center of Advanced Research for Human-AI Symbiosis Society, Keio University. President of the Japanese Society for Artificial Intelligence (JSAI). Director of the JST PRESTO program “Social Transformation Platform” (JST: Japan Science and Technology Agency; PRESTO: Promoting Individual Research to Nurture the Seeds of Future Innovation and Organizing Unique, Innovative Network).

  Vanessa Nurock

  Professor in Philosophy, Université Côte d’Azur, France
  UNESCO EVA Chair


Towards an “Ethics by Design for AI”?
The idea that AI ethics should not be considered as only a top-down or a bottom-up approach is now commonly challenged by the possibility of an “Ethics by Design” for AI. This talk aims to explain the main characteristics of Ethics by Design, especially regarding the issue of ‘deskilling’. I will argue that the concept of ‘catastrophe’ is particularly fruitful for framing this Ethics by Design for AI and for helping us design an ethical AI for future generations.
Biography: Vanessa Nurock is a Professor in Philosophy and Deputy Director of the Centre de Recherche en Histoire des Idées (CRHI) at Université Côte d’Azur (France). She also holds the UNESCO EVA Chair in the Ethics of the Living and the Artificial (https://univ-cotedazur.fr/the-unesco-chair-eva/chair-eva) and is the head of the Program Committee on AI and Ethics of the International Research Center on Artificial Intelligence (https://ircai.org/project/ai-and-ethics/).
Her research is positioned at the interface between ethics, politics and emerging science & technologies. She has published numerous papers and several books on topics such as justice and care, gender, animal ethics, nanotechnologies, cybergenetics, and neuroethics. Her current research, developed in her next book Care Ethics and New Technologies (Peeters Publishers, 2024 forthcoming), focuses on the ethical and political problems raised by Nanotechnologies, Cybergenetics and Artificial Intelligence.

Natasha Crampton

Vice President and Chief Responsible AI Officer, Microsoft
UN Secretary-General’s Advisory Body on Artificial Intelligence




Biography: Natasha Crampton leads Microsoft’s Office of Responsible AI as the company’s first Chief Responsible AI Officer. The Office of Responsible AI defines and governs the company’s approach to responsible AI, and contributes to the discussion about the new laws, norms, and standards that are needed to secure the benefits of AI and guard against its risks.
In her personal capacity, Natasha serves on the UN Secretary-General’s Advisory Body on Artificial Intelligence, which is advancing recommendations for the international governance of AI.
Before establishing Microsoft’s Office of Responsible AI, Natasha served as lead counsel to the Aether Committee, Microsoft’s advisory committee on responsible AI. Natasha also spent seven years in Microsoft’s Australian and New Zealand subsidiaries helping highly regulated customers move to the cloud.
Prior to Microsoft, Natasha worked in law firms in Australia and New Zealand, specializing in copyright, privacy, and internet safety and security issues. Natasha graduated from the University of Auckland in New Zealand with a Bachelor of Laws (Honours) and a Bachelor of Commerce majoring in Information Systems.

Co-organizers

  • Minoru Asada, Osaka U, RSJ
  • Kenji Doya, OIST, INNS, JNNS, APNNS
  • Arisa Ema, U. Tokyo, RIKEN AIP
  • Joichi Ito, Chiba Institute of Technology
  • Ryota Kanai, ARAYA/Moonshot
  • Satoshi Kurihara, Keio U., JSAI
  • Masashi Sugiyama, U. Tokyo, RIKEN AIP
  • Koichi Takahashi, RIKEN BDR, ALIGN
