Neural Computation Workshop 2024 (FY2023)

Date

March 16, 2024 (Saturday) (All day)

Location

OIST Seaside House

Description

Aim:

The aim is for current and former members of the Doya Unit to exchange recent progress and new ideas.

 

Timeline:
Oral presentation registration deadline: March 1, 2024
Abstract submission deadline: March 1, 2024
Registration deadline: March 8, 2024

 

Registration:
Please register from the link below:

https://groups.oist.jp/ja/ncu/neural-computation-workshop-2023  

 

 

Each session allots 25 minutes (including 5 minutes of Q&A) to external speakers and 15 minutes (including 5 minutes of Q&A) to internal speakers.

 

Program
8:30-9:00   Reception

9:00-9:05   Opening
 

Session 1
Chair: Yukako Yamane (Okinawa Institute of Science and Technology)
Akihiro Funamizu (University of Tokyo)
          “Recurrent neural network for modeling mice decision making”
Kayoko Miyazaki (Okinawa Institute of Science and Technology)
          “Dorsal raphe serotonin neurons encode probability not value of future rewards”
Alan Fermin (Hiroshima University)
          “Insula networks underlying preconscious interoception and depressive mood”
Hiroaki Hamada (Araya Inc.)
          “Reverse engineering psychological constructs by large language models”
10:40   Break
 
Session 2
Chair: Ryota Miyata (University of the Ryukyus)
11:00  Junichiro Yoshimoto (Fujita Health University)
           “Data-science studies to promote brain science”
Sergey Zobnin (Okinawa Institute of Science and Technology)
           “Computational roles of cortical layers during probabilistic lever-pulling task”
11:45  Jun Igarashi (RIKEN)
           “Oscillatory neural activity in gamma frequency range in a connectome-based spiking neural network model of the mouse cortico-cerebellar circuit”
Carlos Enrique Gutierrez (SoftBank Corp.)
           “Leveraging LLMs and RNNs for Brain Modeling”
 
12:35-14:00   Lunch & Poster
 
Poster Presentation List
#1  Yuma Kajihara
      "The paraventricular thalamus maintains the sticky-choice bias in reinforcement-learning of mice"
#2  Tojoarisoa Rakotoaritina
      "Embodied Evolution of Intrinsically Motivated Reinforcement Learning"
#3  Ryota Miyata (University of the Ryukyus)
      "Quantifying the bits of information about the rat’s behavioral choices accessible from neural activity in the basal ganglia"
#4  Florian Lalande
      "A Transformer Model for Symbolic Regression towards Scientific Discovery"
#5  Terezie Sedlinska
      "Behavioral and neural adaptations unfolding in antidepressant treatment"
#6  Naohiro Yamauchi
      "Action value representation in the mouse primary motor cortex"
#7  Razvan Gamanut
      "Switch to Introspection: The Role of the Serotonin-Claustrum Interaction in Coordinating the Default Mode Network"
#8  Yukako Yamane
      "Network change across forelimb reaching movement learning revealed by wide-field calcium imaging of marmoset"
#9  Yi-Shan Cheng
      "Information-Theoretical Analysis of Team Dynamics in Football Matches"
#10  Sutashu Tomonaga
      "A Novel Approach to Capturing Multi-Time Scale Dynamics in Wearable Device Data using Latent Variable Modeling"
#11  Yuji Kanagawa
      "Distributed Reward Evolution"
 
Session 3
Chair: Katsuhiko Miyazaki (Okinawa Institute of Science and Technology)
14:00   Eiji Uchibe (ATR Computational Neuroscience Labs.)
            “Multitask offline generative adversarial imitation learning with positive-negative-unlabelled learning”
14:25   Miles Desforges (Okinawa Institute of Science and Technology)
            “Characterisation of Serotonin, Noradrenaline and Dopamine Release in the Motor Cortex”
14:45   Paavo Parmas (Kyoto University)
            “Model-based reinforcement learning with scalable composite policy gradient estimators”
Naoto Yoshida (University of Tokyo)
            “Exploring Embodied Intelligence”
15:35   Break 
 
Session 4
Chair: Ekaterina Sangati (Okinawa Institute of Science and Technology)
16:00  Makoto Ito (TORCH, Inc.)
           “Robot control by Large Language Models”
16:25  Yuzhe Li (NAIST)
           “Dual automatic relevance determination for linear latent variable models and its application to calcium imaging data analysis”
16:50  Tomoki Tokuda (Earthquake Research Institute, University of Tokyo)
           “Machine learning in seismology”
17:15  Florian Lalande (Okinawa Institute of Science and Technology)
           “Addressing the Problems of Numerical Data Imputation for Multimodal Datasets”
17:35   Discussion
18:00   Closing & Photo Time
18:30   Dinner
 

 

Abstracts:

 

Akihiro Funamizu
University of Tokyo

Recurrent Neural Network for modeling mice decision making

Our lab is interested in whether an artificial neural network, especially a recurrent neural network (RNN), can mimic the action selection of mice during a tone-frequency discrimination task. In the task, mice were head-fixed and placed on a spherical treadmill. In each trial, we presented short pulses of pure tones for 0.6 s, termed a tone cloud. Depending on the dominant tone frequency of the tone cloud (low- or high-tone category), mice chose a left or right spout to receive a sucrose water reward. The correct tone category (left or right) switched between trials with a transition probability p. Each mouse was assigned to either a repeating condition (p = 0.2), in which the tone category tended to repeat across trials, or an alternating condition (p = 0.9), in which it tended to alternate every trial.
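The trial statistics described above can be sketched in a few lines of code. This is a minimal illustration of the task's category dynamics (function and variable names are mine, not from the study):

```python
import random

def simulate_categories(p, n_trials, seed=0):
    """Simulate the correct tone category per trial (0 = low, 1 = high),
    switching between consecutive trials with transition probability p."""
    rng = random.Random(seed)
    cats = [rng.randint(0, 1)]
    for _ in range(n_trials - 1):
        prev = cats[-1]
        cats.append(1 - prev if rng.random() < p else prev)
    return cats

def switch_rate(cats):
    """Fraction of trials on which the category switched."""
    return sum(a != b for a, b in zip(cats, cats[1:])) / (len(cats) - 1)

# Repeating condition (p = 0.2): the category tends to repeat across trials.
rep = simulate_categories(0.2, 10000)
# Alternating condition (p = 0.9): the category tends to switch every trial.
alt = simulate_categories(0.9, 10000)
```

An ideal observer in the repeating condition should bias its choice toward the previous trial's category, and toward the opposite category in the alternating condition; this is the choice bias the RNN is asked to reproduce.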

During the task, we electrophysiologically recorded neural activity in the orbitofrontal cortex, primary motor cortex, dorsal striatum, posterior parietal cortex, hippocampus, and auditory cortex with a Neuropixels 1.0 probe. Please see our preprint for detailed results on the neural activity.

In the first topic, we modeled the discrete left or right choices of mice during the tone-frequency discrimination task with an RNN. The RNN succeeded in reproducing the appropriate choice biases and learning speeds in the repeating and alternating conditions, and its artificial units showed activity patterns similar to those of real neurons in mice. Because a single RNN architecture was sufficient to mimic the choice behavior of mice, our result suggests a unified learning rule in the brain.

In the second topic, we modeled the physical body movements of mice during the task with reservoir computing on a continuous time scale. We proposed a network model that incorporates the activity of real neurons recorded from mice, and found that it predicted mouse body movements better than a conventional model built from artificial units alone.

Achievements of fiscal year 2023:
Full paper:
1) Funamizu A, Marbach F, Zador AM. Stable sound decoding despite modulated sound representation in the auditory cortex. Current Biology, 33, 1-14 (2023)

Preprint:
2) Wang S, Huayi G, Ishizu K, Funamizu A. Global neural encoding of model-free and inference-based strategies in mice. bioRxiv, (2024) doi: https://doi.org/10.1101/2024.02.08.579559

3) Ishizu K, Nishimoto S, Funamizu A. Localized and global computation for integrating prior value and sensory evidence in the mouse cerebral cortex. bioRxiv, (2023) doi: https://doi.org/10.1101/2023.06.06.543645


Kayoko Miyazaki
Okinawa Institute of Science and Technology

Dorsal raphe serotonin neurons encode probability not value of future rewards

We have revealed a causal relationship between the activation of dorsal raphe (DR) serotonergic neurons and patience while waiting for future rewards. To explain these behavioral data, we proposed a Bayesian decision model of waiting in which serotonin signals the prior probability of reward delivery. However, there has been no direct evidence that serotonergic neural activity is modulated by reward probability.

Five male serotonin neuron-specific GCaMP6-expressing mice were trained to perform a sequential tone-food waiting task that required them to wait for a delayed tone (tone delay: 0.3 s, tone duration: 0.5 s) at a tone site and then for delayed food (reward delay: 3 s) at a reward site. An optical fiber (400 µm diameter) was implanted into the DRN, and serotonergic neural activity was recorded by fiber photometry. We prepared four tones (8 kHz, 2.1 kHz, white noise, and 4.1 kHz) and associated them with four reward probabilities (100, 75, 50, and 25%), respectively. We focused on serotonergic neural activity while mice waited for the delayed reward. dF/F was highest in the 100% test (23.4 ± 2.2%) and gradually decreased to 19.1 ± 1.9%, 15.9 ± 1.7%, and 13.5 ± 1.8% in the 75%, 50%, and 25% tests, respectively. In the 25% test, dF/F did not change even when the reward amount was changed (14.2 ± 1.8% and 13.1 ± 1.5% in the one-pellet and three-pellet tests, respectively). These results show that dorsal raphe serotonin neurons encode the probability, not the value, of future rewards, and suggest that the serotonin response supports flexible behavior.

Alan Fermin
Hiroshima University

Insula networks underlying preconscious interoception and depressive mood
 

Interoception, the neural sensing of visceral and physiological signals, informs the brain about the body’s internal states. While interoceptive signals can reach awareness if attended to, the majority of them, across time scales and saliency levels, elicit neural responses that remain preconscious. For instance, humans rarely perceive their own heartbeats.

Here, I will present results of experiments demonstrating that the insula, a cortical center for the processing of interoceptive information, is involved in interoceptive awareness (IA) and the representation of preconscious interoceptive states.

The first study demonstrated the association of insula structural covariance networks with performance in a heartbeat counting task.

The second study found a mediation role of stress resilience in the association between IA and depressive mood. Next, we identified shared and distinct insula functional networks underlying resilience and IA. Insula-resilience and insula-IA networks also predicted depressive mood.

The third study, which combined electrocardiography, electroencephalography, and fMRI, identified insula networks underlying spatial- and time-dependent heartbeat-evoked potentials (HEPs) associated with IA and depressive mood. HEPs also mediated the relationships of insula functional networks with IA and depressive mood.


Hiroaki Hamada
Araya Inc.

Reverse engineering psychological constructs by large language models

Language models have started to accelerate research and development. In psychology, multiple recent attempts include assessing whether language models can replicate responses from specific populations and investigating the inherent psychological biases within the models. These studies assume that language models effectively capture and replicate relationships between psychological concepts; however, it is unclear whether language models actually embed such relationships. Here, we evaluate the categorical classification accuracy of concepts based on items from 43 psychological questionnaires with multiple language models, including GPT-4. The results showed that the latest GPT-4 achieved the best average classification accuracy, although an embedding model also showed competitive performance. Our findings suggest that language models can embed the relationships between psychological concepts.



Junichiro Yoshimoto
Fujita Health University

Data-science studies to promote brain science

Traditionally, brain science has been developed through many hypothesis-driven biological experiments. However, recent advances in high-throughput technology for recording biological data are changing the situation: data-driven studies, which aim to generate plausible hypotheses by estimating regularities hidden behind massive data sets, are expected to create breakthroughs in brain science. To perform data-driven studies efficiently, two technical elements are critical: data management and statistical inference. This talk presents our studies on technical developments related to these elements.

In the first half, we introduce two database systems. One is the KANPHOS (Kinase-Associated Neural Phosphosignaling) database, which provides several functions to search for proteins phosphorylated by specific kinases/neurotransmitters, making it helpful for hypothesizing unknown but plausible signaling pathways in neurons. The other is a database system for pathway-wide association studies, in which the data are represented as the statistical significance of associations between pathways and psychiatric disorders. We expect to uncover molecular mechanisms causing psychiatric disorders by integrating the two databases.

In the second half, we present our study on data-driven categorization of postoperative delirium symptoms using unsupervised machine learning. Phenotyping analysis that includes time courses helps in understanding the mechanisms and clinical management of postoperative delirium; however, postoperative delirium has not been fully phenotyped. To overcome this limitation, we explore possible phenotypes of postoperative delirium following invasive cancer surgery based on clinical symptom assessments over five consecutive days and an unsupervised machine learning method. The results showed that patients undergoing invasive cancer resection could be delineated into three delirium clusters, two subsyndromal delirium clusters, and an insomnia cluster.

Sergey Zobnin
Okinawa Institute of Science and Technology

Computational roles of cortical layers during probabilistic lever-pulling task

Animals need to sense their environment and differentiate sensory states to optimize behavior. For example, a force too weak may fail to pull a desired object closer, while too much effort is unnecessarily exhausting. Prior experience interacting with similar objects can help generate the required action and produce an expectation of the sensory feedback. It is hypothesized that the mammalian cortex combines this expectation signal with the actual sensory input in a probabilistic estimation of the environment. To investigate this mechanism, I trained mice on a probabilistic lever-pulling task while imaging neural activity across multiple cortical layers in S1. I identified a functional asymmetry between superficial and deep cortical layers. The deep layers were more strongly associated with prior information about the expected tactile stimulus, in terms of both the number of task-coding neurons and the amount of information per given population size. The superficial layers, in comparison, were more strongly associated with sensory information about the actual tactile stimulus, but also encoded prior information. This research contributes to the study of the neural mechanisms underlying probabilistic estimation in the cerebral cortex.


Jun Igarashi 
RIKEN

Oscillatory neural activity in gamma frequency range in a connectome-based spiking neural network model of the mouse cortico-cerebellar circuit

Gamma oscillation occurs in the cerebral cortex during specific states and is thought to support information processing. However, it remains unknown how it interacts spatiotemporally among layers within the cerebral cortex and among regions. To examine this, we developed a spiking neural network model of the mouse brain consisting of the primary and secondary motor cortices (M1 and M2), thalamus, pons, and cerebellum, with inter-regional connections based on the connectome provided by the Allen Institute. Upon stimulation of layer 2/3 of M1, the excitatory and inhibitory cells of layers 2/3 to 6 showed spike phase locking. The spike phases were similar between excitatory cells and fast-spiking interneurons in the same layer, while they differed across layers. The gamma oscillation in M1 propagated to M2, the pons, and the cerebellum. The degree of spike phase locking depended on neuron type. These results suggest that information transfer by gamma oscillation may take place in a neuron-type-dependent manner.

Finally, we briefly introduce preliminary results of simulations of the mouse granular and agranular cortices and the mammalian cerebral cortex using the current simulation framework.

 

Carlos Enrique Gutierrez
SoftBank Corp.

Leveraging LLMs and RNNs for Brain Modeling

Large language models (LLMs) contain a vast amount of factual information in their pre-trained weights, yet the full potential of this knowledge remains largely untapped. In our exploration of computational neuroscience, particularly in brain modeling, we aim to harness this knowledge. One approach involves incorporating new information from external datasets or refining LLMs. In this work we investigate current refining methods of LLMs to better assist in model specification tasks. Furthermore, we examine generative models of neural dynamics of the brain, specifically recurrent neural networks (RNNs), discussing their biological plausibility and suitability for brain modeling.



Eiji Uchibe
ATR Computational Neuroscience Labs.

Multitask offline generative adversarial imitation learning with positive-negative-unlabelled learning

We previously proposed Model-Based Entropy-Regularized Imitation Learning (MB-ERIL), an online model-based generative adversarial imitation learning method that introduces entropy regularization of a policy and a state transition model. Its most notable feature is that the learner's policy and state transition model are obtained by training two structured discriminators on expert data, learner data, and generated data. However, collecting learner data requires costly and unsafe interactions with the actual environment, because the learner's policy is not optimal during learning; collecting large amounts of expert data is also expensive. In this talk, we propose Offline-MB-ERIL, which learns the policy and the transition model from expert data and additional unlabeled data collected by running a safe policy in the actual environment. The basic idea for handling the unlabeled data is to apply positive-negative-unlabeled (PNU) classification when training the discriminators. We first show how Offline-MB-ERIL works in a single-task setting, where the additional data are collected manually. We then evaluate Offline-MB-ERIL in a multitask setting, where the additional data also serve as expert data. Our experimental results show that Offline-MB-ERIL outperforms several baseline methods on benchmark tasks.

Miles Desforges
Okinawa Institute of Science and Technology

Characterisation of Serotonin, Noradrenaline and Dopamine Release in the Motor Cortex

The precise spatiotemporal dynamics of neuromodulator release in the cortex remain elusive. These release patterns determine receptor activation, as receptors with different affinities are activated at different concentrations, and thereby at different distances, from the release site. While the neuromodulator nuclei have been extensively characterised, technological challenges have hindered detailed cortical investigation. This thesis clarifies the release patterns of dopamine, noradrenaline, and serotonin in the secondary motor cortex (M2) using a novel orthogonalised Go/NoGo task developed for head-fixed mice. We dissect neuromodulator activity at the intersection of locomotion and unconditioned stimuli (US). Employing two-photon microscopy and novel genetically encoded sensors, we achieved high-resolution imaging unattainable with previous techniques. Our findings indicate that serotonin, noradrenaline, and dopamine are robustly correlated with locomotion transitions, exhibiting significant increases from rest to locomotion and decreases during the reverse. Cross-correlation analysis suggests neuromodulator activity precedes locomotion onset, potentially facilitating motor behaviours. Unexpectedly, both appetitive (sucrose) and aversive (air-puff) stimuli elicited fluorescence decreases in all neuromodulators. Serotonin and noradrenaline showed stronger, more consistent responses during movement than rest, while dopamine responses were more consistent across locomotion states. Detailed analysis of activity maxima and minima within individual trials revealed significant variability and complexity. Contrary to expectations, no neuromodulator responses to US-predicting cues were observed, despite anticipatory behavioural changes. This highlights the need for further investigation into sensory cue processing in M2.
Our findings suggest a complex multiplexing of information in M2, whereby neuromodulator activity is more influenced by locomotive state than appetitive or aversive stimuli delivery.


Paavo Parmas
Kyoto University

Model-based reinforcement learning with scalable composite policy gradient estimators

In model-based reinforcement learning (MBRL), policy gradients can be estimated either by derivative-free RL methods, such as likelihood ratio gradients (LR), or by backpropagating through a differentiable model via reparameterization gradients (RP). Instead of using one or the other, the Total Propagation (TP) algorithm in prior work showed that a combination of LR and RP estimators averaged using inverse variance weighting (IVW) can achieve orders-of-magnitude improvement over either method. However, IVW-based composite estimators have not yet been applied in modern RL tasks, as it was unclear whether they can be implemented scalably. We propose a scalable method, Total Propagation X (TPX), that improves over TP by changing the node used for IVW and employing coordinate-wise weighting. We demonstrate the scalability of TPX by applying it to the state-of-the-art visual MBRL algorithm Dreamer. The experiments showed that Dreamer fails with long simulation horizons, while TPX works reliably at only a fraction of additional computation. One key advantage of TPX is its ease of implementation, which will enable experimenting with IVW on many tasks beyond MBRL.
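The inverse variance weighting at the core of TP/TPX can be illustrated with a toy example. This is a sketch of the IVW combination step only (names and the toy gradient distributions are mine), not the TPX implementation:

```python
import numpy as np

def ivw_combine(lr_samples, rp_samples, eps=1e-12):
    """Combine per-sample LR and RP gradient estimates by inverse-variance
    weighting, coordinate-wise: each coordinate of the result leans toward
    whichever estimator has lower variance there."""
    g_lr, g_rp = lr_samples.mean(axis=0), rp_samples.mean(axis=0)
    # Variance of each estimator's mean, per coordinate (+ eps for stability).
    v_lr = lr_samples.var(axis=0, ddof=1) / len(lr_samples) + eps
    v_rp = rp_samples.var(axis=0, ddof=1) / len(rp_samples) + eps
    w_lr = (1.0 / v_lr) / (1.0 / v_lr + 1.0 / v_rp)
    return w_lr * g_lr + (1.0 - w_lr) * g_rp

rng = np.random.default_rng(0)
true_grad = np.array([1.0, -2.0])
# Toy setting: both estimators are unbiased, but LR is much noisier than RP.
lr = true_grad + rng.normal(0.0, 5.0, size=(256, 2))
rp = true_grad + rng.normal(0.0, 0.5, size=(256, 2))
combined = ivw_combine(lr, rp)
```

In this toy case the weights collapse almost entirely onto the low-variance RP samples; the interesting regime for TP/TPX is when the lower-variance estimator differs per coordinate and per state, which is what the coordinate-wise weighting handles.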


Naoto Yoshida
University of Tokyo

Exploring Embodied Intelligence

In my talk, I will first present a home robot I was developing at a start-up company, including research I did with five professional animators on animation and the movements required for a robot to become friends with humans.

In the second part of the talk, I will discuss my doctoral research on the emergence of behavior in embodied robots. In this case, the agent has an internal bodily state and is optimized by deep reinforcement learning for the sole purpose of stability: homeostasis. I will show experimentally that such an agent can generate integrated autonomous behaviors driven only by internal motivation defined inside the agent's body.


Makoto Ito
TORCH, Inc.

Robot control by Large Language Models

I will talk about three topics. In the first, I will introduce our company, TORCH, Inc., and our lab automation business. At many companies' experimental and development sites there are various repetitive tasks on which researchers spend a great deal of time. We propose and build automation systems using collaborative robots to make these tasks more efficient.

In the second topic, I will introduce an attempt to control a robot through language using a large language model service such as ChatGPT, which has recently become a hot topic. I will show that while conventional robots are driven by strict programs, ChatGPT makes it possible to drive robots easily using casual phrases.

In the third topic, I will introduce an attempt to understand the mechanism of GPT, on which ChatGPT is based. First, I will present a small GPT with only 1,000 parameters. Then, I will visualize its internal state and show what information is encoded during processing.

Yuzhe Li
NAIST

Dual automatic relevance determination for linear latent variable models and its application to calcium imaging data analysis

In analyzing high-dimensional data, dimension reduction by a linear latent variable model helps extract meaningful information with modest demands on data and computation. Automatic relevance determination (ARD) allows extracting sparse representations by determining the dimensionality automatically. However, when the data are noisy, non-Gaussian, or nonlinearly observed, as in calcium imaging, conventional ARD methods fail to extract sparse representations. To encourage sparsity of the latent variables, we propose a dual ARD formulation that applies ARD priors to both the loading weights and the latent variables. We first present our dual ARD formulation for a linear latent variable model and mathematically analyze how the extra degree of freedom can promote finding sparse representations. We then evaluate the performance of the dual ARD methods against existing latent variable models using both simulated datasets and real calcium imaging data. While conventional ARD models could retrieve sparse signals in linear Gaussian settings, the dual ARD methods outperformed previous models in extracting the original signals from simulated calcium imaging data subject to nonlinear observation. Applying the dual ARD method to actual two-photon calcium imaging data, we identified low-dimensional latent variables that were sufficient for successfully performing a sound localization decoding task.
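As a reading aid, the standard ARD construction extended in the dual direction described above, in my own notation (the talk's exact priors may differ):

```latex
% Linear latent variable model: x_n = W z_n + \varepsilon_n.
% Conventional ARD places a per-dimension prior on the loading columns only:
p(W \mid \alpha) = \prod_{d=1}^{D} \mathcal{N}\!\left(\mathbf{w}_d \mid \mathbf{0},\, \alpha_d^{-1} I\right)
% The dual formulation adds an ARD prior on the latent variables as well:
p(z_n \mid \beta) = \prod_{d=1}^{D} \mathcal{N}\!\left(z_{n,d} \mid 0,\, \beta_d^{-1}\right)
% Maximizing the evidence over (\alpha, \beta) drives the precisions of
% irrelevant dimensions to infinity, pruning both the loading column and
% the corresponding latent coordinate.
```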
 

Tomoki Tokuda
Earthquake Research Institute, University of Tokyo

Machine learning in seismology

Recently, machine learning methods have gained much attention in seismology. In particular, for earthquake detection, deep learning methods are considered to have great potential to bring new insights to seismic research and practice. Despite their usefulness, however, developing new deep learning models with better detection performance is not straightforward, due to the ‘black box’ problem in deep learning. The black box problem hinders us not only from understanding the deep learning algorithm, but also from determining the direction in which to develop it. In this talk, I discuss two attempts to overcome this difficulty by means of clustering methods. One is an effective seismic detection method for classifying waveforms into P-wave, S-wave, and noise. The other is a new transfer learning method for a pre-trained detection model without a sufficient amount of training data. In both cases, a specific type of clustering method, the multiple clustering method, is fully utilized to update the deep learning models.

Florian Lalande
Okinawa Institute of Science and Technology

Addressing the Problems of Numerical Data Imputation for Multimodal Datasets

Numerical data imputation algorithms replace missing values with estimates to leverage incomplete data sets. Current imputation methods seek to minimize the error between the unobserved ground truth and the imputed values, but this strategy can create artifacts that lead to poor imputation in the presence of multimodal or complex distributions. To tackle this problem, I introduce the kNNxKDE algorithm: a data imputation method combining nearest-neighbor estimation (kNN) and density estimation with Gaussian kernels (KDE). I compare my method with previous data imputation algorithms using artificial and real-world data under different missing-data scenarios and various missing rates, and show that kNNxKDE copes with complex original data structures, yields lower imputation errors, and provides probabilistic estimates with higher likelihood than current methods. I release the code as open source for the community.
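The kNN-plus-KDE idea can be sketched as follows. This is my simplified reading of the abstract (the released code is authoritative; the function name, neighbor count, and bandwidth here are illustrative):

```python
import numpy as np

def knn_kde_impute(X, k=5, bandwidth=0.3, seed=0):
    """Simplified sketch of kNN + KDE imputation: for each missing entry,
    find the k nearest rows on the jointly observed columns, then draw the
    imputed value from a Gaussian KDE over the neighbors' values."""
    rng = np.random.default_rng(seed)
    out = X.copy()
    for i, j in zip(*np.where(np.isnan(X))):
        obs = ~np.isnan(X[i])                      # columns observed in row i
        # Donor rows must have column j and all of row i's observed columns.
        donors = ~np.isnan(X[:, j]) & ~np.isnan(X[:, obs]).any(axis=1)
        dist = np.linalg.norm(X[donors][:, obs] - X[i, obs], axis=1)
        neighbors = X[donors, j][np.argsort(dist)[:k]]
        # Sampling from the KDE = pick a neighbor, add Gaussian noise.
        out[i, j] = rng.choice(neighbors) + rng.normal(0.0, bandwidth)
    return out

rng = np.random.default_rng(1)
# Bimodal toy data: two well-separated clusters in 2-D.
base = np.vstack([rng.normal(0.0, 0.2, (50, 2)), rng.normal(3.0, 0.2, (50, 2))])
data = base.copy()
data[0, 1] = np.nan            # hide one value from the first cluster
filled = knn_kde_impute(data)
```

Because the imputed value is drawn from neighbors of the same mode, it stays inside the correct cluster, whereas a method that minimizes expected error against a bimodal conditional distribution could place the estimate between the two modes.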

History:

Neural Computation Workshop 2018

Neural Computation Workshop 2019

Neural Computation Workshop 2021

Neural Computation Workshop 2022
