RMT2012



Overview

  1. Neural networks and memory, spin glasses, communication networks.
  2. Gene analysis, co-expression of DNA, proteins, bioinformatics.
  3. Protein folding, topology, secondary structure of RNA, and other biological applications.
  4. Condensed matter applications: BEC, quantum dots, quantum chaos, topological insulators, universality classes.
  5. RMT and field theory, conformal field theory, AdS/CFT.
  6. Mathematical aspects of RMT.

Program

Date / Time    
April 14 (Sat)    
  Registration 1F Lobby
18:00-20:00 Dinner 3F Chura Hall
April 15 (Sun)    
07:00-09:00 Breakfast 3F Chura Hall
09:00-12:00 Lecture (E. Brezin) 1F Seminar Room
12:00-14:00 Lunch 3F Chura Hall
14:00-17:00 Lecture (R. Monasson) 1F Seminar Room
18:00-20:00 Dinner 3F Chura Hall
April 16 (Mon)    
07:00-09:00 Breakfast 3F Chura Hall
09:00-12:00 Lecture (A. Morozov) 1F Seminar Room
12:00-14:00 Lunch 3F Chura Hall
14:00-17:00 Lecture (S. Ganguli) 1F Seminar Room
18:00-20:00 Dinner 3F Chura Hall
April 17 (Tue)    
07:00-09:00 Breakfast 3F Chura Hall
09:00-12:00 Lecture (P. Wiegmann) 1F Seminar Room
12:00-14:00 Lunch 3F Chura Hall
14:00-16:30 Poster Session 1F Lobby
17:00-18:00 Campus Tour OIST Campus
18:00-20:00 Dinner 3F Chura Hall
April 18 (Wed)    
07:00-09:00 Breakfast 3F Chura Hall
09:00-12:00 Lecture (J. Zinn-Justin) 1F Seminar Room
12:00-14:00 Lunch 3F Chura Hall
14:00-17:00 Lecture (H. Sompolinsky) 1F Seminar Room
18:00-20:00 Banquet 3F Chura Hall
April 19 (Thu)    
07:00-09:00 Breakfast 3F Chura Hall
09:00-12:00 Lecture (S. Cocco) 1F Seminar Room
12:00-14:00 Lunch 3F Chura Hall
14:00-17:00 Lecture (M. Vergassola) 1F Seminar Room
18:00-20:00 Dinner 3F Chura Hall
April 20 (Fri)    
07:00-09:00 Breakfast 3F Chura Hall
09:00-12:00 Lecture (J. Miller) 1F Seminar Room
12:00-14:00 Lunch 3F Chura Hall
14:00-18:00 Excursion Okinawa Churaumi Aquarium
18:00-20:00 Dinner 3F Chura Hall
April 21 (Sat)    
07:00-09:00 Breakfast 3F Chura Hall
  Departure  

Lecturers


Edouard Brézin
Professor (emeritus) of theoretical physics,
Ecole Normale Supérieure, Paris

Edouard Brézin has done work in quantum field theory, mainly for applications in statistical physics, in particular for critical phenomena. He has applied field theory techniques to condensed matter problems such as the theory of critical wetting, localization by disorder, or the study of the phase transition from a normal metal to a type II superconductor under a magnetic field. He has been interested in field theories with a large number of colors. This has led to a representation of two-dimensional quantum gravity, random fluctuating surfaces, i.e. bosonic closed string theories, in terms of random matrices. The scaling limit of such models is related to integrable hierarchies such as KdV flows. He has worked on the universality of the correlations of eigenvalues in the local limit for random matrices and on the application of random matrices to topological properties of curves (work with S. Hikami). He is a former president of the French Academy of Sciences and a foreign member of the National Academy of Sciences (USA) and of the Royal Society (UK). He shared the Dirac medal in 2011 with J. Cardy and A. Zamolodchikov.

Recommended books, papers for the students and participants :

  • Random Matrices (3rd edition), Pure and Applied Mathematics Series 142, Elsevier (London, 2004), 688 pp. ISBN 0-12-088409-7.

* Lecture title : A brief introduction to random matrices

* Abstract : A few situations in which random matrix theory has been used in the past will be reviewed. The role of universality of the local spacing distributions will be discussed. The example of random matrices in an external matrix source will be considered in more detail.
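As a small numerical illustration of the universality of local spacing distributions (a sketch for interested participants, not taken from the lecture), the following Python snippet samples GOE matrices and compares the normalized nearest-neighbour spacings of their bulk eigenvalues with the Wigner surmise p(s) = (pi s / 2) exp(-pi s^2 / 4).

```python
import numpy as np

rng = np.random.default_rng(0)
N, samples = 400, 50
spacings = []

for _ in range(samples):
    A = rng.standard_normal((N, N))
    H = (A + A.T) / np.sqrt(2)          # GOE matrix
    ev = np.sort(np.linalg.eigvalsh(H))
    bulk = ev[N // 4 : 3 * N // 4]      # keep bulk eigenvalues only
    s = np.diff(bulk)
    spacings.append(s / s.mean())       # crude unfolding: normalize by the mean spacing
spacings = np.concatenate(spacings)

# Compare empirical spacing statistics with the Wigner surmise
#   p(s) = (pi s / 2) exp(-pi s^2 / 4),  mean 1, variance 4/pi - 1.
print("empirical      <s> = %.3f, var = %.3f" % (spacings.mean(), spacings.var()))
print("Wigner surmise <s> = 1.000, var = %.3f" % (4 / np.pi - 1))
```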


Simona Cocco
Researcher in the CNRS, Ecole Normale Superieure, Paris

Simona Cocco did her undergraduate studies in physics at the University of Rome (Italy) and did a PhD in physics and biophysics in Rome and Lyon (France). She has held a position as a researcher at the CNRS in France since 2001.

She has mainly worked on the application of statistical mechanics to biophysics. Her early works were focused on the modelling of the elastic response of a single molecule of nucleic acids (DNA, RNA), and of the unzipping experiments in which the two complementary strands of the molecule are taken apart.

More recently she has worked on inverse problems in biophysics, including: the inference of the DNA sequence from unzipping experiments, the inference of the connections between neurons from the recording of the activity of a neural population by a multi-electrode array.
 

* Lecture title : Inference of interactions from correlations: algorithms and applications

* Abstract : Large populations of components, such as neurons or genes, show correlations in their activities. A major question is to understand the underlying mechanisms responsible for those correlations. Correlations between two components can be due either to a direct interaction or to an indirect effect mediated through other components. Disentangling direct interactions from indirect correlations brings several advantages. First, the network of interactions is generally much sparser than the one of correlations, hence a more efficient, compressed representation of the data is obtained. Secondly, interactions can be used to predict the global effect of a local perturbation on the system, such as the removal of one component or its stimulation.
We will describe how interactions can be found from correlations in the context of maximum entropy models, such as the Ising model [1]. Those models are the least constrained ones capable of reproducing the one- and two-point correlations, e.g. the average firing rates of the neurons in a multi-electrode recording and the probability that any two neurons fire simultaneously (within a, say, 20 msec time window). Finding the interactions of the Ising model given its correlations is a hard computational problem which raises several interesting, fundamental and practical questions. Among them are the consequences of bad or partial sampling. Generally only a small subset of a biological system is accessible for measurements, and over a limited
amount of time. A natural question is to ask how much the Ising model inferred from this partial and inaccurate measure of the correlations can tell us about the true, underlying interactions. We will address those questions using a recently introduced inference procedure, identifying the clusters of strongly interacting components [2], to analyze artificial data issued from Ising models with different interaction networks and neurobiological data, corresponding to multi-electrode recordings of the neural activity in the vertebrate retina or in the cortex.
The retina recording experiments consist of extracting the retina from the eye of a salamander and putting it on a multi-electrode array. Tens of neural cells called ganglion cells, situated on the last cellular layer of the retina, can be recorded simultaneously. We will show that the Ising model is capable of correctly modeling the data and of reproducing the probability of a configuration of neural activity in a short time window, typically of the order of 20 ms, and the higher moments of the distribution, i.e. 3-cell and higher-order correlations [3, 4]. This success is remarkable as the model does not include three-cell couplings.
We will also describe experiments by G. Buzsaki [5] and F. Battaglia [6] on in vivo recordings of the activity of a behaving rat. The experiments consist of implanting electrodes in the head of a rodent, which remains free to move.
The recorded areas are situated in the cortex and in the hippocampus. The activity of tens of neurons (30-150) can be recorded over long time periods during which the rodent learns a task, for example to go to the left of a maze when it smells a banana cue or to the right when it smells a chocolate cue. The questions raised by these experiments, that is, how the learning of a task is memorized and to what extent synaptic strengths are at the basis of the memory (following Hebb's hypothesis, dating back to 1949), are of fundamental importance in neuroscience.
References
[1] E.T. Jaynes, Phys. Rev. 106, 620-630 (1957).
[2] S. Cocco, R. Monasson, Phys. Rev. Lett. 106, 090601 (2011).
[3] E. Schneidman, M.J. Berry, R. Segev, W. Bialek, Nature 440, 1007-1012 (2006).
[4] S. Cocco, S. Leibler, R. Monasson, Proc. Nat. Acad. Sci. (USA) 106, 14058 (2009).
[5] S. Fujisawa, A. Amarasingham, M.T. Harrison, and G. Buzsaki, Nature Neuroscience 11, 823-834 (2008).
[6] A. Peyrache, M. Khamassi, K. Benchenane, S.I. Wiener, and F.P. Battaglia, Nature Neuroscience 12, 919 (2009).


Remi Monasson
Director of Research CNRS, Ecole Normale Superieure, Paris

R. Monasson got his PhD in theoretical physics at the Ecole Normale Superieure in 1993, and was a post-doc in Rome until 1995. After getting a position at CNRS, he spent two sabbaticals, at the University of Chicago in 2000/2001 and at the Institute for Advanced Study, Princeton, in 2009/2011.

His research is at the crossroads between the statistical physics of disordered systems and its interdisciplinary applications to computer science (study of phase transitions in combinatorial optimization problems with random inputs, learning in neural network models) and biophysics (modeling of single-molecule experiments, high-dimensional statistical inference).

* Lecture title : Statistical physics approaches to high-dimensional inference

* Abstract : Constant progress in experimental techniques has now made it possible to monitor the dynamics of complex systems, especially in biology. The temporal activity of a population of neurons, the expression patterns of a set of genes, and the evolution of species in an ecological system are just a few examples of dynamically evolving complex systems which can now be quantitatively recorded and made available for modeling purposes. Careful analysis of these systems is necessary to develop a quantitative systems-level understanding of biological networks and to explore how global properties arise from local interactions.
Interpreting experimental data, understanding how the components of a system interact with each other, inferring the values of those interactions from the data, and designing predictive models are formidable tasks. Difficulties hindering the modeling of complex data and the resolution of the attached inference problems include the poor quality of the data. Sampling may be incomplete, both from the temporal (limited acquisition frequency or recording duration) and spatial (limited access to a subpart of the system) points of view, and the data may be noisy, plagued by measurement uncertainties and/or intrinsic stochastic dynamics. There is also a trade-off between the power of models, i.e. their ability to fit the data, and the number of their defining parameters, which must be balanced to avoid overfitting. The computational cost of solving the models and inferring their parameters from the data may be great. This situation is drastically different from the usual situation encountered in the analysis of physical data, for example in condensed matter. Physical models are, generically, defined from a small number of parameters, such as the interaction strength between close spins in a ferromagnetic material or the interaction potential between two particles in a liquid. An essential property of physical systems is the indistinguishability of their components, e.g. any two electrons are identical, which makes the number of parameters independent of the size of the system. In biological systems, entities such as cells exhibit individual features. Describing a collection of different entities requires a large number of parameters, extensive in the system size.
The process of inferring a large number of parameters from poor data, called high-dimensional inference, is an important goal in quantitative analysis and defines a field at the crossroads of statistical inference, machine learning, and statistical physics. In this lecture, we will concentrate on maximum-entropy models, which are currently very popular in the analysis of biological (e.g. neural and protein) data. These models correspond to Ising (or Potts) models in statistical physics. Briefly speaking, we want to find the interactions between a set of components from the knowledge of their correlations. This problem is at least as difficult as the direct problem of calculating the statistics of observables for a given model (whose complexity is exemplified by the notorious spin-glass problem). The inverse problem is complicated by the absence of any a priori symmetry (the model is not defined on any a priori known lattice); the presence of noise due to poor sampling influences the estimation of observables, and thus of the model parameters.
Different approaches to solve the inverse Ising problem will be reviewed during the lecture, starting from elementary mean-field techniques to more sophisticated diagrammatic expansions and logistic regressions. An emphasis will be put on the inverse Hopfield model, useful to recast the popular principal component analysis in a controlled Bayesian framework. The relationship with random matrices and the so-called retarded learning transitions will be detailed. The approaches will be briefly illustrated on genomic data (analysis of covariations on protein families).
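As an elementary companion to the mean-field techniques mentioned above (a toy sketch, not the inference procedure of the lecture), the snippet below draws spin configurations from a small Ising model by Metropolis sampling and reconstructs the couplings with the naive mean-field formula J ≈ -(C^{-1}) restricted to off-diagonal entries; the true couplings, sampler, and system size are all arbitrary choices for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
N = 10
J_true = rng.normal(0.0, 0.3 / np.sqrt(N), (N, N))
J_true = np.triu(J_true, 1)
J_true = J_true + J_true.T                     # symmetric couplings, zero diagonal

def metropolis_samples(J, n_samples, thin=50):
    """Sample +/-1 spin configurations of an Ising model at unit temperature."""
    s = rng.choice([-1, 1], size=N)
    samples = []
    for t in range(n_samples * thin):
        i = rng.integers(N)
        dE = 2.0 * s[i] * (J[i] @ s)           # energy change of flipping spin i
        if dE < 0 or rng.random() < np.exp(-dE):
            s[i] = -s[i]
        if t % thin == 0:
            samples.append(s.copy())
    return np.array(samples)

S = metropolis_samples(J_true, n_samples=10000)
m = S.mean(axis=0)
C = S.T @ S / len(S) - np.outer(m, m)          # connected correlations

J_nmf = -np.linalg.inv(C)                      # naive mean-field inversion
np.fill_diagonal(J_nmf, 0.0)

err = np.abs(J_nmf - J_true)[np.triu_indices(N, 1)].mean()
print("mean absolute error of reconstructed couplings:", round(err, 3))
```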

 


Alexei Morozov
Main researcher at ITEP (Institute of Theoretical and Experimental Physics, Moscow)

Main research interests:
elementary particle theory, unification models; quantum field theory, string theory; mathematical physics.

Recommended books, papers for the students and participants :

  • Introduction to Non-Linear Algebra, V. Dolotin, A. Morozov, arXiv:hep-th/0609022
  • Unitary Integrals and Related Matrix Models, A. Morozov, arXiv:0906.3518
  • Towards a proof of AGT conjecture by methods of matrix models, A. Mironov, A. Morozov, Sh. Shakirov, arXiv:1011.5629, and references therein.

* Lecture title : Faces of matrix models

* Abstract : Partition functions of eigenvalue matrix models possess a number of very different descriptions:
as matrix integrals, as solutions to linear equations, as $\tau$-functions of integrable hierarchies, as results of the action of $W$-operators and of various recursions on elementary input data, and as gluings of certain elementary building blocks.
All this explains the central role of such matrix models in modern mathematical physics:
they provide the basic "special functions"
to express the answers and relations between them, and they serve as a dream model of what one should try to achieve in any other field.


H. Sompolinsky
Professor of Physics, The Hebrew University

Haim Sompolinsky is a Professor of Physics and William N. Skirball Professor of Neuroscience at the Hebrew University. He is a founding member of the Interdisciplinary Center for Neural Computation (ICNC) and of the newly established Edmond and Lily Safra Center for Brain Sciences (ELSC). Sompolinsky serves as the Director of the Swartz Program for Theoretical Neuroscience at Harvard University and is an Honorary Foreign Member of the American Academy of Arts and Sciences.

Sompolinsky's early work focused on the statistical mechanics and dynamics of spin glasses. Later on, he moved to theoretical and computational neuroscience, specializing in the application of concepts and methods from statistical physics, theory of random systems, and dynamical systems theory to the study of the brain. His research includes investigations of randomness, noise and chaos in neuronal circuits, and their role in information processing, memory and learning. He has applied Random Matrix Theory to the study of spin glasses and neuronal circuits.

* Lecture title : Neuronal Circuits with Random Connectivity

* Abstract : In most neuronal systems, neurons exhibit high temporal irregularity and trial-to-trial variability, motivating the investigation of the conditions under which neuronal circuits exhibit chaotic dynamics and of the properties of such a state. In my lectures, I will present the theory of chaos in nonlinear random neuronal circuits, developed over more than two decades. I will first describe the theory of the spectrum of the relevant random connection matrix, which governs the linear regime of the network dynamics.
I will then present the dynamic mean field theory, which describes the chaotic state of the nonlinear random network. Finally, I will discuss the nonlinear interaction between the intrinsic network dynamics and external stimuli.

* References:
1. Sommers H.J., Crisanti A., Sompolinsky H., and Stein Y. (1988) The Spectrum of Large Random Asymmetric Matrices. Physical Review Letters, 60: 1895-1899.
2. Rajan, K. and Abbott, L.F. (2006) Eigenvalue Spectra of Random Matrices for Neural Networks. Physical Review Letters, 97:188104.
3. Sompolinsky H, Crisanti A, and Sommers H.J (1988) Chaos in Random Neural Networks. Physical Review Letters, 61: 259-262.
4. Rajan, K, Abbott, L.F., and Sompolinsky, H. (2010) Stimulus-Dependent Suppression of Chaos in Recurrent Neural Networks, Physical Review E 82, 011903
5. Rajan K., Abbott L.F., and Sompolinsky H. (2010). Inferring stimulus selectivity from the spatial structure of neural network dynamics. Advances in Neural Information Processing, MIT Press, Cambridge MA. 23: 1975-1983
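As a quick numerical aside (an illustration only; the normalization below is chosen for convenience and is not taken from the lecture), the snippet checks the circular law for a connectivity matrix with independent Gaussian entries of variance g^2/N: its eigenvalues fill a disk of radius g, so the linear network dynamics become unstable once g exceeds 1.

```python
import numpy as np

rng = np.random.default_rng(2)
N, g = 1000, 1.5
J = rng.standard_normal((N, N)) * g / np.sqrt(N)   # random connectivity, entry variance g^2/N
ev = np.linalg.eigvals(J)

print("largest |eigenvalue|       :", round(float(np.abs(ev).max()), 3))
print("expected spectral radius g :", g)
print("fraction outside |z| < g   :", round(float(np.mean(np.abs(ev) > g)), 3))
```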

 

 

J. Zinn-Justin
CEA

Lecture title: Random matrix and random vector theory: the renormalization group approach

Abstract: The realization that some ensembles of random matrices, in the large size and the so-called double scaling limit, could be used as toy models for quantum gravity has resulted in a tremendous expansion of random matrix theory. However, the somewhat paradoxical situation is that either models can be solved exactly or very little can be said. Since the solved models display critical points and universal properties, it is tempting to use renormalization group (RG) ideas to reproduce these universal properties without solving the models explicitly. The main ideas behind this approach are recalled here. The approach has led to encouraging results but has not yet become a universal tool as initially expected. In particular, no progress has been made for problems of quantum field theories with matrix fields. To illustrate some of the difficulties one meets, in the second part of this talk we apply the same ideas to O(N) symmetric vector models, which can quite generally be solved in the large N limit.

 

P. Wiegmann
U.Chicago

Lecture title: Random Matrices, Growth models and hydrodynamic singularities.

Abstract: A broad class of non-equilibrium growth processes in two dimensions obeys a common law:
the velocity of the growing interface is determined by the gradient of a harmonic field (Laplacian growth or geometrical growth). This kind of growth or flow is unstable, giving rise to hydrodynamic singularities which further develop into fractal singular patterns. Similar singularities occur in Random Matrix Models, where the support of the equilibrium measure grows with the size of the matrix according to the same law as Laplacian (or geometrical) growth.
In the lectures I will review this relation, emphasizing the geometrical aspects of Random Matrix Theory, their hydrodynamic interpretation, and the relation of growing patterns to the distribution of zeros of orthogonal polynomials.

 

M. Vergassola
CNRS/Institut Pasteur

 

* Lecture title 1 : Statistics of the maximum eigenvalue in random matrices

* Abstract : The statistical properties of the largest eigenvalue of a random matrix are of interest in diverse fields ranging from disordered systems and quantum mechanics to population genetics. Recent developments in the theory of large fluctuations of the first eigenvalue and some of its applications will be discussed.
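A minimal Monte Carlo illustration of these fluctuations (a sketch added here, not part of the lecture; the GOE normalization is an assumption chosen for convenience): the largest eigenvalue of an N x N GOE matrix sits near the spectral edge 2*sqrt(N), and its fluctuations, rescaled by N^{1/6}, approach the GOE Tracy-Widom distribution.

```python
import numpy as np

rng = np.random.default_rng(3)
N, samples = 200, 400
xs = []
for _ in range(samples):
    A = rng.standard_normal((N, N))
    H = (A + A.T) / np.sqrt(2)                 # GOE: off-diagonal variance 1, diagonal variance 2
    lmax = np.linalg.eigvalsh(H)[-1]
    xs.append((lmax - 2 * np.sqrt(N)) * N ** (1 / 6))
xs = np.array(xs)

# The rescaled fluctuations should approach the GOE Tracy-Widom law
# (mean roughly -1.2, standard deviation roughly 1.3).
print("rescaled largest eigenvalue: mean %.2f, std %.2f" % (xs.mean(), xs.std()))
```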

* Lecture title 2 : Strategies of motility in living organisms

*Abstract: Challenges faced by living organisms trying to locate and move towards sources of nutrients, odors, pheromones, etc., will be discussed. Macro-organisms, such as insects and birds, lack local cues because chaotic mixing breaks up regions of high concentration into random and disconnected patches, carried by winds and currents. Thus, macroscopic animals detect patches very intermittently and have to rely on strategies more elaborate than gradient-climbing. Conversely, microorganisms, such as bacteria performing chemotaxis, can rely on local concentration cues, yet they have to cope with the stochastic nature of their microscopic world. The bacterial chemotactic response appears indeed to emerge from selective adaptation to strong fluctuations in the environments that bacterial populations experience.

Surya Ganguli
Dept. of Applied Physics, Stanford University

 

* Lecture title : The statistical mechanics of compressed sensing and memory through random matrices.

* Abstract : Compressed sensing (CS) is an important recent advance that shows how to reconstruct sparse high-dimensional signals from surprisingly small numbers of random measurements. However, the nonlinear nature of the reconstruction process poses a challenge to understanding the performance of CS. After introducing CS, we will discuss how techniques from the statistical physics of disordered systems shed insight into the remarkable performance of CS. We then address the seemingly unrelated question of how a neuronal network, or more generally any dissipative dynamical system, can store a memory trace for sparse temporal sequences of inputs. We show that this question is intimately related to an online, dynamical version of CS, and we discuss the properties of random networks capable of compressed sensing of time series in an online fashion.

*References:
[1] S. Ganguli, B. Huh, H. Sompolinsky, Memory Traces in Dynamical Systems, PNAS (2008)
[2] S. Ganguli and P. Latham, Feedforward to the past: the relation between neuronal connectivity, amplification, and short-term memory, Neuron (2009) 61:499-501.
[3] S. Ganguli and H. Sompolinsky, Statistical Mechanics of Compressed Sensing, Phys. Rev. Lett. (2010).
[4] S. Ganguli and H. Sompolinsky, Short-term memory in neuronal networks through dynamical compressed sensing, NIPS (2010).
[5] S. Ganguli and H. Sompolinsky, Compressed sensing, sparsity and dimensionality in neuronal information processing and data analysis, Ann. Rev. of Neurosci 35 (2012)
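For participants new to CS, here is a minimal, self-contained reconstruction sketch (an illustration, not one of the algorithms analyzed in the lecture or the references above): a sparse signal is recovered from a small number of random Gaussian measurements by iterative soft thresholding (ISTA) applied to the l1-penalized least-squares problem. The signal length, measurement count, sparsity, and penalty are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(4)
n, m, k = 200, 60, 8                           # signal length, measurements, non-zeros

x_true = np.zeros(n)
support = rng.choice(n, k, replace=False)
x_true[support] = rng.normal(0, 1, k)

A = rng.standard_normal((m, n)) / np.sqrt(m)   # random Gaussian measurement matrix
y = A @ x_true                                 # noiseless measurements

# ISTA for  min_x  0.5 * ||A x - y||^2 + lam * ||x||_1
lam = 0.01
L = np.linalg.norm(A, 2) ** 2                  # Lipschitz constant of the gradient
x = np.zeros(n)
for _ in range(2000):
    grad = A.T @ (A @ x - y)
    z = x - grad / L
    x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)   # soft-thresholding step

rel_err = np.linalg.norm(x - x_true) / np.linalg.norm(x_true)
print("relative reconstruction error:", round(float(rel_err), 3))
```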

Participants

Vladimir Al. Osipov

Institute of Theoretical Physics, Cologne University 
Zülpicher Str. 77, 50937 Cologne, Germany
Office 105 | Tel.: +49 (0) 221 470 4205 | Fax: +49 (0) 221 470 5159

  • Poster Title: "Ultrametric structure of the space of p-closed sequences";
  • Authors: Vladimir Al. Osipov, Boris Gutkin;
  • Abstract:
    "The idea of p-closed sequences originates from the concept of
    periodic orbits in the theory of quantum chaos. In the framework of
    the semiclassical approach, the universal spectral correlations in
    Hamiltonian systems with classically chaotic dynamics can be
    attributed to systematic correlations between the actions of periodic
    orbits which pass through approximately the same points of phase
    space. In the simplest terms the concept is described as follows. Let
    X and Y be two sequences of the same length n with "glued" ends,
    consisting of the symbols 1 and 0. Let A be a sequence of length
    p<n. The sequences X and Y are p-close (X~Y) if for any A the number
    of occurrences of A in X and in Y is equal (it may be zero). For
    instance, the two sequences [0010111] and [0011101] are 3-close to
    each other.

    In our work we show that all sequences of length n can be
    distributed over clusters with respect to a naturally appearing
    ultrametric distance based on the notion of p-closeness. We study the
    distribution of cluster sizes in the limit of long sequences. This
    problem is equivalent to that of counting degeneracies in the length
    spectrum of de Bruijn graphs. Based on this fact, we derive the
    distribution of cluster sizes and demonstrate that in this limit it
    does not depend on n, but only on p."
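The definition of p-closeness translates directly into a short check; the snippet below (an illustration added alongside the poster abstract, not taken from the poster itself) counts all cyclic length-p windows of two binary sequences and verifies the example quoted above.

```python
from collections import Counter

def window_counts(seq, p):
    """Count all length-p windows of a cyclic 0/1 sequence ('glued' ends)."""
    n = len(seq)
    ext = seq + seq[:p - 1]                      # unroll the cycle
    return Counter(ext[i:i + p] for i in range(n))

def p_close(x, y, p):
    """True if every length-p word occurs equally often in x and in y (cyclically)."""
    return window_counts(x, p) == window_counts(y, p)

# Example from the abstract: [0010111] and [0011101] are 3-close.
print(p_close("0010111", "0011101", 3))          # True
print(p_close("0010111", "0011101", 4))          # False: they are not 4-close
```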

 

Shin-ya Koyama

Professor, Department of Biomedical Engineering, Toyo University

Main interest: The theory of zeta functions.

 

* poster title: Quantum ergodicity of Eisenstein series in the level aspect.

* authors: Shin-ya Koyama and Sachiko Nakajima

* abstract: Quantum ergodicity is an equidistribution property of eigenfunctions for the Laplacian over a manifold as the spectrum grows. Luo and Sarnak proved it for Eisenstein series over arithmetic surfaces.  
In this research we consider a family of arithmetic manifolds called the congruence surfaces of level N, and prove an equidistribution property as the level N grows with the spectrum fixed.

Zoran Ristivojevic

Postdoctoral Fellow, LPT Ecole Normale Superieure, Paris, France

I am interested in different systems where low-dimensional physics is realized, including quantum wires, edges of quantum Hall states and cold atomic gases. I am also interested in certain aspects of statistical physics models, including disordered XY models.

  • Poster title: Some two-loop results for two XY models with disorder
  • Authors: Zoran Ristivojevic, P. Le Doussal, T. Giamarchi, A. Perret, A. Petkovic, G. Schehr, and K. Wiese
  • Abstract: We consider two disordered XY models and derive two-loop
        scaling equations for them. The first model is the two-dimensional
        XY model with quenched uncorrelated random symmetry-breaking
        fields; it has a phase transition at a finite temperature.
        Using the obtained scaling equations we calculate the amplitude of
        the correlation function in the low-temperature superrough phase.
        The results show excellent agreement with numerical
        simulations. The second model we consider has disorder correlated
        along one direction. We calculate two correlation functions at
        its Berezinskii-Kosterlitz-Thouless transition between the
        Gaussian and localized phases and find logarithmic corrections
        to the naive results that would be obtained by using the
        fixed-point values of the parameters.

 

William Powell

I'm 22 years old.
I live in Leamington Spa, Warwickshire, England.
I completed my Bachelor's degree in Mathematics at the University of Warwick.
I am currently at the University of Warwick studying for a Master's in Systems Biology.
I will undertake two research projects this year. The first is on femtosecond spectroscopy of biomolecules; for the second I will look at EEG signals in response to visual stimuli.

My field of interest is the application of random matrices to the study of biology.

 

Taro Kimura

RIKEN Nishina Center Mathematical Physics Laboratory

I am a postdoctoral fellow at the Mathematical Physics Laboratory,
RIKEN. My main research topic is the study of supersymmetric gauge
theory and its relation to random matrix theory. I am now exploring
the remarkable relationship between 2d conformal field theory and 4d
gauge theory, which is called the AGT relation, through their matrix
model descriptions. The matrix model is derived from the combinatorial
expression of the gauge theory partition function by considering its
asymptotic behavior. I am also interested in other topics: lattice
gauge theory, vortex theory, the quantum Hall effect, topological
insulators, and so on.


* Poster Title : Matrix model from orbifold partition function

* Abstract : We investigate the matrix model associated with the
combinatorial partition function, which is derived from instanton
counting on a four-dimensional orbifold. We would like to show that
the q-deformation and its root-of-unity limit are relevant to this
model. It is then shown that the gauge theory consequences are
extracted by studying its asymptotic behavior. In particular, the
Seiberg-Witten curve is given by the spectral curve of the matrix model.

 

Ranjan Modak

Department of Physics (Theoretical Condensed Matter),
Indian Institute of Science


PROFILE: I did my B.Sc. at Jadavpur University, Kolkata. I completed my M.Sc. at IIT Kharagpur. Presently I am doing a PhD in the Department of Physics, Indian Institute of Science (IISc), Bangalore under the guidance of Prof. Sriram Ramaswamy and Prof. Subroto Mukerjee.

Research interests: Thermalization of quantum systems and transport properties of integrable and non-integrable systems.


* Poster Title : Finite Size Scaling of Integrability Breaking parameter for One dimensional quantum models

* Authors : Ranjan Modak, Subroto Mukerjee, and Sriram Ramaswamy

* Abstract : We study energy level spacing statistics of 1D models of spinless fermions using numerical exact diagonalization. For finite-length chains, physical properties exhibit a crossover from behavior corresponding to Poisson level statistics, characteristic of integrable systems, to Wigner-Dyson statistics, characteristic of non-integrable systems. We use the Drude weight to extract the threshold value of the integrability-breaking parameter and investigate its scaling with system size. The range of numerically accessible system sizes is not sufficient to establish the scaling with absolute certainty, but our data suggest that the threshold value decreases with increasing system size as a power law, and that in the thermodynamic limit an infinitesimal value of the parameter would break integrability. We also consider a simple analytical model of random matrices that produces a power-law scaling of the integrability-breaking parameter with system size.
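As a numerical aside (the toy spectra below are an illustration, not the fermionic chains of the poster), the adjacent-gap ratio r = min(s_n, s_{n+1}) / max(s_n, s_{n+1}) distinguishes the two regimes without any unfolding: its average is roughly 0.39 for Poisson levels and roughly 0.53 for GOE spectra.

```python
import numpy as np

rng = np.random.default_rng(5)

def mean_gap_ratio(levels):
    """Average adjacent-gap ratio of a set of energy levels."""
    s = np.diff(np.sort(levels))
    r = np.minimum(s[:-1], s[1:]) / np.maximum(s[:-1], s[1:])
    return r.mean()

N, samples = 500, 20
r_poisson, r_goe = [], []
for _ in range(samples):
    r_poisson.append(mean_gap_ratio(rng.uniform(0, 1, N)))   # uncorrelated (Poisson) levels
    A = rng.standard_normal((N, N))
    H = (A + A.T) / np.sqrt(2)                               # GOE matrix
    r_goe.append(mean_gap_ratio(np.linalg.eigvalsh(H)))

print("<r> Poisson ~ %.3f (expected ~0.39)" % np.mean(r_poisson))
print("<r> GOE     ~ %.3f (expected ~0.53)" % np.mean(r_goe))
```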

 

Sergio Andraus

Second-year PhD student of Physics at the U. of Tokyo, Miyashita Group
Research interests: statistical mechanics, stochastic processes and random matrix theory
email: andraus@spin.phys.s.u-tokyo.ac.jp

* Poster Title : Dyson's model as a special case of Dunkl processes and Dunkl's intertwining operators

* Abstract :
Dyson’s Brownian motion model is a family of systems in which N Brownian particles interact repulsively through a log-potential in one dimension; it is indexed by the positive real parameter beta. Dunkl processes are stochastic processes defined as a generalization of N-dimensional Brownian motion based on a set of differential-difference operators called Dunkl operators. These operators depend on the choice of a finite set of vectors called a root system. When the A-type root system is chosen, its associated Dunkl process describes a system of Brownian particles that repel each other in one dimension and exchange positions spontaneously. An important part of the results from the theory of Dunkl operators is obtained through the use of the intertwining operator. This operator relates partial derivatives and Dunkl operators, but its explicit form is unknown in general.

We show that the A-type Dunkl process of parameter k equal to beta/2 under a symmetric initial condition is equivalent to Dyson’s model. From this equivalence, we extract an expression for the effect of the intertwining operator on symmetric polynomials. We show that in the strong coupling limit it maps symmetric functions into a function of the sum of their variables. This allows us to study the zero-temperature limit of Dyson’s model, and we show that the final configuration is proportional to a vector of the roots of the Hermite polynomials multiplied by the square root of the process time, while being independent of the initial configuration. We briefly discuss two topics of further study on the intertwining operator: its symmetric eigenfunctions and its effect on non-symmetric polynomials.

 

Fumihiko NAKANO

Department of Mathematics, Gakushuin University, Japan.

* Poster Title : The level statistics of one-dimensional Schroedinger operator with random decaying potential.

* Abstract : We study the level statistics problem of the one-dimensional Schrödinger operator with random potential decaying like $x^{-\alpha}$ at infinity. The results obtained so far are summarized as follows:
(i)(ac spectrum case)
if $\alpha > \frac 12$, the point process $\xi_L$ consisting of the rescaled eigenvalues converges to a clock process, and the fluctuation of the spacing of eigenvalues converges to Gaussian.
(ii)(critical case)
if $\alpha = \frac 12$, $\xi_L$ converges to the limit of the circular $\beta$-ensemble.

 

Craig Jolley

Foreign Postdoctoral Researcher
Laboratory for Systems Biology
RIKEN Center for Developmental Biology
2-2-3 Minatojima-minamimachi, Chuo-ku, Kobe-shi
650-0047 Japan
Lab phone: +81-78-306-3191

e-mail: jolleycraig@gmail.com


My current projects center on systems-biological approaches to understanding the mammalian circadian clock.  The circadian clock has been extensively studied at the cellular level, and many of the molecular-level interactions are well-understood.  The precise means by which the cellular-level clocks generate organism-level outputs in terms of behavior and physiology are less well-understood.  This makes the circadian clock an ideal test case for organism-level systems biology, in which we are attempting to piece together a multiscale description of circadian regulatory processes at the cell, tissue, and organism scales.  I'm fairly new to random matrix theory, but I hope that it might provide a way to resolve two complementary (and endemic) problems in systems biology: the "parameter problem" in large dynamical models, and the "curse of dimensionality" in high-volume data collection studies.

* Poster Title :Random Matrix Theory and Systems Biology: Some possible directions

* Abstract : Systems Biology attempts to understand the function of biological systems by leveraging two emerging technological trends. The first is the appearance of fast, inexpensive computing power which enables the study of detailed mathematical models of biological systems. The second is the development of high-throughput "omics" technologies that allow for the identification and quantification of mRNA transcripts, proteins, metabolites, and other cellular components. Taken together, these technologies provide us with an unprecedented opportunity to measure and model biological dynamics at a large scale. In both cases, however, new quantitative approaches will be required. Complex models with tens or hundreds of parameters can be fit to experimental data, but varying the values of specific parameters (often over many orders of magnitude) can have a negligible effect on the model's prediction. This makes parameter estimation by fitting a model to experimental data a precarious business. In many cases, the Hessian matrices used in model fitting have been shown to exhibit "sloppy" eigenvalue spectra, in which eigenvalues are approximately evenly spaced (on a logarithmic axis) over many orders of magnitude. This situation is somewhat different from the matrix ensembles traditionally used in RMT, and a "sloppy universality class" has been proposed to explain these features; its features (and implications) are still not as well understood as the conventional ensembles used in RMT. High-volume data collection experiments can generate a related problem -- while it is possible to search for correlations in large data sets, the probability of finding spurious correlations between unrelated variables increases with the dimensionality of the data set. RMT can help us to reject unreliable correlations by comparing the eigenspectrum of an empirical correlation matrix with one created by assuming no correlations at all; this has been shown to be a much more reliable method for extracting meaningful correlations.
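A minimal version of the correlation-matrix comparison sketched at the end of the abstract (all sizes below are arbitrary choices for illustration): for T independent samples of N uncorrelated variables, the eigenvalues of the empirical correlation matrix fall, for large N and T, within the Marchenko-Pastur band [(1-sqrt(N/T))^2, (1+sqrt(N/T))^2]; eigenvalues outside this band are candidates for genuine structure.

```python
import numpy as np

rng = np.random.default_rng(6)
N, T = 100, 400                      # variables (e.g. genes or parameters) and samples
q = N / T

X = rng.standard_normal((T, N))      # null data: no true correlations
X = (X - X.mean(0)) / X.std(0)       # z-score each variable
C = X.T @ X / T                      # empirical correlation matrix
ev = np.linalg.eigvalsh(C)

lam_minus = (1 - np.sqrt(q)) ** 2    # Marchenko-Pastur edges for the null model
lam_plus = (1 + np.sqrt(q)) ** 2
print("empirical eigenvalue range : [%.3f, %.3f]" % (ev.min(), ev.max()))
print("Marchenko-Pastur band      : [%.3f, %.3f]" % (lam_minus, lam_plus))
print("eigenvalues outside band   :", int(np.sum((ev < lam_minus) | (ev > lam_plus))))
```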

 

Ricky Kwok

PROFILE: I am a third-year graduate student at the University of California, Davis under the guidance of Craig Tracy. My general research areas are interacting particle systems and statistical mechanics. I have previously studied Heisenberg’s XXZ model and its relationship to the asymmetric simple exclusion process via unitary transformation of their matrix generators. Currently, I am working on Lieb and Liniger’s model of a Bose gas subject to hard-wall boundary conditions for attractive particles.

 

 

Chushun Tian

Professor at the Institute for Advanced Study, Tsinghua University, Beijing
Ph. D. 2005, Minnesota (supervisor: Anatoly Larkin)

Research interests: disordered systems, quantum chaos, and strongly correlated electron systems.

 

Jan Dahlhaus

PhD student of theoretical physics at the Lorentz Institute,
Leiden University, The Netherlands

 

* Poster Title :Random matrix theory of transport in topological superconductors

* Abstract : Topological superconductors are realizations of a new phase of matter that has been theoretically predicted recently and has probably been found experimentally (this is still controversial). They are characterized by topological invariants - integer numbers that can only change when the system undergoes a quantum phase transition. The electrical transport properties of a junction between a metal and a topological superconductor can be described by a unitary scattering matrix. We investigate the situation in which the junction is disordered and of irregular shape, such that its electronic dynamics are chaotic. An average over different disorder realizations can then be obtained by averaging over the circular random matrix ensemble of scattering matrices that are allowed by the symmetries of the system. In our work we investigate the influence of the topological invariant on the conductance statistics of the junction.
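As a generic illustration of this averaging procedure (simplified to a plain circular unitary ensemble, without the particle-hole symmetry or topological invariant of the actual problem), the snippet below draws random scattering matrices, extracts the transmission block, and averages the Landauer conductance Tr(t t†); the number of modes and samples are arbitrary.

```python
import numpy as np
from scipy.stats import unitary_group

N, samples = 4, 2000                         # modes per lead, ensemble members
g = []
for _ in range(samples):
    S = unitary_group.rvs(2 * N)             # random unitary scattering matrix (CUE)
    t = S[N:, :N]                            # transmission block between the two leads
    g.append(np.trace(t @ t.conj().T).real)  # Landauer conductance in units of e^2/h
g = np.array(g)

print("mean conductance <g> = %.3f" % g.mean())
print("variance var(g)      = %.4f" % g.var())
```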

 

Fumika Suzuki

PROFILE: Graduate student, Physics and Astronomy, UBC

My research interest is mathematical physics (operator theory, topological approach to quantum mechanics etc.)
and theoretical physics (large-scale quantum mechanics, decoherence, quantum mechanics and gravity)


* Poster Title :Decoherence and RMT

* Abstract : Decoherence is one of the most fundamental problems in quantum physics, and it bears on many research areas such as quantum computing, quantum biology, quantum cosmology and quantum gravity.
We will review the conventional models of environmental (e.g. oscillator bath, spin bath) and intrinsic (e.g. gravitational) decoherence.
We will also investigate a new model of decoherence using RMT and its potential relationship with quantum chaos.

 

Jacopo Iacovacci

PROFILE: Biophysics Student, University of Rome, ‘La Sapienza’
e-mail: mriacovacci@hotmail.it

I’m a 23-year-old student working on my master's thesis in biophysics at the University of Rome ‘La Sapienza’.
My primary field of study is neuroscience. I have studied the resolution of the Hopfield model for a neural network using the random matrix approach. In general I’m interested in statistical mechanics applied to the study of biological systems. I’m also interested in studying quantum aspects and mechanisms in living matter.

 

Ulisse Ferrari

PROFILE: PhD student at "La Sapienza" (Rome, Italy)

* poster title: On the critical slowing down exponents of mode coupling theory

* authors: F. Caltagirone, U. Ferrari, L. Leuzzi, G. Parisi, T. Rizzo

* abstract: An important prediction of Mode-Coupling-Theory (MCT) is the relationship between the decay exponents in the $\beta$ regime: $\frac{\Gamma^2(1-a)}{\Gamma(1-2a)}=\frac{\Gamma^2(1+b)}{\Gamma(1+2b)}=\lambda$.
In the original structural glass context this relationship follows from the MCT equations that are obtained making rather uncontrolled approximations. As a consequence, it is usually assumed that the relationship between the exponents is correct while $\lambda$ has to be treated like a tunable parameter. On the other hand, it is known that in some mean-field spin-glass models the dynamics is precisely described by MCT. In that context $\lambda$ can be computed exactly but, again, its computation becomes difficult when we consider more complex models including finite-dimensional ones.     
In this work we unveil the physical meaning of $\lambda$ in complete generality, relating the dynamical parameter to the static Gibbs free energy and giving, thus, a ``recipe'' for its computation. In this new framework we compute the exponents $a$ and $b$ for some mean-field models and compare our results with numerical simulations or, when available, exact results obtained via purely dynamical equations.
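Given a value of $\lambda$, the exponent relation quoted above can be solved numerically; the snippet below (with an arbitrary example value of $\lambda$, not one of the models of the poster) finds a and b by root finding on the two gamma-function equations.

```python
from scipy.special import gamma
from scipy.optimize import brentq

def lam_from_a(a):
    # left-hand side of the MCT relation as a function of the exponent a
    return gamma(1 - a) ** 2 / gamma(1 - 2 * a)

def lam_from_b(b):
    # right-hand side of the MCT relation as a function of the exponent b
    return gamma(1 + b) ** 2 / gamma(1 + 2 * b)

lam = 0.70                                     # example value of the parameter lambda
a = brentq(lambda x: lam_from_a(x) - lam, 1e-9, 0.5 - 1e-9)   # a lies in (0, 1/2)
b = brentq(lambda x: lam_from_b(x) - lam, 1e-9, 20.0)

print("lambda = %.2f  ->  a = %.4f, b = %.4f" % (lam, a, b))
print("check: %.4f %.4f" % (lam_from_a(a), lam_from_b(b)))
```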

 

Debayan Dey

PROFILE: 
*Position: PhD fellow (graduate student)     
*PhD Advisor (PI): Prof. S. Ramakumar, Professor of Physics, Indian Institute of Science   
*Address: Department of Physics, Indian Institute of Science, Bangalore 560012, India   
*PhD project: Crystallography, structural bioinformatics and mathematical biology of pathogenic organisms.

* poster title: Random matrix theory and gene correlation coefficient statistics of DNA-microarray data: Application in understanding the system biology of gene regulation

* authors: Debayan Dey, S. Ramakumar 

* abstract: The fundamental question in biology is to understand the mechanism by which a cell functions. The gene regulation of a cell and its interaction with the environment and other cells make up a complex living organism. Gene regulation is the key process which dictates cell function, and any imbalance in it results in disease. Understanding gene regulation using high-throughput methods is pivotal for understanding the holistic nature of the gene regulatory network. But such data suffer from large embedded noise, so a noise-reduction method is very important for deducing sensible biological information which can then be experimentally tested. The DNA-microarray technique provides gene expression level data for the whole cell’s activity at a given time. The understanding of the gene correlation matrix provided by the data is essential for the biological elucidation of the gene regulatory network.

 

Artur Święch

PROFILE: I am a Master's student in theoretical physics at the Jagiellonian University, Krakow. My main research interest is the spectra of products of random matrices in the thermodynamic limit. Currently I am also involved in the analysis of data from the Collider Detector at Fermilab, i.e. events with forward rapidity gaps.

* poster title: Eigenvalues and Singular Values of Products of Rectangular Gaussian Random Matrices

* authors: Z. Burda, A. Jarosz, G. Livan, M. A. Nowak and A. Swiech

* abstract: We analyze the spectra of products of an arbitrary number of rectangular Gaussian random matrices in two cases: 1. singular value spectra, using Free Random Variable calculus; 2. eigenvalue spectra, using planar diagrammatics. We derive analytical expressions describing those spectra in the thermodynamic limit and we propose corrections for finite matrix sizes. The behavior of the eigenvalue and singular value distributions near zero is determined to be power-law. The results are compared to numerical simulations of large random matrices.
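As a quick numerical companion (an illustration only; square factors are used here for simplicity, whereas the poster treats the general rectangular case analytically), the snippet below builds a product of Gaussian random matrices and inspects its singular value spectrum, including the pile-up of small singular values near zero.

```python
import numpy as np

rng = np.random.default_rng(7)
N, L = 400, 3                                     # matrix size and number of factors
P = np.eye(N)
for _ in range(L):
    P = P @ (rng.standard_normal((N, N)) / np.sqrt(N))

sv = np.linalg.svd(P, compute_uv=False)
print("largest singular value     : %.3f" % sv.max())
print("fraction of s.v. below 0.1 :", round(float(np.mean(sv < 0.1)), 3))
# The accumulation of small singular values reflects the power-law behaviour
# of the spectral density near zero discussed in the abstract.
```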

 

Yutaka Shikano

PROFILE: I am a research associate professor at the Institute for Molecular Science. My main research interests are the foundations of quantum mechanics, especially quantum measurement theory, the discrete-time quantum walk (which is related to quantum chaos), and quasi-particle condensation.

* poster title: On Inhomogeneous Quantum Walks with Self-Duality

* authors: Yutaka Shikano and Hosho Katsura

* abstract: We introduce and study a class of discrete-time quantum walks on a one-dimensional lattice. In contrast to the standard homogeneous quantum walks, coin operators are inhomogeneous and depend on their positions in this class of models. The models are shown to be self-dual with respect to the Fourier transform, which is analogous to the Aubry-Andr\'e model describing the one-dimensional tight-binding model with a quasi-periodic potential. When the period of coin operators is incommensurate to the lattice spacing, we rigorously show that the limit distribution of the quantum walk is localized at the origin. We also numerically study the eigenvalues of the one-step time evolution operator and find the Hofstadter butterfly spectrum which indicates the fractal nature of this class of quantum walks.
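For readers unfamiliar with discrete-time quantum walks, the following sketch simulates a generic one-dimensional walk with position-dependent coin angles (a schematic stand-in; the specific self-dual coin family of the poster is not reproduced here) and compares the spread of the walker for homogeneous and incommensurate coins.

```python
import numpy as np

def walk(theta_of_x, steps, size=401):
    """Discrete-time quantum walk on a line with position-dependent coins."""
    psi = np.zeros((size, 2), dtype=complex)
    psi[size // 2] = [1 / np.sqrt(2), 1j / np.sqrt(2)]      # symmetric initial coin state
    x = np.arange(size) - size // 2
    th = theta_of_x(x)
    c, s = np.cos(th), np.sin(th)
    for _ in range(steps):
        up = c * psi[:, 0] + s * psi[:, 1]                  # coin operation at every site
        dn = s * psi[:, 0] - c * psi[:, 1]
        psi[:, 0] = np.roll(up, -1)                         # component 0 shifts left
        psi[:, 1] = np.roll(dn, +1)                         # component 1 shifts right
    prob = (np.abs(psi) ** 2).sum(axis=1)
    mean = np.sum(prob * x)
    return np.sqrt(np.sum(prob * x ** 2) - mean ** 2)       # spread (std of position)

alpha = (np.sqrt(5) - 1) / 2                                # incommensurate frequency
spread_hom = walk(lambda x: np.full_like(x, np.pi / 4, dtype=float), 150)
spread_inc = walk(lambda x: 2 * np.pi * alpha * x, 150)
# With incommensurate coin angles the spread is typically much smaller,
# consistent with the localization discussed in the abstract.
print("homogeneous coin, spread   :", round(float(spread_hom), 1))
print("incommensurate coins, spread:", round(float(spread_inc), 1))
```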

 

 


 

Organizers


Jonathan Miller

Physics and Biology Unit, OIST

Lecture title: Tails of Genome Evolution

Abstract : I describe two 'universal' features of natural genome sequences that ought to be of interest to biologists. One of them probes sequence duplication; the other, probing correlated mutation in otherwise conserved sequence, is (implicitly, but not explicitly) believed by biologists to be important. Both involve distributions of 'word' lengths that are, obviously, not Poisson.


Shinobu Hikami

Mathematical and Theoretical Physics Unit, OIST