Program

 


Lectures (2-5 September 2019)

Bayesian Dynamic Latent Variables Models (6h)

Roberto Casarin (Dept. of Economics, University Ca’ Foscari of Venice)

In time series analysis, the introduction of latent processes allows for capturing heterogeneity in the temporal evolution of the data. In latent variable modelling, the Bayesian approach is appealing since it allows simulation methods and stochastic filtering to be included in the inference process. This introductory course on dynamic latent variable models for time series analysis is designed to provide students with the basic concepts of latent variable modelling and inference. Topics covered include basic Bayesian models, state space models, hidden Markov models, and linear and nonlinear filtering. Applications to time series data from various fields (traffic fatality data, unemployment data, crime data, financial volatility, …) will be developed in MATLAB.
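For readers who want a concrete feel for the models in this course, the sketch below simulates a linear Gaussian state space model and runs the Kalman filter recursions on it. It is written in Python with made-up parameters (a, q, r, and the series length are purely illustrative); the course itself develops its applications in MATLAB.

# Minimal illustrative sketch: linear Gaussian state space model
#   x_t = a * x_{t-1} + w_t,   y_t = x_t + v_t
# filtered with the Kalman recursions. All parameters are hypothetical.
import numpy as np

rng = np.random.default_rng(0)
a, q, r, T = 0.9, 0.5, 1.0, 200   # transition coeff., state noise var., obs. noise var., length

# Simulate the latent states x and the observations y
x = np.zeros(T)
y = np.zeros(T)
for t in range(1, T):
    x[t] = a * x[t - 1] + rng.normal(scale=np.sqrt(q))
    y[t] = x[t] + rng.normal(scale=np.sqrt(r))

# Kalman filter: recursively update the mean m and variance p of the latent state
m, p = 0.0, 1.0
filtered = np.zeros(T)
for t in range(T):
    m_pred, p_pred = a * m, a**2 * p + q       # prediction step
    k = p_pred / (p_pred + r)                  # Kalman gain
    m = m_pred + k * (y[t] - m_pred)           # update with observation y[t]
    p = (1 - k) * p_pred
    filtered[t] = m

print("last filtered state mean:", filtered[-1])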

Unsupervised learning of features from data: a statistical-physics approach (6h)

Rémi Monasson (Laboratoire de Physique Théorique de l’ENS, Paris)

Extracting statistical features from unlabelled data is a major challenge in machine learning. I will show how approaches based on statistical physics can help in understanding the operation of various learning algorithms, including principal component analysis, independent component analysis, auto-encoders, restricted Boltzmann machines, … The lectures will cover both applications and theory, based on the tools and concepts of the statistical mechanics of disordered systems.
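As a concrete (and purely illustrative) example of the first algorithm on this list, the Python sketch below extracts the leading principal component of synthetic data with one planted direction by diagonalising the sample covariance matrix; the data and dimensions are hypothetical choices, not taken from the lectures.

# Minimal illustrative sketch: principal component analysis on synthetic data
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical data: 500 samples in 10 dimensions with one dominant ("planted") direction
n, d = 500, 10
direction = rng.normal(size=d)
direction /= np.linalg.norm(direction)
X = 3.0 * rng.normal(size=(n, 1)) * direction + rng.normal(size=(n, d))

# Centre the data and diagonalise the sample covariance matrix
Xc = X - X.mean(axis=0)
eigvals, eigvecs = np.linalg.eigh(Xc.T @ Xc / n)

# The top principal component is the eigenvector with the largest eigenvalue
top_pc = eigvecs[:, -1]
print("overlap with the planted direction:", abs(top_pc @ direction))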

A gentle introduction to Reinforcement Learning (3h)

Antonio Celani (ICTP, Trieste)

The course aims at giving an introduction to reinforcement learning. Starting from the definitions of agents, environments, and rewards, the challenge of decision-making is described in terms of prior knowledge, acquisition of information, and learning. Important topics covered are: Markov decision processes and the omniscient decision-maker; partial observability; controlling hidden Markov models with Bayesian updating; the role of models; learning to make good decisions without prior knowledge; temporal difference learning; multi-armed bandits, “the hydrogen atom” of reinforcement learning; and reinforcement learning in desperately large environments.
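As a concrete (and purely illustrative) taste of the bandit and incremental-update ideas listed above, the Python sketch below runs an epsilon-greedy agent on a three-armed Bernoulli bandit; the reward probabilities and the value of epsilon are hypothetical choices, not course material.

# Minimal illustrative sketch: epsilon-greedy learning on a multi-armed bandit
import numpy as np

rng = np.random.default_rng(1)

true_means = np.array([0.1, 0.5, 0.8])   # hypothetical Bernoulli reward probabilities
n_arms = len(true_means)
Q = np.zeros(n_arms)                      # estimated action values
counts = np.zeros(n_arms)
epsilon = 0.1                             # exploration probability

for step in range(1000):
    # Explore with probability epsilon, otherwise exploit the current estimates
    if rng.random() < epsilon:
        action = int(rng.integers(n_arms))
    else:
        action = int(np.argmax(Q))
    reward = float(rng.random() < true_means[action])   # draw a Bernoulli reward
    counts[action] += 1
    # Incremental update of the value estimate towards the observed reward
    Q[action] += (reward - Q[action]) / counts[action]

print("estimated action values:", Q)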


Hands-on Sessions (2-5 September 2019)

– Deep learning with PyTorch (Matteo Negri, Milano University)
– Bayesian inference with R (Giovanni Diana, King’s College London)
– Calling bullshit in the Era of Big Data (Jacopo Grilli, Santa Fe)
– Reinforcement Learning (Andrea Mazzolini, ICTP, Trieste)

 

[Final timetable]

 

MGDS – Scientific Symposium (Friday, 6 September 2019)

08:30-09:00 Antonio Celani (ICTP, Trieste) “Learning to navigate in dynamic environments” [MOVED to Thursday 16:45]
09:00-09:30 Estelle Pitard (University of Montpellier) “Statistical physics tools and Remote-sensing data for lagoon ecosystems conservation”
09:30-10:00 Chiara Cammarota (King’s College, London) “Rough landscapes: from machine learning to glasses and back”
10:00-10:30 Francesca Tria (Sapienza University of Rome) “The dynamics of systems featuring innovation”

10:30-11:00 COFFEE BREAK

11:00-11:30 Paul Wiggins (University of Washington, Seattle) “TBA”
11:30-12:00 Jacopo Grilli (ICTP, Trieste) “Laws of diversity and variation in microbial communities”
12:00-12:30 Samir Suweis (University of Padua) “Inferring macro-ecological patterns from local presence/absence data”
12:30-13:00 Giovanni Diana (King’s College, London) “Bayesian inference of neuronal ensembles”

13:00-14:00 LUNCH

14:00-14:30 Barbara Bravi (ENS, Paris) “Statistical physics approaches to inverse modelling for protein function prediction”
14:30-15:00 Carlo Lucibello (Bocconi University, Milan) “Wide flat minima in the loss landscape of neural networks”
15:00-15:30 Federico Vaggi (Amazon) “Hierarchical counterfactual modelling for causal inference”