【ERC Coffee House】Bisimulation metrics for representation learning
Prof Prakash Panangaden will give a talk, in person and online, for the Coffee House Tech Talk Series.
The details of the talk are below. Lunch will be provided.
When : 22 Feb 2023, 11:00am-12:00 noon & 1:30pm-2:30pm
Where (Physically) : Coffee House, 4/F, Bayes Centre, 47 Potterrow, Edinburgh EH8 9BT, UK
Where (Virtually) : https://welink.zhumu.com/j/0205600289
Registration: https://www.smartsurvey.co.uk/s/1RWACJ/
Title: Bisimulation metrics for representation learning
Part 1 – Probabilistic bisimulation and bisimulation metrics
Probabilistic bisimulation is an equivalence relation that captures behavioural similarity in reactive probabilistic systems, also called Labelled Markov Processes. The original definition, due to Larsen and Skou (1989), was given on discrete spaces. With a strong finite-branching assumption, they established a logical characterization theorem in the spirit of van Benthem and Hennessy-Milner. This work was extended to continuous state spaces by Desharnais et al. (1997-98), who proved a logical characterization result with no finite-branching assumption and with a significantly more parsimonious logic. The proof uses special properties of analytic spaces. One can argue that an equivalence relation is not the right tool for studying quantitative systems. In 1999 Desharnais et al. developed a metric analogue of probabilistic bisimulation and established approximation results using it. Later Ferns et al. (2004-05) extended this metric to Markov decision processes and showed that it gives bounds on the optimal value function, thus establishing an important connection with reinforcement learning (RL). I will describe these developments, starting from the Larsen-Skou work and ending with the work of Ferns et al.
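As a toy illustration of the bound mentioned above (not material from the talk): in the special case of a single action and deterministic transitions, the Kantorovich term in the Ferns-style metric collapses to the distance between the successor states, so the metric can be computed by simple fixed-point iteration. The three-state chain, its rewards, and the discount factor below are invented for the example.

```python
# Minimal sketch, assuming one action per state and deterministic
# transitions, so d(s, t) = |R(s) - R(t)| + GAMMA * d(succ(s), succ(t)).
GAMMA = 0.9

# hypothetical 3-state chain: rewards and deterministic successors
reward = {0: 0.0, 1: 0.0, 2: 1.0}
succ = {0: 1, 1: 2, 2: 2}

states = list(reward)
d = {(s, t): 0.0 for s in states for t in states}

# iterate the contraction d <- F(d) to its unique fixed point
for _ in range(200):
    d = {(s, t): abs(reward[s] - reward[t]) + GAMMA * d[(succ[s], succ[t])]
         for s in states for t in states}

# the metric upper-bounds value gaps: |V*(s) - V*(t)| <= d(s, t);
# here V*(0) = 8.1, V*(1) = 9, V*(2) = 10, and the bound is tight.
```

In the general stochastic, multi-action case the transition term is a Kantorovich (Wasserstein) distance between successor distributions, which requires solving a small linear program at each pair of states.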
Part 2 – The MICo distance and representation learning in RL
In this second part I will critique bisimulation metrics and present a cheaper proxy for them called the MICo distance. I will describe how it can be used to improve the quality of representation learning. This is work due to Castro et al. from NeurIPS 2021. The talk will assume no knowledge of representation learning. If time permits I will briefly mention connections with the theory of reproducing kernel Hilbert spaces.
The logical characterization work on continuous spaces represents joint work with Blute, Desharnais and Edalat and the metric work is joint with Desharnais, Gupta and Jagadeesan. The connection to RL is joint work with Ferns and Precup. The work on representation learning is joint work with Castro, Kastner and Rowland.
Brief Bio: Prakash Panangaden was born in India and attended IIT Kanpur. He received an MS in physics from the University of Chicago and a PhD in physics from the University of Wisconsin-Milwaukee in the area of quantum field theory in curved spacetime. He obtained an MS in computer science from the University of Utah, working with Robert Keller on the semantics of dataflow networks. He was an assistant professor at Cornell University and later a professor at McGill University, where he is currently employed. His initial area of interest was the semantics of programming languages. For the last 25 years he has been working on probabilistic systems: bisimulation, logics, approximation and metrics. For the last several years he has been particularly interested in applications to machine learning. He was elected a Fellow of the Royal Society of Canada in 2013 and a Fellow of the Association for Computing Machinery in 2021. He is currently a Strategic Talent Visitor to the School of Informatics.