Weakly supervised causal representation learning | Johann Brehmer

Valence Portal is the home of the TechBio community. Join for more details on this talk and to connect with the speakers: https://portal.valencelabs.com/care

Abstract: Many systems can be described in terms of high-level variables and causal relations between them. Often, these causal variables are not known directly but only observed through some unstructured, low-level representation, like the pixels of a camera feed. Learning the high-level representations together with their causal structure from pixel-level data is a challenging problem – and known to be impossible from unsupervised observational data alone. We prove that causal variables and structure can, however, be learned from weak supervision. This setting involves a dataset of paired samples before and after random, unknown interventions, but no further labels. We then introduce implicit latent causal models: variational autoencoders that represent causal structure without having to optimize an explicit graph. On simple image data, we demonstrate that such models disentangle causal variables and allow for causal reasoning. Finally, we briefly comment on the limitations of causal representation learning in its current form and speculate about the path to practically useful methods.
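The weak-supervision setting from the abstract can be sketched with a toy data-generating process. The two-variable linear SCM, the intervention mechanism, and the mixing function below are all illustrative assumptions for this sketch, not details from the talk; the key point is that each training example is a pair of low-level observations before and after an unknown intervention, with no labels.

```python
import random

def sample_scm():
    """Toy SCM with two causal variables: z1 -> z2 (illustrative choice)."""
    z1 = random.gauss(0.0, 1.0)
    z2 = 0.8 * z1 + random.gauss(0.0, 0.5)  # z2 depends causally on z1
    return [z1, z2]

def intervene(z):
    """Random, unknown intervention: pick one variable, replace its value,
    then resample its causal descendants."""
    target = random.randrange(2)
    z_post = list(z)
    z_post[target] = random.gauss(0.0, 1.0)
    if target == 0:  # z2 is a descendant of z1, so it must be resampled
        z_post[1] = 0.8 * z_post[0] + random.gauss(0.0, 0.5)
    return z_post

def render(z):
    """Unknown mixing of causal variables into a low-level observation
    (a hand-picked stand-in for rendering to pixels)."""
    return [z[0] + z[1], z[0] - z[1], 2.0 * z[0] + 0.1 * z[1]]

def weakly_supervised_pair():
    """One training example: observations before and after an intervention.
    Neither the causal variables nor the intervention target are observed."""
    z_pre = sample_scm()
    z_post = intervene(z_pre)
    return render(z_pre), render(z_post)

dataset = [weakly_supervised_pair() for _ in range(1000)]
```

A model in this setting sees only `dataset`: pairs of mixed observations. The identifiability result discussed in the talk says that, under suitable assumptions, such pairs suffice to recover the causal variables and their structure.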

Speaker: Johann Brehmer

Twitter Chandler: https://twitter.com/chandlersquires
Twitter Dhanya: https://twitter.com/dhanya_sridhar
Twitter Jason: https://twitter.com/jasonhartford

~

Chapters
00:00 - Discussant Slide
02:00 - Introduction and Background
08:56 - Structural Causal Models
15:15 - Theory
19:31 - Assumptions
22:46 - Practice
29:30 - Experiments
44:10 - Outlook
49:40 - Discussion
