Seminar: Stefanie Jegelka (MIT)

Zoom link:
(Meeting ID: 849 0964 5595, Passcode: 772448)


Speaker: Stefanie Jegelka (MIT)

Title: Machine Learning for discrete optimization: Graph Neural Networks, generalization under shifts, and loss functions

Graph Neural Networks (GNNs) have become a popular tool for learning algorithmic tasks, particularly those related to combinatorial optimization. In this talk, we will focus on the “algorithmic reasoning” task of learning a full algorithm. Instead of competing on empirical benchmarks, we will aim for a better understanding of the model's behavior and generalization properties, i.e., its performance on held-out data, which is also an important question in learning-supported optimization.
In particular, we will try to understand out-of-distribution generalization in widely used message-passing GNNs, with an eye toward applications in learning for optimization: what is an appropriate metric for measuring shift in the data? Under what conditions will a GNN generalize to larger graphs?
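As background for readers unfamiliar with message passing, here is a minimal sketch of one sum-aggregation GNN layer; the function and weight names are illustrative only and do not correspond to the specific architectures analyzed in the talk:

```python
import numpy as np

def message_passing_layer(A, H, W_self, W_neigh):
    """One sum-aggregation message-passing layer (generic sketch).

    A: (n, n) adjacency matrix; H: (n, d) node feature matrix;
    W_self, W_neigh: (d, d') weight matrices (hypothetical names).
    Each node combines its own features with the sum of its
    neighbors' features, followed by a ReLU nonlinearity.
    """
    messages = A @ H  # row i holds the sum of node i's neighbors' features
    return np.maximum(0.0, H @ W_self + messages @ W_neigh)

# Tiny example: a path graph on 3 nodes, one-hot features, identity weights.
A = np.array([[0, 1, 0],
              [1, 0, 1],
              [0, 1, 0]], dtype=float)
H = np.eye(3)
out = message_passing_layer(A, H, np.eye(3), np.eye(3))
# The middle node (degree 2) aggregates both endpoints: out[1] == [1, 1, 1].
```

Because each layer only aggregates over immediate neighbors, stacking k layers gives every node a k-hop receptive field, which is one reason generalization to larger graphs is a subtle question.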
In the last part of the talk, we will take a brief look at objective (loss) functions for learning with discrete objects, beyond GNNs. 

This talk is based on joint work with Ching-Yao Chuang, Keyulu Xu, Joshua Robinson, Nikos Karalias, Jingling Li, Mozhi Zhang, Simon S. Du, Kenichi Kawarabayashi and Andreas Loukas.