Puja Trivedi

I am a CSE PhD candidate in the Graph Exploration and Mining at Scale (GEMS) Lab at the University of Michigan, where I am fortunate to be advised by Prof. Danai Koutra. I also often collaborate with Dr. Jay Thiagarajan at Lawrence Livermore National Laboratory.

I am broadly interested in understanding how self-supervised learning can be performed effectively and reliably on non-Euclidean and graph data by incorporating domain invariances and designing grounded algorithms. My recent work has focused on understanding the role of data augmentations in graph contrastive learning.

Email  /  CV  /  Google Scholar

News
[09/2022] Our work on using data-centric properties to understand graph contrastive learning was accepted to NeurIPS!
[08/2022] Our work on understanding neural network dynamics through a dynamic graph perspective was accepted as a short paper at CIKM.
[08/2022] Our work on understanding how quadratic regularizers mitigate catastrophic forgetting was accepted at the Conference on Lifelong Learning Agents (CoLLAs).
[07/2022] Our work exploring how to adapt expressive pretrained models for both improved generalization and safety was accepted at the Principles of Distribution Shift (PODS) Workshop at ICML.
[01/2022] Our work on unsupervised graph representation learning was accepted to TheWebConf (formerly WWW)!
[05/2021] Started interning at Lawrence Livermore National Laboratory with Dr. Jay Thiagarajan.
Publications
Analyzing Data-Centric Properties for Contrastive Learning on Graphs
Puja Trivedi, Ekdeep Singh Lubana, Mark Heimann, Danai Koutra, and Jay J. Thiagarajan
Advances in Neural Information Processing Systems (NeurIPS), 2022
bibtex / arXiv / Code

We provide a novel generalization analysis for graph contrastive learning with popular, generic graph augmentations. Our analysis identifies several limitations in current self-supervised graph learning practices.

Exploring the Design of Adaptation Protocols for Improved Generalization and Machine Learning Safety
Puja Trivedi, Danai Koutra, and Jay J. Thiagarajan
Principles of Distribution Shift (PODS) Workshop at ICML, 2022
bibtex / arXiv

We study the effectiveness of common adaptation protocols when adapting expressive pretrained models for both improved generalization and safety (calibration, robustness, etc.).

A Content-First Benchmark for Self-Supervised Graph Representation Learning
Puja Trivedi, Mark Heimann, Ekdeep Singh Lubana, Danai Koutra, and Jay J. Thiagarajan
TheWebConf Workshop on Graph Learning Benchmarks, 2022
bibtex / PDF / Video / Code

We introduce a synthetic data generation process that allows us to control the amount of task-irrelevant and task-relevant information in graph datasets. We find that it is particularly useful for evaluating automated augmentation methods.

Augmentations in Graph Contrastive Learning: Current Methodological Flaws & Towards Better Practices
Puja Trivedi, Ekdeep Singh Lubana, Yujun Yan, Yaoqing Yang, and Danai Koutra
The ACM Web Conference (formerly WWW), 2022
bibtex / arXiv / Video

We contextualize the performance of several unsupervised graph representation learning methods with respect to the inductive biases of GNNs and show significant improvements by using structured, task-relevant augmentations.

How do Quadratic Regularizers Prevent Catastrophic Forgetting: The Role of Interpolation
Ekdeep Singh Lubana, Puja Trivedi, Danai Koutra, and Robert P. Dick
Conference on Lifelong Learning Agents (CoLLAs), 2022
bibtex / arXiv / Code

This work demonstrates that quadratic regularization methods for preventing catastrophic forgetting in deep networks rely on a simple heuristic under the hood: interpolation.


Website design from here.