We’re absolutely thrilled to have an exciting lineup of speakers this year covering a wide range of topics, including @pseudomanifold.topology.rocks @dom-beaini.bsky.social @mathildepapillon.bsky.social Dhananjay Bhaskar, Naoki Saito!
The 8th annual Graph Signal Processing Workshop is back this May 14-16, held at Mila in Montreal, Canada! GSP covers all things graphs, signals, learning, & applications!
🔗: gspworkshop.org
👉🏻Abstract submission opens Feb 1
👉🏻 Registration opens Mar 20
Hey 👋 I'm new here and I did my best to compile some starter packs to help other new people in the same community!
I probably missed some people so please help me update my lists :)
1/6 Machine learning for drug discovery
go.bsky.app/2PpxWLc
More in the thread! 🧵
📖 Beyond results, we include:
🔹 Theoretical analysis on why GCON works.
🔹 Ablations on efficiency and generalizability — GCON is surprisingly transferable 🚀
Check out the paper for more, or even better, catch us at our @logconference.bsky.social oral at 14:30 EST/19:30 GMT! 🧵[10/10]
We test GCON on Max Cut, Min Dominating Set, & Max Clique tasks.
🔹 GCON beats other GNNs & GFlowNet-based solvers
🔹 Outperforms the (time-budgeted) Gurobi optimizer on Max Cut by 45+ edges!
🔹 Much faster inference than GFlowNet & Gurobi
GCON is both versatile & powerful. 🧵[9/n]
✨ Attention then re-weights the multi-scale features node by node; the re-weighted features are passed through an MLP + softmax to predict node probabilities (p).
p is then used for:
(a) Self-supervised loss computation
(b) Task-specific decoding to satisfy task constraints
🧵[8/n]
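The read-out above can be sketched in a few lines of plain Python. Everything here is illustrative (the channel count, the identity "MLP", and the 2-way softmax head are my assumptions, not the paper's exact architecture):

```python
import math

def softmax(z):
    # numerically stable softmax over a list of logits
    m = max(z)
    e = [math.exp(v - m) for v in z]
    s = sum(e)
    return [v / s for v in e]

def node_probabilities(channel_feats, attn_logits):
    """Toy sketch of the GCON read-out. For each node:
      1. softmax over attn_logits re-weights the K filter channels,
      2. a stand-in "MLP" (here: identity) maps the mix to a logit,
      3. a 2-way softmax yields the selection probability p_v.
    Shapes and weights are hypothetical, for illustration only."""
    probs = []
    for feats, logits in zip(channel_feats, attn_logits):
        attn = softmax(logits)                       # per-node attention
        z = sum(a * f for a, f in zip(attn, feats))  # convex mix of channels
        p_in, _ = softmax([z, 0.0])                  # P(node selected)
        probs.append(p_in)
    return probs
```

With two nodes, two channels each, and uniform attention, a node with large positive channel features gets p close to 1 and one with large negative features gets p close to 0.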
🌐 High-frequency signals are vital for CO: they help capture solution subsets that aren't necessarily local and draw sharp boundaries around vertex sets.
[L]: High-frequency features capture the true clique.
[R]: Low-pass filters diffuse boundaries to nodes not part of the clique. 🧵[7/n]
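The low-pass vs. band-pass distinction can be made concrete with diffusion wavelets of the kind used in geometric scattering, Ψ_j = P^(2^(j-1)) − P^(2^j) over a lazy random walk P. A minimal sketch (the row-stochastic normalization here is my choice; the paper's operator may differ):

```python
def matmul(A, B):
    # naive dense matrix multiply, fine for a toy graph
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def matpow(M, k):
    n = len(M)
    R = [[float(i == j) for j in range(n)] for i in range(n)]  # identity
    for _ in range(k):
        R = matmul(R, M)
    return R

def matvec(M, x):
    return [sum(m * xi for m, xi in zip(row, x)) for row in M]

def lazy_walk(adj):
    # P = 1/2 (I + D^-1 A): row-stochastic lazy random-walk operator
    n = len(adj)
    deg = [sum(row) for row in adj]
    return [[0.5 * ((i == j) + adj[i][j] / deg[i]) for j in range(n)]
            for i in range(n)]

def wavelet(P, j):
    # diffusion wavelet Psi_j = P^(2^(j-1)) - P^(2^j): a band-pass filter
    A, B = matpow(P, 2 ** (j - 1)), matpow(P, 2 ** j)
    n = len(P)
    return [[A[r][c] - B[r][c] for c in range(n)] for r in range(n)]
```

The band-pass nature is easy to check: applying Ψ_j to a constant signal gives zero (the smooth "DC" component is removed), whereas repeated applications of P only smooth signals further.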
🔍 The GCON pipeline starts with generating node features from graph statistics.
We then apply GCON blocks that combine multi-scale filters derived from geometric scattering with conventional GNN aggregation for low-pass filtering.
Why the multi-scale filters? 🧵[6/n]
🔧 GCON overcomes these by:
1️⃣ Hybrid filter bank: Combines GNN aggregation with wavelet filters to capture intricate graph geometry.
2️⃣ Localized attention: Dynamically weights filters per node for flexibility.
3️⃣ Self-supervised losses: Circumvent the need for labels. 🧵[5/n]
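As one hedged example of a self-supervised CO loss (in the spirit of probabilistic-relaxation methods; GCON's exact losses may differ): for Max Cut, the negated expected cut size under independent Bernoulli(p) assignments is differentiable in p and needs no labels.

```python
def maxcut_loss(p, edges):
    """Negated expected cut size when each node v lands on side 1
    independently with probability p[v]. Edge (i, j) is cut with
    probability p_i(1-p_j) + p_j(1-p_i). Illustrative only; not
    necessarily the loss used in the paper."""
    return -sum(p[i] * (1 - p[j]) + p[j] * (1 - p[i]) for i, j in edges)
```

A confident split of a single edge, p = [1, 0], attains the minimum loss of -1 (one cut edge), while the uninformative p = [0.5, 0.5] only reaches -0.5.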
1️⃣ GNNs treat smoothness as an inductive bias. This isn't always suitable for CO problems, which often require high-frequency info.
2️⃣ CO graphs usually lack informative node features.
3️⃣ NP-hardness limits labeled data availability, making supervised learning infeasible. 🧵[4/n]
✨ Deep learning enables fast approximate solutions, which are good enough for most real-world CO tasks. Since many CO problems are graph-based, GNNs are a natural fit, but they face some challenges of their own: 🧵[3/n]
🔍 Combinatorial Optimization (CO) problems require finding an optimal subset of objects from a finite set. Most CO problems are NP-hard, making exact solutions (e.g., via mixed-integer programming) infeasible as instances grow larger. 🧵[2/n]
Excited to present our paper "Towards a General Recipe for Combinatorial Optimization with Multi-Filter GNNs" as a Spotlight at @logconference.bsky.social 2024! 🎉
We propose GCON, a novel GNN framework for tackling CO problems.
📜 arxiv.org/abs/2405.20543
🛠 github.com/WenkelF/copt
🧵[1/n]