Enjoyed giving our tutorial on Geometric & Topological Deep Learning at IEEE MLSP 2025 alongside @semihcanturk.bsky.social. Loving the Istanbul vibes and the amazing food here!
1/ Introducing TxPert: a new model that predicts transcriptional responses across diverse biological contexts
It's designed to generalize across unseen single-gene perturbations, novel combinations of gene perturbations, and even new cell types 🧵
www.valencelabs.com/txpert-predi...
1/ At Valence Labs, @recursionpharma.bsky.social's AI research engine, we're focused on advancing drug discovery outcomes through cutting-edge computational methods
Today, we're excited to share our vision for building virtual cells, guided by the predict-explain-discover framework 🧵
Loved working on the TxPert project! It's also exciting to see my PhD work on Graph Transformers (Exphormer model) finding such meaningful application in a critical real-world task.
Read their paper: lnkd.in/gECWw_tR
Or check out a great summary thread from Yi (Joshua): bsky.app/profile/josh...
If you're attending ICLR, stop by their poster and talk:
Poster: Hall 3+2B #376 on Fri, Apr 25 at 15:00
Oral: Session 6A on Sat, Apr 26 at 16:30
Huge congrats to my labmate @joshuaren.bsky.social and my supervisor @djsutherland.ml for receiving an Outstanding Paper Award at @iclr-conf.bsky.social for their work: "Learning Dynamics of LLM Finetuning"
So proud to see their amazing research recognized!
blog.iclr.cc/2025/04/22/a...
There's more in the paper: arxiv.org/abs/2411.16278 (Appendix F)
Weβd love to see anyone do more analysis of these things! To get you started, our scores are available from the "Attention Score Analysis" notebook in our repo:
github.com/hamed1375/Sp...
How much do nodes attend to graph edges, versus expander edges or self-loops?
On the Photo dataset (homophilic), attention mainly comes from graph edges. On the Actor dataset (heterophilic), self-loops and expander edges play a major role.
Q. Is selecting the top few attention scores effective?
A. Rarely: unless the graph has a very small average degree, the top-k scores capture only a small fraction of the total attention mass. Results are consistent for both dim=4 and dim=64.
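To make the measurement concrete, here's a toy sketch (not the paper's code; `topk_coverage` and the score vectors are made up for illustration) of how much of a softmax attention row its top-k entries capture:

```python
import numpy as np

def topk_coverage(scores, k):
    """Fraction of a softmax attention row's mass captured by its top-k entries."""
    probs = np.exp(scores - scores.max())
    probs /= probs.sum()
    return float(np.sort(probs)[-k:].sum())

rng = np.random.default_rng(0)
# Near-uniform scores over 100 neighbors: top-3 captures very little mass.
flat = topk_coverage(rng.normal(0.0, 0.1, size=100), k=3)
# One dominant neighbor: top-3 captures almost everything.
sharp = topk_coverage(np.concatenate([[10.0], np.zeros(99)]), k=3)
print(f"flat: {flat:.3f}, sharp: {sharp:.3f}")
```

When scores are close to uniform across many neighbors, no small top-k can cover the mass, which is the finding above.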
Q. How similar are attention scores across layers?
A. In all experiments, the first layer's attention scores differed significantly, but scores were very consistent for all the other layers.
Q. How do attention scores change across layers?
A. The first layer consistently shows much higher entropy (more uniform attention across nodes), while deeper layers have sharper attention scores.
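Entropy is the natural way to quantify that uniform-vs-sharp distinction; a minimal illustrative sketch (not the paper's analysis code):

```python
import numpy as np

def attention_entropy(probs):
    """Shannon entropy of an attention distribution; higher = more uniform."""
    p = probs[probs > 0]
    return float(-(p * np.log(p)).sum())

n = 50
uniform = np.full(n, 1.0 / n)        # first-layer-like: spread out
sharp = np.full(n, 0.1 / (n - 1))    # deeper-layer-like: one neighbor dominates
sharp[0] = 0.9

print(attention_entropy(uniform))    # maximal: log(50), about 3.91
print(attention_entropy(sharp))
```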
We trained 100 single-head Transformers (masked for graph edges w/ and w/o expander graphs + self-loops) on Photo & Actor, with hidden dims 4 to 64.
Q. Are attention scores consistent across widths?
A. The distributions of where a node attends are pretty consistent.
As a reminder, we will have our poster session tomorrow:
East Exhibit Hall, Poster #3010
Paper: arxiv.org/abs/2411.16278
Code: github.com/hamed1375/Sp...
To motivate you further, we have some insights gained from the attention score analysis of this work, which I'll share in this thread:
After two years, our paper on generative models for structure-based drug design is finally out in @natcomputsci.bsky.social
www.nature.com/articles/s43...
Come chat with us at NeurIPS next week!
Thursday, Dec 12
11:00 AM–2:00 PM PST
East Exhibit Hall A-C, Poster #3010
Paper: arxiv.org/abs/2411.16278
Code: github.com/hamed1375/Sp...
See you there!
[13/13]
For more on the compression results, see our workshop paper "A Theory for Compressibility of Graph Transformers for Transductive Learning"; there will be a thread on this too!
Workshop paper link: arxiv.org/abs/2411.13028
We have theoretical guarantees, too: on compression (smaller nets can estimate the attention scores), and that sparsification works even from an approximate attention matrix (from the narrow net).
[11/13]
Downsampling the edges plus regular-degree computations can make the model even faster and more memory-efficient than a GCN!
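To see why a fixed degree helps (a toy sketch with made-up shapes, not the actual implementation): when every node keeps exactly d neighbors, the gathered features form a dense tensor and aggregation is one batched einsum, with no scatter op.

```python
import numpy as np

# Fixed degree d per node: neighbor features gather into a dense (n, d, h)
# tensor, so weighted aggregation is a single batched contraction.
n, d, h = 5, 3, 4
rng = np.random.default_rng(0)
x = rng.normal(size=(n, h))               # node features
nbr = rng.integers(0, n, size=(n, d))     # d sampled neighbors per node
w = rng.random(size=(n, d))               # e.g. attention weights
w /= w.sum(axis=1, keepdims=True)

out = np.einsum('nd,ndh->nh', w, x[nbr])  # weighted neighbor average
print(out.shape)
```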
But we can scale to graphs Exphormer couldn't even dream of:
How much accuracy do we lose compared to an Exphormer with many more edges (and way more memory usage)? Not much.
Now, with sparse (meaningful) edges, k-hop sampling is feasible again even across several layers. Memory and runtime can be traded off by choosing how many "core nodes" we expand from.
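A minimal sketch of expanding a batch from core nodes hop by hop (names like `khop_batch` and the fanout cap are made up for illustration, not the paper's code):

```python
import numpy as np

def khop_batch(adj, core_nodes, num_layers, fanout, rng):
    """Grow a batch from core nodes, sampling at most `fanout` neighbors
    per node at each hop; more core nodes / fanout = more memory."""
    frontier, batch = set(core_nodes), set(core_nodes)
    for _ in range(num_layers):
        nxt = set()
        for u in frontier:
            nbrs = adj[u]
            if len(nbrs) > fanout:
                nbrs = rng.choice(nbrs, size=fanout, replace=False)
            nxt.update(int(v) for v in nbrs)
        frontier = nxt - batch
        batch |= nxt
    return batch

# 6-cycle 0-1-2-3-4-5-0; fanout >= degree, so no edges are dropped here.
adj = {i: [(i - 1) % 6, (i + 1) % 6] for i in range(6)}
batch = khop_batch(adj, core_nodes=[0], num_layers=2, fanout=2,
                   rng=np.random.default_rng(0))
print(sorted(batch))
```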
[7/13]
By sampling a regular degree, graph computations are much more efficient (simple batched matmul instead of needing a scatter operation). Naive implementations of sampling can also be really slow, but reservoir sampling makes resampling edges per epoch no big deal.
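For reference, the classic one-pass reservoir sampler (Algorithm R) looks like this; the edge-stream setup is a made-up example, not the paper's code:

```python
import random

def reservoir_sample(stream, k, rng):
    """Algorithm R: keep a uniform sample of k items from a stream in one
    pass with O(k) memory, without knowing the stream length up front."""
    reservoir = []
    for i, item in enumerate(stream):
        if i < k:
            reservoir.append(item)
        else:
            j = rng.randrange(i + 1)
            if j < k:
                reservoir[j] = item  # replace with probability k / (i + 1)
    return reservoir

edges = [(u, u + 1) for u in range(10_000)]  # a long edge stream
sample = reservoir_sample(iter(edges), k=4, rng=random.Random(0))
print(sample)
```

Because each pass is O(stream length) with tiny constant memory, redrawing a fresh edge sample every epoch stays cheap.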
Now, we extract the attention scores from the initial network, and use them to sample a sparse attention graph for a bigger model. Attention scores vary on each layer, but no problem: we sample neighbors per layer. Memory usage plummets!
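The per-layer selection can be sketched as below (illustrative only: `sparsify_per_layer` is a made-up name, and deterministic top-k is a simplification of sampling from the scores):

```python
import numpy as np

def sparsify_per_layer(score_mats, k):
    """Given each layer's (n, n) attention scores from the small proxy
    model, keep every node's k strongest incoming edges, per layer."""
    return [np.argsort(s, axis=1)[:, -k:] for s in score_mats]

layer0 = np.array([[0.1, 0.9, 0.5],
                   [0.3, 0.2, 0.8],
                   [0.7, 0.4, 0.1]])
kept = sparsify_per_layer([layer0], k=1)[0]
print(kept.tolist())  # each node keeps its single strongest edge
```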
[5/13]
But not all the edges matter: if we know which won't be used, we can just drop them and get a sparser graph/smaller k-hop neighborhoods. It turns out a small network (same arch, tiny hidden dim, minor tweaks) can be a really good proxy for which edges will matter!
[4/13]
For very large graphs, though, even very simple GNNs need batching. One way is k-hop neighborhood selection, but expander graphs are specifically designed so that k-hop neighborhoods are big. Other batching approaches can drop important edges and kill the advantages of GTs.
[3/13]
Our previous work, Exphormer, uses expander graphs to avoid the quadratic complexity of full GTs.
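One simple way to build an expander-like sparse attention pattern (a sketch only; Exphormer's actual construction uses random regular expander graphs) is to union a few random permutations, giving O(n·d) attention edges instead of O(n²):

```python
import numpy as np

def permutation_expander(n, d, rng):
    """Sparse attention pattern from d random permutations: node i attends
    to perms[0][i], ..., perms[d-1][i], so edge count is n*d, not n*n."""
    perms = np.stack([rng.permutation(n) for _ in range(d)])  # (d, n)
    src = np.tile(np.arange(n), d)
    dst = perms.reshape(-1)
    return np.stack([src, dst], axis=1)                       # (n*d, 2)

edges = permutation_expander(n=1000, d=3, rng=np.random.default_rng(0))
print(len(edges))  # 3000 edges vs 1,000,000 for full attention
```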