
#TMLR

Latest posts tagged with #TMLR on Bluesky



Our paper by Angel Villar-Corrales, Gjergj Plepi, and Sven Behnke:
"TextOCVP: Object-Centric Video Prediction with Language Guidance"
has been published in Transactions on Machine Learning Research (TMLR). #TextOCVP #TMLR #GenAI
ais.uni-bonn.de/videos/TMLR_...


🚨Thoughtology is now accepted to #TMLR! We've added some new analyses, most notably:
🌟 We quantify rumination; repetitive thoughts are associated with incorrect responses
🌟 We add 2 LRMs: gpt-oss and Qwen3. Both show a reasoning 'sweet spot'
See 📃 : openreview.net/forum?id=BZw...


This work was made possible through a great collaboration with Jingcheng (Frank) Niu, Subhabrata Dutta, Ahmed Elshabrawy, @harishtm.bsky.social, and @igurevych.bsky.social

#Interpretability #InContextLearning #TMLR #LLMs #MechanisticInterpretability #EmergentAbilities

A diagram of the GRAPES pipeline. It shows a subgraph being sampled in two steps and being fed to a GNN, with a blue line showing the learning signal. The caption reads Figure 1: Overview of GRAPES. First, GRAPES processes a target node (green) by computing node inclusion probabilities on its 1-hop neighbors (shown by node color shade) with a sampling GNN. Given these probabilities, GRAPES samples k nodes. Then, GRAPES repeats this process over nodes in the 2-hop neighborhood. We pass the sampled subgraph to the classifier GNN for target node classification. Finally, GRAPES uses the classification loss to update the classifier GNN and to reward the sampler GNN.


A results table for node classification on heterophilous graphs. Table 2: F1-scores (%) for different sampling methods trained on heterophilous graphs for a batch size of 256, and a sample size of 256 per layer. We report the mean and standard deviation over 10 runs. The best values among the sampling baselines (all except GAS) are in bold, and the second best are underlined. MC stands for multi-class and ML stands for multi-label classification. OOM indicates out of memory.


Performance of samplers vs. sample size, showing that GRAPES generally performs well across sample sizes, while other samplers often show more variance. The caption reads Figure 4: Comparative analysis of classification accuracy across different sampling sizes for sampling baselines and GRAPES. We repeated each experiment five times; the shaded regions show the 95% confidence intervals.


A diagrammatic illustration of a graph classification task used in one of the theorems. The caption reads Figure 9: An example of a graph for Theorem 1 with eight nodes. Red edges belong to E_1; features x_i and labels y_i are shown beside every node. For nodes v_1 and v_2 we show the edge e_12 as an example. As shown, the label of each node is the second feature of its neighbor, where a red edge connects them. The edge homophily ratio is h = 12/28 ≈ 0.43.

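The edge homophily ratio quoted in the caption is just the fraction of edges that join same-label nodes; a minimal helper makes the arithmetic concrete (all names here are illustrative, not from the paper's code):

```python
def edge_homophily(edges, labels):
    """Fraction of edges whose endpoints share a label.

    edges: iterable of (u, v) pairs; labels: dict node -> label.
    In Figure 9's graph, 12 of the 28 edges are homophilous,
    giving h = 12/28, roughly 0.43.
    """
    same = sum(labels[u] == labels[v] for u, v in edges)
    return same / len(edges)
```

For example, a three-node graph with labels {0: "a", 1: "a", 2: "b"} and edges (0, 1) and (0, 2) has one homophilous edge out of two, so h = 0.5.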

Now out in #TMLR:

🍇 GRAPES: Learning to Sample Graphs for Scalable Graph Neural Networks 🍇

There's lots of work on sampling subgraphs for GNNs, but relatively little on making this sampling process _adaptive_. That is, learning to select the data from the […]

[Original post on sigmoid.social]
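The two-step adaptive sampling described in the Figure 1 caption above can be sketched roughly as follows. This is only a sketch under stated assumptions: `score_fn` stands in for the sampling GNN's inclusion probabilities, deterministic top-k selection replaces true sampling from those probabilities, and none of the names come from the GRAPES codebase:

```python
def sample_subgraph(adj, target, k, score_fn, hops=2):
    """Two-hop neighborhood sampling in the spirit of GRAPES.

    adj: dict node -> list of neighbors.
    score_fn: stand-in for the sampling GNN; higher score means
    a higher inclusion probability (illustrative only).
    """
    frontier, sampled = {target}, {target}
    for _ in range(hops):
        # candidate nodes one hop beyond the current frontier
        candidates = {n for u in frontier for n in adj[u]} - sampled
        if not candidates:
            break
        # keep the k highest-scoring candidates (the real method
        # samples stochastically from the learned probabilities)
        chosen = sorted(candidates, key=score_fn, reverse=True)[:k]
        sampled.update(chosen)
        frontier = set(chosen)
    return sampled
```

The sampled node set would then be passed to the classifier GNN, whose loss both updates the classifier and provides a reward signal for the sampler.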


The second project: *On Good Practices for Task-Specific Distillation of Large Pretrained Visual Models* published at #TMLR looked at how to best distill from DINOv2 when training a small specialized model


Our insight is to introduce an intermediate form of gradient clipping that can leverage the PL* inequality of wide nets - something not known for standard clipping. Given that our algorithm works for transformers, maybe that points to some yet unknown algebraic property of them. #TMLR
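For reference, the "standard clipping" the post contrasts against is plain global-norm gradient clipping. The sketch below shows only that baseline; delta-GCLip itself modifies this rule and is not reproduced here (see the linked PDF):

```python
import math

def clip_global_norm(grads, max_norm):
    """Standard global-norm gradient clipping: rescale all gradients
    uniformly so their joint L2 norm does not exceed max_norm.

    grads: list of lists of floats (one list per parameter group).
    This is the baseline rule, NOT the paper's delta-GCLip.
    """
    total = math.sqrt(sum(g * g for group in grads for g in group))
    if total <= max_norm:
        return grads
    scale = max_norm / total
    return [[g * scale for g in group] for group in grads]
```

For example, gradients [[3.0], [4.0]] have global norm 5, so clipping to max_norm 1 rescales them to [[0.6], [0.8]].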


Our "delta-GCLip" is the *only* known adaptive gradient algorithm that provably trains deep-nets AND is practically competitive. That's the message of our recently accepted #TMLR paper - and my 4th TMLR journal 🙂

openreview.net/pdf?id=ABT1X...

#optimization #deeplearningtheory


Check out our MOCA self-supervised learning approach unifying the learning principles of both discriminative & masked image modelling paradigms.
After a non-linear path, MOCA has been accepted at #TMLR and presented in the TMLR poster session at #iclr2025


12/
- we tried again, but not enough bold numbers 😞
- by then DINOv2 was released & the world had changed (for the better)
- MOCA eventually landed at #TMLR (top reviewers) & was later presented as a poster at #iclr2025

*photo: running downstream exps during rebuttal at judo match


1/ New & old work on self-supervised representation learning (SSL) with ViTs:
MOCA ☕ - Predicting Masked Online Codebook Assignments w/ @spyrosgidaris.bsky.social O. Simeoni, A. Vobecky, @matthieucord.bsky.social, N. Komodakis, @ptrkprz.bsky.social #TMLR #ICLR2025
Grab a ☕ & brace for a story & a🧵

AttentionSmithy: A Modular Framework for Rapid Transformer Development
Transformer architectures have revolutionized a broad spectrum of AI applications by leveraging attention mechanisms for parallelized and long-range sequence processing. Despite their remarkable...

📢 Exciting news! Our paper, "AttentionSmithy: A Modular Framework for Rapid Transformer Development," has been accepted at TMLR! 🎉
openreview.net/forum?id=0jh...
#AttentionSmithy #Transformer #AI #MachineLearning #TMLR #OpenReview #Bioinformatics #Genomics #NLP #NeuralArchitectureSearch


Delighted to share that Siddhant Bhambri & Mudit Verma's
critical evaluation and refutation of the reasoning claims of ReAct has been accepted to #TMLR (Transactions on Machine Learning Research)

👉https://openreview.net/forum?id=aFAMPSmNHR


Woo hoo.. Our first #TMLR paper!🤗 On the planning and scheduling abilities of LRMs o1 & R1 (w/ Karthik, Kaya, Atharva)

👉 openreview.net/forum?id=FkK...

Even a jaded researcher like me has to admit that
Transactions on Machine Learning Research is a veritable oasis among #AI publication venues! 🙏

TMLR Recap 4/6/25: Hayden Minton and Jake Miller Straight Dominate - Tigers Minor League Report
TMLR Recap has a series of firsts with Max Clark's first home run, Erie sweeps Harrisburg and Roberto Campos gets his first hit

#TMLR recap:

-Erie sweeps Harrisburg, Jake Miller tosses a gem
-West Michigan sweeps Dayton, Max Clark goes opposite field
-Hayden Minton strikes out 7 in 4 innings of work

tigersmlreport.com/2025/04/06/t...


I managed to find emergency reviewers for the #ICML2025 papers (Thank you!)

Now I'm looking for one more emergency reviewer for #TMLR as an Action Editor

If you are an expert in model training on edge devices, please send me a DM with your Google Scholar and OpenReview profiles


#TMLR 3/2

Trey Sweeney 2/3, HR, 2B, 2R
Jace Jung 0/3, K
Dillon Dingler 1/3, 2B, R
Hao-Yu Lee 1/2, 2B, R
Andrew Navigato 2/2, R, RBI
Patrick Lee 0/1, BB, R
Thayron Liranzo 1/3, HR, K, 3RBI

Jackson Jobe
3IP 2H ER 3K

Brant Hurter
2.2IP 3H 2ER BB 2K

Tyler Owens
IP H K


#AI #Privacy #MachineLearning #LargeLanguageModels #LLMs #Research #ArtificialIntelligence #DataSecurity #DifferentialPrivacy #MachineUnlearning #TMLR #Publication #HealthcareAI #DataTools4Heart #Translated


Interested in reviewing for #TMLR?

I am looking for reviewers with expertise in, e.g., probabilistic ML, Bayesian statistics, decision making, and AI/ML for science.

You can specify a max paper load and mark when you're unavailable to review.

Sign up on the Google form below:


second time speaking @ the holy land of statistics, the Indian Statistical Institute (ISI) 🙂
In 2022 I was @ the Bangalore campus. Same topic: theory of operator learning
- the story continues 🔥 Both journal papers were published at #TMLR in 2024.


Another interesting project I worked on at @icepfl.bsky.social Kudos to @sjaved.bsky.social and all the co-authors!

The findings in MQAT highlight the potential of exploiting modularity in neural networks for efficient and performant compression/adaptation.

Check out the #TMLR paper for details!

Online Proceedings | MLRC Machine Learning Reproducibility Challenge

🎓 Check out the #MLRC2023 posters at #NeurIPS 2024 this week: reproml.org/proceedings/

Also, important announcement for the next iteration coming this week! #MLSky #TMLR


This plot is from our 3rd #TMLR journal paper of the year - lnkd.in/e9NWcPcK - on generalization bounds for #DeepOperatorNet methods of PDE solving. #AI4Science Our prediction from theory is this: *use Huber loss to solve PDEs* - and here's a demonstrative comparison on Heat PDE
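The "use Huber loss" takeaway refers to the standard Huber loss, which is quadratic near zero and linear in the tails, so large PDE residuals are penalized less aggressively than under squared error. A minimal scalar sketch (illustrative, not the paper's implementation):

```python
def huber(residual, delta=1.0):
    """Huber loss: 0.5*r^2 for |r| <= delta, linear beyond.

    The linear tail damps the influence of large residual
    outliers compared to a pure squared-error loss.
    """
    r = abs(residual)
    if r <= delta:
        return 0.5 * r * r
    return delta * (r - 0.5 * delta)
```

For instance, with delta = 1 a residual of 0.5 costs 0.125 (quadratic regime), while a residual of 2 costs 1.5 rather than the 2.0 that squared error (0.5 * r^2) would give.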

A poster with a light blue background, featuring the paper with title: “A True-to-the-Model Axiomatic Benchmark for Graph-based Explainers”.
Authors: Corrado Monti, Paolo Bajardi, Francesco Bonchi, André Panisson, Alan Perotti 

Background
Explainability in GNNs is crucial for enhancing trust and understanding in machine learning models. Current benchmarks focus on data, ignoring the model’s actual decision logic, leading to gaps in understanding. Furthermore, existing methods often lack standardized benchmarks to measure their reliability and effectiveness.

Motivation
Reliable, standardised benchmarks are needed to ensure explainers reflect the internal logic of graph-based models, aiding in fairness, accountability, and regulatory compliance.

Research Question
If a model M is using a protected feature f, for instance using the gender of a user to decide whether their ads should gain more visibility, is a given explainer E able to detect it?

Core Idea
An explainer should detect if a model relies on specific features for node classification.
Implements a “true-to-the-model” rather than “true-to-the-data” logic.

Key Components
White-Box Classifiers: Local, Neighborhood, and Two-Hop Models with hardcoded logic for feature importance.
Axioms: an explainer must assign higher scores to truly important features.
Findings:
Explainer Performance
Deconvolution: Perfect fidelity but limited to GNNs.
GraphLIME: Fails with non-local correlations and high sparsity.
LRP/Integrated Gradients: Struggle with zero-valued features.
GNNExplainer: Sensitive to sparsity and edge masking.

Real-World Insights: Facebook Dataset
Fidelity in detecting protected feature use in classification.
Results for different explainers, highlighting strengths and limitations.
Contributions:
Proposed a rigorous framework for benchmarking explainers
Demonstrated practical biases and flaws in popular explainers
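The axiom stated in the Key Components section (an explainer must assign higher scores to truly important features) could be read, in its simplest form, as the check below. The function name and score format are illustrative, not the paper's formalization:

```python
def satisfies_axiom(scores, important, unimportant):
    """Minimal reading of the axiom: every truly important feature
    must outscore every unimportant one.

    scores: dict feature -> attribution score from an explainer;
    important/unimportant: feature lists fixed by the white-box
    classifier's hardcoded logic.
    """
    return (min(scores[f] for f in important)
            > max(scores[f] for f in unimportant))
```

An explainer that scores the protected feature 0.9 and a noise feature 0.1 passes; one that ranks the noise feature higher fails the axiom.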


Check out our poster at #LoG2024, based on our #TMLR paper:
📍 “A True-to-the-Model Axiomatic Benchmark for Graph-based Explainers”
🗓️ Tuesday 4–6 PM CET
📌 Poster Session 2, GatherTown
Join us to discuss graph ML explainability and benchmarks
#ExplainableAI #GraphML
openreview.net/forum?id=HSQTv3R8Iz

Gradient scarcity with Bilevel Optimization for Graph Learning
A common issue in graph learning under the semi-supervised setting is referred to as gradient scarcity. That is, learning graphs by minimizing a loss on a subset of nodes causes edges between unlabel...

Our #TMLR paper "Gradient scarcity with Bilevel Optimization for Graph Learning" (w/ H Ghanem, @samuelvaiter.com) was accepted as an oral presentation at the Learning on Graphs conference 🤓
100% free and online, come check it out!
logconference.org

arxiv.org/abs/2303.13964


#TMLR 8/18

Toledo (L 3-0)

Ryan Vilade 0/4, 2K
Ryan Kreidler 1/4, K
Andrew Navigato 0/3, BB, K
Eddys Leonard 0/3, BB, K
Stephen Scott 1/2, K
Drew Maggi 1/3, K
Oscar Mercado 1/3

Lael Lockhart
5.1 IP 5H 3ER 4BB 4K

Mason Englert
2IP 2H 4K


#TMLR 9/15

Toledo (L 7-1)

Ryan Vilade 1/4, BB
Akil Baddoo 1/5, 3K
Andrew Navigato 1/5
Eddys Leonard 1/4, K
Justice Bigbie 1/4, R
Stephen Scott 3/3, RBI

Bryan Sammons
3.2IP 2H 2ER 2BB 5K 2HR

Mason Englert
IP 3H 2ER BB K


#TMLR 8/12

Toledo (L 7-1)

Ryan Vilade 0/3, BB, 3K
Andrew Navigato 1/4, 2B, 2K
Eddys Leonard 1/4, R
Justice Bigbie 1/4, K
Akil Baddoo 1/3, BB

Matt Manning
3IP H 2ER BB 3K HR


#TMLR 9/8

Toledo (L 9-7)

Ryan Vilade 2/5, RBI
Wenceel Pérez 0/3, BB, K, R
Akil Baddoo 1/3, 2BB, 2K, R, SB(25)
Andrew Navigato 1/5, 3K, RBI
Eddys Leonard 2/5 HR(7) K
Bligh Madris 1/4 HR(17) BB, K, 3RBI
Stephen Scott 2/4, 2 2B

Wilmer Flores
0.2IP 3H 4ER BB 2K HR


#TMLR 9/6

Toledo (W 13-9)

Ryan Vilade 4/5 HR(12) 2 2B, 2R, 3RBI
Andrew Navigato 3/5 HR(21) 2B, K, 4R, 2RBI, SB(20)
Bligh Madris 1/4 HR(16) BB, K, 2R
Justice Bigbie 4/4, BB, R, RBI
Stephen Scott 2/5, 3B, 2K, R, 3RBI

Lael Lockhart
6.2IP 5H ER 2BB 7K

Wilmer Flores
1.1 IP 3BB 2K
