
#GraphML

Latest posts tagged with #GraphML on Bluesky

Large-Scale Graph Dataset Measures Long-Range Interactions

City-Networks offers over 100k road-intersection nodes, pushing models to capture long-range dependencies, and adds an influence metric based on the output Jacobians of distant neighbors. Read more: getnews.me/large-scale-graph-datase... #citynetworks #graphml #longrange
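The post mentions a metric built from the output Jacobians of distant neighbors. Purely as a hedged sketch of that idea (not City-Networks' actual metric): for a toy linear message-passing model, the Jacobian of one node's output with respect to another node's features factorizes, so long-range influence can be read off directly. The model and all names below are illustrative.

```python
import numpy as np

def normalized_adj(A):
    """Row-normalize the adjacency matrix (a common GNN propagation operator)."""
    deg = A.sum(axis=1, keepdims=True)
    return A / np.maximum(deg, 1)

def influence(A, W, k, u, v):
    """Jacobian-based influence of node u on node v for the k-layer
    linear message-passing model h = (A_hat^k X) W.  The Jacobian
    d h_v / d x_u equals A_hat^k[v, u] * W.T, so its Frobenius norm
    factorizes into |A_hat^k[v, u]| * ||W||_F."""
    P = np.linalg.matrix_power(normalized_adj(A), k)
    return abs(P[v, u]) * np.linalg.norm(W)

# A 6-node path graph: node 5 is far from node 0.
A = np.zeros((6, 6))
for i in range(5):
    A[i, i + 1] = A[i + 1, i] = 1.0
W = np.eye(3)

near = influence(A, W, k=2, u=2, v=0)  # a 2-hop neighbor is visible...
far = influence(A, W, k=2, u=5, v=0)   # ...a 5-hop one is not
```

A zero score for the distant node is exactly the failure mode a long-range benchmark is designed to expose: a shallow model's Jacobian with respect to far-away neighbors vanishes.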

Choose the Right Graph Model Faster with HypNF’s Parameter Knobs

This article tests how degree, clustering, and topology–feature correlations sway GNN and feature-only models, using HypNF synthetic graphs. #graphml

A Hyperbolic Benchmark for Stress-Testing GNNs Across Degree, Clustering, and Homophily

Synthetic HypNF graphs reveal GNN fragilities: HGCN beats GCN on dense, homogeneous nets but falters on sparse power-law ones. #graphml


A quick reminder: the workshop "Graph-Augmented LLMs (GaLM): Bridging Language and Structured Knowledge" is still accepting submissions at #IEEE #ICDM.

📝 Papers — Extended deadline: September 5

For more info and how to submit your work: iitbhu.ac.in/cf/jcsic/act...

#LLM #ML #AI #GraphML #RAG

Multiresolution Analysis and Statistical Thresholding on Dynamic Networks
Detecting structural change in dynamic network data has wide-ranging applications. Existing approaches typically divide the data into time bins, extract network features within each bin, and then comp...

📄 Preprint available:
 arxiv.org/abs/2506.01208

Joint work with @tijldebie.bsky.social @nickheard @alexandermodell
#TemporalNetworks #DynamicGraphs #NetworkScience #ChangeDetection #Cybersecurity #SignalProcessing #Wavelets #StatisticalLearning #TimeSeries #GraphML
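For readers new to the setup the abstract describes (bin the data, extract per-bin features, then compare across bins): a minimal numpy sketch of that baseline, not the paper's wavelet-based method. The feature (edge count), threshold, and all numbers are illustrative.

```python
import numpy as np

def binned_change_scores(edge_times, bin_width):
    """Baseline change detection on a dynamic network: bin timestamped
    edges, extract a per-bin feature (here, the edge count), and score
    each bin by how far its feature deviates from the overall history."""
    n_bins = int(np.ceil(np.max(edge_times) / bin_width))
    counts, _ = np.histogram(edge_times, bins=n_bins,
                             range=(0, n_bins * bin_width))
    mu, sigma = counts.mean(), counts.std() + 1e-12
    return (counts - mu) / sigma  # z-score per bin

rng = np.random.default_rng(0)
quiet = rng.uniform(0, 50, size=200)   # steady background activity
burst = rng.uniform(50, 55, size=300)  # a sudden structural burst
scores = binned_change_scores(np.concatenate([quiet, burst]), bin_width=5.0)
flagged = np.where(np.abs(scores) > 2.0)[0]  # simple statistical threshold
```

The burst lands entirely in the last bin, which is the only one the threshold flags. The choice of a single fixed bin width is precisely the limitation a multiresolution approach addresses.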

The Generalized Skew Spectrum of Graphs
This paper proposes a family of permutation-invariant graph embeddings, generalizing the Skew Spectrum of graphs of Kondor & Borgwardt (2008). Grounded in group theory and harmonic analysis, our metho...

🎉 Our paper “The Generalized Skew Spectrum of Graphs” was accepted to ICML 2025!

We applied deep math - group theory, rep theory & Fourier analysis - to graph ML (no quantum this time!😄)

📍 See you in Vancouver in July!
📄 arxiv.org/abs/2505.23609

#ICML2025 #GraphML #AI #ML
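The skew spectrum itself is group-theoretic; as a much simpler illustration of the permutation-invariance property the paper generalizes, here is a toy spectral embedding (sorted adjacency eigenvalues), which is unchanged under node relabeling. This is not the paper's method.

```python
import numpy as np

def spectral_embedding(A):
    """A simple permutation-invariant graph embedding: the sorted
    eigenvalues of the symmetric adjacency matrix.  Relabeling nodes
    sends A to P A P.T, which has the same spectrum, so the embedding
    does not depend on the node ordering."""
    return np.sort(np.linalg.eigvalsh(A))

rng = np.random.default_rng(1)
A = rng.integers(0, 2, size=(6, 6)).astype(float)
A = np.triu(A, 1)
A = A + A.T                        # random simple undirected graph
P = np.eye(6)[rng.permutation(6)]  # random permutation matrix
same = np.allclose(spectral_embedding(A), spectral_embedding(P @ A @ P.T))
```

The spectrum, however, is not a complete invariant (cospectral non-isomorphic graphs exist), which is one motivation for richer invariants like the skew spectrum.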


Some of our newest and most exciting updates on how we align the use of AI with the practice of catastrophe modeling and disaster risk science!

#AI #DRR #EnvironmentalRisk #Cambridge #AI4ER #CDT #regional #disaster #risk #catastrophe #EO #ML #CAT #graphML #Bayesian


I’m in Sydney this week! 🇦🇺 Excited to attend #WWW2025, the 2025 ACM Web Conference.

Tomorrow, I’ll be presenting our paper, “To Share or Not to Share: Investigating Weight Sharing in Variational Graph Autoencoders,”
co-authored with Jiaying Xu.

#WebConf2025 #GraphML

Knowledge Graph Technology Showcase for Tom Sawyer Software.

Want to simplify your data analysis? Dr. Ashleigh Faith breaks down how Tom Sawyer Software's suite empowers you to use GraphML effortlessly. No coding? No problem! Watch her insightful review and unlock your data's potential: www.youtube.com/watch?v=1-AZ... #GraphML #TechReview

2025 ACM Web Conference

A research update!

Happy to share that our #GraphML paper:

"To Share or Not to Share: Investigating Weight Sharing in Variational Graph Autoencoders"
co-authored with Jiaying Xu,

has been accepted for presentation at the ACM Web Conference #WWW2025! 🥳

Paper online soon. See you in Sydney!

A poster with a light blue background, featuring the paper with title: “A True-to-the-Model Axiomatic Benchmark for Graph-based Explainers”.
Authors: Corrado Monti, Paolo Bajardi, Francesco Bonchi, André Panisson, Alan Perotti 

Background
Explainability in GNNs is crucial for enhancing trust and understanding in machine learning models. Current benchmarks focus on the data, ignoring the model's actual decision logic, which leaves gaps in understanding. Furthermore, existing methods often lack standardized benchmarks to measure their reliability and effectiveness.

Motivation
Reliable, standardised benchmarks are needed to ensure explainers reflect the internal logic of graph-based models, aiding in fairness, accountability, and regulatory compliance.

Research Question
If a model M is using a protected feature f, for instance the gender of a user, to classify whether their ads should gain more visibility, is a given explainer E able to detect it?

Core Idea
An explainer should detect whether a model relies on specific features for node classification.
Implements a “true-to-the-model” rather than “true-to-the-data” logic.

Key Components
White-Box Classifiers: Local, Neighborhood, and Two-Hop models with hardcoded logic for feature importance.
Axioms: an explainer must assign higher scores to truly important features.
Findings:
Explainer Performance
Deconvolution: Perfect fidelity but limited to GNNs.
GraphLIME: Fails with non-local correlations and high sparsity.
LRP/Integrated Gradients: Struggle with zero-valued features.
GNNExplainer: Sensitive to sparsity and edge masking.

Real-World Insights: Facebook Dataset
Fidelity in detecting protected feature use in classification.
Results for different explainers, highlighting strengths and limitations.
Contributions:
Proposed a rigorous framework for benchmarking explainers
Demonstrated practical biases and flaws in popular explainers


Check out our poster at #LoG2024, based on our #TMLR paper:
📍 “A True-to-the-Model Axiomatic Benchmark for Graph-based Explainers”
🗓️ Tuesday 4–6 PM CET
📌 Poster Session 2, GatherTown
Join us to discuss graph ML explainability and benchmarks
#ExplainableAI #GraphML
openreview.net/forum?id=HSQTv3R8Iz
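To make the poster's axiom concrete: a hedged toy version of the "true-to-the-model" check, with a white-box linear model whose feature importance is hardcoded and a gradient explainer that should recover it. Everything below is illustrative, not the benchmark's actual code.

```python
import numpy as np

def local_model(x, w):
    """White-box 'Local Model' in the spirit of the benchmark: a node's
    logit uses only its own features, with hardcoded weights w, so the
    ground-truth feature importance is known by construction."""
    return float(x @ w)

def gradient_explainer(w):
    """Gradient attribution for this linear model: d output / d x = w,
    so the per-feature importance score is simply |w|."""
    return np.abs(w)

# Feature 0 plays the role of a protected feature the model truly uses;
# feature 2 is never used.  (Weights are illustrative.)
w = np.array([2.0, 0.5, 0.0])
scores = gradient_explainer(w)

# Axiom check: the explainer must rank truly important features higher.
passes_axiom = bool(scores[0] > scores[1] > scores[2])
```

Because the model's logic is hardcoded, this check evaluates the explainer against the model's actual decision rule rather than against correlations in the data.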

Position: Relational Deep Learning - Graph Representation Learning on Relational Databases
Much of the world’s most valued data is stored in relational databases and data warehouses, where the data is organized into tables connected by primary-foreign key relations. However, building mac...

Hi!
This Thursday, Nov 21st, 11 am EST, Rishabh Ranjan will present:
RELBENCH: A Benchmark for Deep Learning on Relational Databases (NeurIPS 2024 Datasets and Benchmarks Track)
🎈
Join on zoom (link on website)

arxiv.org/pdf/2407.20060

#graphml #machinelearning #temporalgraphs #neurips
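RelBench's premise, per the abstract, is that relational tables linked by primary-foreign keys already form a graph. A minimal sketch of that rows-as-nodes, foreign-keys-as-edges construction (the table and column names are made up, not RelBench's schema):

```python
# Sketch of the core idea behind relational deep learning: rows of tables
# linked by primary-foreign keys become nodes of a heterogeneous graph,
# and each foreign-key reference becomes an edge.
users = {1: {"name": "ada"}, 2: {"name": "bob"}}
orders = {10: {"user_id": 1}, 11: {"user_id": 1}, 12: {"user_id": 2}}

# One typed node per row of each table.
nodes = [("user", uid) for uid in users] + [("order", oid) for oid in orders]

# One edge per foreign-key reference: orders.user_id -> users.id.
edges = [(("order", oid), ("user", row["user_id"]))
         for oid, row in orders.items()]
```

A GNN can then do message passing over this heterogeneous graph instead of relying on hand-engineered joins and aggregations.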
