Symbol-equivariant Recurrent Reasoning Models (SE-RRM)
SE-RRM advances HRM and TRM: it guarantees identical solutions for problems with permuted colors (ARC-AGI) or digits (Sudoku).
Coolest part: extrapolation to larger problem sizes!
P: arxiv.org/abs/2603.02193
C: github.com/ml-jku/SE-RRM
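The guarantee above is the defining property f(π(x)) = π(f(x)) for any permutation π of the symbol alphabet. A minimal sketch of how to test that property, with a hypothetical toy model standing in for SE-RRM (not the paper's architecture):

```python
# Hedged sketch: a symbol-equivariant solver must satisfy
# f(pi(x)) == pi(f(x)) for every relabeling pi of the symbols.
# `toy_model` and the grid below are illustrative stand-ins, not SE-RRM.
import random
from collections import Counter

def toy_model(grid):
    """Illustrative solver: fill the grid with its majority symbol.

    Since it only uses symbol *counts*, relabeling symbols commutes
    with it (assuming no ties in the counts).
    """
    majority, _ = Counter(s for row in grid for s in row).most_common(1)[0]
    return [[majority for _ in row] for row in grid]

def apply_perm(perm, grid):
    """Relabel every symbol in the grid via the permutation dict."""
    return [[perm[s] for s in row] for row in grid]

def is_symbol_equivariant(model, grid, symbols, trials=20):
    """Check f(pi(x)) == pi(f(x)) for random symbol permutations pi."""
    for _ in range(trials):
        shuffled = random.sample(symbols, len(symbols))
        perm = dict(zip(symbols, shuffled))
        if model(apply_perm(perm, grid)) != apply_perm(perm, model(grid)):
            return False
    return True

grid = [[0, 1, 1], [2, 1, 0]]  # symbol 1 is the strict majority
print(is_symbol_equivariant(toy_model, grid, symbols=[0, 1, 2]))  # True
```

The same check applied to an ordinary, non-equivariant network would fail for most permutations, which is exactly what SE-RRM rules out by construction.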
03.03.2026 13:57
👍 4
🔁 1
💬 0
📌 1
xLSTM for Financial Time Series: arxiv.org/abs/2603.01820
"VLSTM achieved the highest overall Sharpe ratio"
"VxLSTM and LPatchTST exhibited superior downside-adjusted characteristics"
"xLSTM achieves the highest portfolio-level cost buffer"
xLSTM excels in financial time series, just as TiRex does.
03.03.2026 06:39
👍 2
🔁 0
💬 0
📌 0
ConGLUDe significantly accelerates drug design: protein–small-molecule interactions can be screened orders of magnitude faster with the ConGLUDe approach.
15.01.2026 14:06
👍 2
🔁 1
💬 0
📌 0
xLSTM for Lensed Gravitational Waves: arxiv.org/abs/2512.21370
sLSTM models fine-grained temporal structures, while mLSTM finds large-scale global patterns.
xLSTM achieves an AUC beyond 0.99, a TPR above 98%, and an FPR below 1%, and is robust to noise, lens type, and lens mass.
Cool xLSTM application.
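The division of labor described above can be sketched with minimal versions of the two xLSTM memory cells. This is an illustrative simplification, not the paper's implementation: sLSTM keeps a scalar-per-unit state (fine-grained temporal structure), mLSTM keeps a matrix state updated with outer products (global, content-addressable patterns).

```python
# Hedged sketch (illustrative, not the paper's code): the two xLSTM cells.
import numpy as np

def slstm_scan(x, w_i=1.0, w_f=1.0):
    """Scalar-memory recurrence with exponential gating over a 1-D signal."""
    c, n, out = 0.0, 1e-6, []
    for x_t in x:
        i_t, f_t = np.exp(w_i * x_t), np.exp(w_f * x_t - 1.0)
        c = f_t * c + i_t * x_t   # scalar cell state: step-by-step detail
        n = f_t * n + i_t         # normalizer state
        out.append(c / n)
    return np.array(out)

def mlstm_scan(keys, values, queries, decay=0.9):
    """Matrix-memory recurrence: C accumulates key-value outer products,
    so any later query can retrieve globally stored content."""
    d = keys.shape[1]
    C = np.zeros((d, d))
    out = []
    for k, v, q in zip(keys, values, queries):
        C = decay * C + np.outer(v, k)  # write: rank-1 update
        out.append(C @ q)               # read: query the whole memory
    return np.array(out)

rng = np.random.default_rng(0)
T, d = 16, 4
print(slstm_scan(rng.standard_normal(T)).shape)        # (16,)
print(mlstm_scan(rng.standard_normal((T, d)),
                 rng.standard_normal((T, d)),
                 rng.standard_normal((T, d))).shape)   # (16, 4)
```

The scalar state makes sLSTM sensitive to local, fine-grained dynamics, while the matrix memory lets mLSTM retrieve patterns written many steps earlier, which matches the roles the lensing paper assigns to them.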
29.12.2025 17:37
👍 3
🔁 1
💬 0
📌 0
xLSTM for Real-Time DNS Tunnel Detection: arxiv.org/abs/2512.09565
DNS-HyXNet = xLSTM for DNS tunnels.
DNS-HyXNet reaches 99.99% accuracy, F1-scores above 99.96%, and a per-sample detection latency of just 0.041 ms, confirming its scalability and real-time readiness. Wow!
11.12.2025 11:18
👍 2
🔁 1
💬 0
📌 0
xLSTM for PINNs that learn PDEs: arxiv.org/abs/2511.12512
“Across four PDEs under matched size and budget, xLSTM-PINN consistently reduces MSE, RMSE, MAE, and MaxAE with markedly narrower error bands.”
“cleaner boundary transitions with attenuated high-frequency ripples”
20.11.2025 13:19
👍 3
🔁 2
💬 0
📌 0
The Tox21 leaderboard is now on Hugging Face. Which AI method is best at toxicity prediction? LLMs are not at the top. Toxicity prediction is essential in the first phase of drug discovery.
19.11.2025 09:22
👍 0
🔁 0
💬 0
📌 0
Measuring AI Progress in Drug Discovery - A NEW LEADERBOARD IN TOWN
2015-2025: it turns out there's hardly any improvement. AI bubble?
GPT is at 70% for this task, whereas the best methods get close to 85%.
Leaderboard: huggingface.co/spaces/ml-jk...
P: arxiv.org/abs/2511.14744
19.11.2025 06:52
👍 12
🔁 7
💬 3
📌 3
xLSTM for Vehicle Trajectory Prediction: arxiv.org/abs/2511.00266
X-TRACK based on xLSTM achieves SOTA.
“Compared to state-of-the-art baselines, X-TRACK achieves performance improvement by 79% at the 1-second prediction and 20% at the 5-second prediction in the case of highD”
Again xLSTM excels.
04.11.2025 12:44
👍 3
🔁 1
💬 0
📌 0
xLSTM for robotic manipulation systems via diffusion-based imitation learning: arxiv.org/abs/2510.20406
PMP leverages xLSTM to denoise actions for robotics.
“PMP not only achieves state-of-the-art performance but also offers significantly faster training and inference.”
xLSTM excels in robotics.
26.10.2025 17:22
👍 2
🔁 0
💬 0
📌 0
Tenure-track position in quantum informatics! Super cool position. Super cool team. World-class research. Scientifically outstanding work.
22.10.2025 07:36
👍 4
🔁 2
💬 0
📌 0
xLSTM for Toxic Comment Classification: arxiv.org/abs/2510.17018
“On the Jigsaw Toxic Comment benchmark, xLSTM attains 96.0% accuracy and 0.88 macro-F1, outperforming BERT by 33% on threat and 28% on identity_hate categories, with 15× fewer parameters and <50 ms inference latency.”
xLSTM is fast!
21.10.2025 05:19
👍 0
🔁 0
💬 0
📌 0
gLSTM extends xLSTM to a graph neural network architecture: arxiv.org/abs/2510.08450
"gLSTM mitigates sensitivity over-squashing and capacity over-squashing."
"gLSTM achieves comfortably state of the art results on the Diameter and Eccentricity Graph Property Prediction tasks"
10.10.2025 12:13
👍 2
🔁 0
💬 0
📌 0
xLSTM for Intrusion Detection: arxiv.org/abs/2510.08333
"The xLSTM-based IDS achieves an F1-score of 98.9%, surpassing the transformer-based model at 94.3%."
xLSTM is faster than Transformers when using the fast kernels provided at github.com/nx-ai/mlstm_... and github.com/NX-AI/flashrnn
10.10.2025 08:54
👍 1
🔁 0
💬 0
📌 0
xLSTM for long-term context using short sliding windows: arxiv.org/abs/2509.24552
"SWAX, a hybrid consisting of sliding-window attention and xLSTM."
"SWAX trained with stochastic window sizes significantly outperforms regular window attention both on short and long-context problems."
30.09.2025 05:18
👍 0
🔁 0
💬 0
📌 0
xLSTM shines as an Electrocardiogram (ECG) foundation model: arxiv.org/abs/2509.10151
"xECG achieves superior performance over earlier approaches, defining a new baseline for future ECG foundation models."
xLSTM is perfectly suited for time series prediction as shown by TiRex.
16.09.2025 05:20
👍 3
🔁 0
💬 1
📌 1
xLSTM excels in time series forecasting: arxiv.org/abs/2509.01187 .
Introduces "stochastic xLSTM" (StoxLSTM).
"StoxLSTM consistently outperforms state-of-the-art baselines with better robustness and stronger generalization ability."
We know that xLSTM is king at time series from our TiRex.
03.09.2025 05:38
👍 2
🔁 0
💬 0
📌 0
xLSTM for Cellular Traffic Forecasting: arxiv.org/abs/2507.19513
"Empirical results showed a 23% MAE reduction over the original STN and a 30% improvement on unseen data, highlighting strong generalization."
xLSTM shines again in time series forecasting.
29.07.2025 05:02
👍 2
🔁 0
💬 0
📌 0
xLSTM for Monaural Speech Enhancement: arxiv.org/abs/2507.04368
xLSTM shows superior performance vs. Mamba and Transformers but was slower than Mamba.
With the new Triton kernels, xLSTM is faster than Mamba at training and inference: arxiv.org/abs/2503.13427 and arxiv.org/abs/2503.14376
08.07.2025 05:47
👍 1
🔁 0
💬 0
📌 0
xLSTM for Aspect-based Sentiment Analysis: arxiv.org/abs/2507.01213
Another success story of xLSTM. MEGA: xLSTM with Multihead Exponential Gated Fusion.
"Experiments on 3 benchmarks show that MEGA outperforms state-of-the-art baselines with superior accuracy and efficiency"
05.07.2025 10:28
👍 0
🔁 0
💬 0
📌 0
xLSTM for multivariate time series anomaly detection: arxiv.org/abs/2506.22837
“In our results, xLSTM showcases state-of-the-art accuracy, outperforming 23 popular anomaly detection baselines.”
Again, xLSTM excels in time series analysis.
01.07.2025 08:30
👍 4
🔁 2
💬 0
📌 0
xLSTM for Human Action Segmentation: arxiv.org/abs/2506.09650
"HopaDIFF, leveraging a novel cross-input gate attentional xLSTM to enhance holistic-partial long-range reasoning"
"HopaDIFF achieves state-of-theart results on RHAS133 in diverse evaluation settings."
12.06.2025 07:48
👍 3
🔁 0
💬 0
📌 0
My book "Was kann Künstliche Intelligenz?" has been published. An easily accessible introduction to artificial intelligence: readers, even without a technical background, learn what AI actually is, what potential it holds, and what impact it has.
04.06.2025 17:13
👍 3
🔁 3
💬 0
📌 1
We are soooo proud. Our European-developed TiRex is leading the field—significantly ahead of U.S. competitors like Amazon, Datadog, Salesforce, and Google, as well as Chinese models from companies such as Alibaba.
04.06.2025 08:59
👍 5
🔁 4
💬 0
📌 0
Attention!! Our TiRex time series model, built on xLSTM, is topping all major international leaderboards. A European-developed model is leading the field—significantly ahead of U.S. competitors like Amazon, Datadog, Salesforce, and Google, as well as Chinese models from Alibaba.
02.06.2025 12:12
👍 7
🔁 3
💬 0
📌 0
Introducing TiRex - xLSTM based time series model | NXAI
TiRex model at the top 🦖
We are proud of TiRex - our first time series model based on #xLSTM technology.
TiRex 🦖 time series xLSTM model ranked #1 on all leaderboards.
➡️ Outperforms models by Amazon, Google, Datadog, Salesforce, Alibaba
➡️ industrial applications
➡️ limited data
➡️ embedded AI and edge devices
➡️ Europe is leading
Code: lnkd.in/eHXb-XwZ
Paper: lnkd.in/e8e7xnri
shorturl.at/jcQeq
02.06.2025 12:11
👍 5
🔁 5
💬 0
📌 0
Recommended read for the weekend: Sepp Hochreiter's book on AI!
Lots of fun anecdotes and easily accessible basics on AI!
www.beneventopublishing.com/ecowing/prod...
30.05.2025 12:28
👍 6
🔁 3
💬 1
📌 0
xLSTM for the classification of assembly tasks: arxiv.org/abs/2505.18012
"xLSTM model demonstrated better generalization capabilities to new operators. The results clearly show that for this type of classification, the xLSTM model offers a slight edge over Transformers."
26.05.2025 05:07
👍 3
🔁 1
💬 0
📌 0
Happy to introduce 🔥LaM-SLidE🔥!
We show how trajectories of spatial dynamical systems can be modeled in latent space by
--> leveraging IDENTIFIERS.
📚Paper: arxiv.org/abs/2502.12128
💻Code: github.com/ml-jku/LaM-S...
📝Blog: ml-jku.github.io/LaM-SLidE/
1/n
22.05.2025 12:24
👍 7
🔁 8
💬 1
📌 1
1/11 Excited to present our latest work "Scalable Discrete Diffusion Samplers: Combinatorial Optimization and Statistical Physics" at #ICLR2025 on Fri 25 Apr at 10 am!
#CombinatorialOptimization #StatisticalPhysics #DiffusionModels
24.04.2025 08:57
👍 16
🔁 7
💬 1
📌 0