TabICL also released a new version this week, highly recommend checking it out too github.com/soda-inria/t...
P.S. It's interesting how in our little corner of the ML/DL space SOTA foundation models are actually open (GraphPFN was initialised from the github.com/limix-ldm/Li... model, and used @dholzmueller.bsky.social @gaelvaroquaux.bsky.social prior sampling from TabICL)
They also wrote up a blogpost about the model research.yandex.com/blog/graphpf...
To piggy-back a bit on the foundation models for structured data discussion here
My colleagues at Yandex Research just updated the GraphPFN paper. It's a Graph Foundation Model that works on graph datasets with tabular features, and shows SOTA results both in ICL regimes and when fine-tuned.
this?
How hard can it be to build a browser from scratch for three platforms anyways?
Apparently 20K lines of code and ~70 hours from first commit to last.
emsh.cat/one-human-on...
#llm #llms #ai #codex #openai
I liked the response better geohot.github.io//blog/jekyll...
if you're going to use AI in your workflow, you have to get extremely good at self-discipline/focus, because AI will literally tempt you to pursue every tiny whim/idea that enters your brain and will absolutely destroy you and your work if left unchecked
slow down before it's too late
We are excited to announce that we can successfully use Rust's standard library from the GPU. This has never been done before.
www.vectorware.com/blog/rust-st...
Supporting Rust's standard library enables existing Rust code to work on the GPU and makes GPU programming feel normal.
Explicitly adding induction heads helps. Some gains in NLP, seemingly bigger in RL algorithm distillation arxiv.org/abs/2411.01958
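For context, an induction head implements a prefix-matching copy rule: find the most recent earlier occurrence of the current token and predict whatever followed it. A toy sketch of just that rule (an illustration of the concept, not the architecture from the linked paper):

```python
def induction_predict(tokens):
    """Toy induction-head rule: scan backwards for the most recent
    earlier occurrence of the last token and predict its successor."""
    cur = tokens[-1]
    for i in range(len(tokens) - 2, -1, -1):
        if tokens[i] == cur:
            return tokens[i + 1]
    return None  # no earlier occurrence -> no prediction

# After seeing "ab c ab", the rule predicts "c" (what followed the previous "b").
print(induction_predict(list("abcab")))  # -> c
```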
➡️
I just completed "Historian Hysteria" - Day 1 - Advent of Code 2024 #AdventOfCode adventofcode.com/2024/day/1 (in zig btw)
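The solution above is in Zig; for anyone curious, part 1 of that puzzle boils down to sorting both lists and summing pairwise absolute differences. A minimal Python sketch of that rule (not the Zig code referenced above):

```python
def total_distance(left, right):
    """AoC 2024 Day 1, part 1: pair the smallest remaining numbers
    from each list and sum the absolute differences of the pairs."""
    return sum(abs(a - b) for a, b in zip(sorted(left), sorted(right)))

# The puzzle's published example lists:
print(total_distance([3, 4, 2, 1, 3, 3], [4, 3, 5, 3, 9, 3]))  # -> 11
```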
Yep, just need to find the code. I can share
Yeah. I've experimented a bit with the existing code. It generalized to some of our specific problems in tabular DL (even though the meta-train was mostly from language and vision tasks). Curious what you mean by "actually worked" here: no edge cases and failures, or just easy to use technically?
#MLsky
The rejects were horribly misinformed and self-contradictory, but extremely confident. PSGD, SOAP and friends are taking over regardless of academia.
VeLO was something else, I'm a fan arxiv.org/abs/2211.09760
Thank you @bsky.app team for correcting the mistake. Glad to be back!
Did you know that 99% of email today is spam? Your inbox isn't 99% spam because AI is used to filter it.
The same 99% will happen here too, but if AI researchers continue to get perma-banned for making available the datasets needed to filter it, it's going to make this platform unusable.
@trl-research.bsky.social
Tabular DL and AutoML podcast just dropped. For sure watching this
youtu.be/3qpQ-sMRafE
Hello to all #ICLR reviewers on #MLsky
bsky.app/profile/hame...
But keep the numbers in appendix or code pls
So annoying when the only info is in visual form with unclear axes etc. I agree that it's much better for presentation, but when digging in, I often need raw metrics.
…extent of customisability?
If I understand correctly, we can do a lot with custom feeds.
Some examples here github.com/Bossett/bsky...
Wow. Didn't know we can create custom algorithmic feeds here. This is cool! What are your favourites, what's the extent of
(context: docs.bsky.app/docs/starter...)
Paper screenshot and Figure 1 (c) with cumulative ablations for components of RealMLP-TD.
Can deep learning finally compete with boosted trees on tabular data? 🌲
In our NeurIPS 2024 paper, we introduce RealMLP, an NN with improvements in all areas and meta-learned default parameters.
Some insights about RealMLP and other models on large benchmarks (>200 datasets): 🧵