Moving intentions from brains to machines
Opinion by Christian Beste, Heleen A. Slagter (@haslagter.bsky.social), Christian Herff (@cherff.bsky.social), Yukiyasu Kamitani (@ykamit.bsky.social), Sabrina Coninx (@sconinxphil.bsky.social), Richard van Wezel, & Christian Frings
tinyurl.com/mr2ch69z
18.02.2026 21:19
👍 5 · 🔁 3 · 💬 0 · 📌 1
Between October 13 and 17 we ran the second edition of the Ibn Sina Neurotech School, hosted and sponsored by the NYUAD Center for Brain and Health and @ibroorg.bsky.social.
We welcomed 18 students from across the world for hands-on training in fMRI data collection and processing.
26.10.2025 21:06
👍 7 · 🔁 5 · 💬 1 · 📌 1
SfN 2025 satellite Symposium | CREST
DATES: Thursday 13 November ~ Friday 14 November 2025 (cf. SfN2025; November 15~19)
VENUE: CORTEZ HILL Room (3rd floor), MANCHESTER GRAND HYATT SAN DIEGO
1 Market Place, San Diego, CA 92101, USA.
www.jst.go.jp/kisoken/cres...
17.10.2025 02:48
👍 0 · 🔁 0 · 💬 0 · 📌 0
New preprint from our lab, led by Onoo-san. We propose the readout representation, which defines neural codes by what can be recovered from them, not by what caused them. Inputs remain recoverable even from distant features, revealing expansive, redundant codes that align neural activation with meaning. arxiv.org/abs/2510.12228
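The core idea can be illustrated with a toy numeric sketch (my own minimal example with made-up dimensions, not the preprint's code): build an expansive nonlinear feature code, then characterize the representation by what a trained linear readout can recover from it.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d_in, d_feat = 500, 5, 200

X = rng.normal(size=(n, d_in))                    # "stimuli"
F = np.tanh(X @ rng.normal(size=(d_in, d_feat)))  # expansive, redundant code

# Readout: a ridge regression trained to recover the input from the code.
lam = 1e-2
B = np.linalg.solve(F.T @ F + lam * np.eye(d_feat), F.T @ X)
X_hat = F @ B

# The code is characterized by how well the input can be read out of it,
# regardless of which features "caused" which responses.
r = np.corrcoef(X.ravel(), X_hat.ravel())[0, 1]
```

Even though each feature mixes all input dimensions nonlinearly, the inputs remain recoverable from the expanded code, which is the sense in which such codes are redundant.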
16.10.2025 14:03
👍 1 · 🔁 0 · 💬 0 · 📌 0
Schematic overview of the proposed sound reconstruction pipeline. Left: DNN feature extraction from sound. A deep neural network (DNN) extracts auditory features at multiple levels of complexity using a hierarchical framework. Right: Sound reconstruction. The reconstruction pipeline starts with decoding DNN features from fMRI responses using trained brain decoders. The audio generator then transforms these decoded features into the reconstructed sound.
Reconstructing sounds from #fMRI data is limited by its temporal resolution. @ykamit.bsky.social &co develop a DNN-based method that aids reconstruction of perceptually accurate sound from fMRI data, offering insights into internal #auditory representations @plosbiology.org 🧪 plos.io/4fhNw1Z
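A minimal numeric sketch of the two-stage pipeline in the schematic (simulated data and shapes are my own assumptions, not the paper's implementation): a ridge decoder maps fMRI voxel responses to DNN features, and an audio generator would then turn the decoded features into a waveform.

```python
import numpy as np

rng = np.random.default_rng(1)
n_train, n_vox, n_feat = 300, 1000, 64

# Simulated training set: DNN features of sounds and the evoked voxel responses.
feats = rng.normal(size=(n_train, n_feat))
mix = rng.normal(size=(n_feat, n_vox))
fmri = feats @ mix + 0.1 * rng.normal(size=(n_train, n_vox))

# Stage 1: train a ridge-regression brain decoder (voxels -> DNN features).
lam = 1.0
D = np.linalg.solve(fmri.T @ fmri + lam * np.eye(n_vox), fmri.T @ feats)

# Decode the features of a held-out sound from its simulated fMRI response.
test_feats = rng.normal(size=(1, n_feat))
test_fmri = test_feats @ mix + 0.1 * rng.normal(size=(1, n_vox))
decoded = test_fmri @ D

# Stage 2 (placeholder): an audio generator would map `decoded` to a waveform.
r = np.corrcoef(test_feats.ravel(), decoded.ravel())[0, 1]
```

The split matters because the decoder is trained only in feature space; the generator supplies the mapping from features back to audio and can be swapped independently.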
25.07.2025 14:07
👍 16 · 🔁 6 · 💬 0 · 📌 0
Can we hear what's inside your head? 🧠→🎶
Our new paper, led by Jong-Yun Park, presents an AI-based method for reconstructing arbitrary natural sounds directly from a person's brain activity measured with fMRI.
journals.plos.org/plosbiology/...
24.07.2025 10:36
👍 10 · 🔁 3 · 💬 0 · 📌 0
Original post on mas.to
Spurious reconstruction from brain activity https://www.sciencedirect.com/science/article/pii/S0893608025003946 by @ykamit et al.; more information in the Bluesky thread https://bsky.app/profile/kencan7749.bsky.social/post/3lri44htxek25 #BCI #NeuroTech #neuroscience
"Our findings suggest that […]
13.06.2025 09:33
👍 0 · 🔁 3 · 💬 0 · 📌 0
Our paper is now accepted at Neural Networks!
This work builds on our previous threads on X, updated with deeper analyses.
We revisit brain-to-image reconstruction using NSD and diffusion models, and ask: do they really reconstruct what we perceive?
Paper: doi.org/10.1016/j.ne...
🧵1/12
13.06.2025 09:17
👍 3 · 🔁 3 · 💬 1 · 📌 1
Yukiyasu Kamitani, Misato Tanaka, Ken Shirakawa
Visual Image Reconstruction from Brain Activity via Latent Representation
https://arxiv.org/abs/2505.08429
14.05.2025 07:10
👍 4 · 🔁 3 · 💬 0 · 📌 0
I promised to write about my thoughts on the state of the field of neuroAI, some of the big challenges we are facing, and the approaches we are taking to address them. This is super selective, focusing on the problem of finding a good model, but in my view it affects the field as a whole. Here we go. 🧵
11.12.2024 22:18
👍 142 · 🔁 47 · 💬 2 · 📌 6
Spurious reconstruction from brain activity
Advances in brain decoding, particularly visual image reconstruction, have sparked discussions about the societal implications and ethical considerations of neurotechnology. As these methods aim...
New preprint, led by Ken Shirakawa. With inappropriate generative AI methods and naturalistic data, seemingly realistic but spurious visual image reconstructions can be generated from brain data (even from random data). We describe this phenomenon and formulate how it occurs.
arxiv.org/abs/2405.10078
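The failure mode can be caricatured in a few lines (my own toy illustration, not the paper's analysis): when reconstruction routes through a generator with a strong natural-image prior, here reduced to nearest-neighbor lookup into a bank of "realistic" exemplars, even pure noise input yields a realistic-looking output.

```python
import numpy as np

rng = np.random.default_rng(2)
bank = rng.normal(size=(10, 32))      # stand-in for "realistic" exemplars

def generate(z):
    # Prior-heavy generator: snaps any latent to the nearest exemplar.
    return bank[np.argmin(((bank - z) ** 2).sum(axis=1))]

noise_latent = rng.normal(size=32)    # latent "decoded" from random brain data
out = generate(noise_latent)

# The output matches a genuine exemplar exactly, so apparent realism
# alone cannot validate the reconstruction.
is_realistic = any(np.allclose(out, b) for b in bank)
```

The point of the caricature: realism of the output reflects the generator's prior, not information decoded from the brain, so it cannot serve as evidence of successful decoding.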
25.05.2024 00:52
👍 1 · 🔁 0 · 💬 0 · 📌 0
These stimuli were presumably adjacent video frames extracted from a single movie. Was this known to the community?
02.05.2024 13:48
👍 1 · 🔁 0 · 💬 0 · 📌 0
While investigating the issues of questionable brain decoding and reconstruction (osf.io/nmfc5/#!), Misato Tanaka from my lab found that the seminal paper by Nishimoto, Naselaris, Benjamini, Yu, & Gallant (2011) appears to use highly similar stimuli across both training and test sets. ...
02.05.2024 13:42
👍 1 · 🔁 0 · 💬 1 · 📌 0
It enables inter-site visual image reconstruction (Deeprecon⬌THINGS⬌NSD), in which a source subject's brain data is analyzed with a model trained on a dataset from a different site to reconstruct the viewed image, at quality nearly equivalent to within-site performance.
19.03.2024 13:42
👍 0 · 🔁 0 · 💬 0 · 📌 0
New preprint from our lab! Led by Wang-san, this work introduces a content-loss-based functional alignment of brain data that does not require shared stimuli between subjects/datasets, greatly expanding the potential for data reuse.
arxiv.org/abs/2403.11517
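A purely linear caricature of the idea as I read the post (illustrative only; the preprint's actual method presumably uses deep features and a richer optimization): find an alignment from subject B's voxels into subject A's voxel space by matching decoded content, so A and B never need to view the same stimuli.

```python
import numpy as np

rng = np.random.default_rng(4)
n, n_feat, n_voxA, n_voxB = 400, 16, 60, 80

feats_B = rng.normal(size=(n, n_feat))   # features of B's own (unshared) stimuli
W_A = rng.normal(size=(n_feat, n_voxA))  # subject A's simulated encoding weights
W_B = rng.normal(size=(n_feat, n_voxB))  # subject B's simulated encoding weights
brain_B = feats_B @ W_B                  # B's recorded responses

D_A = np.linalg.pinv(W_A)                # A's pre-trained decoder: voxelsA -> features

# Content loss: pick the alignment H (voxelsB -> voxelsA) whose output,
# decoded by A's model, reproduces the content (features) of B's stimuli.
# In this linear toy, A's encoder is the pseudoinverse of A's decoder, and
# the problem reduces to a single least-squares fit.
H = np.linalg.lstsq(brain_B, feats_B @ W_A, rcond=None)[0]

decoded = (brain_B @ H) @ D_A            # decode B's data through A's model
r = np.corrcoef(feats_B.ravel(), decoded.ravel())[0, 1]
```

Because the loss is defined in content (feature) space rather than over shared stimuli, any dataset with known stimulus features can in principle be aligned into an existing model's space.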
19.03.2024 13:41
👍 0 · 🔁 0 · 💬 1 · 📌 0
Exhibition of Pierre Huyghe "Liminal"
From 17 March 2024 to 24 November 2024 at Punta della Dogana, Venice, Italy
www.pinaultcollection.com/palazzograss...
We provided brain-decoded images and movies for some of the works.
16.02.2024 13:58
👍 0 · 🔁 0 · 💬 0 · 📌 0
Exhibition of Mogens Jacobsen "Restruktion" at Ringsted Gallery, Denmark, where one of the works was created in collaboration with my lab.
kunsten.nu/artguide/cal...
16.02.2024 13:47
👍 4 · 🔁 1 · 💬 0 · 📌 0