Fuzzing the brain: automated stress testing for the safety of ML-driven neurostimulation
Mara Downing, Matthew Peng, Jacob Granley, Michael Beyeler, Tevfik Bultan
This work was a collaboration with Mara Downing, Matthew Peng, and Tevfik Bultan from the UCSB Verification Lab, together with @jacobgranley.bsky.social from the Bionic Vision Lab.
Read the full paper here: iopscience.iop.org/article/10.1...
@ucsb-cs.bsky.social
05.03.2026 17:20
Bar plot comparing fuzzing strategies. y-axis: combined violation and diversity score. Each bar represents an average of normalized violations and normalized diversity score, equally weighted.
Our metrics are shown in green, with our two best, VO-KMVP and VO-KMOC, highlighted in dark green. Neuron-coverage metrics are shown in purple, and basic metrics in red. Conventional testing (the model test set with no mutations) is shown in blue. Our methods reach scores above 0.8, whereas conventional testing sits at 0.1.
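For reference, each bar height reduces to an equal-weight average of the two normalized quantities. A minimal sketch of that scoring rule (the function and argument names are ours, not the paper's):

```python
def combined_score(n_violations, diversity, max_violations, max_diversity):
    """Equal-weight average of normalized violation count and diversity.
    Normalizing by per-experiment maxima is our assumption here."""
    return 0.5 * (n_violations / max_violations) + 0.5 * (diversity / max_diversity)
```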
Our approach borrows an idea from software verification: coverage-guided fuzzing.
We systematically mutate inputs and search for stimulation patterns that violate biophysical constraints, uncovering diverse safety violations that conventional testing misses.
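In spirit, the search is the classic coverage-guided fuzzing loop retargeted from programs to a stimulation model. A minimal sketch, where `model`, `mutate`, `coverage_signature`, and `violates_constraints` are hypothetical stand-ins for the components described in the paper:

```python
import random

def fuzz(model, seeds, mutate, coverage_signature, violates_constraints,
         budget=10_000):
    """Coverage-guided fuzzing sketch: mutate inputs, keep mutants that
    exercise new model behavior, and collect any stimulation patterns
    that violate the safety constraints."""
    corpus = list(seeds)                    # inputs worth mutating further
    seen = {coverage_signature(model, s) for s in seeds}
    violations = []
    for _ in range(budget):
        parent = random.choice(corpus)
        child = mutate(parent)              # small random perturbation
        stim = model(child)                 # model's proposed stimulation
        if violates_constraints(stim):
            violations.append((child, stim))
        sig = coverage_signature(model, child)
        if sig not in seen:                 # new behavior -> keep exploring
            seen.add(sig)
            corpus.append(child)
    return violations
```

Keeping only mutants that trigger a new coverage signature is what pushes the search toward diverse violations rather than many near-duplicates of the same one.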
#Neurotech #MLResearch #AIVerification
05.03.2026 17:20
Most prior work treats safety narrowly, often by minimizing charge.
But unsafe stimulation can take many forms:
• physically impossible pulses
• unsafe instantaneous currents
• activating too many electrodes
These are model outputs, so they need to be tested like software systems; see the constraint-check sketch below.
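Each of these can be phrased as an executable check on a model output. A minimal sketch; the limits and the pulse representation are illustrative placeholders, not the paper's actual safety thresholds:

```python
def check_pulse(pulse, max_current_uA=100.0, max_active_electrodes=16,
                charge_tolerance=1e-9):
    """Return a list of safety violations for one stimulation pattern.
    `pulse` maps electrode id -> list of (amplitude_uA, duration_s) phases.
    All limits here are made-up placeholders for illustration."""
    problems = []
    active = [e for e, phases in pulse.items() if phases]
    if len(active) > max_active_electrodes:
        problems.append("too many active electrodes")
    for e, phases in pulse.items():
        for amp, dur in phases:
            if dur <= 0:                       # physically impossible pulse
                problems.append(f"{e}: non-positive phase duration")
            if abs(amp) > max_current_uA:      # unsafe instantaneous current
                problems.append(f"{e}: current exceeds limit")
        charge = sum(amp * dur for amp, dur in phases)
        if abs(charge) > charge_tolerance:     # not charge-balanced
            problems.append(f"{e}: pulse not charge-balanced")
    return problems
```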
#AISafety
05.03.2026 17:20
Fuzzing the brain: Automated stress testing for the safety of ML-driven neurostimulation | Bionic Vision Lab
We propose a systematic, quantitative approach to detect and characterize unsafe stimulation patterns in ML-driven neurostimulation systems.
🚨 Fuzzing the brain - automated stress testing for ML-driven neurostimulation 🚨
As #MachineLearning begins to control electrical stimulation in neural interfaces, how do we know these models are safe?
Paper in Journal of Neural Engineering:
bionicvisionlab.org/publications...
#BCI #neuroskyence
05.03.2026 17:20
BIRD: Behavior Induction via Representation-structure Distillation
Human-aligned deep learning models exhibit behaviors consistent with human values, such as robustness, fairness, and honesty. Transferring these behavioral properties to models trained on different ta...
What if your strongest #ML model is brittle at one thing that really matters?
Can it learn that behavior from a weaker but specialist model, even when they share no task, no data, and no architecture?
My student Galen Pogoncheff explored this in our #ICLR2026 paper:
📄 arxiv.org/abs/2505.23933
08.02.2026 17:35
Neural mechanisms underlying intracortical microstimulation for sensory restoration
Nature Biomedical Engineering - Intracortical microstimulation can elicit artificial sensations in persons who have lost sensation due to neurological injury or disease. This Review discusses...
Just out in @nature.com BME: Our Review unpacks intracortical microstimulation: axons, not somas, drive activation; direct + indirect pathways shape perception; parameters interact with neuron type, layer, and network; long-term use limited by neural depression & tissue response. 🧠⚡
rdcu.be/eZbTz
15.01.2026 16:51
Robust Foraging Competition
Can your AI visually navigate better than a mouse?
🐭 Mouse vs AI #NeurIPS2025 Challenge
The first year was a great success:
📤 290 submissions
👥 22 teams
🌍 7 countries
robustforaging.github.io
A huge thank you to all who participated! 🙏
This was our first attempt at a global competition built around real mouse behavior and visual robustness.
26.11.2025 19:56
Presenting "Human in the loop optimisation for efficient intracortical microstimulation temporal patterns in visual cortex" again this afternoon at #SfN!!
Come discuss!
An amazing collaboration between the Biomedical Neuroengineering group at UMH and @bionicvisionlab.org
19.11.2025 18:02
Schematic labeled human-in-the-loop optimization (HILO). It shows two stimuli on the left: pulse trains with varying stimulus amplitude over time. A participant has to choose which stimulus appears brighter. This feedback is used to inform a Gaussian process model that chooses the next stimulus pair, with the goal of finding the stimulus with the lowest overall charge to elicit perception
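A toy version of that loop, with a crude win-rate score standing in for the Gaussian process surrogate (and ignoring the charge-minimization objective, which the real method folds into choosing the next pair); `ask_brighter` is a hypothetical callback for the participant's two-alternative forced-choice response:

```python
import random

def hilo_2afc(stimuli, ask_brighter, n_trials=50):
    """Toy human-in-the-loop 2AFC loop. The real method fits a Gaussian
    process to the comparisons; a per-stimulus win rate stands in here,
    just to show the shape of the loop."""
    wins = {s: 0 for s in stimuli}
    trials = {s: 1 for s in stimuli}            # start at 1 to avoid divide-by-zero
    for _ in range(n_trials):
        # exploit/explore: compare the current best against a random challenger
        best = max(stimuli, key=lambda s: wins[s] / trials[s])
        challenger = random.choice([s for s in stimuli if s != best])
        brighter = ask_brighter(best, challenger)   # participant's choice
        wins[brighter] += 1
        trials[best] += 1
        trials[challenger] += 1
    return max(stimuli, key=lambda s: wins[s] / trials[s])
```

A real implementation would replace the win-rate table with a preference-learning Gaussian process and an acquisition rule that also penalizes total charge.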
Our final poster at #SfN2025 explores human-in-the-loop optimization for intracortical microstimulation, presented by @lozaneuro.bsky.social, in collaboration with @umh.es:
PSTR450.20
Nov 19 at 1:00 PM
www.abstractsonline.com/pp8/#!/21171...
#SfN25 #VisionScience #NeuroTechnology
16.11.2025 18:58
Left: experimental setup showing an implantee with an intracortical prosthesis and example phosphenes described as a large filled circle, a half-moon, and a tiny dot.
Right: schematic showing cross-temporal decoding of delay-period activity. Over 406 trials, working memory content could be decoded in 88.5% of delay-period windows.
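Cross-temporal decoding of this kind is typically computed by training a classifier on activity from one delay-period window and testing it in every other window. A minimal scikit-learn sketch, with synthetic data standing in for the recordings:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n_trials, n_windows, n_neurons = 406, 20, 60
X = rng.normal(size=(n_trials, n_windows, n_neurons))  # fake firing rates
y = rng.integers(0, 3, size=n_trials)                  # fake stimulus labels
train, test = slice(0, 300), slice(300, None)          # held-out trials

# Train a decoder in each time window; test it in every window.
acc = np.zeros((n_windows, n_windows))
for t_train in range(n_windows):
    clf = LogisticRegression(max_iter=1000).fit(X[train, t_train], y[train])
    for t_test in range(n_windows):
        acc[t_train, t_test] = clf.score(X[test, t_test], y[test])

# Above-chance off-diagonal accuracy would indicate a stable memory code.
print(acc.round(2))
```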
Lily Turkstra is presenting new findings on stimulus-selective spiking activity recorded during a working memory experiment in a unique intracortical dataset.
PSTR341.13
Nov 18 at 1:00 PM
www.abstractsonline.com/pp8/#!/21171...
#SfN25 #VisionScience #Neuroscience
16.11.2025 18:58
Table showing different network diagrams under test, trained either on MNIST or Fashion-MNIST, with either soft or hard winner-take-all (WTA) wiring. Synaptic weights showed either holistic or parts-based representations of images
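The soft vs. hard winner-take-all distinction in the table comes down to how lateral competition is applied to a layer's activations. A toy numpy sketch (our simplification, not the poster's actual wiring):

```python
import numpy as np

def hard_wta(a):
    """Hard WTA: only the strongest unit stays active."""
    out = np.zeros_like(a)
    out[np.argmax(a)] = a.max()
    return out

def soft_wta(a, temperature=0.5):
    """Soft WTA: all units stay active, scaled by a softmax competition."""
    z = np.exp((a - a.max()) / temperature)
    return a * z / z.sum()

a = np.array([0.1, 0.9, 0.8, 0.2])
print(hard_wta(a))   # one winner
print(soft_wta(a))   # graded competition
```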
Our second poster at #SfN2025 dives into biologically plausible networks for efficient image encoding, presented by Hasith Basnayake:
PSTR154.08
Nov 17 at 8:00 AM
www.abstractsonline.com/pp8/#!/21171...
#SfN25 #VisionScience #NeuroTechnology
16.11.2025 18:58
Simulated responses of a bipolar cell mosaic to simulated electrical stimulation and the corresponding decoded phosphenes. Small phosphenes appear focal and colored, whereas larger phosphenes most often appear with a white-ish, yellow-ish tint.
If you are into #VisionScience or #neuroengineering, come check out our first poster at #SfN2025 this afternoon!
Emily Joyce is presenting new work on modeling the bipolar circuitry in the human fovea
PSTR122.22
Nov 16 at 1:00 PM
www.abstractsonline.com/pp8/#!/21171...
16.11.2025 18:58
bionic-vision.org | Research Spotlights | Yossi Mandel
In a new Advanced Functional Materials paper, Prof. Yossi Mandel and colleagues unveiled a hybrid retinal prosthesis that fuses living neurons with a high-density electrode array. By nestling human st...
What if retinal prostheses could speak the brain's language? 👁️🧠🧪
Prof. Yossi Mandel and team built a hybrid implant that merges neurons and electrodes to restore high-acuity sight.
New Data Drop interview ⬇️
www.bionic-vision.org/research-spo...
#BionicVision #Neurotech #Blindness
30.10.2025 20:01
Can Fruit Ninja train the #BionicEye?
A new JoV paper finds that while participants improved with distorted "prosthetic" input, gaming-based training didn't generalize to object recognition - hinting that rehab may need to stay task-specific.
doi.org/10.1167/jov....
22.10.2025 01:55
Diagram showing three ways to control brain activity with a visual prosthesis. The goal is to match a desired pattern of brain responses. One method uses a simple one-to-one mapping, another uses an inverse neural network, and a third uses gradient optimization. Each method produces a stimulation pattern, which is tested in both computer simulations and in the brain of a blind participant with an implant. The figure shows that the neural network and gradient methods reproduce the target brain activity more accurately than the simple mapping.
👁️🧠 New preprint: We demonstrate the first data-driven neural control framework for a visual cortical implant in a blind human!
TL;DR Deep learning lets us synthesize efficient stimulation patterns that reliably evoke percepts, outperforming conventional calibration.
www.biorxiv.org/content/10.1...
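The gradient-optimization branch in the figure amounts to descending on the stimulation pattern through a differentiable forward model until predicted activity matches the target. A minimal numpy sketch with a toy linear forward model standing in for the real learned one; the amplitude clip is our illustrative safety bound:

```python
import numpy as np

rng = np.random.default_rng(1)
n_electrodes, n_sites = 32, 96
A = rng.normal(size=(n_sites, n_electrodes))   # toy linear forward model
target = rng.normal(size=n_sites)              # desired neural response

stim = np.zeros(n_electrodes)
lr = 1e-3
for _ in range(2000):
    residual = A @ stim - target               # prediction error
    grad = A.T @ residual                      # d(loss)/d(stim), loss = ||r||^2 / 2
    stim -= lr * grad
    np.clip(stim, -50.0, 50.0, out=stim)       # respect amplitude limits

print(np.linalg.norm(A @ stim - target))       # remaining mismatch
```

With a linear model this is just least squares; the point of the deep-learning version is that the same descent works through a nonlinear, learned model of the cortex.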
27.09.2025 02:52
Thrilling progress in brain-computer interfaces from UC labs
UC researchers and the patients they work with are showing the world what's possible when the human mind and advanced computers meet.
As federal research funding faces steep cuts, UC scientists are pushing brain-computer interfaces forward: restoring speech after ALS, easing Parkinson's symptoms, and improving bionic vision with AI (that's us 👋 at @ucsantabarbara.bsky.social).
🧠 www.universityofcalifornia.edu/news/thrilli...
17.09.2025 18:03
Epic collage of Bionic Vision Lab activities. From top to bottom, left to right:
A) Up-to-date group picture
B) BVL at Dr. Beyeler's Plous Award celebration (2025)
C) BVL at The Eye & The Chip (2023)
D/F) Dr. Aiwen Xu and Justin Kasowski getting hooded at the UCSB commencement ceremony
E) BVL logo cake created by Tori LeVier
G) Dr. Beyeler with symposium speakers at Optica FVM (2023)
H, I, M, N) Students presenting conference posters/talks
J) Participant scanning a food item (ominous pizza study)
K) Galen Pogoncheff in VR
L) Argus II user drawing a phosphene
O) Prof. Beyeler demoing BionicVisionXR
P) First lab hike (ca. 2021)
Q) Statue for winner of the Mac'n'Cheese competition (ca. 2022)
R) BVL at Club Vision
S) Students drifting off into the sunset on a floating couch after a hard day's work
Excited to share that I've been promoted to Associate Professor with tenure at UCSB!
Grateful to my mentors, students, and funders who shaped this journey and to @ucsantabarbara.bsky.social for giving the Bionic Vision Lab a home!
Full post: www.linkedin.com/posts/michae...
02.08.2025 18:12
Program – EMBC 2025
At #EMBC2025? Come check out two talks from my lab in tomorrow's Sensory Neuroprostheses session!
🗓️ Thurs July 17 · 8-10AM · Room B3 M3-4
🧠 Efficient threshold estimation
🧠🔬 Deep human-in-the-loop optimization
🔗 embc.embs.org/2025/program/
#BionicVision #NeuroTech #IEEE #EMBS
16.07.2025 16:54
Program – EMBC 2025
👁️⚡ Headed to #EMBC2025? Catch two of our lab's talks on optimizing retinal implants!
📍 Sensory Neuroprostheses
🗓️ Thurs July 17 · 8-10AM · Room B3 M3-4
🧠 Efficient threshold estimation
🧠🔬 Deep human-in-the-loop optimization
🔗 embc.embs.org/2025/program/
#BionicVision #NeuroTech #IEEE #EMBS #Retina
13.07.2025 17:24
This matters. Checkerboard rastering:
✔️ works across tasks
✔️ requires no fancy calibration
✔️ is hardware-agnostic
A low-cost, high-impact tweak that could make future visual prostheses more usable and more intuitive.
#BionicVision #BCI #NeuroTech
09.07.2025 16:55
Boxplots showing task accuracy for two experimental tasks, Letter Recognition and Motion Discrimination, grouped by five raster patterns: No Raster (blue), Checkerboard (orange), Vertical (green), Horizontal (brown), and Random (pink). Each colored boxplot shows the median, interquartile range, and individual participant data points.
In both tasks, Checkerboard and No Raster yield the highest median accuracy.
Horizontal and Random patterns perform the worst, with more variability and lower scores.
Significant pairwise differences (p < .05) are indicated by horizontal bars above the plots, showing that Checkerboard significantly outperforms Random and Horizontal in both tasks.
A dashed line at 0.125 marks chance-level performance (1 out of 8).
These results suggest Checkerboard rastering improves perceptual performance compared to conventional or unstructured patterns.
✅ Checkerboard consistently outperformed the other patterns: higher accuracy, lower difficulty, fewer motion artifacts.
💡 Why? More spatial separation between activations = less perceptual interference.
It even matched the performance of the ideal "no raster" condition, without breaking safety rules.
09.07.2025 16:55
Diagram showing the four-step pipeline for simulating prosthetic vision in VR.
Step 1: A virtual camera captures the user's view, guided by eye gaze. The image is converted to grayscale and blurred for preprocessing.
Step 2: The preprocessed image is mapped onto a simulated retinal implant with 100 electrodes. Electrodes are activated based on local image intensity and grouped into raster groups. Raster Group 1 is highlighted.
Step 3: Simulated perception is shown with and without rastering. Without rastering (top), all electrodes are active, producing a more complete but unrealistic percept. With rastering (bottom), only 20 electrodes are active per frame, resulting in a temporally fragmented percept. Phosphene shape depends on parameters for spatial spread (ρ) and elongation (λ).
Step 4: The rendered percept is updated with temporal effects and presented through a virtual reality headset.
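Step 2's image-to-electrode mapping can be sketched in a few lines; a toy version (our simplification of the BionicVisionXR pipeline) that samples local image intensity under each electrode:

```python
import numpy as np

def encode(image, rows=10, cols=10, threshold=0.1):
    """Map a grayscale image in [0, 1] to electrode amplitudes by taking
    the mean intensity of the image patch under each electrode."""
    h, w = image.shape
    amps = np.zeros((rows, cols))
    for r in range(rows):
        for c in range(cols):
            patch = image[r * h // rows:(r + 1) * h // rows,
                          c * w // cols:(c + 1) * w // cols]
            amps[r, c] = patch.mean()
    return np.where(amps > threshold, amps, 0.0)  # drop sub-threshold electrodes
```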
We ran a simulated prosthetic vision study in immersive VR using gaze-contingent, psychophysically grounded models of epiretinal implants.
🧪 Powered by BionicVisionXR.
Modeled a 100-electrode Argus-like array.
Realistic phosphene appearance, eye/head tracking.
09.07.2025 16:55
Raster pattern configurations used in the study, shown as 10×10 electrode grids labeled with numbers 1 through 5, representing five sequentially activated timing groups.
1. Horizontal: Each row of electrodes belongs to one group, with activation proceeding top to bottom.
2. Vertical: Each column is a group, activated left to right.
3. Checkerboard: Electrode groups are arranged to maximize spatial separation, forming a checkerboard-like layout.
4. Random: Group assignments are randomly distributed across the grid, with no spatial structure. This pattern was re-randomized every five frames to test unstructured activation.
Each group is represented with different shades of gray and labeled numerically to indicate activation order.
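For concreteness, here is one way to generate the four group assignments on a 10×10 grid; the diagonal-stride rule for the checkerboard is our approximation of the spatially-maximized layout, not necessarily the study's exact assignment:

```python
import numpy as np

rng = np.random.default_rng(0)
rows, cols, n_groups = 10, 10, 5
r, c = np.indices((rows, cols))

horizontal = r % n_groups              # whole rows activate top to bottom
vertical = c % n_groups                # whole columns activate left to right
checkerboard = (r + 2 * c) % n_groups  # diagonal stride keeps same-group
                                       # electrodes spatially separated
random_groups = rng.integers(0, n_groups, size=(rows, cols))
# (In the study, the random assignment was re-drawn every five frames.)

print(checkerboard)  # no two 4-neighbors share a group
```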
Checkerboard rastering has been used in #BCI and #NeuroTech applications, often based on intuition.
But is it actually better, or just tradition?
No one had rigorously tested how these patterns impact perception in visual prostheses.
So we did.
09.07.2025 16:55
Raster patterns in simulated prosthetic vision. On the left, a natural scene of a yellow car is shown, followed by its transformation into a prosthetic vision simulation using a 10×10 grid of electrodes (red dots). Below this, a zoomed-in example shows the resulting phosphene pattern. To comply with safety constraints, electrodes are divided into five spatial groups activated sequentially across ~220 milliseconds. Each row represents a different raster pattern: vertical (columns activated left to right), horizontal (rows top to bottom), checkerboard (spatially maximized separation), and random (reshuffled every five frames). For each pattern, five panels show how the scene is progressively built across the five raster groups. Vertical and horizontal patterns show strong directional streaking. Checkerboard shows more uniform activation and perceptual clarity. Random appears spatially noisy and inconsistent.
👁️🧠 New paper alert!
We show that checkerboard-style electrode activation improves perceptual clarity in simulated prosthetic vision, outperforming other patterns in both letter and motion tasks.
Less bias, more function, same safety.
📄 doi.org/10.1088/1741...
#BionicVision #NeuroTech
09.07.2025 16:55
Assistive Technology Use In The Home and AI and Adaptive Optics Ophthalmoscopes
Lily Turkstra (University of California - Santa Barbara) Dr. Johnny Tam (National Eye Institute - Bethesda, MD) Lily Turkstra , PhD Student,...
👁️ Our very own Lily Turkstra was featured on WYPL-FM's Eye on Vision podcast to discuss how blind individuals use assistive tech at home, from tactile labels to digital tools.
💻 Listen: eyeonvision.blogspot.com/2025/05/assi...
📰 Read: bionicvisionlab.org/publications...
#BlindTech #Accessibility
24.06.2025 20:14
VSS Presentation – Vision Sciences Society
Last but not least is Lily Turkstra, whose poster assesses the efficacy of visual augmentations for high-stress navigation:
Tue, 2:45 - 6:45pm, Pavilion: Poster #56.472
www.visionsciences.org/presentation...
👁️🧪 #XR #VirtualReality #Unity3D #VSS2025
20.05.2025 14:10