
Bionic Vision Lab

@bionicvisionlab.org

πŸ‘οΈπŸ§ πŸ–₯️πŸ§ͺπŸ€– What would the world look like with a bionic eye? Interdisciplinary research group at UC Santa Barbara. PI: @mbeyeler.bsky.social‬ #BionicVision #Blindness #NeuroTech #VisionScience #CompNeuro #NeuroAI

477 Followers · 222 Following · 133 Posts · Joined 23.09.2024

Latest posts by Bionic Vision Lab @bionicvisionlab.org

Fuzzing the brain: automated stress testing for the safety of ML-driven neurostimulation — Mara Downing, Matthew Peng, Jacob Granley, Michael Beyeler, Tevfik Bultan

This work was a collaboration with Mara Downing, Matthew Peng, and Tevfik Bultan from the UCSB Verification Lab, together with @jacobgranley.bsky.social from the Bionic Vision Lab.

Read the full paper here: iopscience.iop.org/article/10.1...

@ucsb-cs.bsky.social

05.03.2026 17:20 πŸ‘ 1 πŸ” 0 πŸ’¬ 0 πŸ“Œ 0
Bar plot comparing fuzzing strategies. y-axis: combined violation and diversity score. Each bar represents an average of normalized violations and normalized diversity score, equally weighted.
Our metrics are shown in green, with our two best, VO-KMVP and VO-KMOC, highlighted in dark green. Neuron coverage metrics are shown in purple, and basic metrics in red. Conventional testing (model test set with no mutations) is shown in blue. Our methods reach scores above 0.8, whereas conventional testing sits at 0.1.

Our approach borrows an idea from software verification: coverage-guided fuzzing.

We systematically mutate inputs and search for stimulation patterns that violate biophysical constraints - uncovering diverse safety violations that conventional testing misses.

#Neurotech #MLResearch #AIVerification

05.03.2026 17:20 πŸ‘ 0 πŸ” 0 πŸ’¬ 1 πŸ“Œ 0

Most prior work treats safety narrowly, often by minimizing charge.

But unsafe stimulation can take many forms:
β€’ physically impossible pulses
β€’ unsafe instantaneous currents
β€’ activating too many electrodes

These are model outputs, so they need to be tested like software systems.

#AISafety

05.03.2026 17:20 πŸ‘ 0 πŸ” 0 πŸ’¬ 1 πŸ“Œ 0
Fuzzing the brain: Automated stress testing for the safety of ML-driven neurostimulation | Bionic Vision Lab We propose a systematic, quantitative approach to detect and characterize unsafe stimulation patterns in ML-driven neurostimulation systems.

🚨Fuzzing the brain - automated stress testing for ML-driven neurostimulation🚨

As #MachineLearning begins to control electrical stimulation in neural interfaces, how do we know these models are safe?

Paper in Journal of Neural Engineering:
bionicvisionlab.org/publications...

#BCI #neuroskyence

05.03.2026 17:20 πŸ‘ 7 πŸ” 2 πŸ’¬ 1 πŸ“Œ 1
BIRD: Behavior Induction via Representation-structure Distillation Human-aligned deep learning models exhibit behaviors consistent with human values, such as robustness, fairness, and honesty. Transferring these behavioral properties to models trained on different ta...

What if your strongest #ML model is brittle at one thing that really matters?

Can it learn that behavior from a weaker but specialist model, even when they share no task, no data, and no architecture?

My student Galen Pogoncheff explored this in our #ICLR2026 paper:

πŸ‘‰ arxiv.org/abs/2505.23933

08.02.2026 17:35 πŸ‘ 4 πŸ” 1 πŸ’¬ 1 πŸ“Œ 0
Neural mechanisms underlying intracortical microstimulation for sensory restoration Nature Biomedical Engineering - Intracortical microstimulation can elicit artificial sensations in persons who have lost sensation due to neurological injury or disease. This Review discusses...

Just out in @nature.com BME: Our Review unpacks intracortical microstimulation: axons, not somas, drive activation; direct + indirect pathways shape perception; parameters interact with neuron type, layer, and network; long-term use limited by neural depression & tissue response. 🧠⚑

rdcu.be/eZbTz

15.01.2026 16:51 πŸ‘ 19 πŸ” 2 πŸ’¬ 1 πŸ“Œ 1
Robust Foraging Competition: Can your AI visually navigate better than a mouse?

πŸŽ‰ Mouse vs AI #NeurIPS2025 Challenge

The first year was a great success:
πŸ€– 290 submissions
πŸ‘₯ 22 teams
🌎 7 countries
robustforaging.github.io

A huge thank you to all who participated!πŸ‘

This was our first attempt at a global competition built around real mouse behavior and visual robustness.

26.11.2025 19:56 πŸ‘ 9 πŸ” 5 πŸ’¬ 1 πŸ“Œ 0
Post image

Presenting β€œHuman in the loop optimisation for efficient intracortical microstimulation temporal patterns in visual cortex” again this afternoon at #SfN!!

Come discuss!

An amazing collaboration between the Biomedical Neuroengineering group at UMH and @bionicvisionlab.org

19.11.2025 18:02 πŸ‘ 5 πŸ” 3 πŸ’¬ 0 πŸ“Œ 0
Schematic labeled human-in-the-loop optimization (HILO). It shows two stimuli on the left: pulse trains with varying stimulus amplitude over time. A participant has to choose which stimulus appears brighter. This feedback is used to inform a Gaussian process model that chooses the next stimulus pair, with the goal of finding the stimulus with the lowest overall charge to elicit perception

Our final poster at #SfN2025 explores human-in-the-loop optimization for intracortical microstimulation, presented by @lozaneuro.bsky.social, in collaboration with @umh.es:

PSTR450.20
Nov 19 at 1:00 PM
www.abstractsonline.com/pp8/#!/21171...

#SfN25 #VisionScience #NeuroTechnology

16.11.2025 18:58 πŸ‘ 5 πŸ” 1 πŸ’¬ 1 πŸ“Œ 0
left: experimental setup showing an implantee with an intracortical prosthesis and example phosphenes described as a large filled circle, a half-moon, and a tiny dot.
right: schematic showing cross-temporal decoding of delay period activity. Over 406 trials, working memory content could be decoded during the delay period in 88.5% of delay period windows.

Lily Turkstra is presenting new findings on stimulus-selective spiking activity recorded during a working memory experiment in a unique intracortical dataset.

PSTR341.13
Nov 18 at 1:00 PM
www.abstractsonline.com/pp8/#!/21171...

#SfN25 #VisionScience #Neuroscience

16.11.2025 18:58 πŸ‘ 1 πŸ” 0 πŸ’¬ 1 πŸ“Œ 0
Table showing different network diagrams under test, trained either on MNIST or Fashion-MNIST, with either soft or hard winner-take-all (WTA) wiring. Synaptic weights showed either holistic or parts-based representations of images

Our second poster at #SfN2025 dives into biologically plausible networks for efficient image encoding, presented by Hasith Basnayake:

PSTR154.08
Nov 17 at 8:00 AM
www.abstractsonline.com/pp8/#!/21171...

#SfN25 #VisionScience #NeuroTechnology

16.11.2025 18:58 πŸ‘ 0 πŸ” 0 πŸ’¬ 1 πŸ“Œ 0
Simulated responses of a bipolar cell mosaic to simulated electrical stimulation and the corresponding decoded phosphenes. Small phosphenes appear focal and colored, whereas larger phosphenes most often appear with a white-ish, yellow-ish tint.

If you are into #VisionScience or #neuroengineering, come check out our first poster at #SfN2025 this afternoon!

Emily Joyce is presenting new work on modeling the bipolar circuitry in the human fovea

PSTR122.22
Nov 16 at 1:00 PM
www.abstractsonline.com/pp8/#!/21171...

16.11.2025 18:58 πŸ‘ 4 πŸ” 2 πŸ’¬ 1 πŸ“Œ 1
bionic-vision.org | Research Spotlights | Yossi Mandel In a new Advanced Functional Materials paper, Prof. Yossi Mandel and colleagues unveiled a hybrid retinal prosthesis that fuses living neurons with a high-density electrode array. By nestling human st...

What if retinal prostheses could speak the brain’s language? πŸ‘οΈπŸ§ πŸ§ͺ

Prof. Yossi Mandel and team built a hybrid implant that merges neurons and electrodes to restore high-acuity sight.

New Data Drop interview ↓
www.bionic-vision.org/research-spo...

#BionicVision #Neurotech #Blindness

30.10.2025 20:01 πŸ‘ 4 πŸ” 2 πŸ’¬ 0 πŸ“Œ 0

Can Fruit Ninja train the #BionicEye?

A new JoV paper finds that while participants improved with distorted β€œprosthetic” input, gaming-based training didn’t generalize to object recognition - hinting that rehab may need to stay task-specific.

doi.org/10.1167/jov....

22.10.2025 01:55 πŸ‘ 1 πŸ” 1 πŸ’¬ 0 πŸ“Œ 0
Subretinal Photovoltaic Implant to Restore Vision in Geographic Atrophy Due to AMD | NEJM Geographic atrophy due to age-related macular degeneration (AMD) is the leading cause of irreversible blindness and affects more than 5 million persons worldwide. No therapies to restore vision in ...

🚨 Breakthrough alert: A new study in the NEJM reports that the PRIMA subretinal implant helped restore meaningful central vision in ~80% of participants with advanced geographic atrophy (an untreatable form of #AMD).

www.nejm.org/doi/10.1056/...

#BionicVision #NeuroTech

21.10.2025 17:32 πŸ‘ 3 πŸ” 1 πŸ’¬ 1 πŸ“Œ 0
Diagram showing three ways to control brain activity with a visual prosthesis. The goal is to match a desired pattern of brain responses. One method uses a simple one-to-one mapping, another uses an inverse neural network, and a third uses gradient optimization. Each method produces a stimulation pattern, which is tested in both computer simulations and in the brain of a blind participant with an implant. The figure shows that the neural network and gradient methods reproduce the target brain activity more accurately than the simple mapping.

πŸ‘οΈπŸ§  New preprint: We demonstrate the first data-driven neural control framework for a visual cortical implant in a blind human!

TL;DR Deep learning lets us synthesize efficient stimulation patterns that reliably evoke percepts, outperforming conventional calibration.

www.biorxiv.org/content/10.1...

27.09.2025 02:52 πŸ‘ 93 πŸ” 25 πŸ’¬ 2 πŸ“Œ 6
Thrilling progress in brain-computer interfaces from UC labs UC researchers and the patients they work with are showing the world what's possible when the human mind and advanced computers meet.

As federal research funding faces steep cuts, UC scientists are pushing brain-computer interfaces forward: restoring speech after ALS, easing Parkinson’s symptoms, and improving bionic vision with AI (that’s us πŸ‘‹ at @ucsantabarbara.bsky.social).

🧠 www.universityofcalifornia.edu/news/thrilli...

17.09.2025 18:03 πŸ‘ 4 πŸ” 2 πŸ’¬ 0 πŸ“Œ 0
Epic collage of Bionic Vision Lab activities. From top to bottom, left to right:
A) Up-to-date group picture
B) BVL at Dr. Beyeler's Plous Award celebration (2025)
C) BVL at The Eye & The Chip (2023)
D/F) Dr. Aiwen Xu and Justin Kasowski getting hooded at the UCSB commencement ceremony
E) BVL logo cake created by Tori LeVier
G) Dr. Beyeler with symposium speakers at Optica FVM (2023)
H, I, M, N) Students presenting conference posters/talks
J) Participant scanning a food item (ominous pizza study)
K) Galen Pogoncheff in VR
L) Argus II user drawing a phosphene
O) Prof. Beyeler demoing BionicVisionXR
P) First lab hike (ca. 2021)
Q) Statue for winner of the Mac'n'Cheese competition (ca. 2022)
R) BVL at Club Vision
S) Students drifting off into the sunset on a floating couch after a hard day's work

Excited to share that I’ve been promoted to Associate Professor with tenure at UCSB!

Grateful to my mentors, students, and funders who shaped this journey and to @ucsantabarbara.bsky.social for giving the Bionic Vision Lab a home!

Full post: www.linkedin.com/posts/michae...

02.08.2025 18:12 πŸ‘ 25 πŸ” 5 πŸ’¬ 1 πŸ“Œ 0
Program – EMBC 2025

At #EMBC2025? Come check out two talks from my lab in tomorrow’s Sensory Neuroprostheses session!

πŸ—“οΈ Thurs July 17 Β· 8-10AM Β· Room B3 M3-4
🧠 Efficient threshold estimation
πŸ§‘β€πŸ”¬ Deep human-in-the-loop optimization

πŸ”— embc.embs.org/2025/program/
#BionicVision #NeuroTech #IEEE #EMBS

16.07.2025 16:54 πŸ‘ 3 πŸ” 1 πŸ’¬ 0 πŸ“Œ 0
Efficient spatial estimation of perceptual thresholds for retinal implants via Gaussian process regression | Bionic Vision Lab We propose a Gaussian Process Regression (GPR) framework to predict perceptual thresholds at unsampled locations while leveraging uncertainty estimates to guide adaptive sampling.

🧠 Building on Roksana Sadeghi’s work: Calibrating retinal implants is slow and tedious. Can Gaussian Process Regression (GPR) guide smarter sampling?

βœ… GPR + spatial sampling = fewer trials, same accuracy
πŸ” Toward faster, personalized calibration

πŸ”— bionicvisionlab.org/publications...

#EMBC2025

13.07.2025 17:24 πŸ‘ 2 πŸ” 0 πŸ’¬ 0 πŸ“Œ 0
Evaluating deep human-in-the-loop optimization for retinal implants using sighted participants | Bionic Vision Lab We evaluate HILO using sighted participants viewing simulated prosthetic vision to assess its ability to optimize stimulation strategies under realistic conditions.

πŸŽ“ Proud of our undergrad(!) Eirini Schoinas for leading this:
bionicvisionlab.org/publications...

🧠 Human-in-the-loop optimization (HILO) works in silicoβ€”but does it hold up with real people?
βœ… HILO outperformed naΓ―ve and deep encoders
πŸ” A step toward personalized #BionicVision

#EMBC2025

13.07.2025 17:24 πŸ‘ 1 πŸ” 0 πŸ’¬ 1 πŸ“Œ 0
Program – EMBC 2025

πŸ‘οΈβš‘ Headed to #EMBC2025? Catch two of our lab’s talks on optimizing retinal implants!

πŸ“ Sensory Neuroprostheses
πŸ—“οΈ Thurs July 17 Β· 8-10AM Β· Room B3 M3-4
🧠 Efficient threshold estimation
πŸ§‘β€πŸ”¬ Deep human-in-the-loop optimization

πŸ”— embc.embs.org/2025/program/
#BionicVision #NeuroTech #IEEE #EMBS #Retina

13.07.2025 17:24 πŸ‘ 1 πŸ” 2 πŸ’¬ 1 πŸ“Œ 0

This matters. Checkerboard rastering:

βœ”οΈ works across tasks
βœ”οΈ requires no fancy calibration
βœ”οΈ is hardware-agnostic

A low-cost, high-impact tweak that could make future visual prostheses more usable and more intuitive.

#BionicVision #BCI #NeuroTech

09.07.2025 16:55 πŸ‘ 1 πŸ” 0 πŸ’¬ 0 πŸ“Œ 0
Boxplots showing task accuracy for two experimental tasksβ€”Letter Recognition and Motion Discriminationβ€”grouped by five raster patterns: No Raster (blue), Checkerboard (orange), Vertical (green), Horizontal (brown), and Random (pink). Each colored boxplot shows the median, interquartile range, and individual participant data points.

In both tasks, Checkerboard and No Raster yield the highest median accuracy.

Horizontal and Random patterns perform the worst, with more variability and lower scores.

Significant pairwise differences (p < .05) are indicated by horizontal bars above the plots, showing that Checkerboard significantly outperforms Random and Horizontal in both tasks.

A dashed line at 0.125 marks chance-level performance (1 out of 8).

These results suggest Checkerboard rastering improves perceptual performance compared to conventional or unstructured patterns.

βœ… Checkerboard consistently outperformed the other patternsβ€”higher accuracy, lower difficulty, fewer motion artifacts.

πŸ’‘ Why? More spatial separation between activations = less perceptual interference.

It even matched performance of the ideal β€œno raster” condition, without breaking safety rules.

09.07.2025 16:55 πŸ‘ 1 πŸ” 0 πŸ’¬ 1 πŸ“Œ 0
Diagram showing the four-step pipeline for simulating prosthetic vision in VR.
Step 1: A virtual camera captures the user’s view, guided by eye gaze. The image is converted to grayscale and blurred for preprocessing.
Step 2: The preprocessed image is mapped onto a simulated retinal implant with 100 electrodes. Electrodes are activated based on local image intensity and grouped into raster groups. Raster Group 1 is highlighted.
Step 3: Simulated perception is shown with and without rastering. Without rastering (top), all electrodes are active, producing a more complete but unrealistic percept. With rastering (bottom), only 20 electrodes are active per frame, resulting in a temporally fragmented percept. Phosphene shape depends on parameters for spatial spread (ρ) and elongation (λ).
Step 4: The rendered percept is updated with temporal effects and presented through a virtual reality headset.

We ran a simulated prosthetic vision study in immersive VR using gaze-contingent, psychophysically grounded models of epiretinal implants.

πŸ§ͺ Powered by BionicVisionXR.
πŸ“ Modeled 100-electrode Argus-like array.
πŸ‘€ Realistic phosphene appearance, eye/head tracking.

09.07.2025 16:55 πŸ‘ 0 πŸ” 0 πŸ’¬ 1 πŸ“Œ 0
Raster pattern configurations used in the study, shown as 10Γ—10 electrode grids labeled with numbers 1 through 5, representing five sequentially activated timing groups.

1. Horizontal: Each row of electrodes belongs to one group, with activation proceeding top to bottom.

2. Vertical: Each column is a group, activated left to right.

3. Checkerboard: Electrode groups are arranged to maximize spatial separation, forming a checkerboard-like layout.

4. Random: Group assignments are randomly distributed across the grid, with no spatial structure. This pattern was re-randomized every five frames to test unstructured activation.
Each group is represented with different shades of gray and labeled numerically to indicate activation order.

Checkerboard rastering has been used in #BCI and #NeuroTech applications, often based on intuition.

But is it actually better, or just tradition?

No one had rigorously tested how these patterns impact perception in visual prostheses.

So we did.

09.07.2025 16:55 πŸ‘ 0 πŸ” 0 πŸ’¬ 1 πŸ“Œ 0
Raster patterns in simulated prosthetic vision. On the left, a natural scene of a yellow car is shown, followed by its transformation into a prosthetic vision simulation using a 10Γ—10 grid of electrodes (red dots). Below this, a zoomed-in example shows the resulting phosphene pattern. To comply with safety constraints, electrodes are divided into five spatial groups activated sequentially across ~220 milliseconds. Each row represents a different raster pattern: vertical (columns activated left to right), horizontal (rows top to bottom), checkerboard (spatially maximized separation), and random (reshuffled every five frames). For each pattern, five panels show how the scene is progressively built across the five raster groups. Vertical and horizontal patterns show strong directional streaking. Checkerboard shows more uniform activation and perceptual clarity. Random appears spatially noisy and inconsistent.

πŸ‘οΈπŸ§  New paper alert!

We show that checkerboard-style electrode activation improves perceptual clarity in simulated prosthetic visionβ€”outperforming other patterns in both letter and motion tasks.

Less bias, more function, same safety.

πŸ”— doi.org/10.1088/1741...

#BionicVision #NeuroTech

09.07.2025 16:55 πŸ‘ 2 πŸ” 0 πŸ’¬ 1 πŸ“Œ 1
Assistive Technology Use In The Home and AI and Adaptive Optics Ophthalmoscopes Lily Turkstra (University of California - Santa Barbara) Dr. Johnny Tam (National Eye Institute - Bethesda, MD) Lily Turkstra , PhD Student,...

πŸŽ™οΈOur very own Lily Turkstra was featured on WYPL-FM’s Eye on Vision podcast to discuss how blind individuals use assistive tech at home, from tactile labels to digital tools.

πŸ“» Listen: eyeonvision.blogspot.com/2025/05/assi...
πŸ“° Read: bionicvisionlab.org/publications...

#BlindTech #Accessibility

24.06.2025 20:14 πŸ‘ 2 πŸ” 1 πŸ’¬ 0 πŸ“Œ 0
Preview
bionic-vision.org | Research Spotlights | Frederik Ceyssens, ReVision Implant Frederik Ceyssens is Co-Founder and CEO of ReVision Implant, the company behind Occular: a next-generation cortical prosthesis designed to restore both central and peripheral vision through ultra-flex...

πŸ‘οΈπŸ§ πŸ§ͺ Next on the Horizon: Frederik Ceyssens from ReVision Implant on scaling bionic vision to the cortex with Occular, a high-res, deep-brain prosthesis.

Why performance might beat invasiveness - and what comes next:
www.bionic-vision.org/research-spo...

#BionicVision #NeuroTech #BCI

12.06.2025 17:31 πŸ‘ 0 πŸ” 2 πŸ’¬ 0 πŸ“Œ 0
VSS Presentation – Vision Sciences Society

Last but not least is Lily Turkstra, whose poster is assessing the efficacy of visual augmentations for high-stress navigation:

Tue, 2:45 - 6:45pm, Pavilion: Poster #56.472
www.visionsciences.org/presentation...

πŸ‘οΈπŸ§ͺ #XR #VirtualReality #Unity3D #VSS2025

20.05.2025 14:10 πŸ‘ 3 πŸ” 2 πŸ’¬ 0 πŸ“Œ 0