Latest posts tagged with #VisionTransformer on Bluesky
6/ Check out our paper and code 👇
📄 Paper: arxiv.org/abs/2601.09322
💻 Code: github.com/lciernik/att...
#VisionTransformer #MachineLearning #AIResearch
Image from article in Radiology: Artificial Intelligence
MR-Transformer: A Vision Transformer-based Deep Learning Model for Total Knee Replacement Prediction Using MRI https://doi.org/10.1148/ryai.240373 @cem.bsky.social @cai2r.net #MSKRad #MachineLearning #VisionTransformer
VPNeXt Rethinks Dense Decoding for Vision Transformers
VPNeXt, a simplified Vision Transformer model, beats the prior state‑of‑the‑art mIoU on the VOC2012 benchmark; its version 3 was released on September 27, 2025. Read more: getnews.me/vpnext-rethinks-dense-de... #vpnext #visiontransformer #semanticsegmentation
PVTAdpNet Boosts Real-Time Polyp Segmentation with Vision Transformers
PVTAdpNet combines U‑Net and a Pyramid Vision Transformer, achieving a Dice score of 0.8851 and mIoU 0.8167 for real‑time polyp segmentation on standard GPUs. getnews.me/pvtadpnet-boosts-real-ti... #polypsegmentation #visiontransformer
Orthogonal Residual Updates Improve Stability and Accuracy in Deep Networks
Orthogonal Residual Updates add only the component orthogonal to the activation stream; a ViT‑B model gained +4.3% top‑1 accuracy on ImageNet‑1k benchmark tests. Read more: getnews.me/orthogonal-residual-upda... #neurips2025 #visiontransformer
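The update rule in that post can be sketched in a few lines. This is a minimal NumPy illustration of the stated idea (keep only the component of the residual branch output orthogonal to the activation stream), not the authors' code:

```python
import numpy as np

def orthogonal_residual_update(x, fx, eps=1e-6):
    """Residual update that adds only the component of f(x)
    orthogonal to the activation stream x.

    x  : activation stream, shape (..., d)
    fx : residual-branch output, same shape
    """
    # Project f(x) onto x along the feature axis ...
    dot = np.sum(fx * x, axis=-1, keepdims=True)
    norm_sq = np.sum(x * x, axis=-1, keepdims=True) + eps
    fx_parallel = (dot / norm_sq) * x
    # ... and keep only what is left after removing that parallel part.
    fx_orth = fx - fx_parallel
    return x + fx_orth
```

For example, with `x = [1, 0]` and `fx = [3, 4]`, the parallel part `[3, 0]` is discarded and the stream receives only `[0, 4]`.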
Hyperspectral Adapter Improves Semantic Segmentation via Vision Models
A new hyperspectral adapter for vision transformers improved semantic‑segmentation accuracy on three autonomous‑driving benchmarks while training on small datasets. Read more: getnews.me/hyperspectral-adapter-im... #hyperspectral #visiontransformer
EfficienT-HDR: Lightweight Transformer Improves Edge HDR Imaging
EfficienT‑HDR cuts FLOPs by ~67% and boosts CPU inference speed over fivefold, with about 2.5× speedup on edge processors, delivering HDR quality without ghosting. Read more: getnews.me/efficient-hdr-lightweigh... #efficienthdr #visiontransformer #edgeai
Parameter-Efficient Multi-Task Learning Reduces Model Size by Fivefold
A new progressive adaptation for Swin Transformers improves accuracy on PASCAL and NYUD‑v2 while using only about 20% of the trainable parameters of a fully fine‑tuned model. getnews.me/parameter-efficient-mult... #multitask #visiontransformer
Visual Instruction Pretraining Boosts Domain-Specific Vision Models
ViTP embeds a Vision Transformer in a Vision‑Language Model and was evaluated on 16 remote‑sensing and medical benchmarks. Code on GitHub. Read more: getnews.me/visual-instruction-pretr... #visualinstructionpretraining #visiontransformer
Reproducing Vision Transformers with Diffusion Denoised Smoothing
Researchers reproduced the Vision Transformer study and found Diffusion Denoised Smoothing boosts explanation‑map robustness, though it adds extra overhead. Read more: getnews.me/reproducing-vision-trans... #visiontransformer #diffusiondenoisedsmoothing
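Roughly, smoothing-based robustness methods average an explanation over noisy copies of the input, which is also where the extra overhead the post mentions comes from. A hedged sketch (the diffusion denoising step of actual Diffusion Denoised Smoothing is elided here, and `explain_fn` is a stand-in for any saliency method):

```python
import numpy as np

def smoothed_explanation(explain_fn, x, sigma=0.25, n=16, rng=None):
    """Average an explanation map over n noisy copies of the input.

    Diffusion Denoised Smoothing would additionally run each noisy
    copy through a diffusion denoiser before explaining; that step
    is omitted in this sketch.
    """
    rng = rng or np.random.default_rng(0)
    maps = [explain_fn(x + sigma * rng.standard_normal(x.shape))
            for _ in range(n)]
    return np.mean(maps, axis=0)
```

The `n`-fold forward passes per input are the overhead the reproduction reports.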
Activation‑Space Tuning Improves Parameter‑Efficient Fine‑Tuning
Activation‑space tuning (NoRA) updates only 0.4% of a vision transformer’s parameters (~0.02 M) and yields +0.17% accuracy on CIFAR‑10 and +0.27% on CIFAR‑100. Read more: getnews.me/activation-space-tuning-... #activationtuning #peft #visiontransformer
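The post does not detail NoRA's mechanism, so the following is only an illustrative sketch of activation‑space tuning in general (class name and structure are my assumptions, not NoRA's actual design): freeze the backbone weights and train only a tiny per‑channel scale and shift on the activations.

```python
import numpy as np

rng = np.random.default_rng(0)

class ActivationTuner:
    """Hypothetical activation-space adapter: the backbone weight W
    stays frozen; only a per-channel scale g and shift b applied to
    the activations are trainable (2*d parameters instead of d*d)."""

    def __init__(self, d):
        self.W = rng.standard_normal((d, d)) / np.sqrt(d)  # frozen
        self.g = np.ones(d)    # trainable scale
        self.b = np.zeros(d)   # trainable shift

    def forward(self, x):
        h = x @ self.W
        return self.g * h + self.b

    def num_trainable(self):
        return self.g.size + self.b.size
```

For a ViT-like width of d = 768, this trains 1,536 of roughly 591k parameters per layer, i.e. about 0.26% — the same order of magnitude as the 0.4% quoted in the post.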
I-Segmenter: Vision Transformer for Efficient Segmentation
I‑Segmenter is the first fully integer‑only Vision Transformer for semantic segmentation, achieving only a 5.1% accuracy gap to FP32 while shrinking model size up to 3.8×. Read more: getnews.me/i-segmenter-vision-trans... #visiontransformer #segmentation
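I‑Segmenter's exact quantization scheme is not described in the post; as a hedged illustration, here is the standard int8 building block such models rely on — symmetric per‑tensor quantization plus an int32‑accumulated integer matmul. A truly integer‑only network would also fold the final floating‑point rescale into integer arithmetic.

```python
import numpy as np

def quantize_int8(x):
    """Symmetric per-tensor int8 quantization: map floats to [-127, 127]."""
    scale = max(np.max(np.abs(x)) / 127.0, 1e-12)  # guard against all-zero input
    q = np.clip(np.round(x / scale), -127, 127).astype(np.int8)
    return q, scale

def int8_matmul(qa, sa, qb, sb):
    """Integer matmul with int32 accumulation; one rescale at the end."""
    acc = qa.astype(np.int32) @ qb.astype(np.int32)
    return acc * (sa * sb)
```

Widening to int32 before accumulating is what keeps the sums from overflowing int8's range.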
Autoencoder‑Vision Transformer Boosts Dental Age Estimation Accuracy
A framework merging an autoencoder with a Vision Transformer raised dental age‑estimation accuracy to 0.815 for second molars and 0.543 for third molars. getnews.me/autoencoder-vision-trans... #dentalage #visiontransformer
Image from article in Radiology: Artificial Intelligence
MR-Transformer predicts knee osteoarthritis progression to joint replacement using MRI https://doi.org/10.1148/ryai.240373 @cem.bsky.social @cai2r.net #DJD #VisionTransformer #ML
#TimeSeries #ComputerVision #GAF #QuantFinance #CNN #VisionTransformer #Forecasting #FeatureEngineering #Python #DataScience #MachineLearning #DeepLearning #Finance #JPmorgan #AlgoTrading
Please check out our work!
📄 Preprint: arxiv.org/abs/2506.08641
💻 Code: github.com/ExplainableM...
#TimeSeries #VisionTransformer #FoundationModel
I posted a blog about Genie 🧙
skyfoliage.com/pubstore/cm0...
A copy is also available on Medium:
medium.com/@taks.skyfol...
#magic #genie #vision #game #creator #environment #transformer #VisionTransformer #ICML #google #DeepMind #skyfoliage.com
New AI model sets benchmark in digital pathology with superior cancer diagnostics 🤖🔬🧬 www.news-medical.net/news/2024052... #ProvGigaPath #Digital #Pathology #Cancer #Diagnostics #AI #Healthcare #VisionTransformer #MachineLearning #HealthTech @natureportfolio.bsky.social