me, emotionally writing an essay on the use of force by federal agents: ok but what if i packaged this in the most insane way possible www.theverge.com/policy/86857...
Apple's researchers continue to focus on multimodal LLMs, with studies exploring their use for image generation, understanding, and multi-turn web searches with cropped images.
A great primer - and ICYMI: a *single* parameter (among billions) - a "super weight" - can determine whether an LLM's output will be coherent or nonsense machinelearning.apple.com/research/the...
Apple continues to focus on AI-powered image modification, with new studies detailing evaluation frameworks and an AI model that can turn 2D images into 3D scenes in a second.
The post above shares just a handful of highlights, and a comprehensive schedule of Apple's contributions at #NeurIPS2025 can be found here: machinelearning.apple.com/updates/appl... 7/7
And at booth 1103, attendees will be able to see demos of Apple research - including distributed compute using MLX to run a 1T model on a cluster of 4 Mac Studios, and a demo of FastVLM: machinelearning.apple.com/research/fas... 6/7
A Principled Approach to Determining Training Data Mixtures:
"Scaling Laws for Optimal Data Mixtures" machinelearning.apple.com/research/opt... 5/7
Innovative Approaches to Generative AI:
"STARFlow: Scaling Latent Normalizing Flows for High-resolution Image Synthesis" machinelearning.apple.com/research/sta...
"LinEAS: End-to-end Learning of Activation Steering with a Distributional Loss" machinelearning.apple.com/research/end... 4/7
Understanding the Strengths and Limitations of Reasoning Models:
"The Illusion of Thinking: Understanding the Strengths and Limitations of Reasoning Models via the Lens of Problem Complexity" machinelearning.apple.com/research/ill... 3/7
Advancing Privacy-Preserving ML:
"Instance-Optimality for Private KL Distribution Estimation" machinelearning.apple.com/research/ins...
"Privacy Amplification by Random Allocation" machinelearning.apple.com/research/pri... 2/7
Heading to #NeurIPS2025 next week? Check out this post for some highlights of the work Apple researchers will present: machinelearning.apple.com/research/neu... - across topics including... 1/7
MLX + the Neural Accelerators in the M5 GPU = up to 4x faster LLM inference🚀 machinelearning.apple.com/research/exp...
Apple researchers develop SimpleFold, a lightweight AI for protein folding prediction
tl;dr: some parameters are much more important than others, and in some cases removing just 1 can turn an LLM's output to nonsense
New Apple #ML Research Highlight: The "Super Weight": How Even a Single Parameter can Determine an #LLM's Behavior machinelearning.apple.com/research/the...
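One way to see the idea (a toy sketch, not the paper's procedure for locating super weights): plant a single outlier weight in a tiny network, then compare the output damage from ablating it versus ablating an ordinary weight.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy network: one hidden-layer weight is planted as a huge outlier.
W1 = rng.normal(0, 0.1, size=(8, 8))
W1[3, 5] = 50.0                      # the planted "super weight"
W2 = rng.normal(0, 0.1, size=(8, 8))
x = np.ones(8)                       # fixed input

def forward(w1):
    return np.tanh(w1 @ x) @ W2

base = forward(W1)

# Ablate the outlier vs. an ordinary weight and compare output damage.
W_super = W1.copy(); W_super[3, 5] = 0.0
W_plain = W1.copy(); W_plain[0, 0] = 0.0

delta_super = np.linalg.norm(forward(W_super) - base)
delta_plain = np.linalg.norm(forward(W_plain) - base)
print(delta_super, delta_plain)      # ablating the outlier hurts far more
```

The outlier saturates its tanh unit, so zeroing it flips that activation almost completely, while zeroing a typical small weight barely perturbs anything.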
The inference code, model checkpoints, and an iOS/macOS demo app based on MLX are available here: github.com/apple/ml-fas...
How fast is it? Here's the demo app running the FastVLM 0.5B model on an iPhone 16 Pro. Time to first token is shown on the screen, highlighting near real-time performance.
New Apple #ML Research Highlight: "FastVLM: Efficient Vision Encoding for Vision Language Models" machinelearning.apple.com/research/fas...
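For context on the metric in the demo above: time to first token (TTFT) is simply the wall-clock delay until a streaming model yields its first token. A minimal, model-agnostic sketch (the stream here is a hypothetical stand-in, not FastVLM):

```python
import time

def measure_ttft(token_stream):
    """Seconds from request start until the first token arrives."""
    start = time.perf_counter()
    first = next(token_stream)
    return first, time.perf_counter() - start

# Hypothetical stand-in for a real model's streaming generator.
def fake_stream():
    time.sleep(0.05)        # simulated vision-encoding + prefill latency
    yield "Hello"
    yield " world"

token, ttft = measure_ttft(fake_stream())
print(f"first token {token!r} after {ttft * 1000:.0f} ms")
```

For a vision-language model, TTFT is dominated by image encoding plus prompt prefill, which is why an efficient vision encoder shows up directly in this number.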
New paper: 'Apple Intelligence Foundation Language Models Tech Report 2025' provides technical details for two multilingual, multimodal foundation language models that power Apple Intelligence features across Apple devices and services:
machinelearning.apple.com/research/app...
And for a comprehensive overview of Apple research at the conference - including the complete schedule of orals, posters, workshops, booth programming and more - see this post: machinelearning.apple.com/updates/appl...
Next week at #ICML2025, Apple researchers will present many new papers across a range of topics in #AI & #ML - check out this post for some highlights: machinelearning.apple.com/research/icm...
New post: "Updates to Apple's On-Device and Server Foundation Language Models" - details the architectures, training data and recipes, inference optimization techniques, and evaluation results against comparable models: machinelearning.apple.com/research/app...
New Apple #ML Research Highlight: "An LLM-Based Approach to Review Summarization on the App Store" - details the multi-step #LLM workflow for generating high quality summaries from diverse crowdsourced reviews in a dynamic environment: machinelearning.apple.com/research/app...
New post: "Apple Machine Learning Research at #ICLR 2025" - highlighting a selection of the many Apple #ML research papers to be presented at @iclr-conf.bsky.social this week: machinelearning.apple.com/research/icl...
Accepted as a Spotlight at @iclr-conf.bsky.social, the work shares a new method for fine-grained control over #genAI output - without the computational overhead, complexity, and volume of data needed by #RLHF or fine-tuning, and with more reliable results than prompt engineering.
New Apple #ML Research Highlight: "Controlling Language and Diffusion Models by Transporting Activations" machinelearning.apple.com/research/tra...
Modern science wouldn’t exist without the online research repository known as arXiv. Three decades in, its creator still can’t let it go.
Very cool work from an international team of researchers including @www.helmholtz-munich.de and Apple: applying optimal transport to give scientists a new ability to observe millions of cells simultaneously, as they develop across time and in space
Today is a great day for optimal transport 🎉! Lots of gratitude 🙏 for all folks who contributed to ott-jax.readthedocs.io and pushed for the MOSCOT (now @ nature!) paper, from visionaries @dominik1klein.bsky.social, G. Palla, Z. Piran to the magician, Michal Klein! ❤️
www.nature.com/articles/s41...
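A toy sketch of the kind of matching optimal transport provides in this setting (not the MOSCOT method itself): recover which cell at a later timepoint corresponds to which earlier one, using SciPy's assignment solver as the simplest discrete special case of OT.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

rng = np.random.default_rng(1)

# Toy stand-in for cell profiles at two timepoints: the same 5 "cells"
# (3 features each) drift slightly and come back in shuffled order.
t0 = rng.normal(size=(5, 3))
perm = [2, 0, 4, 1, 3]
t1 = t0[perm] + rng.normal(0, 0.01, size=(5, 3))

# Pairwise squared-distance cost; one-to-one assignment is the simplest
# discrete special case of optimal transport.
cost = ((t0[:, None, :] - t1[None, :, :]) ** 2).sum(axis=-1)
rows, cols = linear_sum_assignment(cost)
matched = [perm[j] for j in cols]     # original identity of each match
print(matched)
```

Real tools like ott-jax solve the soft (entropically regularized) version at scale, which is what makes millions of cells tractable; this sketch only shows the matching idea.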