
Matteo Collina

@nodeland.dev

Platformatic.dev Co-Founder & CTO, Node.js TSC member, Lead maintainer Fastify, Board OpenJS, Conference Speaker, Ph.D. Views are my own.

4,364 Followers · 357 Following · 1,323 Posts · Joined 17.03.2023

Latest posts by Matteo Collina @nodeland.dev

Built on top of @platformatic/image-optimizer and @platformatic/job-queue. Supports memory, filesystem, or Redis/Valkey for queue storage.

If you are self-hosting Next.js and want predictable image performance under load, this is the missing building block.

blog.platformatic.dev/scale-nextjs...

10.03.2026 16:59 ๐Ÿ‘ 1 ๐Ÿ” 0 ๐Ÿ’ฌ 0 ๐Ÿ“Œ 0

The setup is a three-app Watt workspace:

1. Gateway routes traffic
2. Frontend runs your Next.js app unchanged
3. Optimizer handles /_next/image

For Kubernetes, route /_next/image to separate pods at the ingress level. Full guide in the docs.
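On plain Kubernetes, the ingress-level split could look roughly like this (service names and ports are illustrative assumptions, not taken from the guide):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: next-image-split
spec:
  rules:
    - http:
        paths:
          # Image optimization traffic goes to its own pods
          - path: /_next/image
            pathType: Prefix
            backend:
              service:
                name: image-optimizer   # assumed service name
                port:
                  number: 3000
          # Everything else stays on the Next.js frontend
          - path: /
            pathType: Prefix
            backend:
              service:
                name: frontend          # assumed service name
                port:
                  number: 3000
```

Because `/_next/image` is matched before the catch-all `/` prefix, only image requests land on the optimizer pods.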

10.03.2026 16:59 ๐Ÿ‘ 1 ๐Ÿ” 0 ๐Ÿ’ฌ 1 ๐Ÿ“Œ 0

In our internal benchmarks, moving image optimization to a dedicated Watt Application reduced 95th percentile response times during peak traffic by up to 40%.

Unpredictable slowdowns turned into consistently fast delivery under heavy load.

10.03.2026 16:59 ๐Ÿ‘ 0 ๐Ÿ” 0 ๐Ÿ’ฌ 1 ๐Ÿ“Œ 0

What you get:

- Independent scaling for image workloads
- SSR/RSC rendering isolated from image spikes
- Queue-backed pipeline with deduplication (same image requested 100 times = processed once)
- Redis/Valkey support for distributed state
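The deduplication behaviour above amounts to promise coalescing: concurrent requests for the same key share one in-flight job. A minimal illustrative sketch (not the @platformatic/job-queue implementation; `optimizeImage` is a hypothetical stand-in):

```javascript
// Concurrent requests for the same cache key share one in-flight promise
// instead of repeating the expensive work.
const inFlight = new Map();

let processedCount = 0;

// Stand-in for the expensive resize step (hypothetical helper).
async function optimizeImage(key) {
  processedCount++;
  return `optimized:${key}`;
}

function dedupe(key, work) {
  if (!inFlight.has(key)) {
    // First caller starts the job; later callers reuse the same promise.
    const job = work(key).finally(() => inFlight.delete(key));
    inFlight.set(key, job);
  }
  return inFlight.get(key);
}

async function main() {
  // 100 concurrent requests for the same image...
  const results = await Promise.all(
    Array.from({ length: 100 }, () => dedupe('/hero.png?w=800', optimizeImage))
  );
  // ...but the expensive work ran exactly once.
  console.log(results.length, processedCount); // 100 1
}

main();
```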

10.03.2026 16:59 ๐Ÿ‘ 0 ๐Ÿ” 0 ๐Ÿ’ฌ 1 ๐Ÿ“Œ 0

The fix: run image optimization as a dedicated Watt Application.

@platformatic/next now has an Image Optimizer mode.

Flip one flag, route /_next/image to its own service, and keep your frontend workers focused on rendering.

10.03.2026 16:59 ๐Ÿ‘ 0 ๐Ÿ” 0 ๐Ÿ’ฌ 1 ๐Ÿ“Œ 0

The root cause: image resizing is bursty and CPU-heavy.

SSR needs low latency and consistent resources.

Running both in the same process means one bad spike cascades into everything else.

This is the classic noisy neighbour problem.
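The cascade can be demonstrated in a few lines of Node.js: a synchronous CPU burn (standing in for an image resize) delays an unrelated timer in the same process.

```javascript
// One CPU-heavy task blocks the event loop, delaying every other
// request handled by the same process.
function busyResize(ms) {
  // Synchronous CPU burn: a stand-in for sharp-style resize work.
  const end = Date.now() + ms;
  while (Date.now() < end) {}
}

const start = Date.now();
// A "fast" SSR-style response scheduled to run almost immediately...
setTimeout(() => {
  const delay = Date.now() - start;
  // delay is far more than the 1ms asked for
  console.log(`fast handler delayed by ~${delay}ms`);
}, 1);

// ...gets stuck behind 200ms of synchronous image work.
busyResize(200);
```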

10.03.2026 16:59 ๐Ÿ‘ 0 ๐Ÿ” 0 ๐Ÿ’ฌ 1 ๐Ÿ“Œ 0

Here is what happens during a product launch or campaign:

/_next/image traffic spikes. CPU maxes out. SSR and API routes start competing for the same workers. 95th percentile render times jump from 600ms to over 2 seconds.

Your app code didn't change. The architecture failed you.

10.03.2026 16:59 ๐Ÿ‘ 0 ๐Ÿ” 0 ๐Ÿ’ฌ 1 ๐Ÿ“Œ 0

Are you hosting Next.js on K8s?

Your Next.js image optimization is quietly killing your frontend performance.

We (@platformatic) just shipped a way to fix it without changing a single line of your app code 🧵

10.03.2026 16:59 👍 0 🔁 1 💬 1 📌 0

Have you got any specific questions for our show?

You can also read about our approach at:

09.03.2026 17:41 ๐Ÿ‘ 0 ๐Ÿ” 0 ๐Ÿ’ฌ 1 ๐Ÿ“Œ 0

Join us March 11th for the full breakdown: streamyard.com/watch/uYJ4Mb... Kubernetes just got a lot less scary.

09.03.2026 16:59 ๐Ÿ‘ 2 ๐Ÿ” 0 ๐Ÿ’ฌ 0 ๐Ÿ“Œ 0

This is not just about avoiding broken sessions. It is about giving enterprise dev teams the confidence to ship smaller, ship faster. Running Next.js, Remix, or monorepos on Kubernetes? This changes everything.

09.03.2026 16:59 ๐Ÿ‘ 1 ๐Ÿ” 0 ๐Ÿ’ฌ 3 ๐Ÿ“Œ 0

In this episode we unpack: Cookie-based version-aware routing, Active โ†’ Draining โ†’ Expired lifecycle, Immutable per-version Deployments, Prometheus-driven traffic monitoring.

09.03.2026 16:59 ๐Ÿ‘ 0 ๐Ÿ” 0 ๐Ÿ’ฌ 1 ๐Ÿ“Œ 0

Frontend teams solved this years ago. Vercel pins users to their deployment version. Zero-downtime, zero-skew. Kubernetes? You have been on your own. Until now. ICC Skew Protection brings the same model to your existing K8s setup.

09.03.2026 16:59 ๐Ÿ‘ 1 ๐Ÿ” 0 ๐Ÿ’ฌ 3 ๐Ÿ“Œ 0

Fear of breaking changes leads to bigger, rarer deployments. Bigger deployments carry more risk. More risk means more rollbacks, more testing cycles, more "let's wait until Monday". Version skew does not just break sessions. It kills shipping velocity.

09.03.2026 16:59 👍 0 🔁 0 💬 1 📌 0

This is version skew. And it has been quietly destroying developer velocity on Kubernetes for years. Old client, new server. New client, old API. It happens on every single deployment. You just do not always notice.

09.03.2026 16:59 ๐Ÿ‘ 1 ๐Ÿ” 0 ๐Ÿ’ฌ 3 ๐Ÿ“Œ 0

You deploy a new version. A user mid-session hits the new backend. A renamed field breaks their form. Support queue fills up. Three teams join a bridge call. Nobody knows who broke what. @lucamaraschi and I explain what is really going on. ๐Ÿ“… Mar 11th

09.03.2026 16:59 ๐Ÿ‘ 0 ๐Ÿ” 0 ๐Ÿ’ฌ 1 ๐Ÿ“Œ 0

๐Ÿ”ฅ NEW BANTER: Kubernetes Finally Gets Vercel-Style Deployment Safety

09.03.2026 16:59 ๐Ÿ‘ 6 ๐Ÿ” 0 ๐Ÿ’ฌ 2 ๐Ÿ“Œ 0

💻 Take your Node.js to the next level! Hands-on workshop with @nodeland.dev: learn async patterns, advanced caching, worker threads & observability to scale high-performance apps.

Limited seats! 🎟️ london.cityjsconf.org/

06.03.2026 07:15 👍 4 🔁 2 💬 0 📌 0

Skew Protection is available now in ICC as an experimental feature.

If your team wants to try it in a real enterprise setup, let me know; my DMs are open.

Full deep-dive blog post: blog.platformatic.dev/skew-protect...

06.03.2026 16:59 ๐Ÿ‘ 0 ๐Ÿ” 0 ๐Ÿ’ฌ 0 ๐Ÿ“Œ 0

The deployment lifecycle is a clean state machine.

ICC monitors traffic on draining versions. When there's zero traffic (or the grace period elapses), it removes routing rules, scales to zero, and optionally deletes the old Deployment.
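That state machine can be sketched in a few lines; the event names here are illustrative stand-ins for the signals ICC actually monitors:

```javascript
// Active → Draining → Expired deployment lifecycle as a tiny state machine.
const transitions = {
  active:   { deploySuperseded: 'draining' },
  draining: { trafficDrained: 'expired', gracePeriodElapsed: 'expired' },
  expired:  {}, // terminal: routing rules removed, Deployment scaled to zero
};

function step(state, event) {
  const next = transitions[state]?.[event];
  if (!next) throw new Error(`invalid transition: ${state} + ${event}`);
  return next;
}

let state = 'active';
state = step(state, 'deploySuperseded'); // a newer version was deployed
state = step(state, 'trafficDrained');   // no more requests on the old version
console.log(state); // expired
```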

06.03.2026 16:59 ๐Ÿ‘ 0 ๐Ÿ” 0 ๐Ÿ’ฌ 1 ๐Ÿ“Œ 0

How it works:

- Each app version runs as a separate, immutable K8s Deployment
- ICC detects new versions via label-based discovery
- A __plt_dpl cookie pins users to their deployment version
- Old versions drain gracefully, then get cleaned up automatically
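The pinning decision boils down to a simple rule: honor the cookie if the pinned version is still routable, otherwise fall back to the latest. An illustrative sketch (not ICC's actual code):

```javascript
// Pick the deployment version for a request based on the __plt_dpl cookie.
function pickVersion(cookieHeader, versions, latest) {
  const match = /(?:^|;\s*)__plt_dpl=([^;]+)/.exec(cookieHeader || '');
  const pinned = match && match[1];
  // Pinned and still routable? Stay on that version; otherwise use the latest.
  return pinned && versions.includes(pinned) ? pinned : latest;
}

const versions = ['v41', 'v42'];
console.log(pickVersion('__plt_dpl=v41', versions, 'v42')); // v41 (mid-session user stays pinned)
console.log(pickVersion(undefined, versions, 'v42'));       // v42 (new user gets the latest)
```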

06.03.2026 16:59 ๐Ÿ‘ 1 ๐Ÿ” 1 ๐Ÿ’ฌ 1 ๐Ÿ“Œ 0

Our solution: ICC pins each user session to the version they started with.

User starts on version N? All their requests go to version N, even after you deploy version N+1.

We use the Kubernetes Gateway API for version-aware routing, with ICC as the control plane.
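With the Gateway API, cookie-based version routing can be expressed along these lines (Gateway, backend names, and versions are illustrative; regex matching on the Cookie header depends on your Gateway implementation):

```yaml
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: myapp-version-pinning
spec:
  parentRefs:
    - name: main-gateway          # assumed Gateway name
  rules:
    # Requests pinned to v41 via the __plt_dpl cookie stay on the v41 Deployment
    - matches:
        - headers:
            - name: Cookie
              type: RegularExpression
              value: ".*__plt_dpl=v41.*"
      backendRefs:
        - name: myapp-v41
          port: 3000
    # Everyone else goes to the latest version
    - backendRefs:
        - name: myapp-v42
          port: 3000
```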

06.03.2026 16:59 ๐Ÿ‘ 0 ๐Ÿ” 0 ๐Ÿ’ฌ 1 ๐Ÿ“Œ 0

This is a distributed systems problem that slows teams down.

Fear of breaking changes leads to larger, less-frequent deployments that carry MORE risk.

In a world where AI lets you write code faster, the bottleneck lies in the gap between code and production.

06.03.2026 16:59 ๐Ÿ‘ 0 ๐Ÿ” 0 ๐Ÿ’ฌ 1 ๐Ÿ“Œ 0

The problem: version skew.

When you deploy a new version, users still on the old frontend send requests to the new backend. APIs change, shared TypeScript types break, React Server Components hydration fails.

The result? Broken UI, data corruption, and support tickets piling up.

06.03.2026 16:59 ๐Ÿ‘ 0 ๐Ÿ” 0 ๐Ÿ’ฌ 1 ๐Ÿ“Œ 0

We just shipped something big: Skew Protection for your Kubernetes apps, built right into the @platformatic Intelligent Command Center (ICC).

Think @Vercel-style deployment safety, but running in your own K8s cluster. No migration needed.

Here's why it matters. 🧵

06.03.2026 16:59 👍 9 🔁 0 💬 2 📌 0