
famstack.dev

@famstack.dev

Join the (painful) quest to create an open source home server stack for families.

1 Follower · 4 Following · 13 Posts · Joined 01.03.2026

Latest posts by famstack.dev @famstack.dev

Why Your Next Home Server Should Be a Mac Mini or Mac Studio // famstack.dev Real wattmeter numbers from a Mac Studio M1 Max running 25 Docker containers and local LLM inference. 12W average, 50W peak. Mac Mini should be similar. Under €40/year in Germany.

Here is the article if someone is interested. Measured with a wattmeter.

famstack.dev/guides/mac-m...

13.03.2026 12:19 👍 0 🔁 0 💬 0 📌 0

Nice. I measured my Mac Studio M1 Max at 8W idle and 30-50W under full LLM inference, 12W average over one week. Our ancient entertainment system draws more on standby. Apple Silicon is great for home server power efficiency, especially in Germany with these crazy energy prices.

13.03.2026 12:18 👍 0 🔁 0 💬 2 📌 0
57 tok/s on Screen, 3 tok/s in Practice: MLX vs llama.cpp on Apple Silicon // famstack.dev MLX reports nearly 2x the generation speed of GGUF on Apple Silicon. The truth is more nuanced. I benchmarked both across three real workloads.

I used Ollama. Wanted to switch to LM Studio. But it turns out... it's complicated
famstack.dev/guides/mlx-v...

13.03.2026 12:15 👍 1 🔁 0 💬 1 📌 0

The local LLM community is quite silent here, unfortunately :-/
Is everyone still hanging around on X?

13.03.2026 12:13 👍 1 🔁 0 💬 1 📌 1
57 tok/s on Screen, 3 tok/s in Practice: MLX vs llama.cpp on Apple Silicon // famstack.dev MLX reports nearly 2x the generation speed of GGUF on Apple Silicon. The truth is more nuanced. I benchmarked both across three real workloads.

#LocalAI #AppleSilicon #Mac #SelfHosted
Reddit:
www.reddit.com/r/LocalLLaMA...

The initial article
famstack.dev/guides/mlx-v...

I am going to update the article with the insights from the community soon

13.03.2026 08:49 👍 1 🔁 0 💬 0 📌 0

Wow! My MLX vs llama.cpp benchmark hit #9 on r/LocalLLaMA today. Did not expect that.
Takeaway: benchmark your actual scenarios; do not rely on just the tok/s counter in your UI. I also ran into a caching bug specific to Qwen 3.5 (35B-A3B) on MLX. Effective tokens/s is what we actually experience.

#MLX #LlamaCpp #Qwen

13.03.2026 08:47 👍 0 🔁 0 💬 1 📌 0

Is the 1.67x claim generation speed, or effective throughput including TTFT? I benchmarked MLX vs llama.cpp: MLX reported 2x faster generation, but effective throughput was actually lower for most workloads because prefill was way slower. What matters is effective tok/s, not just the tok/s counter for generation.

11.03.2026 20:24 👍 0 🔁 0 💬 0 📌 0
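The effective-throughput point above can be sketched in a few lines. The backend numbers below are hypothetical, chosen only to illustrate the effect, not results from my benchmark:

```python
# Sketch: why the on-screen generation tok/s can mislead.
# Effective throughput counts the whole request, including prefill
# (time to first token), not just the decode phase.
# All numbers are hypothetical, for illustration only.

def effective_tok_s(prompt_tokens: int, output_tokens: int,
                    prefill_tok_s: float, gen_tok_s: float) -> float:
    """Output tokens divided by total wall-clock time (prefill + decode)."""
    ttft = prompt_tokens / prefill_tok_s   # time to first token
    decode = output_tokens / gen_tok_s     # decode time
    return output_tokens / (ttft + decode)

# Backend A: fast decode, slow prefill; Backend B: slower decode, fast prefill.
a = effective_tok_s(prompt_tokens=8000, output_tokens=300,
                    prefill_tok_s=200, gen_tok_s=57)
b = effective_tok_s(prompt_tokens=8000, output_tokens=300,
                    prefill_tok_s=800, gen_tok_s=30)
print(f"A: {a:.1f} effective tok/s, B: {b:.1f} effective tok/s")
```

With a long prompt, the backend that decodes at 57 tok/s ends up slower end-to-end than the one decoding at 30 tok/s, because prefill dominates.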
Why Your Next Home Server Should Be a Mac Mini or Mac Studio // famstack.dev Real wattmeter numbers from a Mac Studio M1 Max running 25 Docker containers and local LLM inference. 12W average, 50W peak. Mac Mini should be similar. Under €40/year in Germany.

Next thing I buy: a switch for the Bose system. The Mac server is going to save money then 😅

Here is the whole drill-down
famstack.dev/guides/mac-m...

09.03.2026 15:35 👍 0 🔁 0 💬 0 📌 0
Wattmeter showing 8.5W power consumption by a Mac Studio M1 Max


Bought a wattmeter last week. Measured our ancient Bose 5.1 system in standby: 30 watts 🫥 My Mac Studio M1 Max running 25 Docker containers and local AI inference? 5-7W idle. 11.8W average. Old hardware on standby draws more than a full home server stack.
#selfhosted #homelab #AppleSilicon #localAI

09.03.2026 15:30 👍 2 🔁 0 💬 1 📌 0
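The under-€40/year figure follows directly from the 11.8W average. A tiny sketch of the arithmetic; the €0.35/kWh tariff is an assumed German household price, not a measured value, so adjust for your own contract:

```python
# Sketch: turning a wattmeter average into a yearly cost estimate.
avg_watts = 11.8                 # measured weekly average from the post
price_eur_per_kwh = 0.35         # assumed German tariff, not from the article
hours_per_year = 24 * 365

kwh_per_year = avg_watts / 1000 * hours_per_year
cost_eur = kwh_per_year * price_eur_per_kwh
print(f"{kwh_per_year:.0f} kWh/year ≈ €{cost_eur:.2f}/year")
```

At that assumed tariff, 11.8W average works out to roughly 103 kWh and about €36 per year, consistent with the "under €40/year" claim.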

I am going to check it out. Thank you!

06.03.2026 22:13 👍 0 🔁 0 💬 0 📌 0

I'll let you know!

06.03.2026 16:24 👍 1 🔁 0 💬 0 📌 0

Hi @getmeos.com bot. How is life? What did you accomplish today? For now, we use Tailscale to give tunnelled access to our family server. Planning to connect our local instance to a VPS-hosted one, though. Just an idea. Maybe we then replicate certain galleries to the remotely accessible instance.

06.03.2026 15:57 👍 0 🔁 0 💬 1 📌 0
famstack.dev // home server guides for Mac Guides, build logs, and real experience running a self-hosted home server on Apple Silicon. Photos, documents, local AI, backups. All open source.

Building a self-hosted home server for my family on a Mac Studio / Mac Mini. Photos, documents, local AI. No cloud, nothing leaves the house. Documenting everything along the way. Follow to join the pain.

#selfhosted #homeserver #localai #privacy

06.03.2026 15:36 👍 5 🔁 0 💬 1 📌 0