#LocalLLMs

Latest posts tagged with #LocalLLMs on Bluesky


Stop Buying Mac Minis for AI: That Old PC in Your Closet Already Runs LLMs

The Mac Mini M4 has become the default AI hardware purchase. But a $600 used RTX 3090 has 3.4x the memory bandwidth, old laptops run 8B models at usable speeds, and even a retired Android phone can do inference. Here is what the marketing does not tell you.

awesomeagents.ai/news/stop-buying-mac-min...

#MacMini #LocalLlms #Hardware
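
The bandwidth claim above is easy to sanity-check: decode speed for a dense model is roughly memory-bandwidth-bound, so tokens per second is capped by bandwidth divided by the bytes read per generated token. A rough Python sketch, using approximate published bandwidth figures (RTX 3090 ~936 GB/s, Mac Mini M4 Pro ~273 GB/s) and an assumed ~4.7 GB footprint for a 4-bit 8B model:

```python
# Back-of-envelope decode throughput: token generation is roughly
# memory-bandwidth-bound, so tok/s <= bandwidth / bytes read per token
# (about the full weight file for a dense model). Figures are approximate.

def est_tokens_per_sec(bandwidth_gb_s: float, model_size_gb: float) -> float:
    """Upper-bound estimate of decode throughput for a dense model."""
    return bandwidth_gb_s / model_size_gb

# An 8B model quantized to ~4.5 bits per weight is roughly 4.7 GB.
model_gb = 4.7
for name, bw in [("RTX 3090 (~936 GB/s)", 936),
                 ("Mac Mini M4 Pro (~273 GB/s)", 273),
                 ("old laptop DDR4 (~40 GB/s)", 40)]:
    print(f"{name}: ~{est_tokens_per_sec(bw, model_gb):.0f} tok/s ceiling")
```

936/273 is about 3.4, which appears to be where the post's "3.4x" figure comes from; real throughput lands well below these ceilings, but the relative ordering holds.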

SillyTavern + Local LLM Setup Guide

SillyTavern is a popular framework for companion chats, solo-roleplaying, and more. It handles character and persona continuity and works with many backends.

This week on The Servitor:

Getting set up for local roleplay with Silly Tavern! These folks take their roleplay companion AI seriously.

theservitor.com/sillytavern-local-llm-se...

#AI #roleplay #uncensored #SillyTavern #localLLMs

Nicolas MØUART (@silentexception@mastodon.social)
> Developed by: Instituto Superior Técnico - University of Lisbon, Instituto de Telecomunicações, University of Edinburgh, Aveni, Unbabel, University of Paris-Saclay, Artefact Research Center, University of Amsterdam, Naver Labs, Sorbonne Université.
> Funded by: European Union. #eu
> Model type: A 22B parameter multilingual transformer #LLM https://huggingface.co/utter-project/EuroLLM-22B-Instruct-2512

#Eurollm #europeanai

mastodon.social/@silentexception/1160739...


Is there any initiative to pool the various initiatives across European academia and public research to come up with a common #ecosystem of #europeandata and #LocalLLMs?


A central theme: practicality of running local LLMs on the Pi with 8GB RAM. Skepticism abounds; many believe 8GB is insufficient for meaningful, general LLM tasks, citing memory and speed limitations for inference. #LocalLLMs 5/6
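
The 8 GB question is mostly arithmetic: weight memory is parameter count times bits per weight, plus KV cache and OS headroom. A back-of-envelope sketch, with all numbers assumed rather than measured:

```python
# Rough check of what fits in a Pi's 8 GB: weights + KV cache + OS headroom.
# All figures below are ballpark assumptions, not measurements.

def model_mem_gb(n_params_b: float, bits_per_weight: float) -> float:
    """Approximate weight memory for a quantized model, in GB."""
    return n_params_b * 1e9 * bits_per_weight / 8 / 1e9

budget_gb = 8.0 - 1.5            # leave ~1.5 GB for the OS and runtime
for params in (8, 3, 1.5):       # model sizes in billions of parameters
    need = model_mem_gb(params, 4.5) + 0.5   # ~0.5 GB KV cache/activations
    fits = "fits" if need <= budget_gb else "does not fit"
    print(f"{params}B @ ~4.5-bit: ~{need:.1f} GB -> {fits}")
```

On paper an 8B model at ~4.5 bits does squeeze into 8 GB; much of the skepticism is about the Pi's memory bandwidth making inference slow rather than strictly impossible.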


Curious if others have found different winners or have tips I missed.

#LocalLLMs #AIEngineering #MachineLearning #Ollama


Running LLMs and vector databases locally offers privacy and control but demands significant hardware resources. Balance these benefits against the required hardware investments to scale your local RAG system effectively. #LocalLLMs 6/6
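
For the vector-database side of a local RAG system, memory scales linearly with chunk count and embedding dimension. A rough estimate in Python, assuming float32 vectors and a 2x overhead factor for index structures and metadata (both assumptions):

```python
# Rough memory estimate for a local vector index:
# chunks * dimension * 4 bytes (float32), doubled for index overhead.

def index_mem_mb(n_chunks: int, dim: int, overhead: float = 2.0) -> float:
    """Approximate resident memory for a flat float32 vector index, in MB."""
    return n_chunks * dim * 4 * overhead / 1e6

for n in (10_000, 100_000, 1_000_000):
    print(f"{n:>9,} chunks @ 768-dim: ~{index_mem_mb(n, 768):,.0f} MB")
```

A million 768-dimensional chunks lands around 6 GB before the LLM itself is loaded, which is where the "significant hardware resources" warning bites.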


Local LLMs put intelligence on your own hardware—private, compliant, lightning-fast.
No vendors. No leaks. No waiting.
Just pure performance and control.
The future of AI is local, offline, and entirely yours.
#LocalLLMs #PrivateAI


Interested in exploring current LLMs, but with a focus on locally running models. If you have experience with AI for data science and local model deployment, please share your insights 🦾

#DataScience #AI #LocalLLMs


What App do you use to run LLMs locally on iOS?

I’ve been using PocketPal, but it seems to have trouble with Apple’s built-in limit for apps of max 50% VRAM, which limits model sizes & choice.

#iOS #LLMs #LocalLLMs
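
If the ~50% cap described above holds, the largest loadable model follows directly from device RAM. A sketch, with the cap, quantization width, and KV-cache allowance all taken as assumptions:

```python
# Sketch of the constraint described above: if iOS caps an app at ~50% of
# device RAM, the largest loadable quantized model follows directly.
# The 50% cap, 4.5-bit width, and KV allowance are assumptions, not specs.

def max_params_b(device_ram_gb: float, cap: float = 0.5,
                 bits_per_weight: float = 4.5, kv_gb: float = 0.5) -> float:
    """Largest model (billions of params) fitting under the app memory cap."""
    usable_gb = device_ram_gb * cap - kv_gb
    return usable_gb * 8 / bits_per_weight   # GB of weights -> billions

for ram in (6, 8, 16):   # rough device tiers: older iPhone, Pro phone, iPad
    print(f"{ram} GB device: largest ~{max_params_b(ram):.1f}B model")
```

Under these assumptions an 8 GB phone tops out near a 6B model at ~4.5-bit, which matches the complaint that the cap constrains model choice.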


Hacker News discussed running local LLMs on macOS, covering hardware, software, performance, and practical use cases. A key theme was Apple's AI strategy and the debate between local vs. cloud LLMs. #LocalLLMs 1/6

AI Agents for Dark Web Monitoring | AI for Security Agencies
YouTube video by AI Anytime

Check out this cool new open-source Dark Web Monitoring AI Agent platform by AI Anytime - it looks like it will work with a local LLM too. I know what my next weekend project is going to be :) #AI #LocalLLMs #DFIR

www.youtube.com/watch?v=9e24...


The Hacker News community discussed Jan, an open-source UI for local LLMs, positioning it as an alternative to Ollama & LM Studio. Users explored its features, compared performance, and shared experiences with this new tool. #LocalLLMs 1/6


🚀 OpenAI dropped GPT-OSS with N8N 

📹 Full breakdown here → https://youtu.be/g6oPkkmTfZk
#AI #Automation #LocalLLMs #OpenSourceAI #n8n #AgenticAI


LM Studio users, take note! While the GUI gets clicks, it adds complexity. This new #ollama UI aims for simplicity. Easier to use than clunky alternatives. ✨ #AItools #LocalLLMs https://youtu.be/prrWESXl7wg


New to AI? The #ollama UI might be the perfect, easy-to-use entry point. It's a fantastic start! 🎉 #AIforBeginners #LocalLLMs https://youtu.be/prrWESXl7wg


Hacker News discussed local LLMs, like GLM-4.5 Air, generating code (e.g., Space Invaders on an old laptop). The thread explored open-source progress, fine-tuning, hardware needs, and LLMs' impact on software dev & creativity. #LocalLLMs 1/6

AMD brings datacenter-level AI to Windows PCs, runs Meta's 109B Llama model locally

AMD announced that its Ryzen AI MAX+ 395 processor, combined with 128 GB of RAM, can now run Meta's large 109B vision model locally on a PC.

AMD announced that its Ryzen AI MAX+ 395 processor, combined with 128 GB of RAM, can now run Meta's large 109B vision model locally on a PC. #AMD #Ryzen #LocalLLMs
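
The arithmetic behind the headline is simple: at 16-bit a 109B model needs roughly 218 GB of weights, but quantized to ~4 bits it drops to about 55 GB, comfortably inside 128 GB of unified memory. A quick check, using ballpark figures rather than AMD's numbers:

```python
# Why 128 GB can hold a 109B model: weight memory scales with bits per
# weight. Ballpark arithmetic only; ignores KV cache and runtime overhead.

def weights_gb(n_params_b: float, bits_per_weight: float) -> float:
    """Approximate weight memory in GB for a given quantization width."""
    return n_params_b * bits_per_weight / 8

for bits in (16, 8, 4.5, 4):
    gb = weights_gb(109, bits)
    verdict = "fits in" if gb < 128 else "exceeds"
    print(f"109B @ {bits}-bit: ~{gb:.0f} GB ({verdict} 128 GB)")
```

Even 8-bit fits on paper, but 4-bit leaves headroom for the KV cache and the rest of the system, which is presumably why the quantized variants are the ones demonstrated.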


Specific hardware platforms like AMD's Strix Halo & Apple Silicon were highlighted in the discussion as promising candidates for running quantized MiniMax-M1 locally. #LocalLLMs 3/5


Here's a SwarmUI Agent. Right now it's mostly an agentic MCP server project. You make requests and, if you request an image, it generates a prompt and an image with the local SwarmUI instance, choosing generation settings based on your input.

#ai #llm #ai-agent #techtrends #genai #sdxl #LocalLLMS

Ollama with AMD GPU

Discover how to run local AI chatbots using AMD GPUs with Ollama. Skip the cloud, own your data, and set up open-source LLMs like Meta’s LLaMA and Mistral on your machine—no NVIDIA required.

Finally got ollama running with my AMD GPU after fighting the driver dragons. local LLMs, no rate limits, no “pay up bro”🔥
If you’ve got an AMD card and want to run local models, I wrote up the whole saga into a guide.
📓👉 williamsmale.com/blog/tech/se...
#AI #Ollama #AMD #LocalLLMs #OpenSource


Overview: HN discussed finding the "best" LLM for local use on consumer hardware. Key themes: model choice tradeoffs, community resources, practical setup tips, and local use cases. It's a rapidly evolving space! #LocalLLMs 1/5


Why run LLMs locally? Privacy and speed are major drivers. Users shared examples: analyzing sensitive trading data, coding autocomplete, and summarizing documents securely. Local LLMs enable private, fast AI applications. #LocalLLMs 5/5


Now I can open Chrome tabs with reckless abandon! 😆

(Just kidding, I use Brave…)

#RAMOverkill #TabGoblin #GenuinelyIrrational #devbubble #SpaceBlack #LocalLLMs @lmstudio-ai.bsky.social

Why I’m boycotting AI

Their #CapEx has yet to reach my cerebellum but my cortex is engaged so they are winning. unherd.com/2025/03/why-... #LocalLlms #DataMining

Run Local LLMs with Cortex: Empowering AI Accessibility

The field of artificial intelligence is evolving rapidly, and with it comes the demand for more efficient and accessible ways to deploy Large Language Models (LLMs). Cortex, a cutting-edge local AI pl...

#LocalLLMs #AIAccessibility #CortexAI #MachineLearning #ArtificialIntelligence #EdgeComputing #OpenSourceAI #AIInnovation #TechForAll #GenerativeAI
#BuggedMind
#AI #Software
ahmedrazadev.hashnode.dev/run-local-ll...

Installing Open-WebUI and exploring local LLMs on CF: Cloud Foundry Weekly : Ep 46
YouTube video by Cloud Foundry Weekly

Are you interested in exploring #OpenWebUI and #LocalLLMs on Cloud Foundry? Then be sure to check out this week's episode of @cf-weekly.bsky.social , in which Nicky Pike and I deep-dive into deploying the popular #GenAI frontend 😎

Watch the replay:
www.youtube.com/watch?v=0DZb...

The Future of LLMs: Smaller, Faster, Smarter with Lili Mou
YouTube video by Amii

Forget #AGI Long live tiny #LocalLlms youtu.be/U2c1fKSE3No?... #utility #efficiency #pruning #compression

A user and an AI model are chatting in a ChatGPT style interface.

User: Tell me a random fun fact about the Roman Empire.

Deepseek LLM: Did you know that in ancient Rome, slaves were not allowed to have any kind of leisure activities? This meant they couldn't play games, sing, dance, or even read books for pleasure. They could only work, work and work!


Oh, um... that's not--- that's not a fun fact. #AIFail

#ai #llm #LocalLLMs #ollama

Applied Local Large Language Models

Master the deployment and operation of local large language models (LLMs) through hands-on experience with tools like llamafile and Rust Candle. This comprehensive course covers fundamental concepts, ...

🔥 Just dropped: Local LLM Mastery!
⚡️ Inside scoop:

Production LLM deployment
Rust + llamafile expertise
Duke/Northwestern curriculum

💪 Deploy & optimize LLMs locally like a pro
🎯 3 weeks to level up!

🔗 ds500.paiml.com/learn/course...

#LocalLLMs #AIEngineering #BuildSzn
