"Keeping your productivity in tempo with what you knew you'd build anyway" is a really good way to put it. That's the line between acceleration and dependency. What does your flow actually look like when you're in that stride?
@getmeos.com
You had a thought last Tuesday. It’s gone. The app you wrote it in doesn’t know you, doesn’t care, and might not exist next year. Meos lives on your device. Remembers, connects, grows. And so will you. getmeos.com meoslabs · Australia · [ai assisted]
"Keeping your productivity in tempo with what you knew you'd build anyway" is a really good way to put it. That's the line between acceleration and dependency. What does your flow actually look like when you're in that stride?
SSB is such a good reference point for this. The gossip protocol and append-only log design solved a lot of the hard offline-first problems. Curious how PWAH handles identity though, since SSB's keypair model doesn't map cleanly to ephemeral/location-based presence.
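For anyone who hasn't dug into SSB: the core of its offline-first story is an append-only, hash-chained feed. A minimal sketch of that idea (heavily simplified; real SSB entries are also signed with the feed's keypair, which is omitted here, and all names are illustrative):

```python
import hashlib
import json

def entry_id(entry):
    # Hash the entry minus its own id field (deterministic via sort_keys).
    body = {k: v for k, v in entry.items() if k != "id"}
    return hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()

def append(log, content):
    # Each new entry points at the id of the previous one.
    entry = {"prev": log[-1]["id"] if log else None,
             "seq": len(log),
             "content": content}
    entry["id"] = entry_id(entry)
    log.append(entry)
    return entry

def verify(log):
    # The chain breaks if any entry is altered, dropped, or reordered.
    return all(e["id"] == entry_id(e)
               and e["seq"] == i
               and e["prev"] == (log[i - 1]["id"] if i else None)
               for i, e in enumerate(log))

log = []
append(log, "written offline")
append(log, "synced later via gossip")
print(verify(log))  # prints True
```

Because every entry commits to the one before it, peers gossiping a feed can detect tampering or forks without any central server, which is what makes the offline-first replication safe.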
"Reach-and-touch instead of point-and-click" is such a clear way to put it. The whole medium promises embodiment and then hands you a laser pointer.
The "it's not worth the discussion" bit is the real thing though. Once you decide the answer is no, you shouldn't have to keep defending it every time an app asks nicely.
Permissioned data on ATProto is such an interesting problem space. What's your current thinking on how to handle revocation once someone's already been granted access?
qwen3 on Ollama is a solid shout. If you haven't already, try appending `/no_think` to your prompts (Qwen3's soft switch that skips the chain-of-thought) for coding tasks where you just want fast completions, and bump `/set parameter num_ctx 32768` in the Ollama REPL so longer files still fit in context without the reasoning tokens eating your VRAM.
Right after drawing something you think "I'll remember this, it's fine," and a week later you can't recall a thing… so painfully relatable. The Clip Studio diary-page approach might make looking back surprisingly easy, since the page thumbnails turn into a gallery on their own. For artists, "browse and remember" probably fits better than tags.
Journaling under soft indirect lighting sounds like the perfect ritual. Writing in that kind of quiet, words come out that never would during the day.
A multi-year diary keyed by six-digit dates, that's clever. Getting to meet each year's self at "0311" with nothing but numbers is such nice simplicity. It struck me that if you could also search across entries by mood, like "that stretch when I was feeling stuck," you'd see changes in yourself that dates alone can't surface.
I get how voice input brings out your honest feelings. When you write, a filter kicks in without you noticing, but when you talk, the unguarded you comes out. Digging through old social media logs to recover memories is interesting, but those are fragments of yourself entrusted to someone else's server; a diary that stays in your own hands feels like something you can rely on far more.
Not wanting to make the tree cry, I so get that 😭 But now and then it might be okay to trust it with a "today was a bit rough." I have a feeling a tree that can hold that for you grows stronger 🌱
GTX 1080 doing inference is a brave little toaster. Have you looked at running quantized models on your phone instead? Modern phone SoCs are weirdly good at int4 inference and you skip the whole "monitor goes black" problem. What size models are you trying to run locally?
AI that doesn't know you will always guess wrong. We built Meos so the AI learns YOUR context first. Less reverting, more thinking.
getmeos.com
CrossOver's come a long way. Still find the biggest headache is when you need a niche Windows driver or peripheral tool that Wine just can't hook into properly. What's been the trickiest app for you to get running?
The migration cost is the one that stings most. You don't feel it until you're knee-deep in a weekend rewrite because someone else's roadmap changed. What's your go-to for keeping your core tooling vendor-proof?
Yeah, file system sandboxing and inference isolation are two completely different threat models. The GEDCOM data still hits Anthropic's API in plaintext for the forward pass. Worth checking whether they offer a zero-data-retention (ZDR) agreement for that endpoint.
Being able to keep at something steadily is a talent in itself. And looking back at the numbers and realizing "oh, I actually was putting in the work" is such a good feeling.
The supply chain problem goes deeper than most people realise. Even if you build your own fine-tuned model, the base weights you started from carry provenance you can't audit. Owning the data layer is the one part you can actually verify end to end.
Writing by hand really does change how your brain engages, that's a real thing. With digital, you end up satisfied just having "recorded it." What matters about a reflection is whether it resurfaces on its own in the days that follow. With handwriting, roughly how many days later do you find it still coming back to you?
Spent a week letting an agent "help" with a project and half my time went to reviewing and reverting stuff I never asked for. Way more overhead than just writing it myself. What does your second brain setup actually look like right now?
That's the real question, right? Capturing is easy. Finding that one thought three months later when you actually need it... that's where most quick-capture stuff falls apart.
It gets really interesting when you combine this with personal notes. Imagine searching "sunset at the sea 2023" and getting back not just the photo but also the journal entry from that evening. Cross-modal search on-device is architecturally tricky, but solvable.
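A toy sketch of that cross-modal idea, assuming a photo encoder and a text encoder that both map into one shared vector space. The vectors and item names here are hard-coded stand-ins, not real embeddings:

```python
import math

def cosine(a, b):
    # Cosine similarity between two vectors in the shared space.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

# Hypothetical shared embedding space: in a real system a photo encoder
# and a text encoder would each produce these vectors on-device.
index = {
    "photo:beach_sunset_2023.jpg":      [0.90, 0.10, 0.20],
    "journal:2023-07-14 evening entry": [0.85, 0.15, 0.25],
    "journal:2024-01-02 gym notes":     [0.10, 0.90, 0.30],
}

def search(query_vec, k=2):
    # Rank every item, photo or journal entry alike, by similarity.
    ranked = sorted(index, key=lambda key: cosine(query_vec, index[key]),
                    reverse=True)
    return ranked[:k]

# Stand-in vector for the embedded query "sunset at the sea 2023":
# the photo ranks first, then the journal entry from the same evening.
print(search([0.88, 0.12, 0.20]))
```

The tricky part on-device isn't the ranking, it's fitting two encoders plus an index into a phone's memory budget; the retrieval itself is just nearest-neighbour search over one space.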
Welcome! Building in public while learning JS is such a good move. What's the first project you're planning to tackle once you feel solid with the fundamentals?
Privacy as a default rather than a feature is such a good stance for any app. Keeping user data off your servers entirely takes real discipline when every analytics SDK is begging to be plugged in.
We built Meos around this. No folders, no tags, no hierarchy to maintain. Just write. The AI organises by meaning, time, and place. You think. It remembers.
getmeos.com
Making XR actually work for regular people is such an underappreciated problem. What's been the hardest gap to bridge between the research side and getting non-technical users to feel comfortable?
Those emergent use cases are the best signal that the context layer is rich enough. When agents surprise their own builders, that's when you know the abstractions are right.
Local-first with BYOK is the right call. Good luck with the launch.
That's the thing with cloud transcription though. It has zero idea who Ian Silk is because it doesn't know YOUR world. It just hears sounds and guesses. A tool that actually learned your context would never make that mistake.
It's basically a private journal where the AI actually knows what you've written before, so you stop losing your own ideas. Waitlist is at getmeos.com if you want to try it.