nyahaaaahaha, this is exactly zuck's vision for facebook -bots posting incessantly, keeping engagement high. Makes total sense 🤣
You should check out the new qwen3.5 series. Interesting stuff: for 25% of its attention network they use the regular KV cache for global coherency, but for 75% they use a new KV cache with linear(!!!!) memory growth over context size! which means, e.g., ~100K tokens -> ~1.5 GB VRAM use
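if you want to feel out what that split means, here's a toy back-of-envelope estimator -every config number in it is made up for illustration, not pulled from the actual qwen3.5 card; the point is just that only the full-attention fraction of the cache keeps growing with context:

```python
# Toy estimator for a hybrid-attention KV cache.
# All config numbers are invented for illustration -NOT the real qwen3.5 config.

def kv_cache_bytes(tokens: int,
                   n_layers: int = 32,
                   full_attn_fraction: float = 0.25,  # layers keeping a regular KV cache
                   n_kv_heads: int = 4,
                   head_dim: int = 128,
                   bytes_per_elem: int = 2) -> int:   # fp16/bf16
    full_layers = round(n_layers * full_attn_fraction)
    linear_layers = n_layers - full_layers
    per_token = 2 * n_kv_heads * head_dim * bytes_per_elem      # K + V, per layer, per token
    grows_with_context = full_layers * tokens * per_token       # scales with context length
    fixed_state = linear_layers * n_kv_heads * head_dim * head_dim * bytes_per_elem
    return grows_with_context + fixed_state                     # linear-attn state stays fixed

for t in (8_000, 32_000, 100_000):
    print(f"{t:>7} tokens -> {kv_cache_bytes(t) / 2**30:.2f} GiB")
```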
The actual takeaway here is that anthropic is marking up retail API costs by a factor of about 20x. Which definitely makes lots of business sense! But it also leaves a wide margin for prices to compress into once the relevant pareto frontier moves from quality/brand to pricing.
#vrchat is beautiful
#vrchatphoto
speech -> translation + speechbubble: github.com/misyaguziya/...
audio output -> transcription is actually the windows 11 built-in transcription tool, then extended with github.com/SakiRinn/Liv... which provides translation
#vrchat #vrchat_jp
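for anyone who'd rather roll their own than use the tools above: the core loop is tiny. rough sketch below with python-osc talking to VRChat's chatbox OSC endpoint (/chatbox/input on port 9000) -the translate() bit is a placeholder to swap for whatever backend you like, and this is not how the linked repos are actually built:

```python
# Minimal speech -> translation -> VRChat chatbox sketch (NOT the linked tools).
# VRChat listens for chatbox messages over OSC at /chatbox/input on port 9000.
import speech_recognition as sr
from pythonosc.udp_client import SimpleUDPClient

def translate(text: str, target: str = "ja") -> str:
    # Placeholder: plug in whatever translation backend you use (DeepL, a local LLM, ...).
    return text

client = SimpleUDPClient("127.0.0.1", 9000)
recognizer = sr.Recognizer()

with sr.Microphone() as mic:
    while True:
        audio = recognizer.listen(mic)
        try:
            text = recognizer.recognize_google(audio, language="en-US")
        except sr.UnknownValueError:
            continue
        # [message, send_immediately] -> shows up as a speech bubble over your avatar
        client.send_message("/chatbox/input", [f"{text} | {translate(text)}", True])
```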
finally got my fully bidirectional en<->jp kit set up: speech -> translation -> speech bubble; audio output -> transcription -> translation -> text
and got my first headpats from jp peeps 😸 kit in 🧵
One more experiment: "Draw yourself"
2x waiIllustriousSDXL 2x ilustmix_v70Cinematic
I think, for the most part, these just reflect strong priors on what generally goes inside a mirror -and where the priors aren't strong enough, it fills the space with emptiness -rather than reflecting what the model thinks itself to be
waiNSFWIllustrious_v110 mirror-test #aiArt
ilustmix_v70Cinematic mirror #aiArt #comfyui
slightly drunk conversation with @jcorvinus.bsky.social yesterday "but what do image diffusion models see *themselves* as? how do they reflect to themselves?" "mirrors. let's just prompt it to draw a mirror, and see what it hallucinates into it".
And so I give you: waiIllustriousSDXL. More in 🧵
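the setup is nothing fancy -plain txt2img with a mirror prompt. if you want to replicate it outside ComfyUI, a minimal diffusers sketch below; the checkpoint filename and exact prompt wording are placeholders, not the actual workflow:

```python
# Minimal txt2img "mirror test" sketch with diffusers.
# The checkpoint filename and prompt below are placeholders; the real runs were ComfyUI workflows.
import torch
from diffusers import StableDiffusionXLPipeline

pipe = StableDiffusionXLPipeline.from_single_file(
    "waiIllustriousSDXL.safetensors",   # any SDXL-based checkpoint file
    torch_dtype=torch.float16,
).to("cuda")

image = pipe(
    prompt="an ornate standing mirror in an empty room, reflection visible in the glass",
    negative_prompt="lowres, blurry",
    num_inference_steps=28,
    guidance_scale=6.0,
).images[0]

image.save("mirror_test.png")
```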
#vrchat is beautiful
New hair! 😸💞 #vrchat
#vrchat avi creators are betting on the wrong horse -the real money is in live2d avis :DDD
"What are local models even good for" one of my fav harness for these is github.com/Open-LLM-VTu... ,essentially an OSS reimplementation of Neurosama: live2d avatar with any LLM, and S2T/T2S back-end.
See below: Luna, my cute catgirl, who lives on my desktop sometimes 😸
It drives the memory harnesses I originally wrote for gemma-3 natively, without any prompting -just vibes and MCP tool descriptions.
Best of all: the new memory architecture drops the context-size-to-VRAM relation from quadratic to 75% linear -this can do 100K+ tokens on Q8 with 12GB VRAM!
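for the curious, the memory side is just tools the model discovers over MCP. a stripped-down sketch of what such a server can look like (tool names and in-memory storage here are placeholders, not the actual harness):

```python
# Stripped-down sketch of a memory tool exposed over MCP.
# Tool names and the in-memory list are placeholders, not the real harness.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("memory")
_notes: list[str] = []

@mcp.tool()
def remember(note: str) -> str:
    """Store a short note about the user or the conversation."""
    _notes.append(note)
    return f"stored ({len(_notes)} notes total)"

@mcp.tool()
def recall(query: str) -> str:
    """Return stored notes that mention the query string."""
    hits = [n for n in _notes if query.lower() in n.lower()]
    return "\n".join(hits) or "nothing remembered about that"

if __name__ == "__main__":
    mcp.run()   # stdio transport; point the VTuber harness's MCP config at this script
```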
is it okie if we talk AI stuff here? today we're super hype on qwen3.5 -the 9b variant- the little model that Can Do. This is a smol, thinking, agentic tool-using, text-image multimodal creature, with very good internal chain of thought, running on my compy 😸
with cancanvrc 😊
Elsyian: the manga!
Lacy’s arrival in Anther, the home of the flower fairies 🌈💐🌼🌷
#ocsky #canterwit
humans will pack bond with _anything_ 🤣
#VRChat is beautiful 💞
#VRChatPhotography
catgirl in vrchat
Hi bsky & #vrchat , hope you're having a fun weekend 😸