Ok [*rolls sleeves*] I want to properly understand money. What do I read?
@matheus23.com
Building iroh with the amazing folks at number 0 (n0.computer). Generally striving to increase user agency and excited about commons networks. Only works for Canadian CEOs, apparently. Rust, cryptography, CRDTs & more on my feed
I'd love to hear a "common misconception around bevy" take of yours :)
Screenshot of Aljoscha's message on Discord: "I'm not on Bluesky, so I can't chime in directly. But in a nutshell:
- you can incrementally recompute the root hash when appending data, in amortised `O(l)` time when appending `l` bytes (worst case per append is `O(l + log(n))`), where `n` is the total length so far
- this means you can hash a string incrementally in linear time
- this is identical in Blake3 and Bab
- the main differences between Blake3 and Bab:
  - Bab has constant-size length proofs
  - in Bab you can speed up computation when the input string repeats (for example, you could compute the hash of `n` successive zero bytes in `O(log(n))` time, whereas Blake3 deliberately requires `O(n)` there to thwart timing attacks)
  - Bab admits multiple instantiations (different digest sizes, different Merkle tree label computations); Blake3 is one-size-fits-all"
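The amortised `O(l)` append that Aljoscha describes comes from keeping a stack of subtree roots, one per completed power-of-two subtree. As a toy sketch of that idea (SHA-256 leaves and made-up domain-separation prefixes, small chunks for illustration; this is NOT BLAKE3's actual tree mode, which uses 1024-byte chunks, chunk counters, and flags):

```python
import hashlib

CHUNK = 4  # tiny chunk size for illustration only

def leaf(data: bytes) -> bytes:
    return hashlib.sha256(b"leaf" + data).digest()

def parent(left: bytes, right: bytes) -> bytes:
    return hashlib.sha256(b"node" + left + right).digest()

def merkle_root(data: bytes) -> bytes:
    """One-shot reference: hash all chunks, then pairwise reduce."""
    chunks = [data[i:i + CHUNK] for i in range(0, len(data), CHUNK)] or [b""]
    level = [leaf(c) for c in chunks]
    while len(level) > 1:
        nxt = [parent(level[i], level[i + 1])
               for i in range(0, len(level) - 1, 2)]
        if len(level) % 2 == 1:
            nxt.append(level[-1])  # odd subtree is promoted unchanged
        level = nxt
    return level[0]

class IncrementalMerkle:
    """Append chunks one at a time; the stack holds at most
    log2(n) subtree roots, so appends are amortised O(1) merges."""
    def __init__(self):
        self.stack = []  # list of (subtree_root, num_chunks)

    def append_chunk(self, chunk: bytes):
        self.stack.append((leaf(chunk), 1))
        # merge equal-sized neighbours, like binary carry propagation
        while len(self.stack) >= 2 and self.stack[-1][1] == self.stack[-2][1]:
            (r, sr), (l, sl) = self.stack.pop(), self.stack.pop()
            self.stack.append((parent(l, r), sl + sr))

    def root(self) -> bytes:
        if not self.stack:
            return leaf(b"")
        # fold the leftover subtrees right-to-left, without mutating state
        acc, _ = self.stack[-1]
        for h, _ in reversed(self.stack[:-1]):
            acc = parent(h, acc)
        return acc
```

Appending `n` chunks performs `O(n)` merges in total (each chunk is merged at most once per tree level it climbs), which is where the amortised linear-time incremental hashing comes from.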
I asked Aljoscha to fact-check on discord:
Iroh blobs uses the exact same construction that BLAKE3/bao uses. Files hashed with iroh-blobs give you BLAKE3 hashes.
We store the inner tree hashes only up to 16KiB chunks and recalculate the rest on the fly, but other than that it's identical to bao.
And actually a minor correction: I think I was wrong claiming that length changes require rehashing. Seems like that's not the case.
All good :) Bab is cool IMO. Just wanted to correct the fact that bao also supports random access and took the chance to blurt out some additional facts
And IIRC this comes with the downside of needing to know the size of what you're hashing in advance/needing to rehash everything on append.
Bab is inspired by BLAKE3/bao (hence the similar name).
Both feature random access.
IIRC, something that Bab improves over bao is that each random access Merkle Proof also comes with a proof of total size.
Given this methodology, it might even be a low estimate. Servo likely tackled the low-hanging fruit with big impact first, so this report assumes that stays the same.
screenshot of the one-pager version of the servo readiness report
How do we get to more than just three web engines owned by three US companies?
It's a gargantuan question, with no easy or right answer.
I've put together a draft report, thinking about it through a very specific approach - please enjoy:
Servo Readiness Report
webtransitions.org/servo-readin...
I have a lot of worries, but one is that expectations of speed will increase to the extent that I can neither take the time to handwrite something when I want to, nor take the time to collaborate with an LLM at a level of quality that would previously have been prohibitively time-consuming.
A clean looking graphic with sharp lines and crisp colour
The same graphic, but muddy and blocky. The previously sharp lines are blurry.
I think it's often overlooked that AVIF is also really good at flat colour & sharp edges.
Don't go straight for a lossless format just because it's the kind of image that would look bad as a JPEG.
Here's an 11 kB image as an AVIF, vs JPEG XL.
I love this! Was just looking through reverse dependencies of bevy yesterday.
No WebRTC in iroh! And so no base64 overheads.
Virtual networked memory?
There's this account:
bsky.app/profile/auto...
which has an automated list for e.g. unverified accounts over 10k followers, so you can mute/block them all at once.
I'm strongly in team "any markdown field could have been a facet"
Many plates spinning right now, but I keep feeling like I need to port/rewrite some of the atjson tooling, because atproto facets really need better tooling.
Oooh my god this feels sooo hard to read after working in GitHub all day and dealing with buggy CI, slow page loads and various UI bugs.
making software that’s good and durable and solves a user problem remains hard
architecture remains an essential consideration and thinking through it is a job
while you can speed them up, you cannot replace an engineer with a robot
their skills and insights…
Remain special.
Two chat rooms next to each other showing the same thing, a new chat with a video ("I don't wanna do the work today") inline
need to figure out adding friends in an easier flow (right now you both have to have added each other to the friend list to start getting status pings) but p2p chat is very easy to build with iroh
We plan on integrating WebRTC eventually, but then we lose control over how exactly the holepunching works, so it all depends on how well the browser implements it (i.e. you don't benefit from our tireless work of getting holepunching right).
so I personally like to think of this more like a "progressive enhancement" thing where you point people towards a native app that can actually do peer to peer stuff.
Yeah. We have examples in the iroh-examples repo that check whether stuff compiles to Wasm & runs.
The only downside is that you can only send data via relays in browser at the moment.
Well, I for one, am excited to read more 👀
Friendly folks wrote a great article about Privacy Pass, a way of cryptographically unlinking the issuance and redemption of auth tokens:
research.chainbound.io/private-api-...
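The article covers the real protocol; as a toy illustration of the unlinkability trick at its core, here's an RSA blind-signature sketch in Python. All parameters are deliberately tiny and insecure, and the function names are made up for this sketch; real Privacy Pass deployments use RFC 9474 blind RSA or a VOPRF:

```python
import hashlib
import secrets
from math import gcd

# --- toy issuer key (illustrative; real keys are 2048+ bits) ---
p, q = 1009, 1013
n = p * q
e = 65537
d = pow(e, -1, (p - 1) * (q - 1))  # issuer's private exponent

def token_to_int(token: bytes) -> int:
    """Hash the client's random token down to an integer mod n."""
    return int.from_bytes(hashlib.sha256(token).digest(), "big") % n

def blind(m: int):
    """Client: blind the token before sending it for issuance."""
    while True:
        r = secrets.randbelow(n - 2) + 2
        if gcd(r, n) == 1:
            break
    return (m * pow(r, e, n)) % n, r

def issue(blinded: int) -> int:
    """Issuer: sign the blinded value; it never learns m itself."""
    return pow(blinded, d, n)

def unblind(blind_sig: int, r: int) -> int:
    """Client: strip the blinding; the result is a signature on m."""
    return (blind_sig * pow(r, -1, n)) % n

def verify(m: int, sig: int) -> bool:
    """Verifier at redemption time: check sig^e == m (mod n)."""
    return pow(sig, e, n) == m
```

Because the issuer only ever sees `m * r^e mod n` for a random `r`, the signature it later sees at redemption is statistically unlinkable to any particular issuance, which is the property Privacy Pass is after.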
inspired by CLAUDE.md, I’ve started putting markdown files named after coworkers into work code repos so I can remind them to stop doing shit to the codebase that annoys me
for some reason they’re all mad at me now, which means I’ll be adding commands to JEREMY.md for an attitude adjustment
Ah right. Thanks for clarity. For some reason I was thinking of the cases where one shows both the command and output.
Yeah for command only that 100% makes sense.
Why not? (Genuinely)
BTW, this may be uncharitable and ignorant of me but I totally interpret this arbitrary 16 GiB limit in the browser as "That one mistake—virtual mem is a proxy for real mem—that people keep making", which the entire smalloc project is kind of an attempt to dispel. github.com/zooko/smallo...