It needs to propagate, by carrier pigeon!
Wrote a short blog post praising the direction the Mojo programming language is taking.
mzaks.medium.com/when-magic-b...
Last Wednesday I gave a 25-minute talk on Mojo, the new programming language from Chris Lattner. In the talk I give an overview of what makes Mojo special in the field of AI/ML and as a programming language in general. www.youtube.com/live/Wi6xnD-...
I wrote something today.
mzaks.medium.com/was-2025-the...
Did some weightlifting in my Mojo socks today.
Achieved 20% of SOTA performance.
Was I wearing my socks in debug mode?
One interesting side effect of introducing defaults is:
Now users can deprecate required fields if they provide a default. As the default is stored in the binary, old code will be fine; users just need to identify what kind of default value makes sense for an older client.
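The mechanics described here can be sketched with a toy serializer (purely hypothetical Python, not Dagr's actual API; the field name and wire format are invented): the default is applied at write time, so the value is always present in the bytes and an old reader needs no default logic at all.

```python
import struct

# Schema-level default for an assumed field "retries" (invented example).
FIELD_DEFAULTS = {"retries": 3}

def build_record(**fields):
    # Fill in the default at write time, then ALWAYS serialize the value,
    # so readers never have to know a default existed.
    retries = fields.get("retries", FIELD_DEFAULTS["retries"])
    return struct.pack("<i", retries)

def read_record(data):
    # An "old" reader just reads the stored value; no default lookup needed.
    (retries,) = struct.unpack("<i", data)
    return retries
```

With this layout, making `retries` optional-with-default later never breaks an older client, because the client always finds a concrete value in the payload.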
For a union type you define the union case and then the prefab name of the type, or directly a value if the type is a primitive one.
This implies that users should be able to describe a default for a complex node if the field points to a node, or even better if it points to a union type.
Hence I decided to introduce a concept of a prefab. Users can add named prefabs to a node, defining an instance of the node.
This saves a little bit of payload, but stiffens the schema for later evolutions. I decided to always store the values in the binary so the defaults become only API convenience.
Another difference: I allow you to provide defaults for all types of fields.
Working on default values for fields in Dagr. Turned out to be a much deeper rabbit hole than I expected.
Formats like Protobuf and FlatBuffers allow users to define default values in the schema and then not store the values in the binary if they are equal to the default values.
Then it provides a function to generate the source code from the graph.
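Conceptually, a generator you build yourself on top of such building blocks might look like the following (purely hypothetical Python sketch; none of these names are Dagr's real API, which is an internal Mojo DSL):

```python
# Hypothetical "build your own generator" sketch: describe a graph,
# validate it, then emit source code from it. All names are invented.
from dataclasses import dataclass

@dataclass
class Field:
    name: str
    type_: str

@dataclass
class Node:
    name: str
    fields: list

def validate(graph):
    # Toy correctness check: field names must be unique per node.
    for node in graph:
        names = [f.name for f in node.fields]
        assert len(names) == len(set(names)), f"duplicate field in {node.name}"

def generate(graph):
    # Emit simple class stubs from the graph description.
    out = []
    for node in graph:
        out.append(f"class {node.name}:")
        for f in node.fields:
            out.append(f"    {f.name}: {f.type_}")
    return "\n".join(out)

graph = [Node("Person", [Field("name", "str"), Field("age", "int")])]
validate(graph)
print(generate(graph))
```

The point of the IKEA approach: the library ships the validated parts, and you assemble exactly the generator your project needs.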
What Dagr does not provide, is an executable.
You build the code generator yourself.
This reduces tons of “last mile” complexity.
What do I mean by that?
Dagr provides all building blocks for code generation. It has an internal DSL for expressing the data graph of your dreams. It validates the graph for correctness; it even checks if you already generated code with another graph and whether those graphs follow the evolution strategy.
Dagr is the IKEA of code generation.
I am also experimenting with bubbling up of the values on find for larger dictionaries. Something like self balancing the dictionary on usage. Often used keys should come first.
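This bubbling up resembles a classic move-to-front self-organizing list. A minimal sketch of the idea (hypothetical Python, not the actual Mojo implementation): each successful lookup swaps the found entry one slot toward the front, so frequently used keys migrate to where the linear search checks first.

```python
# Toy "bubble up on find" dictionary (invented names, illustration only):
# hot keys drift toward index 0, shortening their future linear searches.
class BubblingDict:
    def __init__(self):
        self.keys, self.values = [], []

    def __setitem__(self, key, value):
        self.keys.append(key)
        self.values.append(value)

    def get(self, key, default=None):
        for i, k in enumerate(self.keys):
            if k == key:
                value = self.values[i]
                if i > 0:  # bubble the hit one position toward the front
                    self.keys[i - 1], self.keys[i] = self.keys[i], self.keys[i - 1]
                    self.values[i - 1], self.values[i] = self.values[i], self.values[i - 1]
                return value
        return default
```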
Hence the striding of the hash values. This way, the linear search compares 16, 32, or 64 elements at a time (depending on the SIMD instructions your CPU supports) and is quite efficient for small dictionaries. The overhead of Smap is quite minimal.
Another experiment I am doing right now is called Smap. S stands for simple/small/strided …
The idea is to have just a list of key-value pairs and an additional buffer with hash values computed from the keys and strided over the SIMD width. The lookup is a vectorized linear search.
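A minimal sketch of the strided lookup (hypothetical Python with invented names, not the actual Mojo code; the chunked compare below stands in for one vectorized SIMD equality instruction over the hash buffer):

```python
# Toy Smap-style map: keys/values in plain lists, plus a parallel buffer
# of small per-key hashes scanned one "SIMD width" at a time.
STRIDE = 16  # pretend SIMD width: 16 one-byte hashes per compare

class Smap:
    def __init__(self):
        self.keys, self.values, self.hashes = [], [], []

    @staticmethod
    def _hash(key):
        return hash(key) & 0xFF  # tiny 8-bit hash per key

    def __setitem__(self, key, value):
        self.keys.append(key)
        self.values.append(value)
        self.hashes.append(self._hash(key))

    def get(self, key, default=None):
        h = self._hash(key)
        for start in range(0, len(self.hashes), STRIDE):
            block = self.hashes[start:start + STRIDE]
            # One "vector compare" of the whole stride at once.
            mask = [bh == h for bh in block]
            for off, hit in enumerate(mask):
                # A hash hit is only a candidate; confirm on the real key.
                if hit and self.keys[start + off] == key:
                    return self.values[start + off]
        return default
```

For a handful of entries the whole buffer fits in one or two compares, which is why the overhead stays minimal for small dictionaries.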
I’m in the associative array rabbit hole again. Implemented a StringDict where the key is a String. This restriction allows for compact key storage, but that makes compaction upon deletion expensive. But who deletes an entry in a dictionary? Am I right?
AI agents = retained GUIs with ambition
Same old state sync chaos, just more tools.
New post: mzaks.medium.com/what-do-ai-a...
Thanks!
If you wondered what kept me busy this year, here is a glimpse:
www.producthunt.com/posts/zencod...
Give it a try!
I would assume that, as the field is optional, the server should return null for the fields it doesn’t know, but it seems like the server returns a validation error.
I am genuinely surprised to see that GraphQL does not support forward compatibility.
Meaning an old server will error out when a new client sends it a query it can’t validate due to schema evolution, for example the addition of an optional field on a type.
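The failure mode can be simulated with a toy validator (hypothetical Python sketch, not real GraphQL server code; field names are invented): the old server rejects the whole query on any unknown field instead of degrading gracefully.

```python
# Toy simulation of the forward-compatibility gap: an "old" server
# validates incoming queries against its schema and errors out on any
# field it does not know.
OLD_SERVER_FIELDS = {"id", "name"}          # fields the old server knows
NEW_CLIENT_QUERY = ["id", "name", "email"]  # new client added "email"

def validate(query_fields, known_fields):
    unknown = [f for f in query_fields if f not in known_fields]
    if unknown:
        # Whole query fails, even though most of it is answerable.
        return {"errors": [f"Cannot query field '{f}'" for f in unknown]}
    return {"data": {f: None for f in query_fields}}

result = validate(NEW_CLIENT_QUERY, OLD_SERVER_FIELDS)
```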
I agree that it is always best to understand the domain and the business you are in, but will the prima donnas be affected by AI? I am very skeptical about that.
There are tons of use cases where this will not be a problem, but there are probably even more use cases where itβs not a good idea.
So you build something which you now need to maintain, and it is also built just for you; there is no guarantee others will find it useful. So instead of a system used by many and maintained/evolved by many, you lock yourself into a DIY solution.
You need to test the software, and although you build it for yourself and should know what you need, it’s often that what we believe is best for us is based only on a lack of imagination or experience. “Faster horse” and such…
This, however, didn’t make professional cooks obsolete. Moreover, most people prefer not to cook for themselves, be it out of convenience or an expectation of higher quality.
Building software for yourself is work; even if it is done by an AI assistant, you still need to explain everything.
A few days ago, I was thinking along the same lines. I came to an analogy with cooking. Nowadays everybody can cook a meal for themselves; there are different “tools”, different recipes and ingredients, e.g. instant noodles, where you just need boiling water.