AI-powered team collaboration
All the latest models keep breaking SOTA coding benchmarks, but I'm not sure they're that much better.
The only thing I'm sure about is that I'm rarely able to ask something without getting an overengineered solution and a random new README in my codebase.
I was wrong to trash React and other frontend frameworks for the past few years.
Once a project is big enough, they definitely make you more productive vs. vanilla JS, htmx, etc.
But I'm happy that I didn't switch earlier, writing frontend code without AI tools must be horrible.
Somehow, my most random side project made it to the FT!
It's a quick test designed to assess your estimation skills: estimator.dylancastillo.co/
This is inspired by @codinghorror's great posts: blog.codinghorror.com/how-good-an...
archive.is/qDc0v
The biggest life hack is having a job that feels like a hobby
Thank you, I'll update the article!
It's late but I finally finished my 2024 review.
Last year:
💵 I worked on 9 projects with 7 clients. Doubled revenue; costs are up by 155%.
💻 Coded 322 days. Wrote 14 blog posts.
🧠 Struggled with focus. Nearly burned out.
📸 I should've taken more photos.
dylancastillo.co/posts/2024-...
I got an email from Google saying that one of my side projects, deepsheet, got 1,000% more clicks.
After a bit of digging, I realized that it was just due to people misspelling "DeepSeek."
There are now people out there who think that China's top AI is a 💩 that makes charts.
New pinned tab
Always remember that using a response schema for an LLM is not the same as using one for your API.
Sounds easy, but happens to everyone.
Here's OpenAI breaking the CoT reasoning of an LLM judge.
Note to self: your only job is not to break the chain.
Here's the full post: dylancastillo.co/posts/gemin...
and the github code: github.com/dylanjcasti...
In any case, for me, the key takeaway is that SO can decrease (or increase!) performance on some tasks. Be conscious of that.
For now, there are no clear guidelines on where each method works better.
Your best bet is testing your LLM running your own evals.
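If it helps, "run your own evals" can be as small as this sketch (all names and numbers are hypothetical; in practice each format function would call the LLM with a different output mode):

```python
# Minimal eval-harness sketch: score each output format on the
# same dataset and compare accuracy.
def accuracy(generate, dataset):
    """generate(question) -> predicted answer; dataset: [(question, gold), ...]."""
    hits = sum(1 for question, gold in dataset if generate(question) == gold)
    return hits / len(dataset)

# Stand-in "models": one per output format you want to compare
# (NL, JSON-Prompt, JSON-Schema). Replace the lambdas with real calls.
formats = {
    "NL": lambda q: "4",
    "JSON-Prompt": lambda q: "4",
    "JSON-Schema": lambda q: "5",
}
dataset = [("2 + 2 = ?", "4")]
scores = {name: accuracy(fn, dataset) for name, fn in formats.items()}
```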
So, if you only consider constrained decoding (JSON-Schema), performance decreases across the board vs. NL.
Given this result and the key-sorting issue, I'd suggest avoiding JSON-Schema unless you really need it. JSON-Prompt seems like a better alternative.
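For reference, here's a rough sketch of what JSON-Prompt means (the prompt wording and field names are my own): describe the schema inside the prompt instead of using constrained decoding, then parse the reply yourself. Python dicts preserve insertion order, so "reasoning" stays ahead of the answer field.

```python
import json

# Hypothetical schema: reasoning first, answer last, to preserve CoT.
SCHEMA = {
    "reasoning": "string, think step by step",
    "solution": "string, the final answer",
}

def build_prompt(question: str) -> str:
    # Embed the schema in the prompt text (JSON-Prompt), rather than
    # passing it to the API for constrained decoding (JSON-Schema).
    return (
        f"{question}\n\n"
        "Reply with a JSON object with exactly these fields, in this order:\n"
        + json.dumps(SCHEMA, indent=2)
    )

def parse_reply(reply: str) -> dict:
    # No constrained decoding, so you validate/parse the output yourself.
    return json.loads(reply)
```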
Still, I could work around the issue and re-run the benchmarks. NL and JSON-Prompt are tied.
But JSON-Schema performed worse than NL in 5 out of 6 tasks in my tests. Plus, in Shuffled Objects, it did so with a huge delta: 97.15% for NL vs. 86.18% for JSON-Schema.
There's a propertyOrdering param documented in Vertex AI that should solve this: cloud.google.com/vertex-ai/g...
But it doesn't work in the Generative AI SDK. Other users have already reported this issue.
For the benchmarks, I excluded FC and used already sorted keys for JSON-Schema.
Before generation, they reorder the schema keys. SO-Schema does it alphabetically and FC does it in a random manner (?). This can break your CoT.
You can fix SO-Schema by being smart with keys. Instead of "reasoning" and "answer", use something like "reasoning" and "solution".
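The trick is easier to see in code. A minimal sketch, assuming SO-Schema sorts schema properties alphabetically before constrained decoding (the helper name is mine):

```python
def emitted_key_order(properties):
    """Order in which the model is forced to fill the fields,
    assuming alphabetical sorting of schema keys."""
    return sorted(properties)

# "answer" sorts before "reasoning", so the model must commit to an
# answer before it reasons -- breaking chain-of-thought.
broken = emitted_key_order(["reasoning", "answer"])

# Renaming "answer" -> "solution" keeps "reasoning" first.
fixed = emitted_key_order(["reasoning", "solution"])
```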
Gemini has 3 ways of generating SO:
1. Forced function calling (FC): ai.google.dev/gemini-api/...
2. Schema in prompt (SO-Prompt): ai.google.dev/gemini-api/...
3. Schema in model config (SO-Schema): ai.google.dev/gemini-api/...
SO-Prompt works well. But FC and SO-Schema have a major flaw.
Found 2 big issues with Gemini's structured outputs (SO):
1. Using constrained decoding seems to lower performance in reasoning tasks.
2. The Generative AI SDK can break your model's reasoning.
Just re-ran the Let Me Speak Freely benchmarks with Gemini and got some interesting results
Pi by Darren Aronofsky
Here's the post with all the code required to replicate the results: dylancastillo.co/posts/say-w...
Once or twice per month I write a technical article about AI here: subscribe.dylancastillo.co/
I'm not saying you should default to unstructured outputs. In fact, I usually go with structured.
But it's clear to me that neither structured nor unstructured outputs are always better, and choosing one or the other can often make a difference.
Test things yourself. Run your own evals and decide.
Then I switched to GPT-4o-mini, using LMSF's results as a reference.
Tweaked the prompts and improved all LMSF metrics except for NL in GSM8k.
GSM8k and Last Letter looked as expected (no diff).
But in Shuffled Objects, unstructured outputs clearly surpassed structured ones.
I began by replicating .txt's results using LLaMA-3-8B-Instruct (the model considered in the rebuttal).
I was able to reproduce the results and, after tweaking a few minor prompt issues, achieved a slight improvement in most metrics.
Structured outputs can decrease an LLM's performance on some tasks
I replicated @willkurt.bsky.social / @dottxtai.bsky.social rebuttal of Let Me Speak Freely? (LMSF) using gpt-4o-mini
The rebuttal correctly highlights many flaws with the original study, but ironically, LMSF's conclusion still holds
Me after using ChatGPT to reproduce and patch a security vulnerability in a package downloaded 1 million times per month.
Good stuff! Will be useful soon. I'm about to jump ship from Poetry but old habits die hard.
ML is a subset of AI
I believe you're the one creating the strawman.
People are lynching a researcher for publishing a dataset of publicly available data that, if anything, will be used to improve this same social network where they're doing the lynching.
I'm trying to make clear that AI has tons of positive use cases