I think the hard part will be serving blobs, but that's probably something you can work around. I think the rest is just periodically spinning up your PDS endpoints, hitting the relay with a requestCrawl, and letting it do its thing from there
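That relay step can be sketched in a few lines. This is a minimal illustration, not a full implementation: it assumes a relay exposing the standard `com.atproto.sync.requestCrawl` XRPC endpoint, and the relay URL and PDS hostname here are placeholders.

```python
import json
import urllib.request


def build_request_crawl(relay: str, pds_hostname: str) -> urllib.request.Request:
    """Build the XRPC POST asking a relay to (re)crawl a PDS.

    Assumptions: `relay` is the relay's base URL, `pds_hostname` is the
    bare hostname of your PDS (no scheme), per com.atproto.sync.requestCrawl.
    """
    url = f"{relay}/xrpc/com.atproto.sync.requestCrawl"
    body = json.dumps({"hostname": pds_hostname}).encode("utf-8")
    return urllib.request.Request(
        url,
        data=body,
        headers={"Content-Type": "application/json"},
        method="POST",
    )


# Usage sketch (not executed here; this would actually notify the relay):
# req = build_request_crawl("https://bsky.network", "pds.example.com")
# urllib.request.urlopen(req)
```

After that, the relay subscribes to the PDS's firehose on its own, so the periodic part is just making sure the PDS is up and re-sending this if the relay ever drops it.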
yepp, very much agree
Assuming this is openclaw? Whatever default voice the creators of that project gave it is super annoying and feels like the worst cryptobro personality
Yes, Paul did say that. And you're absolutely right!
π
oh sweet summer child
Yeah this basically. Hiring a CEO is a really complicated and difficult thing, and someone who has done it before is going to be more proficient at running that process. In some sense its like hiring a specialized recruiter
I mean, she did. The search process has been ongoing, plus she's still chair of the board and in the company's C-suite.
The important thing to the company has honestly been to get her freed up to work on more useful stuff than the day to day of being CEO
She's not leaving, she is actually going to be more free to work on the technology now, which is what she really enjoys doing. Running a large company as CEO just eats all of your time and leaves very little room for creative work.
Honestly though this guy here is my spirit animal:
All 20 of them are going to be very upset
I love me a good hornets' nest
Can confirm both of these things
Not really, I've trained LLMs for moderation purposes that have significantly outperformed previous types of models for the task. I think you might be largely concerned about image generation models though?
very good news :)
Very excited for the new projects we have in the works that Jay will now be leading more directly as CIO. She has always been excellent at seeing the vision and knowing where to go, so glad that we're unburdening her from the day to day of running the company.
I'm sure it can be carelessly used that way. I've also just pointed it at codebases of mine and said "find bugs" and it pulls up and fixes some good stuff.
I'm pretty sure it was that being in the sun for so long caused the battery to expand and pop the waterproofing seal
Yeah, I was very excited when the first waterproof phones started being a thing, was a big innovation imo, even if I have to occasionally cash in my AppleCare on mistakes like this one
Happy to help
I was quite far from a fridge, no nearby kitchen, it was pretty hot out and the pool was right there (and I had been in said pool with said phone the day before, reading). The phone was just displaying the "Phone too hot!!" error, so I figured why not
I ran into one of these last night who insisted that because LLMs were "non-deterministic" and coding was "deterministic" that LLMs couldn't be trusted or something.
I wonder if they get together and come up with bad analogies that don't make sense together, or if these are individual efforts
Actually has gone quite well, still more to be done but we had a very significant improvement in our main toxicity metrics.
The words you are saying here make it very clear you don't understand software engineering enough to have an informed take on the use of LLMs in it.
"non-deterministic solution to a deterministic problem" - seriously? This makes zero sense as an argument. *I* am non deterministic.
This question actually does worry me quite a lot, and is one of the biggest things about AI I think we need to be figuring out. It's super unclear how things are going to shake out
It's possible LLMs open up new opportunities for fucking things up, but I trust Opus over any junior eng I've worked with, and a lot of senior engs too.
Good engineering requires many different layers for issues to be caught at, as well as layers of systems to handle when things break and gracefully recover.
Your posts betray a significant lack of understanding of how software engineering works.
Any halfway decent software team already has to deal with things breaking in production, because stuff does break in production, often. Bad code makes it through review without AI, it's a fact of life. The LLMs reviewing things actually help with this a lot.
Your arguments are "what if someone has a bad day and just does something bad", which doesn't need AI to happen. Someone could push bad code and someone else could just slam the approve button without reviewing it any day of the week. Maybe someone even deploys it, so?
"What if they just push random bullshit to the repo and deploy it!!!?!!"
Bud I could do that right now without AI