@jmiers230
Law Prof @AkronLaw | Computer Scientist | bot psychologist | 1A / tech expert | Priors: Google, Twitter (no, not X), Chamber of Progress | jmiers@uakron.edu | Signal: j230.95 | Currently writing about chatbots, speech, and suicide.
People love the first amendment right up until it has a fully intended consequence that they find personally distasteful and this is fine it's fine I'm fine I'm completely normal about this.
I'm writing an article on this so I have more thoughts to share, but yes that's essentially the point.
As is often the case, I agree with Jess here, and I think Alan's version of things would effectively eliminate the First Amendment for code. The gov't should not be able to compel a developer to build specific features, whether it's backdoors or removing safety guardrails.
Model making is actually strikingly similar to what goes down in newsrooms.
Folks don't realize or seem to appreciate just how many expressive decisions go into the process. I could write an entire article just on the data scheduling for training alone -- it's that deep and intentional.
Disagree.
The development lifecycle of these models involves numerous rich editorial decisions developers make that influence the outputs. Viewing outputs as only implicating listener rights cuts out the expressive choices devs make.
And yes, this means 1A limits some lawmaking. That's the point.
You *sound* like a seasoned litigator. This is so interesting. Didn't even think about the billing issue after the fact.
And yep! This is largely why the bar exam is changing.
??? Say more....
I co-authored a @techdirt.com piece with Professors @brianlfrye.bsky.social Kevin Frazier, and Michael Goodyear on the human problems that AI is revealing to us, like suicide, and why policymaking approaches that ignore those human realities are ineffective.
www.techdirt.com/2026/03/10/h...
I co-authored a @techdirt.com article with @jmiers230.bsky.social @kevintfrazier.bsky.social & Michael Goodyear on why technology isn't always responsible for human problems. www.techdirt.com/2026/03/10/h...
Yes, lots of missing context and no screenshots of the dialogue. Need more info.
Looking to speak to immigrants rights groups and immigration lawyers ASAP for a story about ICE/DHS and mass surveillance. If that's you or anyone you know, DM me your best contact number!! Or tag ppl below who would be good to connect with.
The first research paper from WashU's AI Humanities Lab, which I co-direct with Gabi Kirilloff, is available now in the Harvard Data Science Review! Read to learn more about how (badly) current LLMs do at replicating literary style: doi.org/10.1162/9960...
This is crazy, but also this complaint is capital F frivolous. They're suing for tortious interference with contract, abuse of process, and unlicensed practice of law and guys, no, what are you doing? The first two are dead as a doornail because ChatGPT cannot have the needed intent. And the last one?
a few hours of work later and it's looking like this
I think you're absolutely spot on. I teach my law students to bring their clients along with them! Meet them where they are and advise.
Another excellent insight for lawyers from the banking world
The other issue is that, in reality, there is such a wide skill differential among lawyers that sometimes ChatGPT would probably do a better job. The shit I saw when I was a legal aid lawyer from the hacks my clients had turned to before they learned I existed was truly horrifying.
What's interesting is that I think some legal research profs would even disagree with that! We aren't doing enough legal research training either.
But to your broader point, I tend to agree. This is largely what's driving the approach to the new bar exam. I'm all for it.
Good insight for lawyers from the IT world.
I used to do IT and I learned that when people call they have two problems. The tech problem, and the emotional problem that they're upset enough to call. You have to fix both for it to feel successful. People need to feel heard and respected, especially when they already feel dumb and defensive
That does make me wonder if lawyers can find ways to connect with clients by explaining our role / value. That's certainly not a trivial task and I have no helpful thoughts on how to do this better.
Plus I imagine it's really hard for the shorter term one-off interactions.
Clients simply don't understand what we do or where we bring value, and (related but distinct) are unequipped to evaluate our skills/capability.
For these reasons, they've always second-guessed us, but never had a way to effectuate that second-guessing before.
That and probably more field trips to actually watch cases...
I'm stealing this for my torts slides...super interesting.
Folks in practice -- the bar exam is changing to focus on practical skills instead of rote memorization.
We are looking to y'all as the experts for insights to share with our classes (or I am at least).
I'm seriously considering bringing in guest lecturers for my Torts class next fall...
YOU SHOULD BLOG. You have 30 years of practice wisdom!
Idk if you've heard but the next gen bar exam is testing on things like client management skills. So many of us profs either didn't practice long enough or have been out of practice for a while. We are looking to y'all for the insights now.
Good insight. Client management skills are now being tested on the next gen bar exam and I've been thinking a lot about how we integrate that into our curriculum.
Re: skill mismatch...say more?
Do you blog at all? You always have super interesting insights from the ground.
Also you should do a thread on this point -- client countermoves. I'm guessing this isn't really new to gen AI, and gen AI is just an accelerant.
But see also this point!
bsky.app/profile/dani...
This is such an interesting point. I want to know more...