ok having a friend agree i'm not speaking english means i need to nuke this character, that shit is humbling. i had been using the job hunt as an excuse to stay on here during lent but i think thats ok now. back later as a different me, kinda
this sounds dope, as a long time gfonts and dafont consumer, the digging is painful every time
what is it?
my point, aside from amusing myself, has consistently been somewhere between those two ferns, and apparently, accessible only to me to a degree that makes me pretty sure i don't speak english anymore
you're welcome!
nice!
ok
those in turn would be useful compositional parts if i wanted to do doc analysis on a professional level
so like, i didn't hit your target. but i did end up with
- codified info about how different states do formatting that could be useful for ripping through pdfs/docxs
- flows for classifying tiny chunks of legal language with small prompts in a structured way
as bad as the results were, individual subsections got very useful systems. if i'd been interested in decomposing the problem even slightly more, some useful little unix-style bits could be salvaged
something that popped out of my fuckery was that small tools work really well here. like, very small tools. state specific formatting understanders and shit
look now, i'm only on downers after 8pm, rude is one thing but this is getting defamatory ( i think, but, you'd know)
oh, yeah, i'm fucking infuriating to speak to/argue with, can confirm, as can tewson lmao. i think it's like, how cats and dogs are more irritating to each other than either is to, idk, crows
that's true! and likewise!
i think, based on compliance community interrogations of the tech, that could be made Safe, but i do hear the concern. We're able to pull it off with health data tho, it should be viable, just, painful
i...ahhh...t&r in particular, i have insight into, i think their eng capabilities do not currently represent the edge.
what i did wasn't engineering, i'll take that. but my distaste for the other's preferred pedantry vs my own stands
but in any case the applicability is very much less a property of the base model than the entire model/prompt/harness unit. a useful law unit would probably include subagents and targeted search tools
maybe authority turns out to be too abstract but precedent is somehow fairly coherent; it's really gonna depend a lot on what one might call semantic differentiation; the individual elements of the system being analyzed must be a useful distance from each other in the weights
because that would let you 'just try shit' like "pull all my local documents referencing x use of authority in y idon'tfuckingdolaw" and if it doesn't work, you just do it anyway, like you were gonna before you typed that command
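rough sketch of what checking "semantic differentiation" could even look like, with a toy bag-of-words vector standing in for real model embeddings (which is very much not how you'd do it for real):

```python
from collections import Counter
from math import sqrt

def toy_embed(text: str) -> Counter:
    # stand-in for real model embeddings -- just bag-of-words
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[w] * b[w] for w in a)
    na = sqrt(sum(v * v for v in a.values()))
    nb = sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def usefully_distant(terms: dict[str, str], threshold: float = 0.8) -> bool:
    """Check every pair of term definitions sits below a similarity
    threshold, i.e. the elements are a useful distance apart."""
    vecs = {k: toy_embed(v) for k, v in terms.items()}
    keys = list(vecs)
    return all(
        cosine(vecs[keys[i]], vecs[keys[j]]) < threshold
        for i in range(len(keys))
        for j in range(i + 1, len(keys))
    )
```

if 'authority' and 'precedent' collapse onto each other under whatever real embedding you use, the system can't tell them apart and your fancy query command does nothing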
i think the actual thing to sell highly adventurous lawyers now, real gonzo ass shit, would be a claude desktop app with a system prompt written in collaboration with a law firm, so that certain terms (ex. 'authority') are fundamental nouns within its comprehension / system-facing attitude
but the short version is that all the base models are approximately high on the entire universe and you gotta ground 'em, but if you're good at grounding them, they get human-style predictable for many categories of behaviour, semantic analysis specifically being a strong one
few-shot would be the fastest path, but also maybe insufficient? but if you have enough example pairs, you can _massively_ improve on the performance of the base model.
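the few-shot thing is really just this (the example pairs and labels here are invented, you'd pull real ones from whatever call-and-response docs you have lying around):

```python
# invented example pairs -- in practice these come from real
# annotated call/response data, and more of them is better
EXAMPLES = [
    ("The tenant shall vacate within 30 days.", "obligation"),
    ("Landlord may enter with 24 hours notice.", "permission"),
]

def build_prompt(chunk: str) -> str:
    """Assemble a few-shot classification prompt from example pairs."""
    shots = "\n".join(f"Text: {t}\nLabel: {l}" for t, l in EXAMPLES)
    return f"{shots}\nText: {chunk}\nLabel:"
```

dumb as it looks, stacking enough of those pairs in front of the question is most of the grounding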
if i didn't like, dislike law experientially, i would follow up harder on this
THIS. THIS I CAN ANSWER.
the base model isn't very useful for code either! the thing you need is grading schema and time
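by grading schema i mean something as dumb as this, a made-up toy version, real rubrics would be a lot richer than substring checks:

```python
def grade(answer: str, required: list[str], forbidden: list[str]) -> float:
    """Toy grading schema: fraction of required substrings present,
    zeroed out if anything forbidden shows up."""
    if any(f in answer for f in forbidden):
        return 0.0
    if not required:
        return 1.0
    return sum(r in answer for r in required) / len(required)
```

you run that over thousands of outputs and iterate, which is the "and time" part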
like, sincerely, i keep picking lexis because their name was itt, but, _someone_ with the docs (call and response examples) lying around to do this could do it, using llms, specifically gpts, this year, and that's the most interesting thing i've said imo
i mean similarly, most of what i shove in claude-code i'd consider putting on fiverr, except that it'd take 4-6 weeks on fiverr.
for the two minutes i thought i cared enough to do shit right i wanted to ask what an intern pulled for this work, because the roi would be no more than that x frequency
that was, fwiw, never my read of david, he also explicitly said he was demonstrating that the model can reason about the problem space, like, several times, even in this subthread
but i think hearing us about "these models can actually reason about law in a way that could be useful to you if you hadn't been pre-poisoned against it" seems like a good idea for a lot of the law-folk itt
i mean yeah, if the thing being responded to is "you should accept a claude code session as a valid tool for law tomorrow" even as a claude code enjoyer that's the dumbest thing i've heard all week, and hegseth spoke this week
that is the single most interesting question (for my interests) itt, and i don't have an answer that wouldn't involve pasting the whole session log; the screenshot really doesn't demonstrate it i suppose