@etvpod
A #philosophy podcast for surviving the worst possible timelines. Hosted by ethicist aaronrabinowitz.net. Ethics director and credentialed creator at creatoraccountabilitynetwork.org. Obsessed with luck. Sibling show: https://0gphilosophy.libsyn.com/
Funny, only folks I see using the word betrayed are Fuentis types.
Especially when they're creepers.
How else would you describe this? bsky.app/profile/alan...
Fair, I do think there is probably a lot of variation on this subject to go along with the total lack of standardization. In such a mixed climate it's probably still reasonable to be cautious about who you share your AI usage with either way.
I mean I think the tech is a massive breakthrough on several fronts that will reshape the world as previous technological revolutions have.
There are also just a very large number of jobs that involve pushing numbers around spreadsheets that are already being automated with human oversight, as you suggested. We just don't see nearly as much talk about it, especially in leftist spaces, for various reasons.
Sorry, are you confusing insulting assertions with the free exchange of ideas? I missed the part where you actually made an argument instead of just trashing people's motives.
I dunno how much credibility I have with anyone these days but happy to help!
Thank you for contributing your case in point.
lol. Lmao.
Agreed for a lot of jobs.
Fair, I can pass that along to Ella and ask if they address it in the full paper.
I will say the AIs need better filters on who they cite. They often cite popular terrible people.
I read that piece and thought it was interesting, and then didn't post about it cause I didn't have it in me that day.
Agreed, which is why it should be included in blinding along with names etc.
Interesting, can you say more?
I believe they controlled for concerns like this.
Yeah that seems true to me. I do think we have an obligation to do the best we can though, especially where students are involved.
Totally agree and there should be strict rules against these misuses. Unfortunately the history of tech often involves having to have massively bad uses before we get basic regulation. The history of medical ethics comes to mind.
I can totally respect that and I think I agree that can be a meaningful relationship as well. I do think that's separable from reading something good, finding out an AI was involved, and then hating the writing on principle, just as one example.
Yeah, I think the most reasonable concerns about AI are things like what happens if we don't get a UBI alongside widespread implementation.
See, this is fascinating to me. I seem to lack this particular disgust reaction, which is not unusual for me. For me, if the writing is good I don't really care where it came from.
What about if someone just used the AI to improve their own writing?
Honestly, given the state of AI under capitalism it would be weird to me if there wasn't a ton of negativity towards it and people who use it. Most people's experiences of it involve having it forced on them and it being terrible.
Whenever I talk about this I'm worried people will reflexively think worse of my writing or assume it's all AI written. I'm developing consulting materials on this topic for content creators right now; that's another space where disclosure could seriously impact listenership.
I mean, the study I referenced suggests there are strong, sometimes unacknowledged biases against people who use AI even when they openly use it in appropriate ways. And it can hurt you in peer review. I haven't seen data on career impact yet but I think it's reasonable to be worried.
For sure, just not a ton of outspoken ones in leftist spaces in my experience. The topic has become very politically polarized. And in academia it feels like a teacher vs admin debate where being a teacher and pro-AI can get you labeled a traitor.
The author even explained to me the dark irony of having to declare AI usage when submitting, for peer review, a paper in which they explicitly argue that AI disclosures bias editors.
In my experience definitely the first one. But that's just my experience. Especially in leftist spaces.
I'll prolly never get tenure so fuck it: AI is revolutionary tech and the discourse around it is maximally cursed. I saw one study at SPSP that showed how students are viewed as lacking merit for using AI even when they are allowed to and declare using it. It's reasonable to lie right now.