MIT graduate student council "calls for adoption of preprints...to
accelerate communication of scientific discoveries, and restore healthy incentives" drive.google.com/file/d/1KLbj...
@socarxiv
Open archive of social science. Free. Academy-owned. Posts by director Philip N. Cohen. Say it: so-SHAR-kive (soʊʃɑrkaɪv). Website: socarxiv.org. New papers post at: https://bsky.app/profile/socarxivbot.bsky.social
Thank you. Allowing this use on our service is different from recommending or encouraging it. We don't have the resources to evaluate the quality of translation.
And many of our submissions come from non-native English early career researchers. Allowing the use of AI for translation enables them to connect to readers they wouldn’t normally reach.
I’m a translator, but also the freelance blog manager of the @ecer-eera.bsky.social blog.
I understand the concern that allowing authors to use AI for translating means people like me won’t be hired. But in reality, most people who submit to the EERA blog don’t have the money to hire a professional.
Interesting that the policy ends up coming down to "at least paraphrase what the AI wrote". I think there's a lot to be said for this to make sure at least one human brain paid enough attention to rewrite what the AI said. But fascinating how quickly we're having to grapple with all of this
We have posted our AI policy
An introduction: socopen.org/2026/03/09/s...
And the policy:
socopen.org/ai-policy/
"Funders serious about timely, open, and equitable access to research must explicitly recognize preprint sharing as a route to Open Access compliance"
I know I'm a broken record on this but it seems so blindingly obvious... #PlanU journals.plos.org/plosbiology/...
council.science/blog/open-sc...
Several people have objected to treating LLM generated language translation as acceptable. I'm interested in this, because to me this is a reasonable use case. With these tools (built into everything) I am able to read work and converse with people I would otherwise just never know.
New policy from @socarxiv.bsky.social regarding reviewing and the use of AI in research.
"This policy aims to protect the epistemic commons by distinguishing research that is part of advancing social science knowledge from that which merely dilutes our work"
SocArXiv Releases #AI Policy www.infodocket.com/2026/03/08/r... #repositories #preprints #scholcomm #sociology @socarxiv.bsky.social
It's not "taking work" in the vast majority of our cases, because authors who submit to our service are not in a position to hire such workers. And of course we are not "encouraging" any use of LLMs. We don't actually run the world, just trying to make a workable policy.
No. Thanks for reading :)
As a journal editor, the most noticeable consequence of LLMs I’ve observed is an increase in slop submissions. While this might show up on someone’s ledger as greater productivity, what it has meant for me is more toil at the desk review stage and challenges with workflow. (1/2)
I’m given to understand the technology can be, and has been, harnessed to useful effect. Ok. But in aggregate it is also increasing the relative volume of garbage that editors and reviewers must now contend with. With limited time and resources, that leads invariably to inefficiencies. Not good! (2/2)
Now on @socarxiv.bsky.social !
osf.io/preprints/so...
New version of our preprint on bioRxiv about bioRxiv up. Now that’s what I call a revision – 6 years after the first version!
It has new data about our progress and highlights from a massive user survey. 1/n
www.biorxiv.org/content/10.1...
Yet.
Prof. Cohen is just venting. This is not an official policy statement.
It's one thing to ask authors to disclose how they used #AI. But it's another to have an articulate policy on what degrees and kinds of use make a work unacceptable, esp at a #preprint repository. Thoughtful take by @philipncohen.com, director of @socarxiv.bsky.social.
socopen.org/2026/02/22/w...
Nope.
It is definitely a risk factor for poor quality work of various kinds (one pattern we have observed is some folks who do this produce papers on very different topics very quickly...)
I'm not convinced by the case for it here - GenAI can't think, and writing is part of the thought process / refinement of ideas. Outsourcing all the "thinking" to GenAI leaves a gap where the core intellectual endeavour should be.
This is actually my take on the @socarxiv.bsky.social question of whether to set policy banning fully AI-generated submissions - literally writing the prose is an important part of connecting with readers. As in faith-based fellowship, the act of being in community is an important part of Science.
"no one should be looking at the corpus of SocArXiv as a repository of the best research. (...) There's a lot of bad work on it, which, unlike most journals & some preprint servers, we are not shy about admitting, because it doesn’t hurt the good work that is here, & we’re not trying to make money"
Where Should #SocArXiv Draw the #AI Line? (via @socarxiv.bsky.social) socopen.org/2026/02/22/w... #scholcomm #preprints #publishing
Raises important questions: how will #scholcomm adapt, which norms around publishing will emerge? How will research assessment work in the future?
What do we want "research" to look like?
Curious to see where @socarxiv.bsky.social will end up with their policy.
The blog post and the thread suggest that it was a well-written paper that, however, contributed only marginally to the scientific discourse. If not for the author's (authors'?) own declaration, it wouldn't have been removed.
The degree of AI use by the author(s) is, imho, a bad reason for removal.
I'm the moderator who flagged the submission Philip is talking about here. The goal is a conversation about balance of "AI" and human work in papers. I think I may write a post about my own opinions, but we social scientists need to have a broader conversation about this. In particular, 1/2
So it passes the minimal quality bar. And it's apparently non-hallucinatory. The basis for rejection would be the disclosed AI use. But we don't have that policy (yet). What should our policy be?
/4