Not sure having AI produce more research faster means better research. We already have enough issues with salami slicing, p-hacking, and paper mills, and AI stands to exacerbate them further. It will likely benefit research in some capacity, but a higher number on some output metric is not, by itself, beneficial
03.03.2026 23:06
Exactly! Little to no discussion of p-hacking, false discovery rates, salami slicing etc. More on producing more papers and more grant applications. How is that making academic research better? We already have thousands of poor-quality articles; are millions better?
27.02.2026 13:31
Confused why people argue that agentic AI is good for academia. The arguments seem to revolve around time savings and papers produced more quickly, but ignore the impact this will have on peer review and an already overburdened system. Not to mention just more slop
#academicsky #metascience
27.02.2026 08:58
A funding organisation aimed at funding more metascience ideas. They have had a few calls so far, and a couple focused on AI, but importantly looking at AI's impact on science rather than building a new AI tool. I'm a fellow with them, so ask me any questions!
08.02.2026 08:11
A great read by @aidybarnett.bsky.social and @jabyrnesci.bsky.social!
06.02.2026 17:57
I also love funding opportunities aimed at developing applicants into independent researchers. Wasn't that the point of the PhD!?!?
05.02.2026 10:14
I think people in their 30s/40s can be ECRs if they've changed careers etc. But I do agree that it's funny that 8 years of experience (with a PhD) counts as junior in academia when there are seniors with less experience in other industries.
05.02.2026 10:14
OpenAlex
Our team recently highlighted the abuse of the CDC WONDER dataset, with a significant increase in the number of publications since 2023 (the onset of GenAI). In the first month of 2026 alone, 276 new papers have been published: already ~20% of 2025's output and ~79% of 2024's.
03.02.2026 08:50
Thoughts on averaging over 10 thousand citations a year over the last 10 years? Is this plausible in healthcare research, or a sign of unethical behaviour through citation inflation?
Curious to hear some perspectives!
#academicsky #metascience
29.01.2026 12:27
methods to p-hack a publishable finding or randomly splitting cohorts provides little, if any, benefit to the scientific literature
#metascience #academicsky #openscience
23.01.2026 14:28
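The worry about randomly splitting cohorts can be made concrete with a minimal simulation (my own sketch, not anything from the thread): test enough arbitrary splits of pure noise and some will clear p < 0.05 by chance alone.

```python
import random
import statistics
from math import erf, sqrt

random.seed(0)

def two_sided_p(a, b):
    """Approximate two-sample z-test p-value (adequate for n=500 per arm)."""
    se = (statistics.variance(a) / len(a) + statistics.variance(b) / len(b)) ** 0.5
    z = (statistics.fmean(a) - statistics.fmean(b)) / se
    return 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))

# One cohort with no real structure at all: pure noise
cohort = [random.gauss(0, 1) for _ in range(1000)]

# "Analyse" it by trying many arbitrary 50/50 splits, as a paper mill might
trials, hits = 200, 0
for _ in range(trials):
    random.shuffle(cohort)
    if two_sided_p(cohort[:500], cohort[500:]) < 0.05:
        hits += 1

print(f"{hits}/{trials} arbitrary splits of pure noise reached p < 0.05")
```

At a 5% false-positive rate per test, about 1 in 20 arbitrary splits yields a "publishable" finding from data containing nothing, which is the benefit-free repetition the post describes.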
concerns that articles still need to meet peer-review standards. It's obvious that peer review is struggling as a gatekeeper at this stage, and more needs to be done to limit these articles.
There is still obviously a place for analyses of large datasets, but the mass repetition of
23.01.2026 14:28
Science Is Drowning in AI Slop
Peer review has met its match.
Very good read from @rossandersen.bsky.social and @theatlantic.com
It summarises nicely the downsides of AI, and why the abuse of Open Access data is becoming a larger problem. It's also disheartening to see publishers state they will continue to publish some of these works, hand-waving away
23.01.2026 14:28
#academicsky #metascience #openscience @medrxivpreprint.bsky.social
15.01.2026 17:56
CDC WONDER appears to be the latest Open Access dataset to be exploited by unethical actors, though it is unique in its authorship patterns and collaborations, continuing to highlight the tension between Open Science and research integrity.
15.01.2026 16:21
We also see that a large portion of these outputs are being submitted to conferences and published in journals such as Chest or Gastroenterology, showcasing the link between the CDC WONDER exploitation and medical researchers, as conferences have previously been a source of low-quality/unethical outputs.
15.01.2026 16:21
Unethical actors can exploit the pressure on medical residents/trainees to publish by advertising courses that all but guarantee a publication at the end, built on such templates. No single course is being mentioned or linked, as it's hard to gauge which of them are ethical and which are not.
15.01.2026 16:21
medical residency and training programs. Recent articles (doi: 10.1001/jama.2025.23320; doi.org/10.1093/asj/...) have highlighted concerns about the research coming out of these programs, as they appear focused on quantity of output rather than quality and integrity.
15.01.2026 16:21
We also see strong connections between authors from Pakistan, India, the United States, and the United Kingdom, including authors associated with Imperial College London and Duke University. The reason behind these connections is unknown, but we suspect it could be due to
15.01.2026 16:21
We can see a large, unprecedented increase in the number of articles being published, with most of them following similar patterns, including the use of "trends" and "discrepancies" in the title, as well as eerily similar methodology and language in the write-up.
15.01.2026 16:21
Drastic changes in collaboration networks and publication patterns in research using the CDC WONDER dataset
The growth of generative AI and easily available Open Access health datasets has transformed researcher productivity, leading to an explosion in publications that has in part been attributed to paper ...
New preprint out now! This should be interesting, particularly for those interested in paper mills and their abuse of open-access datasets. This one is especially notable as it's one of the first to implicate actors associated with Western institutions.
15.01.2026 16:21
questions, not just "is A related to B?" without considering any confounders, and with no real hypothesis other than "we are looking for significant relationships"
22.12.2025 08:46
Agree with Rich here; asking Chat to identify associations in a dataset would likely produce exactly the kind of low-quality publication we are seeing. Yes, we need to continue to analyze existing data (constantly collecting new data is too expensive), but we need to ensure people are asking legit and robust
22.12.2025 08:46
How would you like to see it broadened?
19.12.2025 16:07
I don't think having novelty in this context is bad, again, unless they are incredibly strict about it. This checklist will probably just be ticked off anyway with some sentence about novelty, regardless of whether there's anything there. But steps do need to be taken to manage this
17.12.2025 10:16
Still p-hack and trick their way to significance regardless of novelty. I don't think focusing on a broad definition of novelty (don't repeat the same study just to sell a publication) automatically encourages p-hacking. That feels like a broader issue in academia
17.12.2025 10:16
It is interesting that the default thought is that novelty contributes to p-hacking. To me, something can be novel but show no significant finding: if X has never been studied and there turns out to be no relationship, that's still novel. And as you can see from the crap research out there, people will
17.12.2025 10:16
They become very strict and reject everything unless it's groundbreaking, then that's an issue. But these open datasets are being sliced in so many ways, mainly to serve the interests of paper mills or other unethical actors
16.12.2025 11:19
At best they add nothing. There is no reason in 2025 to divide a cohort into 2010-2015 and then publish the same paper with a 2015-2020 cohort, unless there was some significant change, but then that would be noted and likely considered novel (e.g., COVID). That's hopefully how they handle it, but yes, if
16.12.2025 11:19