Yes I think that's a general feeling! Still, the act of sharing pushes one to improve?
As long as results are correct, it should be fine? And if there were errors, better that they can be caught than hidden forever?
Oh yes, I was thinking of honest papers from honest authors. Made-up papers are a different, ugly beast...
Agree. Still, I think it's better to have it published than hidden? Helps interpret the paper.
Also I like to think that asking ourselves 'is it good enough?' before publishing would increase the final quality rather than deciding beforehand not to share?
97% of the papers archived the data but only 35% archived the code.
Most people are writing code but not sharing it. Time to bring up this again: scispace.com/pdf/publish-... (and if the code isn't good enough yet, maybe it's too early to publish the paper)
#openscience #reproducibility #ecopubs
You can have subfolders within `R` folder: github.com/cynkra/dir
I like the package structure for complex projects, but not for all of them (see milesmcbain.xyz/posts/an-oka...)
iDiv and @uni-jena.de are recruiting a Postdoctoral Researcher (f/m/d) in Trait-Based Community Ecology and Modelling.
Highlights:
• Full-time (100%, 40 h/week) with the option to reduce to 80%
• Fixed-term for 2 years
• Salary up to E13 TV-L
Apply by: 29 March 2026
www.idiv.de/career/job-o...
#PhDPosition in Ecological #DataScience in our group. For details, see www.uni-regensburg.de/universitaet... #MachineLearning #AI #DeepLearning #Statistics #Ecology #AcademicJobs
[blog] How do we keep software peer review human-centered in the age of generative AI?
That's the question behind our latest post at rOpenSci, where we describe:
• new expectations around AI use
• what this means for authors, reviewers, and editors
• why transparency matters more than ever […]
This can help:
stevage.github.io/WhatTheProj/
Or using @kylewalker.bsky.social's {crsuggest} package:
search.r-project.org/CRAN/refmans...
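For anyone who hasn't tried it, a minimal sketch of that workflow (assuming the {sf} and {crsuggest} packages are installed; the demo shapefile ships with {sf}):

```r
library(sf)
library(crsuggest)

# Read the North Carolina demo shapefile bundled with {sf}
nc <- st_read(system.file("shape/nc.shp", package = "sf"), quiet = TRUE)

# suggest_crs() returns a table of candidate coordinate reference
# systems matched to the data's extent, best candidates first
suggest_crs(nc, type = "projected")
```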
In support of #OpenScience, we routinely ask authors to openly share their #research #code before publication.
We are now formalizing this practice with a mandatory #CodeSharing policy and clarifying what we mean by code sharing.
NEW STUDY: Winter downpours are getting heavier in parts of Spain, Portugal and Morocco as the region recovers from a month of relentless storms.
Between mid-January and mid-February, nine named storms brought torrential rain and hurricane-force winds, causing major damage and disruption. 1/5
"β¦ turning a blind eye to malfeasance in our ranks gives the public legitimate reason to distrust our work." Naomi Oreskes, Harvard University
As the U.S. doubles down on glyphosate, a widely cited study asserting its safety has been retracted after it was discovered the paper was ghostwritten by Monsanto employees.
This incident, and ghostwriting in general, harms scientific integrity, writes Naomi Oreskes. https://scim.ag/4rIIu4a
This blog post @rfortherestofus.com also good (using Typst)
rfortherestofus.com/2025/11/quar...
Science sleuths share their common-sense tips for sniffing out fishy articles
go.nature.com/4ceKgVF
Are #openaccess fees that some journals charge authors too high? The Chinese Academy of Sciences, the world's largest research institution, reportedly thinks so and plans to stop funding some, a move that could shake up #scientificpublishing. @science.org www.science.org/content/arti...
Per protocol analysis strikes again!
Folks, if you randomize but then don't analyze some of the people who got randomized (maybe because they didn't adhere to instructions, maybe because they dropped out), randomization will no longer do all the heavy causal-inference lifting.
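A small base-R simulation (hypothetical numbers, not from any real trial) makes the point concrete: when adherence depends on an unmeasured health variable, dropping treated non-adherers re-introduces confounding even though the true treatment effect is zero, while the intention-to-treat contrast stays honest.

```r
# Hypothetical simulation: true treatment effect is ZERO, but healthier
# people are more likely to adhere, so a per-protocol analysis that drops
# treated non-adherers compares healthier treated vs. everyone in control.
set.seed(1)
n      <- 1e5
health <- rnorm(n)                          # unmeasured health status
z      <- rbinom(n, 1, 0.5)                 # randomized treatment assignment
adhere <- rbinom(n, 1, plogis(1 + health))  # healthier -> more likely to adhere
y      <- health + rnorm(n)                 # outcome; treatment does nothing

itt  <- coef(lm(y ~ z))["z"]                # intention-to-treat: as randomized
keep <- z == 0 | adhere == 1                # per-protocol: drop treated non-adherers
pp   <- coef(lm(y ~ z, subset = keep))["z"]

# ITT is close to the truth (zero); per-protocol is clearly positive
round(c(ITT = unname(itt), per_protocol = unname(pp)), 2)
```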
Excited to share our new paper in @natcomms.nature.com We synthesize causal discovery & inference approaches across traditions (regression adjustment, quasi-expts, SEMs, Granger causality, convergent cross-mapping, and more) into a unified workflow for ecologists. www.nature.com/articles/s41...
We have a new #SeminarioEcoinformΓ‘tico!
Esteban Alonso (IPE-CSIC) will talk to us about supercomputing and task parallelization
*This time the seminar will not be recorded, so we encourage you to attend!
We look forward to seeing you all!
Thursday, 12 March 2026, 16:00
shorturl.at/G9efC
1. Kevin Gross and I have a new paper out today in PLOS Biology.
We used economic models based around screening games and the market for unpaid labor to highlight a meltdown cycle threatening peer review.
The billion-dollar case for sustaining palaeontology's digital databases 🧪 www.nature.com/articles/s41...
Dowding et al. survey palaeontological databases, documenting their contributions to science as well as their vulnerabilities, and provide recommendations for the future of open-science databases.
Really useful package for placing annotations in #ggplot2 charts
(Helps avoid the endless spiral of "slightly to the left" then "slightly to the right")
#RStats
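Even without a dedicated package, ggplot2's own annotate() sidesteps some of that nudging by placing labels at explicit data coordinates. A minimal sketch (coordinates and label are made up for illustration):

```r
library(ggplot2)

# annotate() adds a one-off label at data coordinates, so repositioning
# means editing x/y values rather than trial-and-error nudging of layers
p <- ggplot(mtcars, aes(wt, mpg)) +
  geom_point() +
  annotate("text", x = 4.5, y = 30, label = "heavier cars, lower mpg")
p
```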
If you haven't yet changed how you teach and evaluate in response to AI, you probably need to. This is not something educators asked for, but it is worth noting what is out there: in this case, an OpenClaw tool designed to cheat.
The political effects of X's feed algorithm
Germain Gauthier, Roland Hodler, Philine Widmer & Ekaterina Zhuravskaya
https://doi.org/10.1038/s41586-026-10098-2 | Received: 16 December 2024 | Accepted: 4 January 2026 | Published online: 18 February 2026 | Open access
Feed algorithms are widely suspected to influence political attitudes. However, previous evidence from switching off the algorithm on Meta platforms found no political effects. Here we present results from a 2023 field experiment on Elon Musk's platform X shedding light on this puzzle. We assigned active US-based users randomly to either an algorithmic or a chronological feed for 7 weeks, measuring political attitudes and online behaviour. Switching from a chronological to an algorithmic feed increased engagement and shifted political opinion towards more conservative positions, particularly regarding policy priorities, perceptions of criminal investigations into Donald Trump and views on the war in Ukraine. In contrast, switching from the algorithmic to the chronological feed had no comparable effects. Neither switching the algorithm on nor switching it off significantly affected affective polarization or self-reported partisanship. To investigate the mechanism, we analysed users' feed content and behaviour. We found that the algorithm promotes conservative content and demotes posts by traditional media. Exposure to algorithmic content leads users to follow conservative political activist accounts, which they continue to follow even after switching off the algorithm, helping explain the asymmetry in effects. These results suggest that initial exposure to X's algorithm has persistent effects on users' current political attitudes and account-following behaviour, even in the absence of a detectable effect on partisanship.
A new paper shows that less than 2 months of exposure to Twitter's algorithmic feed significantly shifts people's political views to the right.
Moving from the chronological feed to the algorithmic feed also increases engagement.
This is one of the most concerning papers I've read in a while.
iDiv and @uni-jena.de are hiring! Join the Ecological Networks Lab as an IT Specialist (full-time, permanent). Work on lab automation, software dev, and cutting-edge research!
Apply by: 27 Mar 2026
Location: Leipzig (Germany)
www.idiv.de/career/job-o...
#JobAlert #ITJobs #Biodiversity #iDiv
Calling all #quarto users!
If you are making slides, what has caused you to reach for PowerPoint/Keynote/Google Slides instead of making them with Quarto?
You usually don't want to test for fit to a Normal distribution. If you do, you usually don't care primarily about the alternatives to which the Shapiro-Wilk test is sensitive. Unless the conclusion is very clear, pre-testing is going to completely munt your subsequent tests. And if the conclusion is clear then you can look at the qq-plot; you don't need to get maths to look at it for you, and you might learn something else about the data.
But don't trust me. Read experts like @tslumley.bsky.social or @allendowney.bsky.social
notstatschat.rbind.io/2019/02/09/w...
allendowney.blogspot.com/2013/08/are-...
A Shapiro-Wilk test on the response variable indicates a highly significant deviation from Normality, yet the residuals of the linear model are consistent with a Normal distribution.
Visual check of the linear model with DHARMa
Periodic reminder that we should avoid testing the Normality of the response variable.
For a linear model, what matters is the Normality of the residuals (and even that, not much). Visual checks are better than formal tests. #statistics
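A quick base-R sketch of the same point, with simulated data (illustrative only): the response can fail a Shapiro-Wilk test spectacularly while the model residuals are perfectly well behaved, because the response inherits the shape of the predictors.

```r
# A skewed predictor makes the response non-Normal even though
# the model's errors are exactly Normal
set.seed(42)
x <- rexp(500)                # skewed predictor
y <- 2 + 3 * x + rnorm(500)   # linear model with Normal errors

shapiro.test(y)$p.value       # essentially zero: y itself is far from Normal

fit <- lm(y ~ x)
shapiro.test(residuals(fit))$p.value  # typically non-significant

# Visual check, preferable to the test: does the qq-plot look straight?
qqnorm(residuals(fit)); qqline(residuals(fit))
```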
If you're an #EcoEvo #PhD candidate anywhere in the world looking for project funding, consider applying for the @asn-amnat.bsky.social Student Research Award.
Ten proposals for $2k in research funds will be awarded. Due 13 March 2026.
Please share widely! 🧪 #grants #ecology #evolution #behavior