https://github.com/davdittrich/robscale
#RStats #RobustStatistics #DataScience #Optimization
orig https://fediscience.org/@davdittrich/116201886682143700 4/4
cache-aware median selection.
The result? A 1.6x to ~28x performance leap over pure-R implementations. The mathematical results remain identical; only the computational underpinnings change.
📦 CRAN: https://cran.r-project.org/package=robscale
💻 Code: 3/4
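Sorting networks pin down the median of a small fixed-size block with a branch-free sequence of compare-exchanges, which is what makes them cache- and pipeline-friendly. A minimal sketch of the idea for five elements (an illustration of the technique only, not robscale's actual kernels):

```cpp
#include <algorithm>

// Compare-exchange: after the call, a <= b. No data-dependent branch;
// compilers typically lower this to a min/max instruction pair.
static inline void cswap(double& a, double& b) {
    double lo = std::min(a, b), hi = std::max(a, b);
    a = lo; b = hi;
}

// Median of 5 via a partial sorting network: only the 7 compare-exchanges
// needed to pin down the middle element are applied, not a full sort.
double median5(double v0, double v1, double v2, double v3, double v4) {
    cswap(v0, v1); cswap(v3, v4);
    cswap(v0, v3); cswap(v1, v4);   // v0 cannot be the median; nor can v4
    cswap(v1, v2); cswap(v2, v3);   // sort the three remaining candidates
    cswap(v1, v2);
    return v2;                       // the overall median
}
```

Because the comparator sequence is fixed at compile time, there is nothing for the branch predictor to get wrong, regardless of the input order.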
exceptional reliability, computing them requires intensive math.
robscale 0.1.5 is now on CRAN. It delivers a native C++17/Rcpp implementation designed for absolute speed. The package utilizes SIMD-vectorized $\tanh$ evaluation, Newton-Raphson iteration, and optimal sorting networks for 2/4
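For context, a tanh-type M-estimate of location solves $\sum_i \psi((x_i - \mu)/s) = 0$, and Newton-Raphson exploits that $\psi'(u) = 1 - \tanh^2(u)$ comes almost for free once $\tanh(u)$ is known. A bare-bones sketch; the $\psi$ function, starting value, and stopping rule here are illustrative assumptions, not robscale's actual internals:

```cpp
#include <cmath>
#include <vector>
#include <algorithm>

// M-estimate of location with psi(u) = tanh(u), solved by Newton-Raphson.
// The scale s is fixed in advance (e.g. a MAD-type estimate).
double mest_location(const std::vector<double>& x, double s,
                     int max_iter = 50, double tol = 1e-10) {
    // Robust starting value: the (upper) median.
    std::vector<double> tmp(x);
    std::nth_element(tmp.begin(), tmp.begin() + tmp.size() / 2, tmp.end());
    double mu = tmp[tmp.size() / 2];

    for (int it = 0; it < max_iter; ++it) {
        double num = 0.0, den = 0.0;
        for (double xi : x) {
            double t = std::tanh((xi - mu) / s);
            num += t;              // psi contributions
            den += 1.0 - t * t;    // psi' contributions: d/du tanh(u) = 1 - tanh^2(u)
        }
        if (den <= 0.0) break;
        double step = s * num / den;   // Newton-Raphson update
        mu += step;
        if (std::fabs(step) < tol) break;
    }
    return mu;
}
```

The inner loop is a pure streaming pass over `x` with one `tanh` per element, which is what makes SIMD vectorization of the `tanh` evaluation pay off.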
Image description: A two-panel benchmark charting performance multipliers of the optimized C++ robscale package against legacy pure-R implementations across sample sizes from $n = 3$ up to $10^7$, with the vertical axis starting honestly at 0x. The left panel shows a massive speedup for the M-estimators (robLoc, robScale, adm vs. revss), reaching ~28x for robScale. The right panel tracks the scale estimators ($Q_n$, $S_n$ vs. robustbase), with the speedup curving upward from 1.6x and approaching 10x at large sample sizes. Shaded ribbons show 95% bootstrap confidence intervals.
Robust estimation demands highly efficient computation, especially in streaming anomaly detection where latency budgets are tight.
While Rousseeuw & Croux's robust scale estimators ($Q_n$ and $S_n$) and Rousseeuw & Verboven's M-estimators of location and scale for very small samples provide 1/4
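For reference, $S_n$ is defined as a nested median of pairwise distances. A naive $O(n^2)$ sketch of that definition follows; robscale and robustbase implement Croux & Rousseeuw's $O(n \log n)$ algorithm instead, and the finite-sample correction factors are omitted here:

```cpp
#include <vector>
#include <cmath>
#include <algorithm>

// Naive O(n^2) S_n: for each i, the high median over j of |x_i - x_j|,
// then the low median over i, scaled by 1.1926 for consistency at the
// normal distribution. Finite-sample corrections are omitted.
double sn_naive(const std::vector<double>& x) {
    const std::size_t n = x.size();
    std::vector<double> inner(n), outer(n);
    for (std::size_t i = 0; i < n; ++i) {
        for (std::size_t j = 0; j < n; ++j)
            inner[j] = std::fabs(x[i] - x[j]);
        // High median: order statistic floor(n/2) + 1 (1-based).
        std::nth_element(inner.begin(), inner.begin() + n / 2, inner.end());
        outer[i] = inner[n / 2];
    }
    // Low median: order statistic floor((n + 1) / 2) (1-based).
    const std::size_t k = (n + 1) / 2 - 1;
    std::nth_element(outer.begin(), outer.begin() + k, outer.end());
    return 1.1926 * outer[k];
}
```

Even this naive form shows why $S_n$ is attractive: it needs no location estimate, and its 50% breakdown point comes straight from the two nested medians.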
This would be a nice addition to the dichotomisation chapter of discourse.datamethods.org/t/reference-...
(unfortunately, Andrew Althouse doesn't seem to be active on Bluesky?)
to use AI, which artificially suppresses productivity per worker.
https://archive.md/CABjm
#gdp #LaborMarkets
orig https://fediscience.org/@davdittrich/116090657645560821 3/3
Output.
https://archive.md/vs2B4
…Brynjolfsson is more optimistic.
The #productivity boost is coming, but it is currently masked by the massive intangible investments companies must make to reorganize their workflows. Companies are keeping expensive human staff while they figure out how 2/3
#EconSky
While there is some "boardroom disillusionment" with #AI …
Companies have seen "micro-efficiencies" (faster emails, quicker coding), but these haven't scaled to "macro-gains," and the Cost of Implementation (energy, licensing, and talent) is currently outstripping the Value of 1/3
is retained after #dichotomization
… hope that quantifying the loss of information will discourage researchers from dichotomizing continuous outcomes"
#statistics #rstats
orig https://fediscience.org/@davdittrich/116076960331479669 3/3
but also larger standard errors and fewer statistically significant results. We conclude that researchers tend to increase the sample size to compensate for the low information content of #binary outcomes, but not sufficiently.
… estimate that on average, only about 60% of the information 2/3
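The ~60% figure is the paper's empirical estimate across trials. For intuition, the classical delta-method calculation for a median split of a normally distributed outcome gives an asymptotic efficiency of 2/π ≈ 0.64, in the same ballpark, and off-median cutpoints lose even more. A quick sketch of that textbook calculation (standard theory, not code from the paper):

```cpp
#include <cmath>

// Asymptotic efficiency of estimating a normal mean from the dichotomized
// indicator I(X > c) instead of from X itself, via the delta method.
// z is the standardized cutpoint (c - mu) / sigma.
double normal_pdf(double z) {
    const double pi = std::acos(-1.0);
    return std::exp(-0.5 * z * z) / std::sqrt(2.0 * pi);
}
double normal_cdf(double z) {
    return 0.5 * std::erfc(-z / std::sqrt(2.0));
}
double dichotomization_efficiency(double z) {
    const double p = normal_cdf(z);    // share falling below the cutpoint
    const double f = normal_pdf(z);    // density at the cutpoint
    return (f * f) / (p * (1.0 - p));  // information ratio binary vs continuous
}
// dichotomization_efficiency(0.0) = 2/pi ~ 0.64 (median split)
// dichotomization_efficiency(1.0) ~ 0.44 (cutpoint one SD off the mean)
```

Inverting the ratio gives the sample-size inflation needed to break even: a median split requires roughly π/2 ≈ 1.57 times as many participants for the same precision.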
An Empirical Assessment of the Cost of Dichotomization of the Outcome of Clinical Trials https://onlinelibrary.wiley.com/doi/10.1002/sim.70402
"Burdening more participants than necessary is wasteful, costly, and unethical.
… trials with a binary outcome have larger sample sizes on average, 1/3
that work is harder to step away from, especially as organizational expectations for speed and responsiveness rise."
#LaborEcon
orig https://fediscience.org/@davdittrich/116048241413469185 5/5
employees juggle multiple AI-enabled workflows
… overwork can impair judgment, increase the likelihood of errors, and make it harder for organizations to distinguish genuine productivity gains from unsustainable intensity
… the cumulative effect is fatigue, #burnout, and a growing sense 4/5
a continual switching of attention, frequent checking of #AI outputs, and a growing number of open tasks. This created #cognitiveload and a sense of always juggling
… What looks like higher #productivity in the short run can mask silent workload creep and growing cognitive strain as 3/5
parallel, or reviving long-deferred tasks because AI could “handle them” in the background. They did this, in part, because they felt they had a “partner” that could help them move through their workload.
While this sense of having a “partner” enabled a feeling of momentum, the reality was 2/5
#EconSky
AI Doesn’t Reduce Work—It Intensifies It https://hbr.org/2026/02/ai-doesnt-reduce-work-it-intensifies-it
“AI introduced a new rhythm in which workers managed several active threads at once: manually writing code while AI generated an alternative version, running multiple agents in 1/5
disproportionately attract more educated and experienced workers
… stringent #RTO mandates may induce the most productive employees to leave firms that do not offer WFH."
#LaborMarkets
orig https://fediscience.org/@davdittrich/116048099955111626 4/4
understate #inequality, as the best-paid workers are also more likely to receive the WFH amenity.
… changes in WFH policies (e.g., through widely debated RTO mandates) could have important implications for the allocation of talent and for aggregate productivity: firms offering WFH 3/4
negotiation skills or bargaining power). Indeed, WFH was more prevalent for workers who already had high hourly wages before the pandemic, and was not associated with higher post-pandemic wage growth.
… in a world with more widespread #WFH, differences in hourly #wages may significantly 2/4
#EconSky
The Work-from-home Wage Premium https://www.frbsf.org/wp-content/uploads/wp2026-02.pdf
"… find that workers who work from home earn higher hourly wages than those who do not.
… premium is driven by selection on unobservable worker characteristics (which could include ability, 1/4
https://fediscience.org/@davdittrich/116014030137174545 2/2
How I Use Claude Code for Empirical Research https://causalinf.substack.com/p/claude-code-part-12-how-i-use-claude
Scott has a lot of good and useful ideas about how to use #claudeCode & Co. I like the referee #2 idea. There is more in his other posts.
#AI #llm
orig 1/2
#EconSky
The Legacy of Daniel Kahneman https://ejpe.org/journal/article/view/1075/753
by Gerd Gigerenzer
#BoundedRationality
#economics #Psychology #heuristics #Biases #statisticalThinking
orig https://fediscience.org/@davdittrich/116013924158823849
they are directly harming another human."
#economics #IntellectualProperty #ExperimentalEcon #ExperimentalLaw
orig https://fediscience.org/@davdittrich/116013622450354382 4/4
stolen so everyone take copies,” explicitly rejecting the application of “stolen” to discs.
… Humans can state that digital piracy is illegal and take measures to prevent it. However, it will be difficult to cause an individual engaging in piracy to feel guilty as they do when they believe 3/4
disc and can still consume the full value of it.
… Participants discuss discs often enough to reveal how they conceptualize the resource. In many instances, they articulate the positive-sum logic of zero-marginal-cost copying. For example, … farmer Almond reasons, “ok so disks cant be 2/4
#EconSky
Everyone Take Copies https://www.econlib.org/econlog/everyone-take-copies/
"… discs: Non-rivalrous goods are goods that can be used by multiple people without any loss to the other users. If participants exercise the ability to take a disc, then the original disc holder still has a 1/4
https://fediscience.org/@davdittrich/116013554710446301 5/5
40% chance of vanishing. It is more likely to be reorganized.
… Technology automates, accelerates or reduces the cost of specific tasks within a job, allowing employees to spend more time on higher-value activities. As a result, output expands and #wages often rise."
#LaborMarkets
orig 4/5
differently.
… software has automated large portions of bookkeeping and tax preparation without eliminating accountants, who have moved up the value chain toward advisory, forensic and judgment-intensive work.
… A job that scores as 40% “exposed” to AI in these rankings doesn’t have a 3/5