
Ricardo Rey-Sáez (β)

@ricardoreysaez

I'm a psychometrician in the experimental field, currently doing my PhD research on the Psychometric Properties of Experimental Tasks at the Universidad Autónoma de Madrid.

194
Followers
81
Following
53
Posts
18.11.2024
Joined

Latest posts by Ricardo Rey-Sáez (β) @ricardoreysaez

Reluctant to ReLU: Uncontrolled Connectivity Pruning Underlying Trainable Excitatory-Inhibitory Recurrent Neural Networks: https://osf.io/9uzhq

24.01.2026 17:40 👍 6 🔁 4 💬 0 📌 2

Paper alert! 📢 We just published a Registered Report Many-Labs study! 🌍🔬

I have to highlight the massive amount of work @aliciafrancomnez.bsky.social put into leading this. I feel truly honored to be part of this team and to learn from her example of exactly how science should be done.

24.01.2026 12:43 👍 3 🔁 0 💬 0 📌 0

Many thanks, Gidon! It’s an exciting time to work at this intersection (and to meet others doing the same!). I’ve bookmarked your paper! Fun fact: the researcher who inspired me to be a scientist worked in intelligence too, so I feel a special connection to the field. Hope you enjoy the preprint!

29.12.2025 19:31 👍 1 🔁 0 💬 0 📌 0

Thank you for reading through this long thread! All the data, scripts, and models needed to reproduce this work are available here: osf.io/5dutv/files/...

Feel free to reach out if you have any questions!

29.12.2025 17:54 👍 3 🔁 0 💬 0 📌 0

This represents a new framework where any hierarchical model used in experimental psychology, regardless of the measure (accuracies, RTs, EEG...), can be transformed into an HFM to (1) recover true correlations, (2) estimate common factors, and (3) refine experimental measures and tasks.

29.12.2025 17:54 👍 2 🔁 0 💬 1 📌 0

At this point, both contributions are exciting, but they are just the start. In our discussion, we explore the potential of HFMs for: (i) any other experimental effect, (ii) mapping different metrics (e.g., RTs and SDT d' measures) onto the same latent factor, and (iii) longitudinal effects.

29.12.2025 17:54 👍 2 🔁 0 💬 1 📌 0

While standardizing sounds simple, it was remarkably problematic for the EFA latent model. We had to develop a joint prior for communality and factor loadings that clears these hurdles without being a headache for users. See the preprint for more, but here is the figure for this joint prior!

29.12.2025 17:54 👍 1 🔁 0 💬 1 📌 0

In our empirical illustrations, we show how standardized loadings allow us to detect the presence of common factors. Look how different these posterior distributions are! Clearly, something happens with tasks from experiment 3 that doesn’t happen with experiments 1 and 2!

29.12.2025 17:54 👍 0 🔁 0 💬 1 📌 0

By standardizing the model, loadings are constrained between -1 and 1, and communality between 0 and 1. Priors on this scale are intuitive and easy to manipulate, making the model more transparent. Additionally, this allows us to set priors on random slope variances.

29.12.2025 17:54 👍 0 🔁 0 💬 1 📌 0
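[Editor's note: for intuition, here is one way the standardization can be written for a one-factor model. The notation below is assumed for illustration; see the preprint for the actual parameterization.]

```latex
% Unstandardized latent model for the true effect of subject j on task k:
%   Var(\eta_j) = 1, \quad Var(\epsilon_{jk}) = \psi_k
\theta_{jk} = \lambda_k \eta_j + \epsilon_{jk}
% Dividing by the total SD of the effect gives a standardized loading
% and a communality on intuitive, bounded scales:
\lambda_k^{*} = \frac{\lambda_k}{\sqrt{\lambda_k^2 + \psi_k}} \in [-1, 1],
\qquad h_k^2 = \left(\lambda_k^{*}\right)^2 \in [0, 1]
```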

Previous HFMs use unstandardized metrics, which is less intuitive: (1) you need priors for factor loadings in raw units (like milliseconds), making it hard to judge how informative they are, and (2) you cannot set priors on random slope variances, even when prior empirical knowledge is available.

29.12.2025 17:54 👍 0 🔁 0 💬 1 📌 0

To illustrate this, we fitted 59 Gaussian, ex-Gaussian, and shifted-lognormal multilevel models to 59 datasets (Flanker, Stroop, and Simon tasks). We then meta-analyzed every parameter to obtain robust prior knowledge for inhibition research!

pubmed.ncbi.nlm.nih.gov/40555897/

29.12.2025 17:54 👍 0 🔁 0 💬 1 📌 0
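[Editor's note: an illustrative sketch in Python, not the authors' R code. An ex-Gaussian RT is the sum of a Gaussian component (mu, sigma) and an exponential component (tau), which produces the right skew typical of empirical reaction times.]

```python
import numpy as np

rng = np.random.default_rng(1)
mu, sigma, tau = 0.45, 0.05, 0.15   # illustrative values, in seconds

# ex-Gaussian sample = Gaussian part + exponential part
rt = rng.normal(mu, sigma, 10_000) + rng.exponential(tau, 10_000)

print(rt.mean())                  # close to mu + tau = 0.60
print(rt.mean() > np.median(rt))  # right skew: the mean exceeds the median
```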

However, this doesn't apply to the rest of the model! Notice that hierarchical/multilevel/mixed models are the gold standard in experimental psychology, and we can use prior knowledge from single-task research to set anywhere from diffuse to informative priors for all hierarchical parameters.

29.12.2025 17:54 👍 2 🔁 0 💬 1 📌 0

The second goal, a standardized formulation, is a bit technical, but here is why it matters. These models are necessarily Bayesian, meaning you must set priors on factor loadings and other latent parameters. Since these models are new, the most reasonable prior for these parameters is "I have no idea".

29.12.2025 17:54 👍 1 🔁 0 💬 1 📌 0

Oh, and in the empirical illustrations we compare our HFMs with Diffusion Models (See the PPD plot!) and Gaussian hierarchical models with covariates. Our ex-Gaussian HFM consistently outperformed both in terms of predictive performance!

29.12.2025 17:54 👍 0 🔁 0 💬 1 📌 0

The preprint includes two empirical illustrations where we apply this model comparison. We also provide efficient R functions to implement PSIS-LOO and other robust versions (moment-matching and Mixture Importance Sampling) by computing log-densities outside Stan.

29.12.2025 17:54 👍 1 🔁 0 💬 1 📌 0
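[Editor's note: a minimal sketch of importance-sampling LOO computed from a pointwise log-likelihood matrix obtained outside the sampler. This is the naive IS version, not the authors' R functions; PSIS and moment matching add robustness on top of the same bookkeeping.]

```python
import numpy as np

def is_loo(log_lik):
    """Naive importance-sampling LOO. log_lik: (S draws, N observations)."""
    S = log_lik.shape[0]
    # stable logsumexp over draws of -log_lik
    m = (-log_lik).max(axis=0)
    lse = m + np.log(np.exp(-log_lik - m).sum(axis=0))
    # elpd_i = -log( (1/S) * sum_s 1 / p(y_i | theta_s) )
    return (-(lse - np.log(S))).sum()

# toy check: posterior draws for the mean of 50 standard-normal observations
rng = np.random.default_rng(0)
y = rng.normal(0.0, 1.0, 50)
draws = rng.normal(y.mean(), 1 / np.sqrt(len(y)), 2_000)
log_lik = -0.5 * (y[None, :] - draws[:, None]) ** 2 - 0.5 * np.log(2 * np.pi)
print(is_loo(log_lik))   # a negative summed elpd, as log densities here are < 0
```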

Take-home message: Individual differences and correlation estimates are highly model-dependent. You need a formal criterion to select the trial-level distribution. We used PSIS-LOO for this purpose, but other methods are available. In our simulation, LOO always recovered the true generative model.

29.12.2025 17:54 👍 0 🔁 0 💬 1 📌 0

We implemented both ex-Gaussian and shifted-lognormal HFMs, though any other distribution can be used. When fitting a Gaussian HFM to skewed data, the results can range from trivial to absolutely terrible: by ignoring skewness, in some cases, the estimated correlation is only half of its true value.

29.12.2025 17:54 👍 1 🔁 0 💬 1 📌 0

In our work, we extend HFMs in two key ways:

1. Incorporating skewed RT distributions to assess how much Gaussian HFM results are compromised when fitting non-normal data.
2. Implementing a fully standardized latent model, simplifying the way we evaluate common cognitive processes.

29.12.2025 17:54 👍 0 🔁 0 💬 1 📌 0

But unlike traditional psychometrics, we can specify any distribution at the trial level! While Mehrvarz & Rouder (and Rouder et al., see the link) originally assumed a Gaussian distribution for RTs, they also indicate that this can be adapted in a straightforward manner.

osf.io/preprints/ps...

29.12.2025 17:54 👍 1 🔁 0 💬 1 📌 0

Furthermore, this would allow us to study construct validity much like we do in psychometrics. We could refine experimental tasks to better capture these common cognitive processes, in the same way a psychometrician refines items to measure extraversion. This implies a new way to see our tasks!

29.12.2025 17:54 👍 0 🔁 0 💬 1 📌 0

While the focus is usually on true correlations, a psychometrician would likely redirect their attention to the latent factor itself and the factor loadings. These loadings allow us to evaluate which task best discriminates the common underlying process across all tasks, an interesting new question.

29.12.2025 17:54 👍 0 🔁 0 💬 1 📌 0

In both cases, the result was clear: one could recover the true correlation between experimental effects (Mehrvarz & Rouder, 2025) or between DDM parameters (Stevenson et al., 2025) with greater precision.

And this was where, once again, I thought back to what I knew about psychometrics.

29.12.2025 17:54 👍 0 🔁 0 💬 1 📌 0

The work was truly brilliant, but the story didn't end there. Simultaneously in Amsterdam, Niek Stevenson & colleagues were developing these same HFMs, but applied to cognitive process models like the Drift Diffusion Model! Again, you can find it here: psycnet.apa.org/record/2024-...

29.12.2025 17:54 👍 0 🔁 0 💬 1 📌 0

Mahbod Mehrvarz and Jeff Rouder published a wonderful paper introducing "Hierarchical Factor Models", the same model shown in figure 1C previously. These models bridge the gap between hierarchical models and EFA from psychometrics. You can find it here:
pubmed.ncbi.nlm.nih.gov/39825164/

29.12.2025 17:54 👍 0 🔁 0 💬 1 📌 0

At the same time, Miguel made it possible for Alicia and me to attend the Bayesian Cognitive Modeling course! There, I finally gained the skills needed for this work.

Ironically, it was then that I realized others were developing HFMs at that exact same moment.

jasp-stats.org/jags-workshop/

29.12.2025 17:54 👍 0 🔁 0 💬 1 📌 0

It wasn't clear how to achieve this using the frequentist statistics I knew, but everything became possible once I decided to succumb to the dark side of probability: Bayesian statistics.

And that's where Javier Revuelta enters the scene, the first Bayesian psychometrician I ever met.

29.12.2025 17:54 👍 1 🔁 0 💬 1 📌 0

I shared this idea with Miguel and @aliciafrancomnez.bsky.social, my main support for all my experimental research questions. They loved it, but two years ago it was like: "Okay, sounds great! But... how?".

To be honest, I didn’t even know where to start.

29.12.2025 17:54 👍 0 🔁 0 💬 1 📌 0

In other words, hierarchical/multilevel/mixed models and SEM models have had a baby. But my goal was to ensure generalized hierarchical/multilevel/mixed models and SEM had a baby, too!

And, as I realized later on, everything gets much more complicated once you step outside the Gaussian world.

29.12.2025 17:54 👍 0 🔁 0 💬 1 📌 0

But notice that this path diagram assumes Gaussian RTs! And we all know that RTs are anything but Gaussian... In fact, other distributions, like the ex-Gaussian or the shifted-lognormal, better reflect the empirical shape of reaction times.

29.12.2025 17:54 👍 0 🔁 0 💬 1 📌 0

This makes the second key insight easy to grasp: when I talk about Hierarchical Factor Models, I am essentially describing a hierarchical/multilevel/mixed model where we include a common factor as a predictor of these true scores per task inside the same model.

29.12.2025 17:54 👍 0 🔁 0 💬 1 📌 0
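[Editor's note: a hypothetical generative sketch of that idea in Python, with illustrative names and values. Each subject j has a factor score eta_j; the true per-task effect is lambda_k * eta_j plus a residual; observed effects add trial-level noise on top, which attenuates the correlations a naive two-step analysis would report.]

```python
import numpy as np

rng = np.random.default_rng(42)
J, T = 500, 25                  # subjects, trials per subject and condition
lam1, lam2 = 0.06, 0.05         # factor loadings for tasks 1 and 2 (seconds)
psi = 0.02                      # residual SD of the true effects

eta = rng.normal(0, 1, J)                     # common factor scores
theta1 = lam1 * eta + rng.normal(0, psi, J)   # true effect, task 1
theta2 = lam2 * eta + rng.normal(0, psi, J)   # true effect, task 2

# observed effects: true effect + trial noise (sigma = 0.2 s, averaged over T trials)
obs1 = theta1 + rng.normal(0, 0.2 / np.sqrt(T), J)
obs2 = theta2 + rng.normal(0, 0.2 / np.sqrt(T), J)

true_r = np.corrcoef(theta1, theta2)[0, 1]
obs_r = np.corrcoef(obs1, obs2)[0, 1]
print(true_r, obs_r)   # the observed correlation is clearly attenuated
```

Fitting the factor inside the hierarchical model, as the HFM does, is what recovers the true correlation instead of the attenuated one.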