Is the meetup full? At least based on the meetup.com site, there is a waiting list as large as the actual event.
Abstract

Introduction: A key step in the Bayesian workflow for model building is the graphical assessment of model predictions, whether drawn from the prior or posterior predictive distribution. The goal of these assessments is to identify whether the model is a reasonable (and ideally accurate) representation of the domain knowledge and/or observed data. Many commonly used visual predictive checks can be misleading if their implicit assumptions do not match reality, so there is a need for more guidance on selecting, interpreting, and diagnosing appropriate visualizations. As a visual predictive check can itself be viewed as a model fit to data, assessing when this model fails to represent the data is important for drawing well-informed conclusions.

Demonstration: We present recommendations for appropriate visual predictive checks for observations that are continuous, discrete, or a mixture of the two. We also discuss diagnostics to aid in the selection of visual methods, specifically for detecting an incorrect assumption of continuously distributed data: identifying when data are likely to be discrete or to contain discrete components, detecting and estimating possible bounds in the data, and assessing the goodness-of-fit of density plots made through kernel density estimates.

Conclusion: We offer recommendations and diagnostic tools to mitigate ad-hoc decision-making in visual predictive checks. These contributions aim to improve the robustness and interpretability of Bayesian model criticism practices.
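To make the "incorrect continuity assumption" idea concrete, here is a minimal sketch of the kind of pre-plot diagnostic the abstract describes: before reaching for a KDE density overlay, check whether the data look discrete or bounded. The function name and the thresholds are my own illustrative choices, not the paper's method.

```python
import numpy as np

def check_continuous_assumptions(y, dup_threshold=0.1):
    """Heuristic checks before choosing a KDE-based density plot.

    Flags likely discreteness (many exactly repeated values) and
    possible hard bounds (many observations exactly at min or max).
    Thresholds are illustrative, not taken from the paper.
    """
    y = np.asarray(y, dtype=float)
    n = y.size
    n_unique = np.unique(y).size
    dup_fraction = 1.0 - n_unique / n      # high -> repeated values -> likely discrete
    likely_discrete = dup_fraction > dup_threshold
    # More than one observation exactly at the sample extremes hints at a bound.
    at_min = np.mean(y == y.min())
    at_max = np.mean(y == y.max())
    return {
        "likely_discrete": likely_discrete,
        "dup_fraction": dup_fraction,
        "possible_lower_bound": y.min() if at_min > 1 / n else None,
        "possible_upper_bound": y.max() if at_max > 1 / n else None,
    }
```

For example, zero-censored data would be flagged with a possible lower bound at 0, suggesting a KDE with boundary correction (or a different plot) instead of a plain density overlay.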
New paper Säilynoja, Johnson, Martin, and Vehtari, "Recommendations for visual predictive checks in Bayesian workflow" teemusailynoja.github.io/visual-predi... (also arxiv.org/abs/2503.01509)
This allows us to fit the model on the two datasets (observed and posterior predictive) simultaneously, to obtain samples from the augmented posterior.
I'm sorry for the delay, still making a habit of this platform…
I hope this clarifies the idea a bit:
Denote x_o = x and x_i = y. As x and y are independent conditional on θ, we have

p(θ|x, y) = p(θ, x, y)/p(x, y)
∝ p(θ, x, y)
= p(x, y|θ)p(θ)
= p(x|θ, y)p(y|θ)p(θ)
= p(x|θ)p(y|θ)p(θ).
Hello, thank you for the kind words.
You first run posterior inference to obtain draws from the posterior using just the dataset of interest. Then, for each posterior draw, you generate a posterior predictive dataset and rerun inference to obtain posterior draws conditioned on both the observed data and that predictive dataset.
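The loop described above can be sketched end to end with a toy conjugate model, so that each "inference" step is exact and the ranks can be computed in a few lines. The normal model and all names here are my own illustrative stand-ins (in practice each inference would be, e.g., an MCMC run), not code from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy model: y_i ~ Normal(theta, sigma) with known sigma and a conjugate
# Normal(mu0, tau0) prior, so every posterior is available in closed form.
sigma = 1.0
mu0, tau0 = 0.0, 2.0

def posterior_params(y):
    """Exact conjugate posterior mean and sd of theta given data y."""
    prec = 1 / tau0**2 + y.size / sigma**2
    mean = (mu0 / tau0**2 + y.sum() / sigma**2) / prec
    return mean, np.sqrt(1 / prec)

y_obs = rng.normal(0.5, sigma, size=20)

# Step 1: posterior draws conditioned on the observed data only.
m, s = posterior_params(y_obs)
theta_draws = rng.normal(m, s, size=200)

# Step 2: for each draw, simulate a predictive dataset, rerun inference on
# the augmented data (observed + predictive), and rank the draw among the
# augmented posterior draws.
n_aug_draws = 99
ranks = []
for theta in theta_draws:
    y_rep = rng.normal(theta, sigma, size=y_obs.size)
    m_a, s_a = posterior_params(np.concatenate([y_obs, y_rep]))
    aug = rng.normal(m_a, s_a, size=n_aug_draws)
    ranks.append(int(np.sum(aug < theta)))  # rank in 0..n_aug_draws

ranks = np.array(ranks)
# If model and inference are self-consistent, these ranks are ~uniform
# on 0..99, which is what the calibration check inspects.
```

The point of the conjugate setup is only that `posterior_params` is exact; swapping it for an approximate sampler is precisely what the check is meant to validate.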
Title: Posterior SBC: Simulation-Based Calibration Checking Conditional on Data

Authors: Teemu Säilynoja, Marvin Schmitt, Paul Bürkner, Aki Vehtari

Abstract: Simulation-based calibration checking (SBC) refers to the validation of an inference algorithm and model implementation through repeated inference on data simulated from a generative model. In the original and commonly used approach, the generative model uses parameters drawn from the prior, and thus the approach is testing whether the inference works for simulated data generated with parameter values plausible under that prior. This approach is natural and desirable when we want to test whether the inference works for a wide range of datasets we might observe. However, after observing data, we are interested in answering whether the inference works conditional on that particular data. In this paper, we propose posterior SBC and demonstrate how it can be used to validate the inference conditionally on observed data. We illustrate the utility of posterior SBC in three case studies: (1) a simple multilevel model; (2) a model that is governed by differential equations; and (3) a joint integrative neuroscience model which is approximated via amortized Bayesian inference with neural networks.
If you know simulation based calibration checking (SBC), you will enjoy our new paper "Posterior SBC: Simulation-Based Calibration Checking Conditional on Data" with Teemu Säilynoja, @marvinschmitt.com and @paulbuerkner.com
arxiv.org/abs/2502.03279 1/7