
Kert Viele

@kertviele

Director of Research, Berry Consultants. Clinical trial designer specializing in Bayesian adaptive trials. Fellow of the American Statistical Association.

288 Followers · 86 Following · 268 Posts · Joined 22.10.2023

Latest posts by Kert Viele @kertviele

I'm particularly nervous about trials which have no concurrent controls. Augmenting with external data (preferably from randomized trials, but possibly RWE) presents a safer risk profile, though it certainly still may not be safe enough in many cases. Not saying never (rare diseases, huge effects, etc.).

10.03.2026 19:35 πŸ‘ 1 πŸ” 0 πŸ’¬ 1 πŸ“Œ 0

I have no inside knowledge, but there is a lot of pressure on regulators to have programs investigating RWE, and thus there are incentives to publicize where it is being used, but randomization remains the overwhelming norm, at least in the trials I have seen and am involved with.

10.03.2026 19:35 πŸ‘ 1 πŸ” 0 πŸ’¬ 2 πŸ“Œ 0

There is a lot of research here, and several high profile uses, but by and large I don't see a high percentage of trials using RWE. Usually regulators have strong preferences for randomization (as do I) and are particularly concerned about having no concurrent controls.

10.03.2026 19:35 πŸ‘ 1 πŸ” 0 πŸ’¬ 1 πŸ“Œ 0
[Podcast: JAMAevidence JAMA Guide to Statistics and Methods, March 5, 17 min — "Platform Clinical Trials for the Efficient Evaluation of Multiple..."]

JAMA Statistical Editor @rogerjlewis.bsky.social, MD, PhD, and Steve Webb, MBBS, MPH, PhD, discuss the key features of platform trials, including adaptive structures, shared control groups, and domain-based randomization.

🎧 Listen now: ja.ma/3Pk33oU

05.03.2026 17:00 πŸ‘ 5 πŸ” 5 πŸ’¬ 0 πŸ“Œ 0

I have great memories of X-Wing vs. TIE Fighter. My friend ran a small computer company and had a good internal network (a big deal in the late '90s), and we would take a group in to play on the weekend. I still remember the group cheer when we destroyed Tortali after about 300 failed attempts.

30.01.2026 19:45 πŸ‘ 2 πŸ” 0 πŸ’¬ 1 πŸ“Œ 0

This workplace has had 0 days since we were told the ICMJE guidelines for authorship are...optional...with respect to our employees, and they may not be included in a paper regardless of their level of involvement.

29.01.2026 16:20 πŸ‘ 1 πŸ” 0 πŸ’¬ 0 πŸ“Œ 0

Multiplicities still matter but are usually handled through correlated priors (if the drug didn’t work on 8 endpoints, do you really believe that signal on the 9th? If the priors are correlated you automatically discount it).

28.01.2026 07:21 πŸ‘ 1 πŸ” 0 πŸ’¬ 1 πŸ“Œ 0
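The discounting mechanism in the post above can be sketched numerically. A toy example of my own (all numbers invented, not from the thread): nine endpoint effects get an exchangeable multivariate normal prior; eight endpoints show nothing and the ninth shows an apparent signal. With prior correlation, the posterior for the ninth endpoint is automatically pulled toward the null consensus of the other eight.

```python
import numpy as np

def posterior_mean(y, rho, tau2=1.0, sigma2=1.0):
    """Posterior mean of endpoint effects theta under
    theta ~ N(0, Sigma), y | theta ~ N(theta, sigma2 * I),
    with exchangeable prior Sigma = tau2 * ((1 - rho) I + rho J)."""
    k = len(y)
    Sigma = tau2 * ((1 - rho) * np.eye(k) + rho * np.ones((k, k)))
    return Sigma @ np.linalg.solve(Sigma + sigma2 * np.eye(k), y)

# Eight null endpoints, one apparent signal (z around 2).
y = np.array([0.0] * 8 + [2.0])

independent = posterior_mean(y, rho=0.0)[-1]  # no discounting across endpoints
correlated = posterior_mean(y, rho=0.8)[-1]   # eight nulls drag the ninth down

print(f"posterior mean for endpoint 9: rho=0 -> {independent:.2f}, "
      f"rho=0.8 -> {correlated:.2f}")
```

With these made-up numbers the estimate for the ninth endpoint drops from 1.0 under independence to roughly 0.5 under correlation: the eight null results discount the ninth signal with no explicit multiplicity correction.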

Certainly the design-invariant aspects of Bayes make it a great fit for adaptive designs. After you have the data you don't need to worry about many of these issues. You do still need to worry about how to design, but it's an optimization problem (interims have costs, sample size vs. information, etc.).

28.01.2026 07:19 πŸ‘ 2 πŸ” 0 πŸ’¬ 1 πŸ“Œ 0

Most commonly this is attempting to borrow data where the FDA views the data as a poor match to the trial. In Bayesian terms, the FDA doesn't accept that the data justify the proposed informative prior; it's not "we won't take Bayes".

22.01.2026 18:03 πŸ‘ 0 πŸ” 0 πŸ’¬ 0 πŸ“Œ 0

I do find the "FDA will now take Bayesian" takes...interesting...since FDA has been taking Bayesian for years. Generally when someone says "FDA said no to my Bayesian design"...if you dig carefully, FDA didn't say no to Bayes, but to something else....

22.01.2026 18:03 πŸ‘ 0 πŸ” 0 πŸ’¬ 1 πŸ“Œ 0

is the possibility of gaining agreement on priors and utilities and decision rules without any frequentist guardrails. The guidance is vague on this, I assume by design in the hope of experience eventually guiding processes. They don't have m/any existing examples to draw on.

22.01.2026 18:03 πŸ‘ 1 πŸ” 0 πŸ’¬ 1 πŸ“Œ 0

The FDA has long accepted trials with Bayesian machinery, including borrowing of information and hierarchical modeling, especially when some kind of type 1 error control is also present. In many ways this guidance simply codifies existing practice. What's potentially new...

22.01.2026 18:03 πŸ‘ 2 πŸ” 1 πŸ’¬ 1 πŸ“Œ 0

Clearly if the parameters are generated from a different prior I could lose or gain money with this bet depending on the details of how the parameters are generated (the guidance gets at this through talking about analysis and design priors).

15.01.2026 17:33 πŸ‘ 1 πŸ” 0 πŸ’¬ 0 πŸ“Œ 0

If we do this repeatedly, and the parameters are generated from my prior, then this bet does indeed break even long term. I'm not "biased high". And importantly it does NOT matter how I designed the trial, nor does it matter which interim we look at etc.

15.01.2026 17:33 πŸ‘ 2 πŸ” 0 πŸ’¬ 1 πŸ“Œ 0

Agree with Frank it's an odd statement. Suppose I run a sequential trial and claim superiority (at any interim), and then at that point I'm willing to make an even money bet the parameter is above my posterior median. If I'm systematically biased high, you should take the bet...

15.01.2026 17:33 πŸ‘ 1 πŸ” 0 πŸ’¬ 1 πŸ“Œ 0
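The break-even claim in this thread can be checked by simulation. A minimal sketch under assumptions of my own (normal-normal model, five interims, a 0.975 posterior threshold — none of these specifics are from the thread): draw θ from the analysis prior, stop at the first interim showing superiority, then check how often θ lands above the posterior median at stopping.

```python
import numpy as np
from math import erf, sqrt

rng = np.random.default_rng(0)

def phi(z):
    """Standard normal CDF."""
    return 0.5 * (1 + erf(z / sqrt(2)))

def superiority_median(theta, interims=(10, 20, 30, 40, 50), threshold=0.975):
    """Sequential trial with x_i ~ N(theta, 1) and prior theta ~ N(0, 1).
    At each interim, claim superiority if Pr(theta > 0 | data) > threshold.
    Returns the posterior median at the stopping interim, or None."""
    x = rng.normal(theta, 1.0, size=max(interims))
    for n in interims:
        post_var = 1.0 / (n + 1)              # conjugate normal-normal update
        post_mean = n * x[:n].mean() * post_var
        if phi(post_mean / sqrt(post_var)) > threshold:
            return post_mean                  # normal posterior: median = mean
    return None

wins = bets = 0
for _ in range(5000):
    theta = rng.normal()                      # parameter drawn from the prior
    median = superiority_median(theta)
    if median is not None:                    # bet only when superiority claimed
        bets += 1
        wins += theta > median

print(f"won {wins} of {bets} even-money bets ({wins / bets:.3f})")
```

The win fraction sits near 1/2: conditional on stopping with any given data set, θ exceeds the posterior median with probability exactly 1/2, so the stopping rule drops out.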

It's unclear how this will happen in practice. There will need to be a lot of thought into how to review trials at scale, while ensuring consistency (e.g. under what conditions does sponsor A get to use a more optimistic prior than sponsor B?). I favor this work happening, but it is work, and new.

15.01.2026 17:20 πŸ‘ 2 πŸ” 0 πŸ’¬ 0 πŸ“Œ 0

What is mostly new (hinted at previously, but nowhere near as explicitly) is designs which completely ignore frequentist constraints. Essentially you will need to convince the FDA to agree with your prior and decision utilities. To my knowledge this happens very rarely now.

15.01.2026 17:20 πŸ‘ 3 πŸ” 0 πŸ’¬ 1 πŸ“Œ 0

There is a section of this document covering type 1 error-controlled but Bayesian-driven clinical trials. This codifies what has already been going on (good to see it in one place and endorsed, but agree it's not new). Same with borrowing of information and platform trials with nonconcurrent controls.

15.01.2026 17:20 πŸ‘ 2 πŸ” 0 πŸ’¬ 1 πŸ“Œ 0

Bayesian designs have certainly been happening, first in CDRH under Greg Campbell in the 2000s, and later more accepted in CDER/CBER by the 2010s. We touch roughly 100-200 clinical trials a year, virtually all Bayesian in some form. As for the specifics here...

15.01.2026 17:20 πŸ‘ 2 πŸ” 0 πŸ’¬ 1 πŸ“Œ 0

The general inference method above gets used a lot in safety monitoring (e.g. look at all adverse events and estimate the proportion in the active arm), but there it isn't connected to a VE-like quantity.

10.01.2026 03:36 πŸ‘ 2 πŸ” 0 πŸ’¬ 0 πŸ“Œ 0
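That proportion-based safety machinery is easy to sketch. A hypothetical example of my own (counts, prior, and cutoff all invented): under 1:1 randomization, let p = Pr(active arm | adverse event); with a Beta(1, 1) prior, the posterior probability that p exceeds 1/2 flags an excess of events on the active arm.

```python
import random

random.seed(1)

def prob_excess(n_active, n_total, cutoff=0.5, draws=200_000):
    """Posterior Pr(p > cutoff) for p = Pr(active arm | adverse event),
    with a Beta(1, 1) prior and n_active of n_total events on active.
    Monte Carlo over the Beta(1 + n_active, 1 + n_total - n_active) posterior."""
    a = 1 + n_active
    b = 1 + n_total - n_active
    return sum(random.betavariate(a, b) > cutoff for _ in range(draws)) / draws

# Hypothetical: 14 of 20 adverse events occurred on the active arm.
print(f"Pr(p > 0.5 | data) = {prob_excess(14, 20):.3f}")
```

A 14/20 split gives a posterior probability of excess around 0.97, while a 10/20 split sits at 0.5 by symmetry.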

I don’t know but I would guess that too. Two different camps solving a similar problem and generating their own separate inertias?

10.01.2026 03:33 πŸ‘ 2 πŸ” 0 πŸ’¬ 1 πŸ“Œ 0
[Image: table of Pfizer's COVID vaccine trial adaptive design splits]

These are Pfizer's splits for their COVID vaccine trial adaptive design (they needed to show VE>30%, or p<0.41 ish)

10.01.2026 03:15 πŸ‘ 1 πŸ” 0 πŸ’¬ 1 πŸ“Œ 0

So it's direct in that you aren't estimating Pr(infection | arm) separately for each arm, but inference is on a function of VE rather than VE itself (p is estimated through some standard method for estimating a proportion). You test if p < 0.5 (or something smaller for super-superiority).

10.01.2026 03:15 πŸ‘ 2 πŸ” 0 πŸ’¬ 1 πŸ“Œ 0

Depends on "directly". Often vaccine trials will enroll a bunch of people, wait for infections, and then base inferences on the proportion of infections that are in the treatment group (ideally small). So inference is really on p=Pr(active arm | infection) where p=(1-VE)/(2-VE) is a function of VE.

10.01.2026 03:15 πŸ‘ 1 πŸ” 0 πŸ’¬ 3 πŸ“Œ 0
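The VE-to-p mapping in this thread is simple arithmetic (my own sketch, not Pfizer's actual analysis): with 1:1 randomization the active-arm infection rate scales by (1 − VE), so p = Pr(active | infection) = (1 − VE)/(2 − VE), and the VE > 30% bar becomes p < 0.7/1.7 ≈ 0.412, the "p < 0.41 ish" above.

```python
def p_from_ve(ve):
    """p = Pr(infection is on the active arm) under 1:1 randomization:
    active rate = (1 - VE) * control rate, so p = (1 - VE) / (2 - VE)."""
    return (1 - ve) / (2 - ve)

def ve_from_p(p):
    """Inverse mapping: recover VE from the infection-split proportion p."""
    return (1 - 2 * p) / (1 - p)

print(p_from_ve(0.30))   # VE = 30% bar -> p threshold of 0.7/1.7 ~ 0.412
print(ve_from_p(0.05))   # only 5% of infections on active -> VE ~ 0.947
```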

This reminded me of an unbiasedness example. Coin has Pr(heads)=p. If your first flip is tails, stop (N=1). If your first flip is heads, flip it a million times. The only unbiased estimate of p is to use the first flip only, ignoring 1 million flips even if you have them. That feels…problematic.

09.01.2026 14:36 πŸ‘ 2 πŸ” 1 πŸ’¬ 1 πŸ“Œ 0
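The bias in this design can be computed exactly. A small sketch using M extra flips instead of a million: the first-flip estimator has expectation p for every p, while the natural "mean of all flips" estimator has expectation p(1 + Mp)/(1 + M), roughly p² for large M.

```python
from math import comb

def expectations(p, M=10):
    """Design: flip once; if tails stop (N = 1), if heads flip M more times.
    Returns the exact expectations of (a) the first-flip estimator and
    (b) the mean of all flips, under Pr(heads) = p."""
    e_first = p * 1 + (1 - p) * 0          # first flip alone: unbiased
    e_mean = 0.0                           # tails branch contributes 0
    for s in range(M + 1):                 # s heads among the M extra flips
        pmf = comb(M, s) * p**s * (1 - p)**(M - s)
        e_mean += p * pmf * (1 + s) / (1 + M)
    return e_first, e_mean

for p in (0.2, 0.5, 0.8):
    e1, em = expectations(p)
    print(f"p={p}: E[first flip]={e1:.3f}, E[overall mean]={em:.3f}")
```

The overall mean is biased low at every p (e.g. at p = 0.5 with M = 10 its expectation is 3/11 ≈ 0.273), so unbiasedness really does force you to throw away the extra flips.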

What do you do with non-binding futility, where the sponsor need not follow the futility rule (regulators often require treating futility as non-binding in type 1 error calculations)? Do you handicap the sponsor's potential behavior? (I assume you would do the adjustment under binding futility, but futility often isn't REALLY binding.)

09.01.2026 14:36 πŸ‘ 2 πŸ” 0 πŸ’¬ 1 πŸ“Œ 0

…With good futility rules, usually there is a very limited chance of stopping for futility and then reaching success, so perhaps that limits how big an adjustment would be needed in successful trials? (I have NOT looked at this numerically).

09.01.2026 14:36 πŸ‘ 0 πŸ” 0 πŸ’¬ 1 πŸ“Œ 0

I tend to be Bayesian and do not adjust in this way (the prior matters, not the design). But with my frequentist hat on, I would assume you do need to adjust. I've seen papers on this for Simon two-stage designs. I'd be curious how big the adjustment is, particularly for successful trials…

09.01.2026 14:36 πŸ‘ 0 πŸ” 0 πŸ’¬ 1 πŸ“Œ 0
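Simulation gives a feel for how big the adjustment can be in successful trials. A hypothetical two-look design of my own (boundaries and sample sizes invented, with efficacy stopping rather than futility): among trials that claim success, the naive sample mean at stopping overestimates the true effect.

```python
import numpy as np

rng = np.random.default_rng(42)

def successful_estimates(theta, sims=20_000, n1=50, n2=100, z1=2.80, z2=1.98):
    """Two-look trial on x_i ~ N(theta, 1): stop early for efficacy if the
    interim z-statistic exceeds z1, else claim success at the final look if
    z exceeds z2 (hypothetical, roughly O'Brien-Fleming-shaped boundaries).
    Returns the naive estimates (sample mean at stopping) from successful
    trials only."""
    x = rng.normal(theta, 1.0, size=(sims, n2))
    m1 = x[:, :n1].mean(axis=1)               # interim mean
    m2 = x.mean(axis=1)                       # final mean
    early = m1 * np.sqrt(n1) > z1             # early efficacy stop
    late = ~early & (m2 * np.sqrt(n2) > z2)   # success at the final look
    return np.concatenate([m1[early], m2[late]])

for theta in (0.2, 0.3):
    est = successful_estimates(theta)
    print(f"theta={theta}: mean naive estimate among successes = {est.mean():.3f}")
```

In runs like this the conditional overestimation is largest when power is modest; as power approaches one, nearly every trial succeeds and the selection effect fades.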

It could. It could also indicate the directions are not of equal value. Showing a drug works has a different ramification than showing it causes harm (and I would argue in most cases it is unethical to run a trial long enough to show harm).

07.01.2026 19:29 πŸ‘ 3 πŸ” 0 πŸ’¬ 1 πŸ“Œ 0

In some situations you may actually want two trials (different populations, etc.). Then requiring one trial loses meaningful information.

05.12.2025 03:35 πŸ‘ 2 πŸ” 0 πŸ’¬ 1 πŸ“Œ 0