Microbenchmark, but basically the difference between precomputing a column-sparse matrix before passing it to the function vs. computing it within. This may sound silly, but I had written my own version of sparse linear algebra, so that is why it wasn't clear to me.
Two days trying to optimize a function. Gained 6 milliseconds for a 750 x 750 matrix computation. The real lesson is better understanding what the machine is already doing well. Next time I will know what doesn't need to be improved.
If you are doing optimization on the log scale of a parameter, then you need the Jacobian to find the posterior mode. If it's a penalty term, then you do not. That, in my experience, is the difference.
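For intuition, here is a minimal sketch of why the Jacobian matters (mine, not from the thread, and in Python just as a language-agnostic illustration): take an assumed exponential(1) density on a positive parameter phi and optimize on theta = log(phi). Without the change-of-variables term the "mode" runs off to the boundary; with it, you recover the correct mode.

```python
import math

# Hypothetical example: p(phi) = exp(-phi) for phi > 0, optimized on
# the log scale theta = log(phi).

def neg_log_density_no_jacobian(theta):
    phi = math.exp(theta)
    return phi  # -log p(phi) = phi, up to a constant

def neg_log_density_with_jacobian(theta):
    phi = math.exp(theta)
    # Change of variables: p(theta) = p(phi) * |d phi / d theta| = p(phi) * phi,
    # so -log p(theta) = phi - theta.
    return phi - theta

# Crude grid search over theta in [-5, 5) to locate each "mode".
grid = [i / 1000 - 5 for i in range(10000)]
mode_no_jac = min(grid, key=neg_log_density_no_jacobian)
mode_with_jac = min(grid, key=neg_log_density_with_jacobian)

# Without the Jacobian the optimum runs to the grid edge (theta -> -inf,
# i.e. phi -> 0); with the Jacobian the mode is at theta = 0, i.e. phi = 1.
print(mode_no_jac)                 # -5.0 (the grid boundary)
print(math.exp(mode_with_jac))     # 1.0
```

A penalized-likelihood reading of the same term would just add -log(prior) to the objective with no Jacobian, which is why the two conventions give different optima.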
No. Just typically data-constrained and using maximum likelihood with priors as penalties, to constrain estimates to reasonable values.
Check out our vignette for details on how to use it: r-nimble.org/vignettes/ni...
In fisheries it is the Wild West for what counts as a penalty term vs. a prior, the main difference being whether or not a Jacobian is included for any transformations. See optimize in Stan, which gets specific about whether to use Jacobians or treat the prior as a penalty.
nimbleQuad is now out on CRAN! #rstats bsky.app/profile/cran...
This is the dense linear algebra version, which is fine for a small number of states. Thanks to #RTMB for their version.
For anyone trying to do matrix exponentials in #NIMBLE, here is a version I wrote that hacks sparse linear algebra for solving exp(A) %*% v. I'll put it into the package in a future update.
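This isn't the NIMBLE code from the post, but the underlying idea can be sketched in a few lines (Python here, purely for a language-agnostic illustration): exp(A) %*% v can be approximated with a truncated Taylor series that only ever needs matrix-vector products, never the full matrix exponential, which is what makes a sparse A cheap.

```python
import math

def expm_action(A, v, n_terms=30):
    """Approximate exp(A) @ v via a truncated Taylor series.

    exp(A) v = v + A v + A^2 v / 2! + ... ; each term is just one
    matrix-vector product, so sparse A stays cheap. A is a dense
    list-of-lists matrix here purely for illustration.
    """
    n = len(v)
    result = list(v)   # k = 0 term
    term = list(v)
    for k in range(1, n_terms):
        # term <- A @ term / k  (builds A^k v / k! incrementally)
        term = [sum(A[i][j] * term[j] for j in range(n)) / k for i in range(n)]
        result = [result[i] + term[i] for i in range(n)]
    return result

# Sanity check on a diagonal matrix, where exp(A) is exp of the diagonal:
A = [[1.0, 0.0], [0.0, 2.0]]
v = [1.0, 1.0]
print(expm_action(A, v))  # approximately [e, e^2]
```

Real implementations (e.g. the scaling-and-squaring or uniformization approaches) add safeguards for large-norm A; this is only the core trick.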
Discrete movement in continuous time. The data look like the below: blue points are detectors that detected an animal; red lines are the latent movement path of the animal.
More encouraging results with a few more animals (N=50)
First simulated run on a new SCR model that does animal movement in continuous time using an OU process. Only 1000 iterations of the MCMC, but very encouraging results!
Just found out about RTMB's matrix exponential trick expmAv for solving exp(A) %*% v. It is based on a really nice paper called Direct Statistical Inference for Finite Markov Jump Processes via the Matrix Exponential by Chris Sherlock. Super fast for problems with more than 2000 states!!!!
Anyone using the polyagamma sampler in nimble for logistic regression? Going to update it soon to include negative binomial and multinomial distributions. Would be keen to see how it is being used and if there are any special requests for functionality.
Section 12.3 of the user manual, "Advanced user-defined functions and distributions", has the basic details laid out.
Not sure. It should be added if not. Here is a basic example that I have not actually run, so it may need to be edited. The data input in this case would be the animal ID, to link it to the cached count. Let me know if it doesn't run.
You can cache data and constants in setup code for distribution functions now. Speeds up compiling and reduces memory a lot!
Excited that our new paper introducing the spatially balanced sampling R package 'spbal' is finally available. It can draw BAS, HIP, and Halton samples. With more to be added in the near future. It's on CRAN as well as GitHub github.com/docgovtnz/sp...
onlinelibrary.wiley.com/doi/full/10....
Alternatively, in GLMMs, I have rarely (once) had anyone worry that the marginal mean and the conditional mean are not the same. I appreciate the discussion of this topic by Gory et al. 2020, "A class of generalized linear mixed models adjusted for marginal interpretability", and am keen to hear thoughts.
When log(R) ~ Normal(mu, s^2), then E(R) = exp(mu + s^2/2). So instead, I often see log(R) ~ Normal(mu - s^2/2, s^2), so that E(R) = exp(mu).
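The identity is easy to check by simulation. A quick sketch (mine, not from the thread; Python just for illustration, with arbitrary mu and s):

```python
import math
import random

# If log(R) ~ Normal(mu, s^2), then E(R) = exp(mu + s^2/2).
# Shifting the mean to mu - s^2/2 makes E(R) = exp(mu).
random.seed(1)
mu, s = 0.5, 0.8
n = 200_000

uncorrected = [math.exp(random.gauss(mu, s)) for _ in range(n)]
corrected = [math.exp(random.gauss(mu - s**2 / 2, s)) for _ in range(n)]

print(sum(uncorrected) / n)  # close to exp(mu + s^2/2) ~ 2.27
print(sum(corrected) / n)    # close to exp(mu) ~ 1.65
```

The gap between the two means grows with s^2, which is why the correction matters most for noisy recruitment series.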
But then they add additional complexity with random effects, and those variance terms aren't included in the bias correction. So, what is actually happening?
Does anyone in Fisheries find the addition of a "bias correction" term used inconsistently and without much justification?
Teaching a NIMBLE course at DFO today. If you are interested in an intro to NIMBLE with applications in fisheries the material is all online at github.com/nimble-train...
Page limits have really improved my writing. Makes me ask if each sentence brings me joy.
Working on quadrature in NIMBLE (future package - nimbleQuad). Here is a nice trick for those who have to sum up probabilities on the real (not log) scale and worry about very small probabilities and numerical accuracy. Probably a niche crowd... :)
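The post doesn't show the trick itself; one standard tool for exactly this situation is the log-sum-exp identity, sketched below (my sketch, in Python for a language-agnostic illustration, not necessarily the exact trick from the post): keep everything on the log scale and factor out the largest term before exponentiating.

```python
import math

def log_sum_exp(log_probs):
    """Sum probabilities supplied on the log scale without underflow.

    log(sum_i exp(x_i)) = m + log(sum_i exp(x_i - m)) with m = max(x_i),
    so every exponential stays in a safe numerical range.
    """
    m = max(log_probs)
    return m + math.log(sum(math.exp(lp - m) for lp in log_probs))

# Each probability here underflows to 0.0 if exponentiated naively...
log_probs = [-1000.0, -1001.0, -1002.0]
print(math.exp(log_probs[0]))   # 0.0: underflow

# ...but the log of their sum is still computed accurately.
print(log_sum_exp(log_probs))   # about -999.59
```

Summing weighted quadrature-node likelihoods is a typical place this shows up, since the individual terms can be far below the smallest representable double.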