The flexible interface¶
In the previous tutorial, we demonstrated how sbi can be used to run simulation-based inference with just a single line of code.
In addition to this simple interface, sbi also provides a flexible interface that exposes several additional features.
Note: you can find the original version of this notebook at https://github.com/sbi-dev/sbi/blob/main/tutorials/02_flexible_interface.ipynb in the sbi repository.
Features¶
The flexible interface offers the following features (and many more):
- performing sequential posterior estimation by focusing on a particular observation over multiple rounds. This can decrease the number of simulations one has to run, but the inference procedure is no longer amortized (tutorial); see the sketch after the main syntax below.
- specifying your own density estimator, or changing hyperparameters of existing ones (e.g. the number of hidden units for NSF) (tutorial).
- using an embedding_net to learn summary features from high-dimensional simulation outputs (tutorial).
- providing presimulated data.
- choosing between different methods to sample from the posterior.
- using calibration kernels as proposed by Lueckmann, Goncalves et al. 2017.
Main syntax¶
from sbi.inference import SNPE, prepare_for_sbi, simulate_for_sbi
simulator, prior = prepare_for_sbi(simulator, prior)
inference = SNPE(prior)
theta, x = simulate_for_sbi(simulator, proposal=prior, num_simulations=1000)
density_estimator = inference.append_simulations(theta, x).train()
posterior = inference.build_posterior(density_estimator)
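As listed in the features above, the same inference object can also be used for sequential (multi-round) estimation that focuses on a particular observation. The following is only a rough sketch, assuming that simulator, prior, and inference are defined as in the main syntax above and that x_o is your observation of interest; the number of rounds and simulations per round are placeholder values (see the dedicated tutorial for details):
# Sketch of multi-round inference; x_o is assumed to be defined.
num_rounds = 2
proposal = prior
for _ in range(num_rounds):
    theta, x = simulate_for_sbi(simulator, proposal=proposal, num_simulations=500)
    density_estimator = inference.append_simulations(theta, x, proposal=proposal).train()
    posterior = inference.build_posterior(density_estimator)
    proposal = posterior.set_default_x(x_o)  # focus the next round on x_o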
Linear Gaussian example¶
We will now use the flexible interface to infer the posterior for an example with a Gaussian likelihood (the same example as before). First, we import the inference method we want to use (SNPE, SNLE, or SNRE) and other helper functions.
import torch
from sbi.inference import SNPE, prepare_for_sbi, simulate_for_sbi
from sbi.utils.get_nn_models import posterior_nn
from sbi import utils as utils
from sbi import analysis as analysis
Next, we define the prior and simulator:
num_dim = 3
prior = utils.BoxUniform(low=-2 * torch.ones(num_dim), high=2 * torch.ones(num_dim))
def linear_gaussian(theta):
return theta + 1.0 + torch.randn_like(theta) * 0.1
In the flexible interface, you have to ensure that your simulator and prior adhere to the requirements of sbi. You can do so with the prepare_for_sbi() function.
simulator, prior = prepare_for_sbi(linear_gaussian, prior)
Then, we instantiate the inference object:
inference = SNPE(prior=prior)
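This is also where you could plug in a custom density estimator, one of the features listed above. A minimal sketch using posterior_nn (imported earlier), which would replace the instantiation above; the model name and hyperparameter values are only examples:
# Build an NSF density estimator with custom hyperparameters (example values)
# and pass it to SNPE instead of the default.
density_estimator_build_fun = posterior_nn(model="nsf", hidden_features=60, num_transforms=3)
inference = SNPE(prior=prior, density_estimator=density_estimator_build_fun)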
Next, we run simulations. You can do this yourself by sampling from the prior and running the simulator (e.g. on a compute cluster), or you can use the helper function simulate_for_sbi provided by sbi, which lets you parallelize the simulations with joblib.
theta, x = simulate_for_sbi(simulator, proposal=prior, num_simulations=2000)
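If your simulator is expensive, simulate_for_sbi can distribute the simulations over multiple processes via its num_workers argument; for example (the worker count below is just an illustration):
# Run the same simulations in parallel across 4 joblib workers.
theta, x = simulate_for_sbi(simulator, proposal=prior, num_simulations=2000, num_workers=4)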
We then pass the simulated data to the inference object. theta and x should both be torch.Tensors of type float32.
inference = inference.append_simulations(theta, x)
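If you instead provide presimulated data (another feature listed above), for instance NumPy arrays produced on a cluster, convert them to float32 tensors before appending. A sketch with made-up arrays:
import numpy as np

# Hypothetical presimulated parameters and simulation outputs.
theta_np = np.random.uniform(-2.0, 2.0, size=(2000, 3))
x_np = theta_np + 1.0 + 0.1 * np.random.randn(2000, 3)

# Convert to float32 torch.Tensors, as required by append_simulations.
theta_pre = torch.as_tensor(theta_np, dtype=torch.float32)
x_pre = torch.as_tensor(x_np, dtype=torch.float32)
# inference = inference.append_simulations(theta_pre, x_pre)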
Next, we train the neural density estimator.
density_estimator = inference.train()
Neural network successfully converged after 73 epochs.
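train() also accepts keyword arguments that control the training loop, such as the batch size or learning rate; the values below are illustrative, and the defaults are usually a good starting point:
# Example: train with explicitly chosen training hyperparameters.
density_estimator = inference.train(
    training_batch_size=50,
    learning_rate=5e-4,
    stop_after_epochs=20,
)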
Lastly, we can use this density estimator to build the posterior:
posterior = inference.build_posterior(density_estimator)
Once we have obtained the posterior, we can call .sample() and .log_prob() on it, and visualize samples with analysis.pairplot(), in the same way as for the simple interface.
x_o = torch.zeros(3,)
posterior_samples = posterior.sample((10000,), x=x_o)
# plot posterior samples
_ = analysis.pairplot(
posterior_samples, limits=[[-2, 2], [-2, 2], [-2, 2]], figsize=(5, 5)
)
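Besides sampling, we can evaluate the posterior's log-probability, for example at the samples we just drew:
# Evaluate the posterior log-probability of the drawn samples, conditioned on x_o.
log_probability = posterior.log_prob(posterior_samples, x=x_o)
print(log_probability.shape)  # one log-probability per sample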
We can always print the posterior to know how it was trained:
print(posterior)
Posterior conditional density p(θ|x) of type DirectPosterior. It samples the posterior network and rejects samples that
lie outside of the prior bounds.