API of implemented methods

This notebook spells out the API for all algorithms implemented in the sbi toolbox.

For the examples below, we define a simple toy prior and simulator:

# Example setup
import torch

from sbi.utils import BoxUniform

# Define the prior
num_dims = 2
num_sims = 1000
num_rounds = 2
prior = BoxUniform(low=torch.zeros(num_dims), high=torch.ones(num_dims))
simulator = lambda theta: theta + torch.randn_like(theta) * 0.1
x_o = torch.tensor([0.5, 0.5])

Neural Posterior Estimation (NPE)

Default implementation

The core idea of Neural Posterior Estimation (NPE) is to train a conditional generative model that directly predicts the posterior given observations. This idea was originally developed by Papamakarios & Murray (NeurIPS 2016). The default implementation of NPE in the sbi package follows Greenberg, Nonnenmacher & Macke (ICML 2019), who proposed NPE with normalizing flows:

from sbi.inference import NPE

inference = NPE(prior)
theta = prior.sample((num_sims,))
x = simulator(theta)
inference.append_simulations(theta, x).train()
posterior = inference.build_posterior()
samples = posterior.sample((1000,), x=x_o)
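
Since NPE is amortized, the same trained posterior can be evaluated for new observations without retraining. Below is a minimal usage sketch; the name `x_new` and the observation value are purely illustrative:

# Evaluate the same trained posterior at a new observation.
x_new = torch.tensor([0.2, 0.7])
samples_new = posterior.sample((1000,), x=x_new)
log_probs = posterior.log_prob(samples_new, x=x_new)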

For the above default implementation of NPE in the sbi package, we recommend citing:

  • Fast ε-free Inference of Simulation Models with Bayesian Conditional Density Estimation by Papamakarios & Murray (NeurIPS 2016) [PDF]

  • Automatic posterior transformation for likelihood-free inference by Greenberg, Nonnenmacher & Macke (ICML 2019) [PDF]

Beyond this, the sbi package implements many modifications and extensions of these algorithms for Neural Posterior Estimation, which are outlined below. These methods include

  • various types of embedding networks (Lueckmann et al., Greenberg et al., Radev et al.),
  • various generative models (Dax et al., Geffner et al., Sharrock et al.), and
  • various algorithms for sequential NPE (called SNPE), which perform inference across several rounds (Papamakarios et al., Lueckmann et al., Greenberg et al., Deistler et al.). Note that all SNPE methods use the same loss function in the first round; thus, NPE is equivalent to single-round SNPE.

Papamakarios & Murray, 2016

Fast ε-free Inference of Simulation Models with Bayesian Conditional Density Estimation by Papamakarios & Murray (NeurIPS 2016) [PDF]

Papamakarios & Murray (2016) were the first to use neural networks to directly predict the posterior given observations. As density estimators, they used mixture density networks. In addition, they proposed a sequential algorithm for neural posterior estimation. Their full algorithm can be implemented as follows:

from sbi.inference import NPE_A

inference = NPE_A(prior)
proposal = prior
for r in range(num_rounds):
    theta = proposal.sample((num_sims,))
    x = simulator(theta)
    # NPE-A trains a Gaussian density estimator in all but the last round. In the last round,
    # it trains a mixture of Gaussians, which is why we have to pass the `final_round` flag.
    final_round = r == num_rounds - 1
    _ = inference.append_simulations(theta, x, proposal=proposal).train(final_round=final_round)
    posterior = inference.build_posterior().set_default_x(x_o)
    proposal = posterior
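
After the final round, the posterior has its default observation set to x_o, so samples can be drawn directly. A minimal sketch (the same pattern applies to the other sequential examples below):

# Draw samples from the final-round posterior at the default observation x_o.
samples = posterior.sample((1000,))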

Lueckmann et al., 2017

Flexible statistical inference for mechanistic models of neural dynamics by Lueckmann, Goncalves, Bassetto, Öcal, Nonnenmacher, Macke (NeurIPS 2017) [PDF]

Like Papamakarios & Murray (2016), Lueckmann et al. (2017) used mixture density networks for NPE. In addition, they proposed embedding networks for time series and a different loss function for the sequential version of the algorithm.

from sbi.inference import NPE_B
from sbi.neural_nets import posterior_nn
from sbi.neural_nets.embedding_nets import FCEmbedding

embedding = FCEmbedding(num_dims)
density_estimator = posterior_nn("mdn", embedding_net=embedding)
inference = NPE_B(prior, density_estimator=density_estimator)

proposal = prior
for _ in range(num_rounds):
    theta = proposal.sample((num_sims,))
    x = simulator(theta)
    _ = inference.append_simulations(theta, x, proposal=proposal).train()
    posterior = inference.build_posterior().set_default_x(x_o)
    proposal = posterior

Greenberg et al., 2019

Automatic posterior transformation for likelihood-free inference by Greenberg, Nonnenmacher & Macke (ICML 2019) [PDF]

Greenberg, Nonnenmacher & Macke were the first to use normalizing flows for neural posterior estimation (NPE), which is the default for the NPE class in the sbi package (see above). In addition, they proposed embedding networks in combination with normalizing flows and a modified loss function for the sequential version of neural posterior estimation. These additional contributions can be implemented as follows:

from sbi.inference import NPE
from sbi.neural_nets import posterior_nn
from sbi.neural_nets.embedding_nets import FCEmbedding

embedding = FCEmbedding(num_dims)
density_estimator = posterior_nn("maf", embedding_net=embedding)
inference = NPE(prior, density_estimator=density_estimator)

proposal = prior
for _ in range(num_rounds):
    theta = proposal.sample((num_sims,))
    x = simulator(theta)
    _ = inference.append_simulations(theta, x, proposal=proposal).train()
    posterior = inference.build_posterior().set_default_x(x_o)
    proposal = posterior

Radev et al., 2020

BayesFlow: Learning complex stochastic models with invertible neural networks by Radev, Mertens, Voss, Ardizzone & Köthe (IEEE Transactions on Neural Networks and Learning Systems 2020) [Paper]

The density estimation part of BayesFlow is equivalent to single-round NPE (Greenberg et al., 2019). The additional contribution of BayesFlow is permutation-invariant embedding networks for iid data, which can be implemented in sbi as follows:

# Posterior estimation with BayesFlow is equivalent to single-round NPE.
from sbi.inference import NPE
from sbi.neural_nets import posterior_nn
from sbi.neural_nets.embedding_nets import FCEmbedding, PermutationInvariantEmbedding

num_iid = 3
simulator_iid = lambda theta: theta + torch.randn((num_iid, *theta.shape)) * 0.1

trial_net = FCEmbedding(num_dims, 20)
embedding = PermutationInvariantEmbedding(trial_net, 20)
density_estimator = posterior_nn("maf", embedding_net=embedding)
inference = NPE(prior, density_estimator=density_estimator)

theta = prior.sample((num_sims,))
x = simulator_iid(theta).permute(1, 0, 2)  # reshape to (num_sims, num_iid, num_dims)
inference.append_simulations(theta, x).train()
posterior = inference.build_posterior()
samples = posterior.sample((1000,), x=torch.zeros((1, num_iid, num_dims)))

Deistler et al., 2022

Truncated proposals for scalable and hassle-free simulation-based inference by Deistler, Goncalves & Macke (NeurIPS 2022) [Paper]

Deistler et al. proposed a method for sequential inference that truncates the prior at every round. By doing this, the density estimator of NPE can be trained with the same (default) loss function at every round.

from sbi.inference import NPE
from sbi.utils import RestrictedPrior, get_density_thresholder

inference = NPE(prior)
proposal = prior
for _ in range(num_rounds):
    theta = proposal.sample((num_sims,))
    x = simulator(theta)
    # Train with the standard (first-round) NPE loss in every round.
    _ = inference.append_simulations(theta, x).train(force_first_round_loss=True)
    posterior = inference.build_posterior().set_default_x(x_o)

    # Truncate the prior to the region of non-negligible posterior density.
    accept_reject_fn = get_density_thresholder(posterior, quantile=1e-4)
    proposal = RestrictedPrior(prior, accept_reject_fn, sample_with="rejection")

Dax et al., 2023

Flow Matching for Scalable Simulation-Based Inference by Dax, Wildberger, Buchholz, Green, Macke, Schölkopf (NeurIPS 2023) [Paper]

from sbi.inference import FMPE

inference = FMPE(prior)
# FMPE does not support multiple rounds of inference
theta = prior.sample((num_sims,))
x = simulator(theta)
inference.append_simulations(theta, x).train()
posterior = inference.build_posterior().set_default_x(x_o)

Geffner et al., 2023 and Sharrock et al., 2024

Neural posterior score estimation based on:

  • Compositional Score Modeling for Simulation-based Inference by Geffner, T., Papamakarios, G., & Mnih, A. (ICML 2023) [Paper]
  • Sequential Neural Score Estimation: Likelihood-Free Inference with Conditional Score Based Diffusion Models by Sharrock, L., Simons, J., Liu, S., & Beaumont, M. (ICML 2024) [Paper]

Note that currently only the single-round variant is implemented.

from sbi.inference import NPSE

theta = prior.sample((num_sims,))
x = simulator(theta)

inference = NPSE(prior, sde_type="ve")  # "ve": variance-exploding SDE
_ = inference.append_simulations(theta, x).train()
posterior = inference.build_posterior().set_default_x(x_o)
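
Samples can then be drawn at the default observation; a minimal sketch:

samples = posterior.sample((1000,))

Besides the variance-exploding SDE used above, NPSE can, to our knowledge, also be configured with a variance-preserving SDE via sde_type="vp"; see the current sbi documentation for the available options.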

Neural Likelihood Estimation (NLE)

Default implementation

The core idea of Neural Likelihood Estimation (NLE) is to train a conditional generative model that emulates the simulator and to then perform inference with traditional Bayesian methods such as MCMC or variational inference. This idea was originally developed by Papamakarios, Sterratt & Murray (AISTATS 2019). The default implementation of NLE in the sbi package follows Papamakarios, Sterratt & Murray, who proposed normalizing flows to emulate the simulator and MCMC to draw samples from the posterior.

from sbi.inference import NLE

inference = NLE(prior)
theta = prior.sample((num_sims,))
x = simulator(theta)
inference.append_simulations(theta, x).train()
posterior = inference.build_posterior()
samples = posterior.sample((1000,), x=x_o)

For the above default implementation of NLE in the sbi package, we recommend citing:

  • Sequential neural likelihood: Fast likelihood-free inference with autoregressive flows by Papamakarios, Sterratt & Murray (AISTATS 2019) [PDF]

Beyond this, the sbi package implements many modifications and extensions of these algorithms for Neural Likelihood Estimation, which are outlined below.

Papamakarios et al., 2019

Sequential neural likelihood: Fast likelihood-free inference with autoregressive flows by Papamakarios, Sterratt & Murray (AISTATS 2019) [PDF]

from sbi.inference import NLE
from sbi.inference.posteriors.posterior_parameters import MCMCPosteriorParameters

inference = NLE(prior)
proposal = prior
for _ in range(num_rounds):
    theta = proposal.sample((num_sims,))
    x = simulator(theta)
    _ = inference.append_simulations(theta, x).train()
    posterior = inference.build_posterior(
        posterior_parameters=MCMCPosteriorParameters(
            method="slice_np_vectorized", num_chains=20, thin=5
        )
    )

    proposal = posterior.set_default_x(x_o)

Glöckler et al., 2022

Variational methods for simulation-based inference by Glöckler, Deistler, Macke (ICLR 2022) [Paper]

from sbi.inference import NLE
from sbi.inference.posteriors.posterior_parameters import VIPosteriorParameters

inference = NLE(prior)
proposal = prior
for _ in range(num_rounds):
    theta = proposal.sample((num_sims,))
    x = simulator(theta)
    _ = inference.append_simulations(theta, x).train()
    posterior = inference.build_posterior(
        posterior_parameters=VIPosteriorParameters(vi_method="fKL")
    ).set_default_x(x_o)
    proposal = posterior.train()  # Train VI posterior on given x_o.

Boelts et al., 2022

Flexible and efficient simulation-based inference for models of decision-making by Boelts, Lueckmann, Gao, Macke (eLife 2022) [Paper]

from sbi.inference import MNLE

inference = MNLE(prior)
theta = prior.sample((num_sims,))
# Add a column of discrete data to x (here: a Bernoulli draw based on the first parameter).
x = torch.cat((simulator(theta), torch.bernoulli(theta[:, :1])), dim=1)
_ = inference.append_simulations(theta, x).train()
# The observation must include the discrete column as well; the value below is illustrative.
x_o_mixed = torch.tensor([[0.5, 0.5, 1.0]])
posterior = inference.build_posterior().set_default_x(x_o_mixed)

Neural Ratio Estimation (NRE)

Default implementation

The core idea of Neural Ratio Estimation (NRE) is to train a classifier that distinguishes whether simulation outputs were generated from a particular parameter set or not. The classifier is then used to perform inference with traditional Bayesian methods such as MCMC or variational inference. This idea was originally developed by Hermans et al. (ICML 2020). The default implementation of NRE in the sbi package follows Durkan et al. (ICML 2020), who used a modified loss function.

from sbi.inference import NRE

inference = NRE(prior)
theta = prior.sample((num_sims,))
x = simulator(theta)
inference.append_simulations(theta, x).train()
posterior = inference.build_posterior()
samples = posterior.sample((1000,), x=x_o)

For the above default implementation of NRE in the sbi package, we recommend citing:

  • Likelihood-free MCMC with Amortized Approximate Likelihood Ratios by Hermans, Begy & Louppe (ICML 2020) [PDF]

  • On Contrastive Learning for Likelihood-free Inference by Durkan, Murray & Papamakarios (ICML 2020) [PDF]

Beyond this, the sbi package implements many modifications and extensions of these algorithms for Neural Ratio Estimation, which are outlined below.

Hermans et al., 2020

Likelihood-free MCMC with Amortized Approximate Likelihood Ratios by Hermans, Begy & Louppe (ICML 2020) [PDF]

from sbi.inference import NRE_A

inference = NRE_A(prior)
theta = prior.sample((num_sims,))
x = simulator(theta)
_ = inference.append_simulations(theta, x).train()
posterior = inference.build_posterior().set_default_x(x_o)

Durkan et al., 2020

On Contrastive Learning for Likelihood-free Inference by Durkan, Murray & Papamakarios (ICML 2020) [PDF].

from sbi.inference import NRE
from sbi.inference.posteriors.posterior_parameters import MCMCPosteriorParameters

inference = NRE(prior)
proposal = prior
for _ in range(num_rounds):
    theta = proposal.sample((num_sims,))
    x = simulator(theta)
    _ = inference.append_simulations(theta, x).train()
    posterior = inference.build_posterior(
        posterior_parameters=MCMCPosteriorParameters(
            method="slice_np_vectorized", num_chains=20, thin=5
        )
    )

    proposal = posterior.set_default_x(x_o)

Delaunoy et al., 2022

Towards Reliable Simulation-Based Inference with Balanced Neural Ratio Estimation by Delaunoy, Hermans, Rozet, Wehenkel & Louppe (NeurIPS 2022) [PDF]

from sbi.inference import BNRE

inference = BNRE(prior)
theta = prior.sample((num_sims,))
x = simulator(theta)
_ = inference.append_simulations(theta, x).train(regularization_strength=100.)  # weight of the balancing regularizer
posterior = inference.build_posterior().set_default_x(x_o)

Miller et al., 2022

Contrastive Neural Ratio Estimation by Benjamin Kurt Miller, Christoph Weniger, Patrick Forré (NeurIPS 2022) [PDF]

# The main feature of NRE-C is producing an exact ratio of densities at optimum,
# even when using multiple contrastive pairs (classes).

from sbi.inference import NRE_C

inference = NRE_C(prior)
proposal = prior
theta = proposal.sample((num_sims,))
x = simulator(theta)
_ = inference.append_simulations(theta, x).train(
    num_classes=5,  # sees `2 * num_classes - 1` marginally drawn contrastive pairs.
    gamma=1.0,  # controls the weight between terms in its loss function.
)
posterior = inference.build_posterior().set_default_x(x_o)

Diagnostics and utilities

Talts et al., 2018

Simulation-based calibration by Talts, Betancourt, Simpson, Vehtari, Gelman (arXiv 2018) [Paper]

from sbi.analysis import sbc_rank_plot
from sbi.diagnostics import run_sbc

thetas = prior.sample((1000,))
xs = simulator(thetas)

# SBC is fast for fully amortized NPE.
inference = NPE(prior)
theta = prior.sample((num_sims,))
x = simulator(theta)
inference.append_simulations(theta, x).train()
posterior = inference.build_posterior()

ranks, dap_samples = run_sbc(
    thetas, xs, posterior, num_posterior_samples=1_000
)

fig, axes = sbc_rank_plot(
    ranks=ranks,
    num_posterior_samples=1000,
    plot_type="hist",
    num_bins=20,
)

Deistler et al., 2022 and Rozet et al., 2021

Expected coverage (sample-based) as computed in Deistler, Goncalves, Macke (NeurIPS 2022) [Paper] and in Rozet, Louppe (2021) [Paper]

thetas = prior.sample((100,))
xs = simulator(thetas)

ranks, dap_samples = run_sbc(
    thetas,
    xs,
    posterior,
    num_posterior_samples=1_000,
    reduce_fns=posterior.log_prob  # Difference to SBC.
)

# NOTE: Here we obtain a single rank plot because ranks are calculated
# for the entire posterior and not for each marginal like in SBC.
fig, axes = sbc_rank_plot(
    ranks=ranks,
    num_posterior_samples=1000,
    plot_type="hist",
    num_bins=20,
)

Lemos et al., 2023

TARP: Sampling-Based Accuracy Testing of Posterior Estimators for General Inference by Lemos, Coogan, Hezaveh & Perreault-Levasseur (ICML 2023) [Paper]

from sbi.analysis import plot_tarp
from sbi.diagnostics.tarp import run_tarp

thetas = prior.sample((1000,))
xs = simulator(thetas)

expected_coverage, ideal_coverage = run_tarp(
    thetas,
    xs,
    posterior,
    references=None,  # optional, defaults to uniform samples across parameter space.
    num_posterior_samples=1_000,
)

fig, axes = plot_tarp(expected_coverage, ideal_coverage)

Deistler et al., 2022

Restriction estimator by Deistler, Macke & Goncalves (PNAS 2022) [Paper]

from sbi.inference import NPE
from sbi.utils import RestrictionEstimator

restriction_estimator = RestrictionEstimator(prior=prior)
proposal = prior

for _ in range(num_rounds):
    theta = proposal.sample((num_sims,))
    x = simulator(theta)
    restriction_estimator.append_simulations(theta, x)
    classifier = restriction_estimator.train()
    proposal = restriction_estimator.restrict_prior()

all_theta, all_x, _ = restriction_estimator.get_simulations()

inference = NPE(prior)
density_estimator = inference.append_simulations(all_theta, all_x).train()
posterior = inference.build_posterior()
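
For completeness, samples at the observation can then be drawn as in the default NPE example above:

samples = posterior.sample((1000,), x=x_o)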