R/sts-functions.R
sts_fit_with_hmc.Rd
Markov chain Monte Carlo (MCMC) methods are considered the gold standard of Bayesian inference; under suitable conditions and in the limit of infinitely many draws, they generate samples from the true posterior distribution. HMC (Neal, 2011) uses gradients of the model's log-density function to propose samples, allowing it to exploit the posterior's geometry. However, it is computationally more expensive than variational inference and relatively sensitive to tuning.
sts_fit_with_hmc(
  observed_time_series,
  model,
  num_results = 100,
  num_warmup_steps = 50,
  num_leapfrog_steps = 15,
  initial_state = NULL,
  initial_step_size = NULL,
  chain_batch_shape = list(),
  num_variational_steps = 150,
  variational_optimizer = NULL,
  variational_sample_size = 5,
  seed = NULL,
  name = NULL
)
observed_time_series  float Tensor of shape concat([sample_shape, model.batch_shape, [num_timesteps, 1]]), where sample_shape corresponds to i.i.d. observations, and the trailing [1] dimension may (optionally) be omitted if num_timesteps > 1. May optionally be an instance of sts_masked_time_series, which includes a mask Tensor to specify timesteps with missing observations.

model  An instance of StructuralTimeSeries representing a time-series model. Represents a joint distribution over time-series and their parameters with batch shape [b1, ..., bN].

num_results  Integer number of Markov chain draws. Default value: 100.

num_warmup_steps  Integer number of steps to take before starting to
collect results. The warmup steps are also used to adapt the step size
towards a target acceptance rate of 0.75. Default value: 50.

num_leapfrog_steps  Integer number of steps to run the leapfrog integrator
for. Total progress per HMC step is roughly proportional to
step_size * num_leapfrog_steps. Default value: 15.

initial_state  Optional Python list of Tensors, one for each model parameter,
representing the initial state(s) of the Markov chain(s). These should have
shape concat([chain_batch_shape, param.prior.batch_shape, param.prior.event_shape]).
If NULL, the initial state is set automatically using a sample from a variational
posterior. Default value: NULL.

initial_step_size  Python list of Tensors, one for each model parameter,
representing the step size for the leapfrog integrator. Must broadcast with
the shape of initial_state. Larger step sizes lead to faster progress, but
too-large step sizes make rejection exponentially more likely. If NULL, the
step size is set automatically using the standard deviation of a variational
posterior. Default value: NULL.

chain_batch_shape  Batch shape (Python list, tuple, or int) of chains to run
in parallel. Default value: list() (i.e., a single chain).

num_variational_steps  Integer number of steps to run the variational
optimization to determine the initial state and step sizes. Default value: 150.

variational_optimizer  Optional tf$train$Optimizer instance to use in the
variational optimization. If NULL, defaults to tf$train$AdamOptimizer(0.1).
Default value: NULL.

variational_sample_size  integer number of Monte Carlo samples to use
in estimating the variational divergence. Larger values may stabilize
the optimization, but at higher cost per step in time and memory.
Default value: 5.

seed  integer to seed the random number generator.

name  name prefixed to ops created by this function. Default value: NULL
(i.e., "fit_with_hmc").
list of:
samples: list of Tensors representing posterior samples of model parameters, with shapes [concat([[num_results], chain_batch_shape, param.prior.batch_shape, param.prior.event_shape]) for param in model.parameters].
kernel_results: A (possibly nested) list of Tensors representing internal calculations made within the HMC sampler.
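As a sketch of consuming this return value (assuming a fit like the one in the example at the bottom of this page, and assuming the two components above are exposed as the first and second elements of the returned list), the posterior mean of each parameter can be estimated by averaging its draws over the leading num_results dimension:

# Illustrative sketch, not part of the package API: summarize posterior draws.
# `states_and_results` and `model` are assumed to come from the example below.
samples <- states_and_results[[1]]  # posterior samples, one Tensor per parameter

for (i in seq_along(samples)) {
  param_name <- model$parameters[[i]]$name  # assumes parameters expose a `name`
  # Average over the leading num_results dimension to estimate the
  # posterior mean of this parameter.
  posterior_mean <- tensorflow::tf$reduce_mean(samples[[i]], axis = 0L)
  cat(param_name, ":", as.numeric(posterior_mean), "\n")
}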
This method attempts to provide a sensible default approach for fitting StructuralTimeSeries models using HMC. It first runs variational inference as a fast posterior approximation and initializes the HMC sampler from a sample of the variational posterior, using the posterior standard deviations to set per-variable step sizes (equivalently, a diagonal mass matrix). During the warmup phase, it adapts the step size to target an acceptance rate of 0.75, which is thought to be in the desirable range for optimal mixing (Betancourt et al., 2014).
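These defaults can be overridden. The following is an illustrative sketch, not the package's canonical recipe: it runs four chains in parallel via chain_batch_shape and replaces the variational warm start with an explicit initial_state drawn by sts_sample_uniform_initial_state(). It assumes that function accepts a single model parameter and an init_sample_shape matching the chain batch shape, and that `model` and `observed_time_series` are built as in the example below.

library(tfprobability)

# Sketch: draw a uniform random initial state for each model parameter, with
# a leading dimension matching the chain batch shape (an assumption here).
initial_state <- lapply(
  model$parameters,
  function(p) sts_sample_uniform_initial_state(p, init_sample_shape = list(4L))
)

states_and_results <- observed_time_series %>%
  sts_fit_with_hmc(
    model,
    chain_batch_shape = list(4L),  # four parallel chains
    initial_state = initial_state,
    num_results = 10,
    num_warmup_steps = 5
  )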
Other sts-functions:
sts_build_factored_surrogate_posterior(),
sts_build_factored_variational_loss(),
sts_decompose_by_component(),
sts_decompose_forecast_by_component(),
sts_forecast(),
sts_one_step_predictive(),
sts_sample_uniform_initial_state()
# \donttest{
library(tfprobability)

# Toy series: a weekly seasonal pattern plus a slowly varying level. The sum
# is parenthesized because %>% binds more tightly than +.
observed_time_series <-
  (rep(c(3.5, 4.1, 4.5, 3.9, 2.4, 2.1, 1.2), 5) +   # day-of-week effect
     rep(c(1.1, 1.5, 2.4, 3.1, 4.0), each = 7)) %>% # level, one value per week
  tensorflow::tf$convert_to_tensor(dtype = tensorflow::tf$float64)

day_of_week <- observed_time_series %>%
  sts_seasonal(num_seasons = 7)

local_linear_trend <- observed_time_series %>%
  sts_local_linear_trend()

model <- observed_time_series %>%
  sts_sum(components = list(day_of_week, local_linear_trend))

states_and_results <- observed_time_series %>%
  sts_fit_with_hmc(
    model,
    num_results = 10,
    num_warmup_steps = 5,
    num_variational_steps = 15
  )
# }
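A possible continuation, again a sketch: pass the posterior draws to sts_forecast(), one of the related functions listed above. The assumptions here are that the draws are the first element of the returned list and that parameter_samples and num_steps_forecast are the relevant argument names; check sts_forecast()'s own page.

# Illustrative sketch: forecast future timesteps from the posterior draws.
samples <- states_and_results[[1]]

forecast_dist <- observed_time_series %>%
  sts_forecast(model,
               parameter_samples = samples,
               num_steps_forecast = 7)

forecast_mean <- forecast_dist$mean()  # predictive mean of the forecast distribution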