Pass-through layer that adds a KL divergence penalty to the model loss
layer_kl_divergence_add_loss(
  object,
  distribution_b,
  use_exact_kl = FALSE,
  test_points_reduce_axis = NULL,
  test_points_fn = tf$convert_to_tensor,
  weight = NULL,
  ...
)
Arguments:

object: Model or layer object.

distribution_b: Distribution instance corresponding to b as in KL[a, b]. The previous layer's output is presumed to be a Distribution instance and is a.

use_exact_kl: Logical indicating if the KL divergence should be calculated exactly via tfp$distributions$kl_divergence or via Monte Carlo approximation. Default value: FALSE.

test_points_reduce_axis: Integer vector or scalar representing dimensions over which to reduce_mean while calculating the Monte Carlo approximation of the KL divergence. As with all tf$reduce_* ops, NULL means reduce over all dimensions; () means reduce over none of them. Default value: () (i.e., no reduction).

test_points_fn: A callable taking a tfp$distributions$Distribution instance and returning a tensor used as random test points to approximate the KL divergence. Default value: tf$convert_to_tensor.

weight: Multiplier applied to the calculated KL divergence for each Keras batch member. Default value: NULL (i.e., do not weight each batch member).

...: Additional arguments passed to args of keras::create_layer.
Value: a Keras layer.
For an example of how to use this layer in a Keras model, see layer_independent_normal().
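A minimal usage sketch, assuming the keras and tfprobability R packages are available. The layer names and sizes here (the dense encoder, event size of 2, standard-normal prior) are illustrative assumptions, not from the original text. The distribution-producing layer immediately before layer_kl_divergence_add_loss supplies distribution a; the prior passed as distribution_b completes KL[a, b], which is added to the model loss while the layer's output passes through unchanged:

```r
library(keras)
library(tfprobability)

encoded_size <- 2L

# Illustrative prior: a standard multivariate normal acting as b in KL[a, b].
prior <- tfd_multivariate_normal_diag(loc = rep(0, encoded_size))

model <- keras_model_sequential() %>%
  # Dense layer sized to parameterize the distribution layer below.
  layer_dense(
    units = params_size_multivariate_normal_tri_l(encoded_size),
    input_shape = 10L
  ) %>%
  # Previous layer's output is a Distribution instance: this is a.
  layer_multivariate_normal_tri_l(event_size = encoded_size) %>%
  # Pass-through: output is unchanged, but KL[a, b] (Monte Carlo
  # approximated by default) is added to the model's loss.
  layer_kl_divergence_add_loss(distribution_b = prior, weight = 1.0)
```

Setting weight rescales the penalty per batch member, which is how the KL term of a variational autoencoder's ELBO is typically annealed or balanced against the reconstruction loss.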