A Csiszar-function is a member of F = { f:R_+ to R : f convex }.
vi_jensen_shannon(logu, self_normalized = FALSE, name = NULL)
logu | float-like Tensor representing log(u); the Csiszar-function is evaluated at u = exp(logu). |
---|---|
self_normalized | logical indicating whether f'(u = 1) = 0; when TRUE, the (u + 1) log(2) term is included (see Details). |
name | name prefixed to Ops created by this function. |
jensen_shannon_of_u: float-like Tensor of the Csiszar-function evaluated at u = exp(logu).
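A minimal usage sketch (the probe values and float64 dtype below are illustrative assumptions; it presumes the tensorflow and tfprobability packages with an eager TensorFlow backend):

```r
library(tensorflow)
library(tfprobability)

# Evaluate the Jensen-Shannon Csiszar-function at u = exp(logu)
# for a few illustrative values of logu.
logu <- tf$constant(c(-2, -0.5, 0, 0.5, 2), dtype = tf$float64)

f_u <- vi_jensen_shannon(logu, self_normalized = TRUE)
as.numeric(f_u)
```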
When self_normalized = TRUE, the Jensen-Shannon Csiszar-function is:

f(u) = u log(u) - (1 + u) log(1 + u) + (u + 1) log(2)

When self_normalized = FALSE, the (u + 1) log(2) term is omitted.
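As a sanity check, the formula above can be reproduced in plain R and compared with the function's output (a sketch; the probe values and float64 dtype are assumptions):

```r
library(tensorflow)
library(tfprobability)

logu <- c(-2, -0.5, 0, 0.5, 2)
u <- exp(logu)

# Self-normalized form: f(u) = u log(u) - (1 + u) log(1 + u) + (u + 1) log(2)
f_manual <- u * log(u) - (1 + u) * log(1 + u) + (u + 1) * log(2)
f_tf <- as.numeric(vi_jensen_shannon(tf$constant(logu, dtype = tf$float64),
                                     self_normalized = TRUE))

all.equal(f_manual, f_tf)
```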
Observe that as an f-Divergence, this Csiszar-function implies:

D_f[p, q] = KL[p, m] + KL[q, m]
m(x) = 0.5 p(x) + 0.5 q(x)
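This identity can be checked numerically for discrete p and q, assuming the usual Csiszar-divergence convention D_f[p, q] = E_q[ f(p(X) / q(X)) ]; the distributions below are made-up examples:

```r
library(tensorflow)
library(tfprobability)

p <- c(0.1, 0.4, 0.5)
q <- c(0.3, 0.3, 0.4)
m <- 0.5 * p + 0.5 * q

# u = p(x) / q(x), so logu = log(p) - log(q).
f_u <- as.numeric(vi_jensen_shannon(tf$constant(log(p / q), dtype = tf$float64),
                                    self_normalized = TRUE))

lhs <- sum(q * f_u)                               # D_f[p, q] = E_q[ f(p/q) ]
rhs <- sum(p * log(p / m)) + sum(q * log(q / m))  # KL[p, m] + KL[q, m]

all.equal(lhs, rhs)
```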
In a sense, this divergence is the "reverse" of the Arithmetic-Geometric f-Divergence.

This Csiszar-function induces a symmetric f-Divergence, i.e., D_f[p, q] = D_f[q, p].
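The symmetry can be illustrated with the same discrete setup (a sketch under the same assumptions as above):

```r
library(tensorflow)
library(tfprobability)

# Discrete f-divergence D_f[p, q] = E_q[ f(p/q) ] under the convention above.
js_div <- function(p, q) {
  f_u <- as.numeric(vi_jensen_shannon(tf$constant(log(p / q), dtype = tf$float64),
                                      self_normalized = TRUE))
  sum(q * f_u)
}

p <- c(0.1, 0.4, 0.5)
q <- c(0.3, 0.3, 0.4)

all.equal(js_div(p, q), js_div(q, p))  # the divergence is symmetric in p and q
```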
Warning: this function makes non-log-space calculations and may therefore be numerically unstable for |logu| >> 0.
Lin, J. "Divergence measures based on the Shannon entropy." IEEE Trans. Inf. Th., 37, 145-151, 1991.
Other vi-functions: vi_amari_alpha(), vi_arithmetic_geometric(), vi_chi_square(), vi_csiszar_vimco(), vi_dual_csiszar_function(), vi_fit_surrogate_posterior(), vi_jeffreys(), vi_kl_forward(), vi_kl_reverse(), vi_log1p_abs(), vi_modified_gan(), vi_monte_carlo_variational_loss(), vi_pearson(), vi_squared_hellinger(), vi_symmetrized_csiszar_function()