Stochastic gradient descent optimizer with support for momentum, learning rate decay, and Nesterov momentum.

optimizer_sgd(lr = 0.01, momentum = 0, decay = 0, nesterov = FALSE,
  clipnorm = NULL, clipvalue = NULL)

Arguments

lr

float >= 0. Learning rate.

momentum

float >= 0. Momentum applied to parameter updates; accelerates SGD in the relevant direction and dampens oscillations.

decay

float >= 0. Learning rate decay over each update.

nesterov

boolean. Whether to apply Nesterov momentum.

clipnorm

Gradients will be clipped when their L2 norm exceeds this value.

clipvalue

Gradients will be clipped when their absolute value exceeds this value.

Value

Optimizer instance for use with compile().

See also

Other optimizers: optimizer_adadelta, optimizer_adagrad, optimizer_adamax, optimizer_adam, optimizer_nadam, optimizer_rmsprop
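
Examples

A minimal sketch of typical usage, assuming the keras R package is installed; the model architecture, data shapes, and hyperparameter values below are illustrative only.

library(keras)

# Illustrative model: a small fully connected network
model <- keras_model_sequential() %>%
  layer_dense(units = 32, activation = "relu", input_shape = c(10)) %>%
  layer_dense(units = 1)

# Pass the optimizer returned by optimizer_sgd() to compile()
model %>% compile(
  loss = "mse",
  optimizer = optimizer_sgd(lr = 0.01, momentum = 0.9, nesterov = TRUE),
  metrics = "mae"
)

# Gradients can optionally be clipped by L2 norm (clipnorm) or by value (clipvalue)
opt_clipped <- optimizer_sgd(lr = 0.01, decay = 1e-6, clipnorm = 1)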