The metric creates two local variables, true_positives and false_positives, that are used to compute the precision. This value is ultimately returned as precision, an idempotent operation that simply divides true_positives by the sum of true_positives and false_positives.
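
For illustration, the same computation can be done by hand in plain R (a minimal sketch of the formula above, using the same inputs as the first standalone example below):

y_true <- c(0, 1, 1, 1)
y_pred <- c(1, 0, 1, 1)
true_positives  <- sum(y_pred == 1 & y_true == 1)  # 2
false_positives <- sum(y_pred == 1 & y_true == 0)  # 1
true_positives / (true_positives + false_positives)

## [1] 0.6666667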

If sample_weight is NULL, weights default to 1. Use sample_weight of 0 to mask values.

If top_k is set, precision is calculated as how often, on average, a class among the top-k classes with the highest predicted values of a batch entry is correct and can be found in the label for that entry.

If class_id is specified, we calculate precision by considering only the entries in the batch for which class_id is above the threshold and/or in the top-k highest predictions, and computing the fraction of them for which class_id is indeed a correct label.
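
For example (an illustrative sketch; the labels, predictions, and class count below are made up for this note, not taken from the package documentation): with two classes and class_id = 1L (class IDs are zero-based), only the second column of the predictions is compared against the threshold:

m <- metric_precision(class_id = 1L)
y_true <- rbind(c(0, 1), c(1, 0), c(0, 1))
y_pred <- rbind(c(0.2, 0.8), c(0.3, 0.7), c(0.9, 0.1))
# For class 1, rows 1 and 2 predict above 0.5; only row 1 is truly
# class 1, so TP = 1 and FP = 1 -> precision = 0.5
m$update_state(y_true, y_pred)
m$result()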

Usage

metric_precision(
  ...,
  thresholds = NULL,
  top_k = NULL,
  class_id = NULL,
  name = NULL,
  dtype = NULL
)

Arguments

...

For forward/backward compatibility.

thresholds

(Optional) A float value, or a list of float threshold values in [0, 1]. A threshold is compared with prediction values to determine the truth value of predictions (i.e., above the threshold is TRUE, below is FALSE). If used with a loss function that sets from_logits = TRUE (i.e., no sigmoid applied to predictions), thresholds should be set to 0. One metric value is generated for each threshold value (see the multi-threshold example below). If neither thresholds nor top_k are set, the default is to calculate precision with thresholds = 0.5.

top_k

(Optional) Unset by default. An int value specifying the top-k predictions to consider when calculating precision.

class_id

(Optional) Integer class ID for which we want binary metrics. This must be in the half-open interval [0, num_classes), where num_classes is the last dimension of predictions.

name

(Optional) string name of the metric instance.

dtype

(Optional) data type of the metric result.

Value

A Metric instance is returned. The Metric instance can be passed directly to compile(metrics = ), or used as a standalone object. See ?Metric for example usage.

Usage

Standalone usage:

m <- metric_precision()
m$update_state(c(0, 1, 1, 1),
               c(1, 0, 1, 1))
m$result() |> as.double() |> signif()

## [1] 0.666667

m$reset_state()
m$update_state(c(0, 1, 1, 1),
               c(1, 0, 1, 1),
               sample_weight = c(0, 0, 1, 0))
m$result() |> as.double() |> signif()

## [1] 1

# With top_k=2, it will calculate precision over y_true[1:2]
# and y_pred[1:2]
m <- metric_precision(top_k = 2)
m$update_state(c(0, 0, 1, 1), c(1, 1, 1, 1))
m$result()

## tf.Tensor(0.0, shape=(), dtype=float32)

# With top_k=4, it will calculate precision over y_true[1:4]
# and y_pred[1:4]
m <- metric_precision(top_k = 4)
m$update_state(c(0, 0, 1, 1), c(1, 1, 1, 1))
m$result()

## tf.Tensor(0.5, shape=(), dtype=float32)
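
With a list of thresholds, one precision value is generated per threshold (an illustrative sketch; the inputs are made up for this example):

m <- metric_precision(thresholds = c(0.5, 0.85))
m$update_state(c(0, 1, 0, 1),
               c(0.4, 0.6, 0.8, 0.9))
# At threshold 0.5: predictions 0.6, 0.8, 0.9 count as positive,
# giving TP = 2, FP = 1 -> precision ~0.667
# At threshold 0.85: only 0.9 counts as positive,
# giving TP = 1, FP = 0 -> precision 1
m$result()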

Usage with compile() API:

model |> compile(
  optimizer = 'sgd',
  loss = 'binary_crossentropy',
  metrics = list(metric_precision())
)

Usage with a loss with from_logits=TRUE:

model |> compile(
  optimizer = 'adam',
  loss = loss_binary_crossentropy(from_logits = TRUE),
  metrics = list(metric_precision(thresholds = 0))
)

See also

Other confusion metrics:
metric_auc()
metric_false_negatives()
metric_false_positives()
metric_precision_at_recall()
metric_recall()
metric_recall_at_precision()
metric_sensitivity_at_specificity()
metric_specificity_at_sensitivity()
metric_true_negatives()
metric_true_positives()

Other metrics:
Metric()
custom_metric()
metric_auc()
metric_binary_accuracy()
metric_binary_crossentropy()
metric_binary_focal_crossentropy()
metric_binary_iou()
metric_categorical_accuracy()
metric_categorical_crossentropy()
metric_categorical_focal_crossentropy()
metric_categorical_hinge()
metric_cosine_similarity()
metric_f1_score()
metric_false_negatives()
metric_false_positives()
metric_fbeta_score()
metric_hinge()
metric_huber()
metric_iou()
metric_kl_divergence()
metric_log_cosh()
metric_log_cosh_error()
metric_mean()
metric_mean_absolute_error()
metric_mean_absolute_percentage_error()
metric_mean_iou()
metric_mean_squared_error()
metric_mean_squared_logarithmic_error()
metric_mean_wrapper()
metric_one_hot_iou()
metric_one_hot_mean_iou()
metric_poisson()
metric_precision_at_recall()
metric_r2_score()
metric_recall()
metric_recall_at_precision()
metric_root_mean_squared_error()
metric_sensitivity_at_specificity()
metric_sparse_categorical_accuracy()
metric_sparse_categorical_crossentropy()
metric_sparse_top_k_categorical_accuracy()
metric_specificity_at_sensitivity()
metric_squared_hinge()
metric_sum()
metric_top_k_categorical_accuracy()
metric_true_negatives()
metric_true_positives()