Aggregate, store, and plot model metrics over time for monitoring
These three functions can be used for model monitoring (such as in a monitoring dashboard):

vetiver_compute_metrics() computes metrics (such as accuracy for a classification model or RMSE for a regression model) at a chosen time aggregation
vetiver_pin_metrics() updates an existing pin storing model metrics over time
vetiver_plot_metrics() creates a plot of metrics over time
Usage

vetiver_compute_metrics(
  data,
  date_var,
  period,
  truth,
  estimate,
  ...,
  metric_set = yardstick::metrics,
  every = 1L,
  origin = NULL,
  before = 0L,
  after = 0L,
  complete = FALSE
)

vetiver_pin_metrics(
  board,
  df_metrics,
  metrics_pin_name,
  .index = .index,
  overwrite = TRUE
)

vetiver_plot_metrics(
  df_metrics,
  .index = .index,
  .estimate = .estimate,
  .metric = .metric,
  .n = .n
)
Arguments

data: A data.frame containing the truth and estimate columns and any columns specified by the ... argument.
date_var: The column in data containing dates or date-times for monitoring, to be aggregated with period.
period: A string defining the period to group by (for example, "week", as used in the example below). Valid inputs include standard calendar periods such as "day", "month", and "year".
truth: The column identifier for the true results (that is numeric or factor). This should be an unquoted column name, although this argument is passed by expression and supports quasiquotation (you can unquote column names).
estimate: The column identifier for the predicted results (that is also numeric or factor). As with truth, this can be specified different ways, but the primary method is to use an unquoted variable name.
...: A set of unquoted column names or one or more dplyr selector functions to choose which variables contain the class probabilities. If truth is binary, only 1 column should be selected. Otherwise, there should be as many columns as factor levels of truth.

metric_set: A yardstick metric set function for computing metrics. Defaults to yardstick::metrics.
every: The number of periods to group together. For example, if the period was set to "year" with an every value of 2, then the years 1970 and 1971 would be placed in the same group.
origin: [Date(1) / POSIXct(1) / POSIXlt(1) / NULL] The reference date-time value. The default when left as NULL is the epoch time of 1970-01-01 00:00:00, in the time zone of the index. This is generally used to define the anchor time to count from, which is relevant when the every value is greater than 1.
before, after: [integer(1) / Inf] The number of values before or after the current element to include in the sliding window. Set to Inf to select all elements before or after the current element. Negative values are allowed, which allows you to "look forward" from the current element if used as the .before value, or "look backwards" if used as the .after value.
complete: Should the function be evaluated on complete windows only? If FALSE, the default, then partial computations are allowed.
board: A pin board, created by pins::board_folder(), pins::board_temp(), or another board function.

df_metrics: A tidy dataframe of metrics over time, such as created by vetiver_compute_metrics().
metrics_pin_name: Pin name for where the metrics are stored (as opposed to where the model object is stored with vetiver_pin_write()).
.index: The variable in df_metrics containing the aggregated dates or date-times (from data). Defaults to .index.
overwrite: If TRUE (the default), overwrite any metrics for dates that exist both in the existing pin and the new metrics with the new values. If FALSE, error when the new metrics contain overlapping dates with the existing pin.
.estimate: The variable in df_metrics containing the metric estimate. Defaults to .estimate.

.metric: The variable in df_metrics containing the metric type. Defaults to .metric.

.n: The variable in df_metrics containing the number of observations used for estimating the metric. Defaults to .n.
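To make the .index, .metric, .estimate, and .n defaults concrete, here is a minimal hand-built sketch of the shape of a metrics dataframe; the values are illustrative only, and in real use this dataframe comes from vetiver_compute_metrics():

```r
# Hand-built sketch of the metrics dataframe shape that
# vetiver_pin_metrics() and vetiver_plot_metrics() expect.
# Column names match the argument defaults: .index, .metric, .estimate, .n.
df_metrics <- data.frame(
  .index     = as.Date(c("2009-01-01", "2009-01-01", "2009-01-08")),
  .n         = c(7L, 7L, 28L),            # observations in each window
  .metric    = c("rmse", "rsq", "rmse"),  # metric type
  .estimator = "standard",
  .estimate  = c(6.78, 0.154, 4.61)       # metric value
)
names(df_metrics)
#> [1] ".index"     ".n"         ".metric"    ".estimator" ".estimate"
```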
Value

Both vetiver_compute_metrics() and vetiver_pin_metrics() return a dataframe of metrics. The vetiver_plot_metrics() function returns a ggplot2 object.
Details

Sometimes when you monitor a model at a given time aggregation, you may end up with dates in your new metrics (like new_metrics in the example) that are the same as dates in your existing aggregated metrics (like original_metrics in the example). This can happen if you need to re-run a monitoring report because something failed. With overwrite = TRUE (the default), vetiver_pin_metrics() will replace such metrics with the new values. With overwrite = FALSE, vetiver_pin_metrics() will error when there are overlapping dates.
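The overwrite = TRUE behavior can be sketched in base R; this is a simplified illustration of the idea, not vetiver's actual implementation. Existing rows whose dates also appear in the new metrics are dropped, and the new rows are appended:

```r
# Simplified illustration of overwrite = TRUE (not vetiver's actual code):
# keep existing rows whose .index does NOT appear in the new metrics,
# then append all of the new rows.
existing <- data.frame(
  .index    = as.Date(c("2011-01-06", "2011-02-03")),
  .metric   = "rmse",
  .estimate = c(2.1, 1.9)
)
new_rows <- data.frame(
  .index    = as.Date(c("2011-02-03", "2011-03-03")),  # 2011-02-03 overlaps
  .metric   = "rmse",
  .estimate = c(1.7, 1.5)
)
updated <- rbind(existing[!existing$.index %in% new_rows$.index, ], new_rows)
## the overlapping date 2011-02-03 now carries the new estimate, 1.7
```

With overwrite = FALSE, the analogous sketch would instead check for any overlap up front and stop with an error.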
For arguments used more than once in your monitoring dashboard, such as date_var, consider using R Markdown parameters to reduce repetition and/or errors.
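For example, a monitoring report might declare such repeated values once in its YAML header; the parameter names and values here are illustrative:

```yaml
---
title: "Monitor model metrics"
params:
  metrics_pin_name: "lm_fit_metrics"
  period: "week"
  every: 4
---
```

Code chunks in the report can then refer to params$metrics_pin_name, params$period, and params$every instead of repeating the literal values in each chunk.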
Examples

library(dplyr)
#> 
#> Attaching package: ‘dplyr’
#> The following objects are masked from ‘package:stats’:
#> 
#>     filter, lag
#> The following objects are masked from ‘package:base’:
#> 
#>     intersect, setdiff, setequal, union
library(parsnip)

data(Chicago, package = "modeldata")
Chicago <- Chicago %>% select(ridership, date, all_of(stations))
training_data <- Chicago %>% filter(date < "2009-01-01")
testing_data <- Chicago %>% filter(date >= "2009-01-01", date < "2011-01-01")
monitoring <- Chicago %>% filter(date >= "2011-01-01", date < "2012-12-31")
lm_fit <- linear_reg() %>% fit(ridership ~ ., data = training_data)

library(pins)
b <- board_temp()

## before starting monitoring, initiate the metrics and pin
## (for example, with the testing data):
original_metrics <-
  augment(lm_fit, new_data = testing_data) %>%
  vetiver_compute_metrics(date, "week", ridership, .pred, every = 4L)
pin_write(b, original_metrics, "lm_fit_metrics")
#> Guessing `type = 'rds'`
#> Creating new version '20220525T201118Z-04c47'
#> Writing to pin 'lm_fit_metrics'

## to continue monitoring with new data, compute metrics and update pin:
new_metrics <-
  augment(lm_fit, new_data = monitoring) %>%
  vetiver_compute_metrics(date, "week", ridership, .pred, every = 4L)
vetiver_pin_metrics(b, new_metrics, "lm_fit_metrics")
#> Guessing `type = 'rds'`
#> Replacing version '20220525T201118Z-04c47' with '20220525T201119Z-01714'
#> Writing to pin 'lm_fit_metrics'
#> # A tibble: 162 × 5
#>    .index        .n .metric .estimator .estimate
#>    <date>     <int> <chr>   <chr>          <dbl>
#>  1 2009-01-01     7 rmse    standard       6.78 
#>  2 2009-01-01     7 rsq     standard       0.154
#>  3 2009-01-01     7 mae     standard       5.25 
#>  4 2009-01-08    28 rmse    standard       4.61 
#>  5 2009-01-08    28 rsq     standard       0.576
#>  6 2009-01-08    28 mae     standard       2.98 
#>  7 2009-02-05    28 rmse    standard       1.90 
#>  8 2009-02-05    28 rsq     standard       0.916
#>  9 2009-02-05    28 mae     standard       1.17 
#> 10 2009-03-05    28 rmse    standard       1.24 
#> # … with 152 more rows

library(ggplot2)
vetiver_plot_metrics(new_metrics) +
  scale_size(range = c(2, 4))