Bundling a model prepares it to be saved to a file and later restored for prediction in a new R session. See the 'Value' section for more information on bundles and their usage.
Usage
# S3 method for class 'luz_module_fitted'
bundle(x, ...)
Arguments
- x
A luz_module_fitted object returned from luz::fit.luz_module_generator().
- ...
Not used in this bundler and included for compatibility with the generic only. Additional arguments passed to this method will return an error.
Value
A bundle object with subclass bundled_luz_module_fitted.
Bundles are a list subclass with two components:
- object
An R object. Gives the output of native serialization methods from the model-supplying package, sometimes with additional classes or attributes that aid portability. This is often a raw object.
- situate
A function. The situate() function is defined when bundle() is called, though it is a loose analogue of an unbundle() S3 method for that object. Since the function is defined on bundle(), it has access to references and dependency information that can be saved alongside the object component. Calling unbundle() on a bundled object x calls x$situate(x$object), returning the unserialized version of object. situate() will also restore needed references, such as server instances and environment variables.
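To make that relationship concrete, here is a minimal sketch, assuming a fitted luz module mod like the one constructed in the Examples below:

# a bundle is a list with the two components described above
mod_bundle <- bundle(mod)
names(mod_bundle)

# unbundle() loosely amounts to calling the bundle's situate() on its object
mod_restored <- mod_bundle$situate(mod_bundle$object)

# which is equivalent to
mod_restored <- unbundle(mod_bundle)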
Bundles are R objects that represent a "standalone" version of their analogous model object. Thus, bundles are ready for saving to a file; saving with base::saveRDS() is our recommended serialization strategy for bundles, unless documented otherwise for a specific method.
To restore the original model object x in a new environment, load its bundle with base::readRDS() and run unbundle() on it. The output of unbundle() is a model object that is ready to predict() on new data, and other restored functionality (like plotting or summarizing) is supported as a side effect only.
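As a brief sketch of that workflow (the file path is illustrative; mod and test_dl are as in the Examples below):

# save the bundle to disk...
mod_bundle <- bundle(mod)
saveRDS(mod_bundle, file = "model_bundle.rds")

# ...then, in a new R session, restore and predict
mod_bundle <- readRDS("model_bundle.rds")
mod_restored <- unbundle(mod_bundle)
preds <- predict(mod_restored, test_dl)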
The bundle package wraps native serialization methods from model-supplying packages. Between versions, those model-supplying packages may change their native serialization methods, possibly introducing problems with re-loading objects serialized with previous package versions. The bundle package does not provide checks for these sorts of changes, and ought to be used in conjunction with tooling for managing and monitoring model environments like vetiver or renv.
See vignette("bundle") for more information on bundling and its motivation.
Details
For now, bundling methods for torch are only available via the luz package, "a higher level API for torch providing abstractions to allow for much less verbose training loops."
See also
This method wraps luz::luz_save() and luz::luz_load().
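For comparison, those wrapped luz functions can also be called directly on a fitted module; a minimal sketch (the file path is illustrative; mod is as in the Examples below):

# serialize and reload a fitted luz module with luz itself
luz::luz_save(mod, "model.pt")
mod_reloaded <- luz::luz_load("model.pt")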
Other bundlers: bundle(), bundle.H2OAutoML(), bundle.bart(), bundle.keras.engine.training.Model(), bundle.model_fit(), bundle.model_stack(), bundle.recipe(), bundle.step_umap(), bundle.train(), bundle.workflow(), bundle.xgb.Booster()
Examples
if (torch::torch_is_installed()) {
  # fit model and bundle ------------------------------------------------
  library(torch)
  library(torchvision)
  library(luz)

  set.seed(1)

  # example adapted from luz pkgdown article "Autoencoder"
  dir <- tempdir()

  # a dataset that returns each input image as its own target (autoencoding)
  mnist_dataset2 <- torch::dataset(
    inherit = mnist_dataset,
    .getitem = function(i) {
      output <- super$.getitem(i)
      output$y <- output$x
      output
    }
  )

  train_ds <- mnist_dataset2(
    dir,
    download = TRUE,
    transform = transform_to_tensor
  )

  test_ds <- mnist_dataset2(
    dir,
    train = FALSE,
    transform = transform_to_tensor
  )

  train_dl <- dataloader(train_ds, batch_size = 128, shuffle = TRUE)
  test_dl <- dataloader(test_ds, batch_size = 128)

  net <- nn_module(
    "Net",
    initialize = function() {
      self$encoder <- nn_sequential(
        nn_conv2d(1, 6, kernel_size = 5),
        nn_relu(),
        nn_conv2d(6, 16, kernel_size = 5),
        nn_relu()
      )
      self$decoder <- nn_sequential(
        nn_conv_transpose2d(16, 6, kernel_size = 5),
        nn_relu(),
        nn_conv_transpose2d(6, 1, kernel_size = 5),
        nn_sigmoid()
      )
    },
    forward = function(x) {
      x %>%
        self$encoder() %>%
        self$decoder()
    },
    predict = function(x) {
      self$encoder(x) %>%
        torch_flatten(start_dim = 2)
    }
  )

  mod <- net %>%
    setup(
      loss = nn_mse_loss(),
      optimizer = optim_adam
    ) %>%
    fit(train_dl, epochs = 1, valid_data = test_dl)

  mod_bundle <- bundle(mod)

  # then, after saveRDS + readRDS or passing to a new session ----------
  mod_unbundled <- unbundle(mod_bundle)

  mod_unbundled_preds <- predict(mod_unbundled, test_dl)
}