Exercises are interactive R code chunks that allow readers to directly execute R code and see its results. There are many options associated with tutorial exercises (all of which are described in more detail below):
| Option | Description |
|--------|-------------|
| `exercise.cap` | Caption for exercise chunk (defaults to "Code"). |
| `exercise.eval` | Whether to pre-evaluate the exercise so the reader can see some default output (defaults to `FALSE`). |
| `exercise.lines` | Lines of code for exercise editor (defaults to size of code chunk). |
| `exercise.timelimit` | Number of seconds to limit execution time to (defaults to 30). |
| `exercise.checker` | Function used to check exercise answers (e.g., `gradethis::grade_learnr()`). |
| `exercise.error.check.code` | A string containing R code to use for checking code when an exercise evaluation error occurs (e.g., `"gradethis::grade_code()"`). |
| `exercise.completion` | Whether to enable code completion in the exercise editor. |
| `exercise.diagnostics` | Whether to enable code diagnostics in the exercise editor. |
| `exercise.startover` | Whether to include a "Start Over" button for the exercise. |
| `exercise.warn_invisible` | Whether to display an invisible result warning if the last value returned is invisible. |
| `exercise.reveal_solution` | Whether or not the solution should be revealed to the user (defaults to `TRUE`). |
Note that these options can all be specified either globally or per-chunk. For example, the following code sets global default options using the setup chunk and also sets some local options on an individual exercise chunk:
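A sketch of what this might look like (the `addition` chunk label and the specific option values here are illustrative):

````markdown
```{r setup, include=FALSE}
library(learnr)
# Global defaults for all exercises in this tutorial
tutorial_options(exercise.timelimit = 60, exercise.eval = TRUE)
```

```{r addition, exercise=TRUE, exercise.eval=FALSE, exercise.lines=5}
1 + 1
```
````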
There are also some other specialized chunks that can be used with an exercise chunk, including:
Exercise setup chunks, which enable you to execute code to set up the environment immediately prior to executing submitted code.
Exercise solution chunks, which enable you to provide a solution to the exercise that can be optionally viewed by users of the tutorial.
The use of these special chunks is also described in detail below.
By default, exercise code chunks are NOT pre-evaluated (i.e., no initial output is displayed for them). However, in some cases you may want to show initial exercise output (especially for exercises like the ones above, where the user is asked to modify code rather than write new code from scratch).
You can arrange for an exercise to be pre-evaluated (and its output shown) using the
exercise.eval chunk option. This option can also be set either globally or per-chunk:
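A sketch of both styles (the `filter` chunk and its code body are illustrative):

````markdown
```{r setup, include=FALSE}
tutorial_options(exercise.eval = TRUE)  # pre-evaluate all exercises by default
```

```{r filter, exercise=TRUE, exercise.eval=FALSE}
# per-chunk override: this exercise shows no initial output
filter(mtcars, cyl == 8)
```
````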
Code chunks with
exercise=TRUE are evaluated within standalone environments. This means that they don’t have access to previous computations from within the document. This constraint is imposed so that users can execute exercises in any order (i.e. correct execution of one exercise never depends on completion of a prior exercise).
You can however arrange for setup code to be run before evaluation of an exercise to ensure that the environment is primed correctly. There are three ways to provide setup code for an exercise:
Add code to the global
setup chunk. This code is run once at the startup of the tutorial and is shared by all exercises within the tutorial. For example:
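A global setup chunk might load the packages and data that every exercise needs; the package choices here (dplyr and nycflights13) are illustrative:

````markdown
```{r setup, include=FALSE}
library(learnr)
library(dplyr)
library(nycflights13)
```
````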
Create a setup chunk that’s shared by several exercises. If you don’t want to rely on global setup but would rather create setup code that’s used by only a handful of exercises, you can use the `exercise.setup` chunk attribute to provide the label of another chunk that will perform setup tasks. To illustrate, we’ll re-write the previous example to use a shared setup chunk named `prepare-flights`:
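A sketch of the shared setup pattern (the exercise code bodies are illustrative):

````markdown
```{r prepare-flights}
nycflights <- nycflights13::flights
```

```{r filter, exercise=TRUE, exercise.setup="prepare-flights"}
# Change the filter to select February rather than January
filter(nycflights, month == 1)
```
````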
Create a setup chunk that’s specific to another chunk using a `-setup` chunk suffix. To do this, give your exercise chunk a label (e.g., `filter`), then add another chunk with the same label plus a `-setup` suffix (e.g., `filter-setup`). For example:
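A sketch of the suffix-based pattern (code bodies illustrative):

````markdown
```{r filter-setup}
nycflights <- nycflights13::flights
```

```{r filter, exercise=TRUE}
# Change the filter to select February rather than January
filter(nycflights, month == 1)
```
````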
If you want to chain setup chunks, such that a setup chunk inherits from its parent setup chunk and so on, you can use `exercise.setup`. (Note: you must use `exercise.setup` for chaining; you cannot rely on the `-setup` suffix labelling scheme.)
learnr will keep following the trail of `exercise.setup` chunks until there are no more chunks to be found. To demonstrate, let’s turn the first exercise in the previous examples into another setup chunk called `filtered-flights` with `exercise.setup="prepare-flights"`. This chunk will now filter the data and store the result, which can then be referenced inside the `arrange` exercise:
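One possible chain, using the chunk labels from the text (code bodies illustrative):

````markdown
```{r prepare-flights}
nycflights <- nycflights13::flights
```

```{r filtered-flights, exercise.setup="prepare-flights"}
nycflights <- filter(nycflights, month == 1)
```

```{r arrange, exercise=TRUE, exercise.setup="filtered-flights"}
arrange(nycflights, desc(arr_delay))
```
````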
You can chain exercise chunks as well, but keep in mind that an exercise’s pre-filled code, not the user’s input, serves as the setup code for downstream chunks. For example, if you turned `filtered-flights` back into an exercise, its pre-filled code would be used as setup for the `arrange` exercise that uses it as its setup:
You can provide a hint or solution for each exercise that users can optionally display. Hints can be based on either R code snippets or on custom markdown/HTML content.
To create a hint or solution based on R code, simply create a new code chunk with a “-hint” or “-solution” chunk label suffix. For example:
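A sketch of hint and solution chunks for a hypothetical `filter` exercise:

````markdown
```{r filter, exercise=TRUE}
# filter the flights table to include only United Airlines flights
```

```{r filter-hint}
filter(flights, ...)
```

```{r filter-solution}
filter(flights, carrier == "UA")
```
````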
A “Hint” or “Solution” button is added to the left side of the exercise header region:
To create a hint based on custom markdown content, simply add a `<div>` tag with an `id` attribute that marks it as the hint for your exercise (e.g., “filter-hint”). For example:
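A sketch of a markdown hint for a hypothetical `filter` exercise:

````markdown
```{r filter, exercise=TRUE}
# filter the flights table to include only United Airlines flights
```

<div id="filter-hint">
**Hint:** You may want to use the dplyr `filter` function.
</div>
````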
The content within the
<div> will be displayed underneath the R code editor for the exercise whenever the user presses the “Hint” button.
For R code hints, you can provide a sequence of hints that reveal progressively more of the solution as desired by the user. To do this, create a sequence of indexed hint chunks (e.g., “-hint-1”, “-hint-2”, “-hint-3”, etc.) for your exercise chunk. For example:
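A sketch of progressive hints for a hypothetical `filter` exercise:

````markdown
```{r filter, exercise=TRUE}
# filter the flights table to include only United Airlines flights
```

```{r filter-hint-1}
filter(flights, ...)
```

```{r filter-hint-2}
filter(flights, carrier == ...)
```

```{r filter-solution}
filter(flights, carrier == "UA")
```
````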
By default, the exercise solution is made available to the user with the “Solution” or “Hint” button (if there are hints, those will appear first). If you would prefer not to reveal the solution to an exercise, you can disable revealing the solution by adding `exercise.reveal_solution = FALSE` to the chunk options of either the exercise or its corresponding `-solution` chunk.
You can also set this option globally in the global setup chunk with `tutorial_options()`. When set this way, the chunk-level option takes precedence over the global option, so you can choose to always reveal or hide the solution to a particular exercise. The current default is to reveal exercise solutions, but in a future version of learnr the default behavior will change to hide solutions.
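A sketch of the global setting with a per-exercise override (chunk labels illustrative):

````markdown
```{r setup, include=FALSE}
tutorial_options(exercise.reveal_solution = FALSE)  # hide all solutions by default
```

```{r filter, exercise=TRUE, exercise.reveal_solution=TRUE}
# this exercise's solution will still be revealed
```

```{r filter-solution}
filter(flights, carrier == "UA")
```
````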
You might want users of your tutorials to see only one sub-topic at a time as they work through the material (this can be helpful to reduce distractions and maintain focus on the current task). If you specify the
progressive option then all Level 3 headings (
###) will be revealed progressively. For example:
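The `progressive` option is set in the tutorial’s YAML front matter; a minimal sketch (title illustrative):

````markdown
---
title: "Tutorial with progressive reveal"
output:
  learnr::tutorial:
    progressive: true
runtime: shiny_prerendered
---
````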
You can also specify this option on a per topic basis using the
data-progressive attribute. For example, the following code enables progressive rendering for a single topic:
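A sketch of the per-topic attribute on a topic heading (topic name illustrative):

````markdown
## Topic 1 {data-progressive=TRUE}
````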
You can also use `data-progressive=FALSE` to disable progressive rendering for an individual topic if the global `progressive` option is set to `true`.
Progressive reveal provides an easy way to cue exercises one at a time: place each exercise under its own Level 3 header (`###`). This can be useful when a second exercise builds on the first, since revealing both at once would give away the answer to the first.
Note that this feature is only available if you are using the
learnr::tutorial R Markdown format (other custom formats may have their own internal mechanisms for progressive reveal).
Note that when the `progressive` option is set to true, the tutorial will require students to complete any exercises in a sub-section before advancing to the next sub-section.
You may want to allow users of your tutorials to skip through exercises that they can’t quite figure out. This might especially be true if you want users to be able to optionally see the next exercise even if they haven’t completed the prior one.
If you specify the
allow_skip option then students will be able to advance through a sub-section without completing the exercises. For example:
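Like `progressive`, the `allow_skip` option goes in the YAML front matter; a minimal sketch (title illustrative):

````markdown
---
title: "Tutorial with skippable exercises"
output:
  learnr::tutorial:
    allow_skip: true
runtime: shiny_prerendered
---
````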
You can also specify this option on a per sub-topic basis using the
data-allow-skip attribute. For example, the following code enables exercise skipping within a single sub-topic:
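A sketch of the per-sub-topic attribute on a Level 3 heading (heading text illustrative):

````markdown
### Exercise 2 {data-allow-skip=TRUE}
````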
You can also use `data-allow-skip=FALSE` to disable exercise skipping for an individual sub-topic if the global `allow_skip` option is set to `true`.
learnr allows full control over the feedback provided to exercise submissions via `tutorial_options()`. We’ll eventually cover how to implement a custom `exercise.checker`, but for the sake of demonstration, this section uses gradethis’s approach to exercise checking, which doesn’t require knowledge of `exercise.checker`. (Note: gradethis is still a work in progress; you may want to consider alternatives such as checkr.) To use gradethis’s approach to exercise checking inside of learnr, just call `gradethis_setup()` in a setup chunk, which will set `tutorial_options(exercise.checker = gradethis::grade_learnr)` (among other things).
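A minimal setup chunk for this approach:

````markdown
```{r setup, include=FALSE}
library(learnr)
gradethis::gradethis_setup()
```
````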
Checking of exercise results may be done through a
*-check chunk. With a gradethis setup, results can be graded with
grade_result(), like so:
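A sketch of a result check (the `sqrt2` exercise and its grading logic are illustrative):

````markdown
```{r sqrt2, exercise=TRUE}
# compute the square root of 2
```

```{r sqrt2-check}
grade_result(
  pass_if(~ identical(.result, sqrt(2)), "Well done!"),
  fail_if(~ TRUE, "Not quite; try again.")
)
```
````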
When an exercise
*-check chunk is provided, learnr provides an additional “Submit Answer” button, which allows users to experiment with various answers before formally submitting an answer:
Checking of exercise code may be done through a
*-code-check chunk. With a gradethis setup, if you supply a
*-solution chunk and call
*-code-check, then you get detection of differences in the R syntax between the submitted exercise code and the solution code.
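A sketch of a code check paired with a solution chunk (the `sqrt2` exercise is illustrative):

````markdown
```{r sqrt2, exercise=TRUE}
# compute the square root of 2
```

```{r sqrt2-solution}
sqrt(2)
```

```{r sqrt2-code-check}
grade_code()
```
````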
It’s worth noting that, when a
*-code-check chunk is supplied, the check is done prior to evaluation of the exercise submission, meaning that if the
*-code-check chunk returns feedback, then that feedback is displayed, no exercise code is evaluated, and no result check is performed.
In the event that an exercise submission generates an error, checking of the code (or its result, which is an error condition) may be done through either a
*-error-check chunk or through the global
exercise.error.check.code option. If an
*-error-check chunk is provided, you must also include a
*-check chunk, typically to provide feedback in case the submission doesn’t generate an error.
With a gradethis setup,
exercise.error.check.code is set to
grade_code(). This means that, by default, users will receive intelligent feedback for a submission that generates an error using the
*-solution chunk, if one is provided.
To learn more about grading exercises with gradethis, see its grading demo.
If you need custom exercise checking logic that isn’t already provided by grading packages like gradethis, then you may want to write your own `exercise.checker` function. This function is called on exercise submission whenever `*-code-check` chunks exist. When called, it receives all the information that learnr knows about the exercise at the time of the checking, including the `check_code`, the exercise environments, and the `last_value` (if any).
last_value (if any). Checking can be performed at three different time points, so the values supplied can differ depending on the time point:
last_valuecontains the error condition object.
last_valuecontains the last printed value.
If, at any one of these stages, feedback should be provided, then `exercise.checker` should return an R list with, at the very least, a `correct` flag and a feedback `message`:
| Field | Description |
|-------|-------------|
| `message` | Feedback message. Can be a plain character vector or can be HTML produced via the htmltools package. |
| `correct` | `TRUE`/`FALSE` logical value indicating whether the submitted answer was correct. |
| `type` | Feedback type (visual presentation style). Can be “auto”, “success”, “info”, “warning”, “error”, or “custom”. Note that “custom” implies that the “message” field is custom HTML rather than a character vector. |
| `location` | Location for feedback (“append”, “prepend”, or “replace”). |
Below is a rough sketch of how one might implement an
exercise.checker function. Notice how the presence of
envir_result may be used to determine which type of check is being done (i.e., code or result check).
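A minimal sketch of such a function; the helper predicates `is_bad_code()` and `is_bad_result()` are hypothetical placeholders for your own logic:

```r
my_checker <- function(label, user_code, solution_code, check_code,
                       envir_result, evaluate_result, envir_prep,
                       last_value, ...) {
  if (is.null(envir_result)) {
    # Code check: the exercise has not been evaluated yet
    if (is_bad_code(user_code, check_code)) {
      return(list(message = "I wasn't expecting that code.", correct = FALSE))
    }
    return(list(message = "Nice code!", correct = TRUE))
  }
  # Result check: the exercise has been evaluated
  if (is_bad_result(last_value, check_code)) {
    return(list(message = "I wasn't expecting that result.", correct = FALSE))
  }
  list(message = "Great job!", correct = TRUE, type = "success")
}
```

You would then register it with `tutorial_options(exercise.checker = my_checker)` in the setup chunk.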
See the table below for a full description of all the arguments supplied to `exercise.checker`:

| Argument | Description |
|----------|-------------|
| `label` | Label for exercise chunk. |
| `user_code` | R code submitted by the user. |
| `solution_code` | Code provided within the “-solution” chunk for the exercise. |
| `check_code` | Code provided within the “-check” chunk for the exercise. |
| `envir_result` | The R environment after the execution of the chunk. |
| `evaluate_result` | The return value from the `evaluate::evaluate` function. |
| `envir_prep` | A copy of the R environment before the execution of the chunk. |
| `last_value` | The last value of the evaluated chunk. |
| `...` | Unused (include for compatibility with parameters to be added in the future). |
By default, code completions are automatically provided as users type within the exercise editor:
You can optionally disable code completion (either globally or on a per-chunk basis) using the
exercise.completion option. For example, the following illustrates turning off completions globally then enabling them for an individual chunk:
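A sketch of disabling completions globally and re-enabling them for one chunk (chunk label illustrative):

````markdown
```{r setup, include=FALSE}
tutorial_options(exercise.completion = FALSE)  # disable completions globally
```

```{r filter, exercise=TRUE, exercise.completion=TRUE}
# completions re-enabled for this exercise only
```
````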
Similarly, simple code diagnostics can also be provided, to inform users of errors in the R code written in exercise editors. Diagnostics can be controlled (either globally or on a per-chunk basis) using the `exercise.diagnostics` option.
By default, the size of the exercise editor provided to users will match the number of lines in your code chunk (with a minimum of 2 lines). If the user adds additional lines in the course of editing, the editor will grow vertically up to 15 lines, after which it will display a scrollbar.
You can also specify a number of lines explicitly using the
exercise.lines chunk option (this can be done on a per-chunk or global basis). For example, the following chunk specifies that the exercise code editor should be 15 lines high:
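A sketch of a 15-line editor (the chunk label and prompt are illustrative):

````markdown
```{r add-function, exercise=TRUE, exercise.lines=15}
# Write a function to add two numbers together
add <- function() {
  
}
```
````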
To mediate the problem of code which takes longer than expected to run, you can specify the `exercise.timelimit` chunk option or, alternatively, the global `tutorial.exercise.timelimit` option.
The following demonstrates setting a 10 second time limit as a global option, document level option, and chunk level option:
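A sketch of the three levels (chunk labels illustrative):

````markdown
```r
# Global R option (set before running the tutorial)
options(tutorial.exercise.timelimit = 10)
```

```{r setup, include=FALSE}
# Document level, in the setup chunk
tutorial_options(exercise.timelimit = 10)
```

```{r exercise1, exercise=TRUE, exercise.timelimit=10}
# Chunk level
```
````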
Since tutorials are a highly interactive format you should in general be designing exercises that take no longer than 5 or 10 seconds to execute. Correspondingly, the default value for
tutorial.exercise.timelimit if not otherwise specified is 30 seconds.