The Ames housing data will be used to demonstrate how regression models can be made using parsnip. We’ll load the data set and create a simple training/test set split:
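A minimal sketch of that setup, assuming the Ames data from the AmesHousing package and the rsample package for the split (the seed, proportion, and stratification variable here are illustrative, not prescribed by the text):

```r
library(tidymodels)  # loads parsnip, rsample, recipes, dplyr, etc.

ames <- AmesHousing::make_ames()

set.seed(4595)  # illustrative seed
data_split <- initial_split(ames, prop = 0.75, strata = "Sale_Price")

ames_train <- training(data_split)
ames_test  <- testing(data_split)
```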

Random Forests

We’ll start by fitting a random forest model to a small set of parameters. Let’s say that the model will include the predictors Longitude, Latitude, Lot_Area, Neighborhood, and Year_Sold. A simple random forest model can be specified via:
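One way to write that specification (a sketch; the object name rf_defaults is ours):

```r
# A random forest for regression, leaving all main arguments at their defaults
rf_defaults <- rand_forest(mode = "regression")
rf_defaults
```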

The model will be fit with the ranger package. Since we didn’t add any extra arguments to the specification, many of the arguments will be set to their defaults from the function used by the engine (ranger::ranger()). The help page for the model function describes the changes that are made to the default parameters, and the translate() function can also be used to see them.
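For example, translate() shows how the parsnip specification maps to the engine’s call (shown here for the ranger engine; output omitted):

```r
rand_forest(mode = "regression") %>%
  set_engine("ranger") %>%
  translate()
```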

parsnip gives two different interfaces to the models: the formula and non-formula interfaces. Let’s start with the non-formula interface:
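A sketch of a non-formula (x/y) fit, reusing the split and predictor names from above; the log10 transformation of the outcome matches the note that follows:

```r
preds <- c("Longitude", "Latitude", "Lot_Area", "Neighborhood", "Year_Sold")

rf_xy_fit <-
  rf_defaults %>%
  set_engine("ranger") %>%
  fit_xy(
    x = ames_train[, preds],
    y = log10(ames_train$Sale_Price)
  )

rf_xy_fit
```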

The non-formula interface doesn’t do anything to the predictors before passing them to the underlying model function. This particular model does not require indicator variables to be created prior to fitting (note that the output shows “Number of independent variables: 5”).

For regression models, the basic predict method can be used and returns a tibble with a column named .pred:
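For instance, test set predictions could be collected alongside the (logged) outcome like this (test_results is our name for the assembled tibble):

```r
test_results <-
  ames_test %>%
  select(Sale_Price) %>%
  mutate(Sale_Price = log10(Sale_Price)) %>%
  bind_cols(
    predict(rf_xy_fit, new_data = ames_test[, preds])
  )

test_results
```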

Note that:

  • If the model required indicator variables, we would have to create them manually prior to using fit (perhaps using the recipes package).
  • We had to manually log the outcome prior to modeling.

Now, for illustration, let’s use the formula method with some new parameter values:
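A sketch of the formula interface with different values for mtry and trees (the specific values are illustrative):

```r
rand_forest(mode = "regression", mtry = 3, trees = 1000) %>%
  set_engine("ranger") %>%
  fit(
    log10(Sale_Price) ~ Longitude + Latitude + Lot_Area + Neighborhood + Year_Sold,
    data = ames_train
  )
```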

Suppose that there was some feature in the randomForest package that we’d like to evaluate. To do so, the only part of the syntax that needs to change is the set_engine argument:
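For example, keeping the same specification but switching the engine (a sketch):

```r
rand_forest(mode = "regression", mtry = 3, trees = 1000) %>%
  set_engine("randomForest") %>%
  fit(
    log10(Sale_Price) ~ Longitude + Latitude + Lot_Area + Neighborhood + Year_Sold,
    data = ames_train
  )
```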

Look at how the formula code that was printed out differs: one function uses the argument name ntree and the other uses num.trees. parsnip doesn’t require you to know the specific names of the main arguments.

Now suppose that we want to modify the value of mtry based on the number of predictors in the data. Usually, the default value would be floor(sqrt(num_predictors)). A pure bagging model would require an mtry value equal to the total number of predictors. There may be cases where you won’t know how many predictors are going to be present when the model is fit (perhaps due to the generation of indicator variables or a variable filter), so this value can be difficult to know exactly ahead of time.

When the model is being fit by parsnip, data descriptors are made available. These attempt to let you know what you will have available when the model is fit. When a model object is created (say using rand_forest()), the values of the arguments that you give it are immediately evaluated… unless you delay them. To delay the evaluation of any argument, you can use rlang::expr() to make an expression.

Two relevant descriptors for what we are about to do are:

  • .preds(): the number of predictor variables in the data set prior to dummy variable creation.
  • .cols(): the number of predictor columns after dummy variables (or other encodings) are created.

Since ranger won’t create indicator variables, .preds() would be appropriate for using mtry for a bagging model.

For example, let’s use an expression with the .preds() descriptor to fit a bagging model:
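A sketch of that bagging fit, passing the .preds() descriptor as the value of mtry (depending on the parsnip version, the descriptor may need to be wrapped in rlang::expr() to delay its evaluation):

```r
rand_forest(mode = "regression", mtry = .preds(), trees = 1000) %>%
  set_engine("ranger") %>%
  fit(
    log10(Sale_Price) ~ Longitude + Latitude + Lot_Area + Neighborhood + Year_Sold,
    data = ames_train
  )
```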

Regularized Linear Regression

A linear model might work here too; the linear_reg() model can be used. To add regularization/penalization, there are two engines that can do that here: the glmnet and sparklyr packages. The former will be used here, and note that it only implements the non-formula method. parsnip will allow either interface to be used, though.

When regularization is used, the predictors should first be centered and scaled before being given to the model. The formula method won’t do that, so some other method will be required. We’ll use the recipes package for that (more information here).
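A sketch of that preprocessing followed by a glmnet fit; the particular recipe steps, penalty, and mixture values are illustrative choices, not fixed by the text:

```r
norm_recipe <-
  recipe(
    Sale_Price ~ Longitude + Latitude + Lot_Area + Neighborhood + Year_Sold,
    data = ames_train
  ) %>%
  step_other(Neighborhood) %>%        # pool infrequent neighborhoods
  step_dummy(all_nominal()) %>%       # create indicator variables
  step_center(all_predictors()) %>%
  step_scale(all_predictors()) %>%
  step_log(Sale_Price, base = 10) %>%
  # estimate the means and standard deviations from the training set
  prep(training = ames_train, retain = TRUE)

glmn_fit <-
  linear_reg(penalty = 0.001, mixture = 0.5) %>%
  set_engine("glmnet") %>%
  fit(Sale_Price ~ ., data = bake(norm_recipe, new_data = NULL))

glmn_fit
```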

If penalty were not specified, all of the lambda values would be computed.

To get the predictions for this specific value of lambda (aka penalty):
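One way to do this, building on the objects sketched above: apply the same trained recipe to the test set and then call predict():

```r
# Process the test set predictors with the trained recipe
test_normalized <- bake(norm_recipe, new_data = ames_test, all_predictors())

# predict() uses the penalty value stored in the model specification
predict(glmn_fit, new_data = test_normalized)
```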