`step_stem` creates a *specification* of a recipe step that will convert a list of tokens into a list of stemmed tokens.
```r
step_stem(recipe, ..., role = NA, trained = FALSE, columns = NULL,
  options = list(), stemmer = "SnowballC", skip = FALSE,
  id = rand_id("stem"))

# S3 method for step_stem
tidy(x, ...)
```
A recipe object. The step will be added to the sequence of operations for this recipe.
One or more selector functions to choose variables. For `step_stem`, this indicates the variables to be encoded into a list column. See [recipes::selections()] for more details. For the `tidy` method, these are not currently used.
Not used by this step since no new variables are created.
A logical to indicate if the quantities for preprocessing have been estimated.
A character string of variable names that will be populated (eventually) by the `terms` argument. This is `NULL` until the step is trained by [recipes::prep.recipe()].
A list of options passed to the stemmer.
A character string to select the stemming method. Defaults to `"SnowballC"`.
A logical. Should the step be skipped when the recipe is baked by [recipes::bake.recipe()]? While all operations are baked when [recipes::prep.recipe()] is run, some operations may not be able to be conducted on new data (e.g. processing the outcome variable(s)). Care should be taken when using `skip = TRUE` as it may affect the computations for subsequent operations.
A character string that is unique to this step to identify it.
A `step_stem` object.
An updated version of `recipe` with the new step added to the sequence of existing steps (if any).
Words tend to have different forms depending on context, such as organize, organizes, and organizing. In many situations it is beneficial to condense these forms into one, allowing for a smaller pool of words. Stemming is the act of chopping off the ends of words using a set of heuristics.
Note that stemming is only done at the end of each string and will therefore not work reliably on n-grams or sentences.
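To see what the heuristics do to individual tokens, the default `"SnowballC"` stemmer can be called directly via `SnowballC::wordStem()`; a minimal sketch, assuming the SnowballC package is installed:

```r
# Stem a handful of related word forms with the Porter heuristics,
# the default algorithm used by SnowballC::wordStem().
library(SnowballC)

tokens <- c("organize", "organizes", "organizing")
wordStem(tokens, language = "porter")
```

All three forms collapse to a single stem, which is what `step_stem` applies to every token in the selected list columns.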
[step_stopwords()], [step_tokenfilter()], [step_tokenize()]
```r
library(recipes)
data(okc_text)

okc_rec <- recipe(~ ., data = okc_text) %>%
  step_tokenize(essay0) %>%
  step_stem(essay0)

okc_obj <- okc_rec %>%
  prep(training = okc_text, retain = TRUE)

juice(okc_obj, essay0) %>%
  slice(1:2)
#> # A tibble: 2 x 1
#>   essay0
#>   <list>
#> 1 <chr>
#> 2 <chr>

juice(okc_obj) %>%
  slice(2) %>%
  pull(essay0)
#> []
#>  "i'm"    "chill"   "and"      "steadi"  "br"     "i'm"
#>  "a"      "teacher" "amp"      "musician" "br"    "i"
#>  "like"   "plai"    "outsid"   "dislik"  "school" "night"
#>  "br"     "and"     "i'm"      "veri"    "veri"   "lucki"

tidy(okc_rec, number = 2)
#> # A tibble: 1 x 3
#>   terms  value id
#>   <chr>  <chr> <chr>
#> 1 essay0 <NA>  stem_CErTV

tidy(okc_obj, number = 2)
#> # A tibble: 1 x 3
#>   terms          value     id
#>   <S3: quosures> <chr>     <chr>
#> 1 ~essay0        SnowballC stem_CErTV
```