`embed` is a package that contains extra steps for the `recipes` package for embedding categorical predictors into one or more numeric columns. All of the preprocessing methods are *supervised*.

These steps are contained in a separate package because the package dependencies, `rstanarm`, `lme4`, and `keras`, are fairly heavy.

The steps included are:

- `step_lencode_glm`, `step_lencode_bayes`, and `step_lencode_mixed` estimate the effect of each factor level on the outcome, and these estimates are used as the new encoding. The estimates come from a generalized linear model and can be computed without pooling (via `glm`) or with partial pooling (`stan_glm` or `lmer`). Currently implemented for numeric and two-class outcomes.
- `step_embed` uses `keras::layer_embedding` to translate the original *C* factor levels into a set of *D* new variables (*D* < *C*). The model fitting routine optimizes which factor levels are mapped to each of the new variables as well as the corresponding regression coefficients (i.e., neural network weights) that will be used as the new encodings.
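As a minimal sketch of the effect-encoding steps, the following assumes `recipes` and `embed` are installed and uses a made-up toy data frame (`df`, with a factor `city` and numeric outcome `y`) rather than a real dataset. `step_lencode_glm` replaces the factor column with the per-level estimates from a `glm` fit:

```r
library(recipes)  # assumes the recipes package is installed
library(embed)    # assumes the embed package is installed

# Toy data (hypothetical): one categorical predictor, one numeric outcome
set.seed(1)
df <- data.frame(
  city = factor(sample(c("a", "b", "c"), 100, replace = TRUE)),
  y    = rnorm(100)
)

rec <- recipe(y ~ city, data = df) %>%
  # encode each level of `city` as its GLM effect estimate on `y`
  step_lencode_glm(city, outcome = vars(y)) %>%
  prep()

# `city` is now a single numeric column holding the per-level estimates
baked <- bake(rec, new_data = NULL)
```

The partially pooled variants follow the same pattern: swapping in `step_lencode_bayes` or `step_lencode_mixed` only changes how the per-level estimates are fit.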

Some references for these methods are:

- Francois C and Allaire JJ (2018) *Deep Learning with R*, Manning
- Guo C and Berkhahn F (2016) “Entity Embeddings of Categorical Variables”
- Micci-Barreca D (2001) “A preprocessing scheme for high-cardinality categorical attributes in classification and prediction problems,” ACM SIGKDD Explorations Newsletter, 3(1), 27-32.
- Zumel N and Mount J (2017) “`vtreat`: a `data.frame` Processor for Predictive Modeling”

To install the package:

```
install.packages("embed")
## for development version:
require("devtools")
install_github("tidymodels/embed")
```

- Report a bug at https://github.com/tidymodels/embed/issues

- Max Kuhn (author, maintainer)