
I am trying to use knn in prediction but would like to conduct principal component analysis first to reduce dimensionality.

However, after I generate the principal components and feed them to kNN, it throws this error:

"Error in [.data.frame(data, , all.vars(Terms), drop = FALSE) :
undefined columns selected"

along with this warning:

"In addition: Warning message: In nominalTrainWorkflow(x = x, y = y, wts = weights, info = trainInfo, : There were missing values in resampled performance measures."

Here is my sample:

sample = cbind(rnorm(20, 100, 10), matrix(rnorm(100, 10, 2), nrow = 20)) %>%
  data.frame()

The first 15 rows go into the training set:

train1 = sample[1:15, ]
test = sample[16:20, ]

Eliminate the dependent variable and run PCA:

pca.tr=sample[1:15,2:6]
pcom = prcomp(pca.tr, scale.=T)
pca.tr=data.frame(True=train1[,1], pcom$x)
#keep the first 2 columns (the response True and the first principal component)
pca.tr = pca.tr[, 1:2]

train.ct = trainControl(method = "repeatedcv", number = 3, repeats=1)
k = train(train1[,1] ~ .,
          method     = "knn",
          tuneGrid   = expand.grid(k = 1:5),
          trControl  = train.control, preProcess='scale',
          metric     = "RMSE",
          data       = cbind(train1[,1], pca.tr))

Any advice is appreciated!

Comment: If the answer solved your problem, consider marking it as accepted. – missuse

1 Answer


Use better column names and a formula without subscripts.

You really should try to post a reproducible example. Some of your code was wrong.

Also, there is a "pca" method for preProc that does the appropriate thing by recomputing the PCA scores inside of resampling.
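A side note, not from the original answer: the behavior of that "pca" step can be controlled through caret's preProcOptions argument to trainControl(), for example to keep a fixed number of components instead of the default 95% variance threshold. A minimal sketch, assuming caret is loaded (the object name ctrl.pca is just illustrative):

ctrl.pca <- trainControl(method = "repeatedcv", number = 3, repeats = 1,
                         # pcaComp keeps an exact number of components,
                         # overriding the default thresh = 0.95 cutoff
                         preProcOptions = list(pcaComp = 2))
# pass ctrl.pca as trControl together with preProcess = 'pca' in train()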

library(caret)
#> Loading required package: lattice
#> Loading required package: ggplot2
library(dplyr)
#> 
#> Attaching package: 'dplyr'
#> The following objects are masked from 'package:stats':
#> 
#>     filter, lag
#> The following objects are masked from 'package:base':
#> 
#>     intersect, setdiff, setequal, union

set.seed(55)
sample = cbind(rnorm(20, 100, 10), matrix(rnorm(100, 10, 2), nrow = 20)) %>%
  data.frame()

train1 = sample[1:15, ]
test = sample[16:20, ]

pca.tr=sample[1:15,2:6]
pcom = prcomp(pca.tr, scale.=T)
pca.tr=data.frame(True=train1[,1], pcom$x)
#keep the first 2 columns (the response True and the first principal component)
pca.tr = pca.tr[, 1:2]

dat <- cbind(train1[,1], pca.tr) %>% 
  # give the columns syntactic names so they can be used in a formula
  setNames(c("y", "True", "PC1"))

train.ct = trainControl(method = "repeatedcv", number = 3, repeats=1)

set.seed(356)
k = train(y ~ .,
          method     = "knn",
          tuneGrid   = expand.grid(k = 1:5),
          trControl  = train.ct, # this argument was wrong in your code
          preProcess='scale',
          metric     = "RMSE",
          data       = dat)
k
#> k-Nearest Neighbors 
#> 
#> 15 samples
#>  2 predictor
#> 
#> Pre-processing: scaled (2) 
#> Resampling: Cross-Validated (3 fold, repeated 1 times) 
#> Summary of sample sizes: 11, 10, 9 
#> Resampling results across tuning parameters:
#> 
#>   k  RMSE      Rsquared   MAE     
#>   1  4.979826  0.4332661  3.998205
#>   2  5.347236  0.3970251  4.312809
#>   3  5.016606  0.5977683  3.939470
#>   4  4.504474  0.8060368  3.662623
#>   5  5.612582  0.5104171  4.500768
#> 
#> RMSE was used to select the optimal model using the smallest value.
#> The final value used for the model was k = 4.

# or 
set.seed(356)
train(X1 ~ .,
      method     = "knn",
      tuneGrid   = expand.grid(k = 1:5),
      trControl  = train.ct, 
      preProcess= c('pca', 'scale'),
      metric     = "RMSE",
      data       = train1)
#> k-Nearest Neighbors 
#> 
#> 15 samples
#>  5 predictor
#> 
#> Pre-processing: principal component signal extraction (5), scaled
#>  (5), centered (5) 
#> Resampling: Cross-Validated (3 fold, repeated 1 times) 
#> Summary of sample sizes: 11, 10, 9 
#> Resampling results across tuning parameters:
#> 
#>   k  RMSE       Rsquared   MAE      
#>   1  13.373189  0.2450736  10.592047
#>   2  10.217517  0.2952671   7.973258
#>   3   9.030618  0.2727458   7.639545
#>   4   8.133807  0.1813067   6.445518
#>   5   8.083650  0.2771067   6.551053
#> 
#> RMSE was used to select the optimal model using the smallest value.
#> The final value used for the model was k = 5.

Created on 2019-04-15 by the reprex package (v0.2.1)

These look worse in terms of RMSE, but the previous run underestimates the RMSE since it assumes that there is no variation in the PCA scores (they were computed once, outside of resampling).
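
Not from the original answer, but to close the loop on the unused test set from the question: if you assign that second train() call to an object (fit below is just an illustrative name), caret stores the PCA and scaling estimated on the training data and re-applies them when predicting, so the held-out rows can be scored directly. With the manual approach you would instead have to project test onto the training PCA yourself via predict(pcom, ...) and rebuild the column names. A rough sketch:

set.seed(356)
fit <- train(X1 ~ .,
             method     = "knn",
             tuneGrid   = expand.grid(k = 1:5),
             trControl  = train.ct,
             preProcess = c('pca', 'scale'),
             metric     = "RMSE",
             data       = train1)
# new data only needs the original predictor columns; the stored
# preprocessing (PCA + scaling) is applied automatically
predict(fit, newdata = test)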