2 votes

I read a similar post related to this problem, but I am afraid this error is due to something else. I have a CSV file with 8 observations and 10 variables:

 > str(rorIn)

'data.frame':   8 obs. of  10 variables:
 $ Acuity             : Factor w/ 3 levels "Elective  ","Emergency ",..: 1 1 2 2 1 2 2 3
 $ AgeInYears         : int  49 56 77 65 51 79 67 63
 $ IsPriority         : int  0 0 1 0 0 1 0 1
 $ AuthorizationStatus: Factor w/ 1 level "APPROVED  ": 1 1 1 1 1 1 1 1
 $ iscasemanagement   : Factor w/ 2 levels "N","Y": 1 1 2 1 1 2 2 2
 $ iseligible         : Factor w/ 1 level "Y": 1 1 1 1 1 1 1 1
 $ referralservicecode: Factor w/ 4 levels "12345","278",..: 4 1 3 1 1 2 3 1
 $ IsHighlight        : Factor w/ 1 level "N": 1 1 1 1 1 1 1 1
 $ RealLengthOfStay   : int  25 1 1 1 2 2 1 3
 $ Readmit            : Factor w/ 2 levels "0","1": 2 1 2 1 2 1 2 1

I invoke the algorithm like this:

library("C50")
rorIn <- read.csv(file = "RoRdataInputData_v1.6.csv", header = TRUE, quote = "\"")
rorIn$Readmit <- factor(rorIn$Readmit)
fit <- C5.0(Readmit~., data= rorIn)

Then I get:

> source("~/R-workspace/src/RoR/RoR/testing.R")
c50 code called exit with value 1
> 

I am following other recommendations (checked as sketched below), such as:

  • Using a factor as the decision variable
  • Avoiding empty data
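
For reference, this is roughly how I verified those two points (a quick base-R sketch on the same rorIn data frame):

# Count missing values per column and levels per factor column
sapply(rorIn, function(x) sum(is.na(x)))
sapply(Filter(is.factor, rorIn), nlevels)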

Any help on this? I have read that this is one of the best algorithms for machine learning, but I get this error every time.

Here is the original dataset:

Acuity,AgeInYears,IsPriority,AuthorizationStatus,iscasemanagement,iseligible,referralservicecode,IsHighlight,RealLengthOfStay,Readmit
Elective  ,49,0,APPROVED  ,N,Y,SNF            ,N,25,1
Elective  ,56,0,APPROVED  ,N,Y,12345,N,1,0
Emergency ,77,1,APPROVED  ,Y,Y,OBSERVE        ,N,1,1
Emergency ,65,0,APPROVED  ,N,Y,12345,N,1,0
Elective  ,51,0,APPROVED  ,N,Y,12345,N,2,1
Emergency ,79,1,APPROVED  ,Y,Y,278,N,2,0
Emergency ,67,0,APPROVED  ,Y,Y,OBSERVE        ,N,1,1
Urgent    ,63,1,APPROVED  ,Y,Y,12345,N,3,0

Thanks in advance for any help,

David

Isn't your data too small? You have even more variables than observations, which can potentially be a problem. – Psidom
p > n is a problem, but even then the data is relatively small. Trying to create a robust model with that few observations is usually not advised. – zacdav

1 Answer

2 votes

You need to clean your data in a few ways.

  • Remove the unnecessary columns that have only one level. They contain no information and lead to problems.
  • Convert the target variable rorIn$Readmit into a factor.
  • Separate the target variable from the predictors that you supply for training.

This should work:

rorIn <- read.csv("RoRdataInputData_v1.6.csv", header = TRUE)
rorIn$Readmit <- as.factor(rorIn$Readmit)   # the target must be a factor
library(Hmisc)
# Identify the factor columns that have only a single level
singleLevelVars <- names(rorIn)[contents(rorIn)$contents$Levels == 1]
# Predictors: everything except the target and the single-level columns
trainvars <- setdiff(colnames(rorIn), c("Readmit", singleLevelVars))
library(C50)
RoRmodel <- C5.0(rorIn[, trainvars], rorIn$Readmit, trials = 10)
predict(RoRmodel, rorIn[, trainvars])
#[1] 1 0 1 0 0 0 1 0
#Levels: 0 1
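
If you prefer not to pull in Hmisc just for that one step, here is a base-R sketch that should flag the same single-level columns:

# Base-R alternative: flag factor columns with fewer than two levels
singleLevelVars <- names(rorIn)[sapply(rorIn, function(x) is.factor(x) && nlevels(x) < 2)]
trainvars <- setdiff(colnames(rorIn), c("Readmit", singleLevelVars))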

You can then evaluate accuracy, recall, and other statistics by comparing this predicted result with the actual value of the target variable:

rorIn$Readmit
#[1] 1 0 1 0 1 0 1 0
#Levels: 0 1
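
From these two vectors you can compute accuracy and recall by hand; a minimal sketch using a base-R table, treating 1 (readmitted) as the positive class:

tab <- table(actual = rorIn$Readmit,
             predicted = predict(RoRmodel, rorIn[, trainvars]))
sum(diag(tab)) / sum(tab)         # accuracy: 7/8 = 0.875
tab["1", "1"] / sum(tab["1", ])   # recall:   3/4 = 0.75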

The usual way to compare actual and predicted values in a binary classification problem is to set up a confusion matrix. In the case of this small data set, one can easily see that there is only one false negative. So the code seems to work pretty well, but this encouraging result can be deceptive due to the very small number of observations.

library(gmodels)
actual <- rorIn$Readmit
predicted <- predict(RoRmodel, rorIn[, trainvars])
CrossTable(actual, predicted, prop.chisq = FALSE, prop.r = FALSE)
# Total Observations in Table:  8  
#
# 
#              | predicted 
#       actual |         0 |         1 | Row Total | 
#--------------|-----------|-----------|-----------|
#            0 |         4 |         0 |         4 | 
#              |     0.800 |     0.000 |           | 
#              |     0.500 |     0.000 |           | 
#--------------|-----------|-----------|-----------|
#            1 |         1 |         3 |         4 | 
#              |     0.200 |     1.000 |           | 
#              |     0.125 |     0.375 |           | 
#--------------|-----------|-----------|-----------|
# Column Total |         5 |         3 |         8 | 
#              |     0.625 |     0.375 |           | 
#--------------|-----------|-----------|-----------|

On a larger data set it would be useful, if not necessary, to separate the set into training data and test data. There is a lot of good literature on machine learning that will help you in fine-tuning the model and its predictions.
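
For illustration, a minimal sketch of such a split (bigData is a placeholder for a larger data set; with only 8 rows a split like this is not meaningful):

set.seed(42)                                   # reproducible split
idx   <- sample(nrow(bigData), floor(0.7 * nrow(bigData)))
train <- bigData[idx, ]
test  <- bigData[-idx, ]
model <- C5.0(train[, trainvars], train$Readmit, trials = 10)
mean(predict(model, test[, trainvars]) == test$Readmit)  # held-out accuracy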