0 votes

I'm struggling to find a learning algorithm that works for my dataset.

I am working with a typical regression problem. There are 6 features in the dataset that I am concerned with, and about 800 data points. The features have a strong non-linear correlation with the predicted values, so the features are not useless (as far as I understand). The predicted values have a bimodal distribution, so I disregarded linear models pretty quickly.

So I have tried 5 different models: random forest, extra trees, AdaBoost, gradient boosting and the XGBoost regressor. The training data gives around 64% accuracy and the test data returns 11%-14%. Both numbers scare me haha. I tried tuning the parameters for the random forest, but nothing seems to make a drastic difference.

Function to tune the parameters

from sklearn.model_selection import GridSearchCV

def hyperparatuning(model, train_features, train_labels, param_grid={}):
    # Exhaustive grid search over param_grid with 3-fold cross-validation
    grid_search = GridSearchCV(estimator=model, param_grid=param_grid, cv=3, n_jobs=-1, verbose=2)
    grid_search.fit(train_features, train_labels)
    print(grid_search.best_params_)
    return grid_search.best_estimator_

Function to evaluate the model

import numpy as np

def evaluate(model, test_features, test_labels):
    predictions = model.predict(test_features)
    errors = abs(predictions - test_labels)
    # Mean absolute percentage error (MAPE), then "accuracy" as 100 - MAPE
    mape = 100 * np.mean(errors / test_labels)
    accuracy = 100 - mape
    print('Model Performance')
    print('Average Error: {:0.4f} degrees.'.format(np.mean(errors)))
    print('Accuracy = {:0.2f}%.'.format(accuracy))
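
For reference, a typical call wiring the two together looks roughly like this (the grid is just an example, not the exact one I used):

from sklearn.ensemble import RandomForestRegressor

example_grid = {'n_estimators': [100, 300, 500], 'max_features': ['sqrt', 1.0]}
best_rf = hyperparatuning(RandomForestRegressor(random_state=42),
                          train_features, train_labels, param_grid=example_grid)
evaluate(best_rf, test_features, test_labels)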

I expect the output to be at least, ya know, acceptable, but instead I got 64% on the training data and 12-14% on the test data. It is a real horror to look at these numbers!


4 Answers

1 vote

There are several issues with your question.

For starters, you are trying to use accuracy in what seems to be a regression problem, which is meaningless.

Although you don't provide the exact models (it would arguably be a good idea), this line in your evaluation function

errors = abs(predictions - test_labels)

is actually the basis of the mean absolute error (MAE - although you should actually take its mean, as the name implies). MAE, like MAPE, is indeed a performance metric for regression problems; but the formula you use next

accuracy = 100 - mape

does not actually hold, nor is it used in practice.

It is true that, intuitively, one might want to get the 1-MAPE quantity; but this is not a good idea, as MAPE itself has a lot of drawbacks which seriously limit its use; here is a partial list from Wikipedia:

  • It cannot be used if there are zero values (which sometimes happens for example in demand data) because there would be a division by zero.
  • For forecasts which are too low the percentage error cannot exceed 100%, but for forecasts which are too high there is no upper limit to the percentage error.
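
If you want a quick read on where your models actually stand, report standard regression metrics directly instead of any accuracy-like quantity; a minimal sketch, reusing the predictions and test_labels from your evaluate function:

from sklearn.metrics import mean_absolute_error, r2_score

mae = mean_absolute_error(test_labels, predictions)   # the same quantity as np.mean(errors) in your code
r2 = r2_score(test_labels, predictions)               # 1.0 is perfect, 0.0 is a constant-mean predictor
print('MAE = {:0.4f}, R^2 = {:0.3f}'.format(mae, r2))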
1 vote

It is an overfitting problem. You are fitting the hypothesis very well on your training data. Possible solutions to your problem:

  1. You can try getting more training data (not more features).
  2. Try a less complex model, like a single decision tree, since highly complex models (random forests, neural networks, etc.) fit the hypothesis very well on the training data.
  3. Cross-validation: it allows you to tune hyperparameters using only your original training set, so you can keep your test set as a truly unseen dataset for selecting your final model.
  4. Regularization: the method will depend on the type of learner you're using. For example, you could prune a decision tree, use dropout on a neural network, or add a penalty parameter to the cost function in regression.

I would suggest you use the Pipeline class, since it allows you to chain preprocessing and modelling steps and tune them all in a single grid search. An example of that:

import numpy as np
from sklearn.pipeline import Pipeline
from sklearn.decomposition import PCA
from sklearn.linear_model import SGDClassifier
from sklearn.model_selection import GridSearchCV

# A PCA step followed by a logistic-style linear model ('log' loss in older scikit-learn)
pipe = Pipeline(steps=[('pca', PCA()),
                       ('logistic', SGDClassifier(loss='log_loss', penalty='l2'))])
# Parameters of pipeline steps can be set using '__' separated parameter names:
param_grid = {
    'pca__n_components': [5, 20, 30, 40, 50, 64],
    'logistic__alpha': np.logspace(-4, 4, 5),
}
search = GridSearchCV(pipe, param_grid, cv=5)
search.fit(X_train, y_train)  # fit on training features and labels, not on X_test
1 vote

I would suggest improving things by preprocessing the data more carefully. Try manually removing outliers, and look at the concept of Cook's distance to spot points that have a strong negative influence on your model. You could also scale the data differently than standard scaling: use log scaling if values in your data are very large or very small, or apply feature transformations such as a DCT or SVD transform.
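
As a rough illustration of the log-scaling and outlier ideas (df and the column names below are placeholders, assuming your data sits in a pandas DataFrame with the target in a column 'y'):

import numpy as np

# df is a hypothetical DataFrame holding the 6 features plus the target column 'y'
skewed_cols = ['feature_3', 'feature_5']      # placeholder names for heavily skewed columns
df[skewed_cols] = np.log1p(df[skewed_cols])   # log scaling for non-negative, skewed values

# Crude outlier filter: drop rows whose target lies more than 3 standard deviations from the mean
df = df[np.abs(df['y'] - df['y'].mean()) <= 3 * df['y'].std()]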

Or, at its simplest, you could create your own features from the existing data. For example, if you have yesterday's closing price and today's opening price as two features in a stock price prediction task, you can create a new feature for the percentage change between them, which could help a lot.

Do some linear regression analysis to look at the beta coefficients and get a better idea of which features contribute most to the target value. You can use feature_importances_ in random forests for the same purpose, and then try to improve the most important features as much as possible so the model understands them better.
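
A minimal sketch of reading those importances off a fitted forest (train_features and train_labels are the arrays from your question; feature_names is a hypothetical list of your 6 column names):

from sklearn.ensemble import RandomForestRegressor

rf = RandomForestRegressor(n_estimators=300, random_state=0)
rf.fit(train_features, train_labels)

# Rank the features by importance; feature_names is a placeholder list of column names
for name, imp in sorted(zip(feature_names, rf.feature_importances_), key=lambda t: -t[1]):
    print('{:<20s} {:.3f}'.format(name, imp))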

This is just the tip of the iceberg of what could be done. I hope this helps.

0 votes

Currently, you are overfitting so what you are looking for is regularization. For example, to reduce the capacity of models that are ensembles of trees, you can limit the maximum depth of the trees (max_depth), increase the minimum required samples at a node to split (min_samples_split), reduce the number of learners (n_estimators), etc.
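
For a random forest, a grid along those lines could look like the one below (the values are only a starting point, not a recommendation), which you could feed straight into the hyperparatuning function from the question:

# Lower-capacity settings to try for a RandomForestRegressor (values are illustrative)
param_grid = {
    'max_depth': [3, 5, 8],            # shallower trees = lower capacity
    'min_samples_split': [5, 10, 20],  # require more samples before a split
    'n_estimators': [50, 100, 200],
}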

When performing cross-validation, you should fit on the training set and evaluate on your validation set; the best configuration is the one that performs best on the validation set. You should also keep a test set in order to evaluate your model on completely new observations.
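
A minimal sketch of that workflow, reusing the grid above (X and y are placeholders for your full feature matrix and target):

from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import GridSearchCV, train_test_split

# Hold out a test set that the search never sees
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

search = GridSearchCV(RandomForestRegressor(random_state=0), param_grid, cv=5)
search.fit(X_train, y_train)          # cross-validation runs inside the training set only
print(search.best_params_)            # configuration that did best on the validation folds
print(search.score(X_test, y_test))   # final R^2 on completely new observations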