
I'm writing a recommender system evaluator with Apache Mahout, using a train.csv training set and the precision metric. My question: is it possible to use a fixed test set, rather than one generated by the evaluator?

To be more specific, I have a test.csv file that contains a list of user IDs, and for these users I want to produce recommendations and evaluate the results with the precision metric, only for this fixed set of users that never changes. Their ratings are in train.csv, which I use to train the algorithm and which also contains all the other users' ratings.

I'm also posting the code where I want to add this feature:

    RandomUtils.useTestSeed(); 
    DataModel model = new FileDataModel(new File("files/train.csv"));
    RecommenderIRStatsEvaluator evaluator = new GenericRecommenderIRStatsEvaluator();

    RecommenderBuilder recommenderBuilder = new RecommenderBuilder() {

        public Recommender buildRecommender(DataModel model) throws TasteException {
            //Here I build my recommender system
            //return ...
        }
    };

    IRStatistics stats = evaluator.evaluate(recommenderBuilder, null, model, null,
            5 /* at */, 4 /* relevance threshold */, 1 /* evaluation percentage */);


    System.out.println(stats.getPrecision());

1 Answer


So you want cross-validation gold-standard test data, which you have; it is split into train and test, and you would like a repeatable test. This makes a lot of sense.

The Mahout evaluator does the split for you by randomly picking test and training data from what you pass in. If you pass in a fixed RNG seed, the evaluator will pick the exact same test and training sets every run. This isn't exactly what you asked for, but it is one way to get repeatable CV tests.
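In your code that amounts to the RandomUtils.useTestSeed() call you already have; a minimal sketch of the repeatable version, using the same variables as your code and with the evaluate() arguments spelled out:

    RandomUtils.useTestSeed(); // must run before the evaluator is created
    RecommenderIRStatsEvaluator evaluator = new GenericRecommenderIRStatsEvaluator();
    // at = 5 recommendations, relevanceThreshold = 4.0,
    // evaluationPercentage = 1.0 (consider all users)
    IRStatistics stats = evaluator.evaluate(recommenderBuilder, null, model, null,
            5, 4.0, 1.0);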

Otherwise you will need to hack the Evaluator to use your pre-calculated test/training sets.
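Here is a rough sketch of what that could look like outside the Mahout evaluator, assuming you keep the held-out ratings of your test users in a separate file. The file name files/test_ratings.csv and the relevance rule (rating >= 4.0) are my assumptions, not Mahout conventions, and recommenderBuilder is the one from your question. Train on train.csv, then for each test user count how many of the top-N recommendations appear among their held-out relevant items:

    import java.io.File;
    import java.util.List;
    import org.apache.mahout.cf.taste.impl.common.FastIDSet;
    import org.apache.mahout.cf.taste.impl.common.LongPrimitiveIterator;
    import org.apache.mahout.cf.taste.impl.model.file.FileDataModel;
    import org.apache.mahout.cf.taste.model.DataModel;
    import org.apache.mahout.cf.taste.model.Preference;
    import org.apache.mahout.cf.taste.recommender.RecommendedItem;
    import org.apache.mahout.cf.taste.recommender.Recommender;

    // Sketch only: precision@N over a fixed, pre-calculated test split.
    // (TasteException handling omitted.)
    DataModel trainModel = new FileDataModel(new File("files/train.csv"));
    DataModel heldOut = new FileDataModel(new File("files/test_ratings.csv")); // hypothetical held-out ratings
    Recommender recommender = recommenderBuilder.buildRecommender(trainModel);

    int at = 5;
    double relevanceThreshold = 4.0;
    double precisionSum = 0.0;
    int evaluatedUsers = 0;

    LongPrimitiveIterator userIds = heldOut.getUserIDs();
    while (userIds.hasNext()) {
        long userId = userIds.nextLong();

        // Held-out items rated at or above the threshold count as relevant.
        FastIDSet relevant = new FastIDSet();
        for (Preference pref : heldOut.getPreferencesFromUser(userId)) {
            if (pref.getValue() >= relevanceThreshold) {
                relevant.add(pref.getItemID());
            }
        }
        if (relevant.isEmpty()) {
            continue; // nothing to score for this user
        }

        // The user must still have some ratings in train.csv, otherwise
        // recommend() throws NoSuchUserException.
        List<RecommendedItem> recs = recommender.recommend(userId, at);
        int hits = 0;
        for (RecommendedItem rec : recs) {
            if (relevant.contains(rec.getItemID())) {
                hits++;
            }
        }
        precisionSum += (double) hits / at; // precision@N for this user
        evaluatedUsers++;
    }
    System.out.println("precision@" + at + " = "
            + (evaluatedUsers == 0 ? 0.0 : precisionSum / evaluatedUsers));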

The precision metric I use is mean average precision (MAP) at some number of recommendations, such as the number you will calculate or show in the UI. This is not built into the Mahout evaluator.
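Since MAP isn't built in, you have to compute it yourself. A minimal sketch (the method and parameter names are mine, not Mahout's), given each test user's ranked recommendation list and set of relevant item IDs:

    import java.util.List;
    import java.util.Set;

    // MAP@k: mean over users of the average precision of the top-k recommendations.
    static double meanAveragePrecisionAtK(List<List<Long>> rankedRecs,
                                          List<Set<Long>> relevantSets, int k) {
        double sum = 0.0;
        int scoredUsers = 0;
        for (int u = 0; u < rankedRecs.size(); u++) {
            Set<Long> relevant = relevantSets.get(u);
            if (relevant.isEmpty()) {
                continue; // no relevant items, nothing to score
            }
            List<Long> recs = rankedRecs.get(u);
            int hits = 0;
            double avgPrecision = 0.0;
            for (int i = 0; i < Math.min(k, recs.size()); i++) {
                if (relevant.contains(recs.get(i))) {
                    hits++;
                    avgPrecision += (double) hits / (i + 1); // precision at this rank
                }
            }
            sum += avgPrecision / Math.min(k, relevant.size());
            scoredUsers++;
        }
        return scoredUsers == 0 ? 0.0 : sum / scoredUsers;
    }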

To do all this right you'd hack the Evaluator.

BTW I wouldn't use that recommender unless absolute simplicity is the highest design criterion. The newest Mahout recommenders build models that are queried through a search engine like Solr or Elasticsearch. These are incredibly flexible and scalable.

The new way is described here: http://mahout.apache.org/users/recommender/intro-cooccurrence-spark.html and there are some blog posts about this method here: http://occamsmachete.com/ml/

With this method you'd train on your train.csv and use the user history in test.csv to make queries, then calculate the precision of all queries using MAP. Since the queries go through a search engine, you also get a service that is scalable.
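Very roughly, the query side could look something like this with SolrJ; the collection name, the "indicators" field holding the spark-itemsimilarity output, and the item IDs are all assumptions for illustration. The test user's history becomes the query string, and the top results are the recommendations you score with MAP:

    import org.apache.solr.client.solrj.SolrClient;
    import org.apache.solr.client.solrj.SolrQuery;
    import org.apache.solr.client.solrj.impl.HttpSolrClient;
    import org.apache.solr.client.solrj.response.QueryResponse;

    // Sketch only: query a Solr index of item-similarity indicators with a test user's history.
    SolrClient solr = new HttpSolrClient.Builder("http://localhost:8983/solr/items").build();
    String userHistory = "item12 item87 item301"; // item IDs from this user's history in test.csv
    SolrQuery query = new SolrQuery();
    query.setQuery("indicators:(" + userHistory + ")"); // "indicators" field name is an assumption
    query.setRows(5); // ask for the same N you will score with MAP@5
    QueryResponse recommendations = solr.query(query);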