Learners : Matchbox recommender : Command-line runners

Evaluators

Evaluators exist for each of the four prediction modes: rating prediction, item recommendation, find related users, and find related items.

Rating prediction evaluator

Rating prediction can be evaluated using the EvaluateRatingPrediction argument to Learner Recommender. It takes in a test set and predictions for this test set, and produces a report of evaluation metrics, which include mean absolute error (MAE) and root mean squared error (RMSE). These metrics, as well as the way the evaluator works, are explained in the corresponding evaluation API section.

Required parameters

--test-data - the file containing the test set
--predictions - the file containing rating predictions for the test set
--report - the file to which the evaluation report will be written

Example

Learner Recommender EvaluateRatingPrediction --test-data TestSet.dat  
                                             --predictions RatingPredictions.dat   
                                             --report RatingPredictionEvaluation.txt
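
The MAE and RMSE in the report follow their standard definitions. A minimal sketch of the computation (the function name and toy ratings below are illustrative, not part of the tool):

```python
import math

def mae_rmse(true_ratings, predicted_ratings):
    """Mean absolute error and root mean squared error over paired ratings."""
    errors = [p - t for t, p in zip(true_ratings, predicted_ratings)]
    mae = sum(abs(e) for e in errors) / len(errors)
    rmse = math.sqrt(sum(e * e for e in errors) / len(errors))
    return mae, rmse

# Three test-set ratings vs. three predicted ratings.
mae, rmse = mae_rmse([5, 3, 4], [4, 3, 2])
print(mae, rmse)  # MAE = 1.0, RMSE = sqrt(5/3) ≈ 1.29
```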

Item recommendation evaluator

Item recommendations can be evaluated using the EvaluateItemRecommendation argument to Learner Recommender. It takes in a test set and predictions for this test set, and produces a report which specifies the computed normalized discounted cumulative gain (NDCG). This metric, as well as the way the evaluator works, is explained in the corresponding evaluation API section.

Required parameters

--test-data - the file containing the test set
--predictions - the file containing item recommendations for the test set
--report - the file to which the evaluation report will be written

Example

Learner Recommender EvaluateItemRecommendation --test-data TestSet.dat  
                                               --predictions ItemRecommendations.dat   
                                               --report ItemRecommendationEvaluation.txt
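
NDCG scores a recommended list by how well its ordering agrees with the test-set ratings. A minimal sketch, assuming plain ratings as gains and a log2 rank discount (the function and data are illustrative, not the tool's internals):

```python
import math

def dcg(gains):
    # Discounted cumulative gain: each gain is divided by log2(rank + 1), ranks 1-based.
    return sum(g / math.log2(rank + 2) for rank, g in enumerate(gains))

def ndcg(recommended, test_ratings):
    """NDCG of an ordered recommendation list, using test-set ratings as gains."""
    gains = [test_ratings.get(item, 0.0) for item in recommended]
    ideal = sorted(test_ratings.values(), reverse=True)[:len(recommended)]
    ideal_dcg = dcg(ideal)
    return dcg(gains) / ideal_dcg if ideal_dcg > 0 else 0.0

# Recommending items in the order of their true ratings gives the maximum NDCG of 1.
print(ndcg(["a", "c", "b"], {"a": 5, "b": 3, "c": 4}))  # 1.0
```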

Related user prediction evaluator

Related users can be evaluated using the EvaluateFindRelatedUsers argument to Learner Recommender. It takes in a test set and predictions for this test set, and produces a report of evaluation metrics, which include NDCG with L1 and L2 similarities as gains. These metrics, as well as the way the evaluator works, are explained in the corresponding evaluation API section.

Required parameters

--test-data - the file containing the test set
--predictions - the file containing related user predictions for the test set
--report - the file to which the evaluation report will be written

Optional parameters

Example

Learner Recommender EvaluateFindRelatedUsers --test-data TestSet.dat  
                                             --predictions RelatedUsers.dat   
                                             --report RelatedUserEvaluation.txt
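
Here the NDCG gain of each predicted related user is a similarity between rating vectors. The exact L1 and L2 similarity definitions are given in the evaluation API section; the sketch below assumes a simple inverse-distance form, 1 / (1 + L1 distance) over commonly rated items, purely for illustration:

```python
import math

def l1_similarity(u, v):
    # Assumed similarity: inverse L1 (Manhattan) distance over commonly rated items.
    common = set(u) & set(v)
    if not common:
        return 0.0
    return 1.0 / (1.0 + sum(abs(u[i] - v[i]) for i in common))

def ndcg_related(query_ratings, predicted_users, all_ratings, similarity=l1_similarity):
    """NDCG of an ordered related-user list, with similarities to the query user as gains."""
    gains = [similarity(query_ratings, all_ratings[u]) for u in predicted_users]
    dcg = sum(g / math.log2(rank + 2) for rank, g in enumerate(gains))
    ideal = sorted(
        (similarity(query_ratings, ratings) for ratings in all_ratings.values()),
        reverse=True,
    )[:len(predicted_users)]
    ideal_dcg = sum(g / math.log2(rank + 2) for rank, g in enumerate(ideal))
    return dcg / ideal_dcg if ideal_dcg > 0 else 0.0

query = {"item1": 5, "item2": 3}
others = {"u1": {"item1": 5, "item2": 3}, "u2": {"item1": 1, "item2": 1}}
print(ndcg_related(query, ["u1", "u2"], others))  # 1.0 (the closer user is listed first)
```

An L2 variant would substitute the Euclidean distance, math.sqrt(sum((u[i] - v[i]) ** 2 for i in common)), for the L1 sum. The same gain scheme applies to related item evaluation, with item rating vectors in place of user rating vectors.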

Related item prediction evaluator

Related items can be evaluated using the EvaluateFindRelatedItems argument to Learner Recommender. It takes in a test set and predictions for this test set, and produces a report of evaluation metrics, which include NDCG with L1 and L2 similarities as gains. These metrics, as well as the way the evaluator works, are explained in the corresponding evaluation API section.

Required parameters

--test-data - the file containing the test set
--predictions - the file containing related item predictions for the test set
--report - the file to which the evaluation report will be written

Optional parameters

Example

Learner Recommender EvaluateFindRelatedItems --test-data TestSet.dat  
                                             --predictions RelatedItems.dat   
                                             --report RelatedItemEvaluation.txt