[mlpack-svn] [MLPACK] #337: Normalization for Collaborative Filtering

MLPACK Trac trac at coffeetalk-1.cc.gatech.edu
Wed Apr 9 10:31:52 EDT 2014


#337: Normalization for Collaborative Filtering
-----------------------------------------------------+----------------------
  Reporter:  sumedhghaisas                           |        Owner:  rcurtin     
      Type:  enhancement                             |       Status:  accepted    
  Priority:  major                                   |    Milestone:  mlpack 1.0.9
 Component:  mlpack                                  |   Resolution:              
  Keywords:  Normalization, Collaborative Filtering  |     Blocking:              
Blocked By:                                          |  
-----------------------------------------------------+----------------------
Changes (by rcurtin):

  * owner:  => rcurtin
  * status:  new => accepted


Comment:

 Hi Sumedh,

 I'm sorry it took me so long to get to this.  A couple of notes before I
 look further into merging this:

  * I remember talking with you on IRC about testing this, but I don't see
 any tests here.  Do you have tests for the ALS normalized update rules,
 and for the modified CF implementation that uses them?  I seem to remember
 that the RMSE was better with these normalized update rules, but that we
 could not find a paper giving results for the same implementation we have
 here.

  * I don't want to modify the NMF::Apply() function specially for the case
 of normalized update rules.  Instead, we should have
 WNormalizedAlternatingLeastSquaresRule and
 HNormalizedAlternatingLeastSquaresRule calculate the column and row
 averages themselves in Update().  For instance, each of those classes
 could store columnAverages and rowAverages as members, and then use code
 like this:

 {{{
 if (columnAverages.n_elem == 0)
 {
   // It has not been initialized.
   columnAverages = mean(V, ...); // calculate column averages
 }

 if (rowAverages.n_elem == 0)
 {
   // It has not been initialized.
   rowAverages = mean(V, ...); // calculate row averages
 }
 }}}

 or something like that; that's just the basic idea (a more concrete sketch
 is below).  Although this duplicates the average calculation (once for the
 W rule and once for the H rule), I am not concerned about that, because
 the actual update calculation will take (asymptotically) longer anyway.
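
 To be concrete, here is a rough sketch of the lazy initialization I have
 in mind.  The member types and the mean() dimension arguments are just my
 assumptions, so adapt them to however the rule classes actually see V:

 {{{
 // Sketch only -- at the top of Update() in
 // WNormalizedAlternatingLeastSquaresRule (and similarly in the H rule).
 // columnAverages and rowAverages are members of the rule class and start
 // out empty.
 if (columnAverages.n_elem == 0)
   columnAverages = arma::mean(V, 0); // Mean of each column of V (1 x m).

 if (rowAverages.n_elem == 0)
   rowAverages = arma::mean(V, 1);    // Mean of each row of V (n x 1).
 }}}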

  * In als_normalized_update_rules.hpp, at line 45, the calculation `W = (V
 + columnAverages - rowAverages) * H.t() * pinv(H * H.t())` is slow.  The
 subexpression `(V + columnAverages - rowAverages)` will produce a dense
 object of type `arma::mat`.  It would be better to refactor the expression
 to avoid instantiating that dense matrix, by doing something more like
 this:

 {{{
 // The multiplication with V will use the faster sparse matrix
 // multiplication.
 W = V * H.t() * pinv(H * H.t());
 W += columnAverages * H.t() * pinv(H * H.t());
 W -= rowAverages * H.t() * pinv(H * H.t());
 }}}
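
 Note also that the factor `H.t() * pinv(H * H.t())` is shared by all three
 terms, so it could be hoisted into a temporary and computed only once;
 roughly:

 {{{
 // Compute the shared right-hand factor once per update (sketch only).
 arma::mat hTerm = H.t() * pinv(H * H.t());

 W = V * hTerm; // Still gets the faster sparse multiplication for V.
 W += columnAverages * hTerm;
 W -= rowAverages * hTerm;
 }}}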

  * Further, those last two expressions can be simplified even more.  In
 your code, `columnAverages` and `rowAverages` are stored as
 `repmat()`-type matrices; because every row (or column) of a repeated
 matrix is identical, you can work with just the underlying vectors and
 avoid the dense matrix products entirely (see the sketch below).
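
 Here is a rough sketch of what I mean, assuming `colMeans` is the 1 x m
 row vector of column averages and `rowMeans` is the n x 1 column vector of
 row averages (those names are placeholders, not what is in your code):

 {{{
 arma::mat hTerm = H.t() * pinv(H * H.t()); // m x r, as before.

 // repmat(colMeans, n, 1) * hTerm has n identical rows, so compute the
 // single 1 x r row once and repeat it: O(m r) work instead of O(n m r).
 W += arma::repmat(colMeans * hTerm, V.n_rows, 1);

 // repmat(rowMeans, 1, m) * hTerm is the outer product of rowMeans with
 // the column sums of hTerm: O(n r + m r) work instead of O(n m r).
 W -= rowMeans * arma::sum(hTerm, 0);
 }}}

 That way the repeated matrices never need to be formed at all.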

 If you can take a look into these issues and address them, we can move
 forward toward integrating this into the trunk codebase.

 Thanks,

 Ryan

-- 
Ticket URL: <http://trac.research.cc.gatech.edu/fastlab/ticket/337#comment:2>
MLPACK <www.fast-lab.org>
MLPACK is an intuitive, fast, and scalable C++ machine learning library developed at Georgia Tech.

