Multiplicative updates from Lee et al. (2001) for the standard Nonnegative Matrix Factorization
model V \approx W H, where the distance between the target matrix and its NMF
estimate is measured by the Kullback-Leibler divergence.
nmf_update.KL.w and nmf_update.KL.h compute the updated basis and coefficient
matrices respectively.
They use a C++ implementation which is optimised for speed and memory usage.
nmf_update.KL.w_R and nmf_update.KL.h_R implement the same updates
in plain R.
nmf_update.KL.h(v, w, h, nbterms = 0L, ncterms = 0L, copy = TRUE)
nmf_update.KL.h_R(v, w, h, wh = NULL)
nmf_update.KL.w(v, w, h, nbterms = 0L, ncterms = 0L, copy = TRUE)
nmf_update.KL.w_R(v, w, h, wh = NULL)
copy: a logical that indicates whether the update should be made on the original matrix directly (FALSE) or on a copy (TRUE - default).
With copy=FALSE the memory footprint is very small, and some speed-up may be
achieved in the case of big matrices.
However, greater care should be taken due to the side effect; we recommend that only experienced users use copy=FALSE.
The returned value is a matrix of the same dimension as the input matrix to update
(i.e. w or h).
If copy=FALSE, the returned matrix uses the same memory as the input object.
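As a minimal sketch of the difference (assuming the NMF package is attached; the calls follow the usage shown above):

library(NMF)
set.seed(1)
v <- matrix(runif(200), 20, 10)   # nonnegative target matrix
w <- matrix(runif(60), 20, 3)     # initial basis matrix
h <- matrix(runif(30), 3, 10)     # initial coefficient matrix

# copy = TRUE (default): the update is returned as a new matrix and h is untouched
h_new <- nmf_update.KL.h(v, w, h)

# copy = FALSE: the update is written into the memory backing h (side effect),
# which keeps the footprint small for large matrices
h_new2 <- nmf_update.KL.h(v, w, h, copy = FALSE)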
The coefficient matrix (H) is updated as follows:
H_kj <- H_kj ( sum_i [ W_ik V_ij / (WH)_ij ] ) / ( sum_i W_ik )
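For illustration, this update can be written in a few lines of plain R. The sketch below only mirrors the formula; it is not the package's optimised implementation (nmf_update.KL.h_R is the reference R version), and the function name is made up for the example:

# one multiplicative KL update of the coefficient matrix H
kl_update_h <- function(v, w, h, wh = w %*% h) {
  # crossprod(w, v / wh)[k, j] = sum_i W_ik V_ij / (WH)_ij;
  # dividing by colSums(w) applies the 1 / sum_i W_ik factor row-wise
  h * crossprod(w, v / wh) / colSums(w)
}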
These updates are used by the built-in NMF algorithms 'KL' and 'brunet'.
The basis matrix (W) is updated as follows:
W_ik <- W_ik ( sum_j [ H_kj V_ij / (WH)_ij ] ) / ( sum_j H_kj )
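A companion plain-R sketch of this update (again illustrative only, with a made-up function name; nmf_update.KL.w_R is the package's reference version):

# one multiplicative KL update of the basis matrix W
kl_update_w <- function(v, w, h, wh = w %*% h) {
  # tcrossprod(v / wh, h)[i, k] = sum_j V_ij H_kj / (WH)_ij;
  # sweep divides each column k by sum_j H_kj
  w * sweep(tcrossprod(v / wh, h), 2, rowSums(h), "/")
}

# Alternating the two sketches (reusing v, w, h and kl_update_h from above)
# gives a basic KL-NMF iteration; the generalized KL divergence
# sum(v * log(v / (w %*% h)) - v + w %*% h) is non-increasing over iterations.
for (it in 1:100) {
  h <- kl_update_h(v, w, h)
  w <- kl_update_w(v, w, h)
}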
Lee DD and Seung HS (2001). "Algorithms for non-negative matrix factorization." _Advances in Neural Information Processing Systems_.