[mlpack-git] [mlpack] Armadillo matrix transposition doubles memory usage for a matrix (#203)
notifications at github.com
Mon Feb 16 11:47:23 EST 2015
Ah, this is very nice! I'm a bit confused about the memory graphs, though; the first image is using method 1? What does the memory usage look like when using method 2?
Getting the available memory will be hackish and difficult (though, on some systems, possible). If the runtime is very similar between methods 0 and 1 at larger sizes, too, then maybe we can just default to `inplace_trans()`; would you be willing to scale your graph up to more like an 8000x8000 matrix? (It may also be useful to try more typical machine learning dataset sizes, so more like 50x100000 or some other very non-square matrix.)
Thanks for looking into this!
Reply to this email directly or view it on GitHub: