[mlpack-git] (blog) master: Removes bullets because they appear ugly (6ff3ad3)

gitdub at mlpack.org gitdub at mlpack.org
Mon Jun 6 04:29:23 EDT 2016


Repository : https://github.com/mlpack/blog
On branch  : master
Link       : https://github.com/mlpack/blog/compare/1d87acfcce540d0f63b98c8f285cf7714e843735...6ff3ad351ee2af8542bd0eb4474f815bfbf2a036

>---------------------------------------------------------------

commit 6ff3ad351ee2af8542bd0eb4474f815bfbf2a036
Author: Yannis Mentekidis <mentekid at gmail.com>
Date:   Mon Jun 6 11:29:23 2016 +0300

    Removes bullets because they appear ugly


>---------------------------------------------------------------

6ff3ad351ee2af8542bd0eb4474f815bfbf2a036
 content/blog/YannisWeekTwo.md | 5 +++--
 1 file changed, 3 insertions(+), 2 deletions(-)

diff --git a/content/blog/YannisWeekTwo.md b/content/blog/YannisWeekTwo.md
index ceabedf..dc3f4c0 100644
--- a/content/blog/YannisWeekTwo.md
+++ b/content/blog/YannisWeekTwo.md
@@ -11,7 +11,8 @@ To do that, I needed to improve access to LSHSearch object's projection tables,
 
 In the process of modifying the LSHSearch code to do that, Ryan and I also decided to make a few other modifications, namely
 
- * Change the data structure that stores the projection tables from an std::vector to an arma::cube. Each slice of the cube is a projection table. This conserves memory and simplifies the code.
- * Change the implementation of the second level hashing. In the current version, an arma::Mat<size_t> table is created where each row corresponds to a hash bucket and stores indices to points hashed to that bucket. This is inefficient, both because the default secondHashSize is pretty large and because the number of points in each bucket might be uneven - so the resulting table is quite sparse. After some demo codes and discussion, we decided on a solution to these two problems.
+1) Change the data structure that stores the projection tables from an std::vector to an arma::cube. Each slice of the cube is a projection table. This conserves memory and simplifies the code.
+
+2) Change the implementation of the second level hashing. In the current version, an arma::Mat<size_t> table is created where each row corresponds to a hash bucket and stores indices to points hashed to that bucket. This is inefficient, both because the default secondHashSize is pretty large and because the number of points in each bucket might be uneven - so the resulting table is quite sparse. After some demo codes and discussion, we decided on a solution to these two problems.
 
 So, with LSHSearch transparent, more easily testable and more efficient, we are now ready to perform benchmarks of single- and multiprobe LSH, see what we can optimize in the multiprobe code, and then move on to parallelization. All this will start today, so stay tuned :D
\ No newline at end of file
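The first change in the diff replaces an std::vector of projection tables with an arma::cube, where each slice is one table. The memory benefit comes from a single contiguous allocation instead of one allocation per table. A minimal stdlib-only sketch of that layout (illustrative only; mlpack simply uses arma::cube, and the names here are made up):

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Sketch of the cube idea: L projection tables of size rows x cols
// stored back to back in one contiguous buffer, so each "slice" is
// one projection table. ProjectionCube is a hypothetical name, not
// an mlpack type.
struct ProjectionCube {
    std::size_t rows, cols, slices;
    std::vector<double> data;  // one allocation instead of L separate matrices

    ProjectionCube(std::size_t r, std::size_t c, std::size_t s)
        : rows(r), cols(c), slices(s), data(r * c * s, 0.0) {}

    // Column-major within a slice, slices stacked consecutively,
    // mirroring Armadillo's cube layout.
    double& at(std::size_t r, std::size_t c, std::size_t s) {
        return data[s * rows * cols + c * rows + r];
    }
};
```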
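The second change concerns the dense arma::Mat<size_t> second-level table: with a large secondHashSize and uneven buckets, most of the matrix is wasted. The post does not spell out the solution that was agreed on, but one standard compact layout for this problem is CSR-style storage (a flat index array plus per-bucket offsets), sketched below purely to illustrate the inefficiency being described:

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// A compact alternative to a secondHashSize x maxBucketSize matrix:
// all point indices live in one flat array, and offsets[b]..offsets[b+1]
// delimit bucket b. Empty buckets cost one extra offset, not a full row.
// This is only an illustration, not necessarily the design mlpack chose.
struct CompactHashTable {
    std::vector<std::size_t> offsets;  // size = number of buckets + 1
    std::vector<std::size_t> indices;  // point indices, buckets back to back

    // Build from per-bucket contents gathered during hashing.
    explicit CompactHashTable(const std::vector<std::vector<std::size_t>>& buckets) {
        offsets.reserve(buckets.size() + 1);
        offsets.push_back(0);
        for (const auto& b : buckets) {
            indices.insert(indices.end(), b.begin(), b.end());
            offsets.push_back(indices.size());
        }
    }

    std::size_t bucketSize(std::size_t b) const {
        return offsets[b + 1] - offsets[b];
    }
};
```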

More information about the mlpack-git mailing list