[mlpack-git] (blog) master: Googlenet: Week 5 (9ed1e48)

gitdub at mlpack.org gitdub at mlpack.org
Tue Jun 28 15:20:12 EDT 2016


Repository : https://github.com/mlpack/blog
On branch  : master
Link       : https://github.com/mlpack/blog/compare/a9e894f5fd7be4916a1865cd47ba8f488f146629...f2ba170ef4a0662048c2a8df82b84c849b973098

>---------------------------------------------------------------

commit 9ed1e489aa5c501967d352b532459c687c4ffde8
Author: nilayjain <nilayjain13 at gmail.com>
Date:   Wed Jun 29 00:50:12 2016 +0530

    Googlenet: Week 5


>---------------------------------------------------------------

9ed1e489aa5c501967d352b532459c687c4ffde8
 content/blog/NilayWeekFive.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/content/blog/NilayWeekFive.md b/content/blog/NilayWeekFive.md
index 807393f..72e1cf8 100644
--- a/content/blog/NilayWeekFive.md
+++ b/content/blog/NilayWeekFive.md
@@ -5,7 +5,7 @@ Author: Nilay Jain
 
 I started this week by discussing the "Going Deeper with Convolutions" paper with my mentor, to get an idea of how to implement the inception layer. I also clarified some concepts regarding backprop, convolutions, and standard regularization techniques like dropout that are used in deep networks. I read the Network in Network paper to get an idea of the 1 x 1 convolutions introduced there, and of how smaller neural nets are used to build larger networks.
 
-Then I fixed minor issues pointed out in the PR for the feature extraction code in the edge_boxes method. I timed convolution performed with Armadillo submatrices against a loop-and-pointer implementation invoking the NaiveConvolution class. Armadillo submatrices give a faster method, but I think this might not work when we have to convolve with stride. If we can work that out, performance of the Convolution method may improve. Then I improved the gradient function by computing edges with a Sobel-filter convolution.
+Then I fixed minor issues pointed out in the PR for the feature extraction code in the edge_boxes method. I timed convolution performed with Armadillo submatrices against a loop-and-pointer implementation invoking the NaiveConvolution class. Armadillo submatrices give a faster method, but we have to check whether they remain fast for large kernels too. If we can work that out, performance of the Convolution method may improve. Then I improved the gradient function by computing edges with a Sobel-filter convolution.
 
 Then I looked at the ann implementation in mlpack. I studied the convolution and pooling layers that will be used in implementing the inception layer, and had to read up on some of the functions implemented in these classes. It took me a bit of time to get accustomed to the style in which the ann method is implemented because of the heavy templatization in the code. I guess I still have many things to learn. I also glanced at some other implementations of GoogLeNet in other libraries, without understanding many details of course, but getting a rough idea.
 




More information about the mlpack-git mailing list