<p>In <a href="https://github.com/mlpack/mlpack/pull/603#discussion_r58915727">src/mlpack/core/optimizers/parallel_sgd/sgdp_impl.hpp</a>:</p>
<pre style='color:#555'>&gt; +  arma::mat sumIterate(iterate.n_rows,iterate.n_cols);
&gt; +  size_t it;   
&gt; +  bool halt=false;
&gt; +  sumIterate.zeros();
&gt; +
&gt; +
&gt; +
&gt; +  #pragma omp parallel  shared(sumIterate,halt) private(it) 
&gt; +  {
&gt; +    it=1; 
&gt; +    while(it!=maxIterations &amp;&amp; halt != true)
&gt; +    {
&gt; +      it++;
&gt; +
&gt; +      int th_num = omp_get_thread_num(); // Store the index of the thread this block is running on.
&gt; +      arma::mat gradient(iterate.n_rows, iterate.n_cols); // Declared here so that each thread has its own private gradient.
</pre>
<p>Why not allocate this variable outside the while loop so we don't reallocate memory for the local gradient every iteration?</p>
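<p>Something like the sketch below is what I have in mind (it reuses the names from the diff above; the gradient computation and the update of <code>sumIterate</code> are elided, and the loop structure is only illustrative):</p>
<pre>
  arma::mat sumIterate(iterate.n_rows, iterate.n_cols);
  bool halt = false;
  sumIterate.zeros();

  #pragma omp parallel shared(sumIterate, halt)
  {
    // Allocated once per thread, before the loop; it is private to each
    // thread because it is declared inside the parallel region.
    arma::mat gradient(iterate.n_rows, iterate.n_cols);

    for (size_t it = 1; it != maxIterations &amp;&amp; !halt; ++it)
    {
      // ... compute gradient and update sumIterate as before ...
    }
  }
</pre>
<p>That way the allocation cost is paid once per thread instead of once per iteration.</p>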

<p style="font-size:small;-webkit-text-size-adjust:none;color:#666;">&mdash;<br />You are receiving this because you are subscribed to this thread.<br />Reply to this email directly or <a href="https://github.com/mlpack/mlpack/pull/603/files/a981f8322e84ec06a349b80d261639a282f4f7c5#r58915727">view it on GitHub</a><img alt="" height="1" src="https://github.com/notifications/beacon/AJ4bFDDjvGR8XUwzpOpwGN7PxdauYFwmks5p1UTTgaJpZM4H_54U.gif" width="1" /></p>
<div itemscope itemtype="http://schema.org/EmailMessage">
<div itemprop="action" itemscope itemtype="http://schema.org/ViewAction">
  <link itemprop="url" href="https://github.com/mlpack/mlpack/pull/603/files/a981f8322e84ec06a349b80d261639a282f4f7c5#r58915727"></link>
  <meta itemprop="name" content="View Pull Request"></meta>
</div>
<meta itemprop="description" content="View this Pull Request on GitHub"></meta>
</div>