[mlpack-git] [mlpack] ANN Saving the network and reloading (#531)
Joseph Mariadassou
notifications at github.com
Wed Mar 2 15:34:38 EST 2016
I had the same result when I used make_tuple rather than tie. I fixed it,
though, in my own fork:
http://github.com/theSundayProgrammer/mlpack
branch: mytweaks
SHA1: 540dd7c3b3a5f3857eed922d446bfeb5016dbcd9
Although the fix is for CNN rather than FNN, a similar change should
work for FNN too.
> On Thu, Mar 3, 2016 at 3:29 AM, sudarshan <notifications at github.com>
> wrote:
>
>> Thank you. I checked out pr/536 and linked the include directories and
>> lib from there. I got rid of all the compiler errors. However, I get a 100%
>> classification error when I use std::make_tuple, but when I use std::tie I
>> get 4.95916%. This is my function:
>>
>> auto BuildFFN(MatType& trainData, MatType& trainLabels, MatType& testData,
>>               MatType& testLabels, const size_t hiddenLayerSize)
>> {
>>   // Input layer.
>>   ann::LinearLayer<> inputLayer(trainData.n_rows, hiddenLayerSize);
>>   ann::BiasLayer<> inputBiasLayer(hiddenLayerSize);
>>   ann::BaseLayer<PerformanceFunction> inputBaseLayer;
>>
>>   // Hidden layer.
>>   ann::LinearLayer<> hiddenLayer1(hiddenLayerSize, trainLabels.n_rows);
>>   ann::BiasLayer<> hiddenBiasLayer1(trainLabels.n_rows);
>>   ann::BaseLayer<PerformanceFunction> outputLayer;
>>
>>   // Output layer.
>>   OutputLayerType classOutputLayer;
>>
>>   auto modules = std::tie(inputLayer, inputBiasLayer, inputBaseLayer,
>>       hiddenLayer1, hiddenBiasLayer1, outputLayer);
>>   ann::FFN<decltype(modules), decltype(classOutputLayer),
>>       ann::RandomInitialization, PerformanceFunctionType>
>>       net(modules, classOutputLayer);
>>
>>   net.Train(trainData, trainLabels);
>>   arma::mat prediction;
>>   net.Predict(testData, prediction);
>>
>>   // Count misclassified columns; must start at zero or the count is undefined.
>>   size_t classificationError = 0;
>>   for (size_t i = 0; i < testData.n_cols; i++)
>>   {
>>     if (arma::sum(arma::sum(arma::abs(prediction.col(i) - testLabels.col(i)))) != 0)
>>     {
>>       classificationError++;
>>     }
>>   }
>>
>>   std::cout << "Classification Error = "
>>       << (double(classificationError) / testData.n_cols) * 100 << "%"
>>       << std::endl;
>>
>>   return net;
>> }
>>
>> —
>> Reply to this email directly or view it on GitHub
>> <https://github.com/mlpack/mlpack/issues/531#issuecomment-191313956>.
>>
--
Joseph Chakravarti Mariadassou
http://thesundayprogrammer.com
---
Reply to this email directly or view it on GitHub:
https://github.com/mlpack/mlpack/issues/531#issuecomment-191417423
More information about the mlpack-git mailing list