[mlpack-git] [mlpack] How to speed up HMM training ? (#550)
Baptiste Wicht
notifications at github.com
Fri Mar 4 09:49:20 EST 2016
I don't know if that helps, but here is a trace I took with perf on the currently running 64-Gaussian GMM:
16.29% spotter libmkl_avx.so [.] mkl_blas_avx_xdgemmger
14.31% spotter spotter [.] arma::eglue_core<arma::eglue_minus>::apply<arma::Mat<double>, arma::Mat<double>, arma::Glue<arma::Col<double>, arma::Gen<arma::Row<double>, arma::gen_ones>, arma::glue_times> >
11.65% spotter libm-2.21.so [.] __ieee754_exp_avx
11.57% spotter libmkl_avx.so [.] mkl_blas_avx_dgemm_nocopy_anbn_meq9_keq9_b0
6.72% spotter spotter [.] arma::op_dot::direct_dot<double>
5.90% spotter spotter [.] arma::eglue_core<arma::eglue_schur>::apply<arma::Mat<double>, arma::Mat<double>, arma::Glue<arma::Gen<arma::Col<double>, arma::gen_ones>, arma::Op<arma::eGlue<arma::subview_col<double>, arma::Col<double>, arma::eglue_schur>, arma::op_htrans>, arma::glue_times> >
4.72% spotter libmkl_avx.so [.] mkl_blas_avx_dgemm_kernel_0
4.61% spotter spotter [.] etl::detail::Assign<float, etl::unary_expr<float, etl::p_max_pool_p_transformer<etl::binary_expr<float, etl::unary_expr<float, etl::rep_r_transformer<etl::fast_matrix_impl<float, std::array<float, 12ul>, (etl::order)0, 12ul> const&, 52ul, 12ul>, etl::transform_op>, etl::plus_binary_op<float>, etl::unary_expr<float, etl::sub_view<etl::fast_matrix_impl<float, std::vector<float, std::allocator<float> >, (etl::order)0, 2ul, 12ul, 52ul, 12ul>&>, etl::identity_op> >, 2ul, 2ul>, etl::transform_op>&>::operator()
2.81% spotter libmkl_avx.so [.] LAY16_M8_Tailgas_1
2.39% spotter spotter [.] arma::Row<unsigned long>::operator=
---
Reply to this email directly or view it on GitHub:
https://github.com/mlpack/mlpack/issues/550#issuecomment-192307867