[mlpack] Google Summer of Code project proposal

Ryan Curtin ryan at ratml.org
Thu Mar 24 17:12:00 EDT 2016


On Thu, Mar 24, 2016 at 08:13:12PM +0000, Leinoff, Alexander wrote:
> Hi Ryan!
> Thanks for getting back to me. To answer your questions: yes, yes,
> yes, and yes!

Hi Alex,

Thanks for the replies.

> > One of my interests has always been to have Jenkins build mlpack
> > against all versions of its dependencies and run the tests, to try
> > to find subtle bugs.
> 
> This is actually one of the main purposes of the ctest/cdash system!
> Any platform which can run cmake can also run ctest. So if you have a
> bunch of different platforms set up, you can use a scheduler to build
> and run the tests on mlpack every night with a variety of
> configurations, and the reports will be automatically aggregated on
> the cdash server, which keeps the test results in its history. This
> makes it very easy to see when a bug was introduced, which platforms
> it affects, and which test failures it caused.

Does this give me any advantage over the existing Jenkins server?  We do
have a matrix build, but it simply hasn't received the attention it
needs to get mlpack working with all configurations.
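
For reference, if I understand the proposed workflow correctly, each
build machine would run a small CTest "dashboard script" on a schedule,
something like the sketch below.  This is untested on my end, and the
site name and paths are made up:

  # nightly.cmake -- run from cron as `ctest -S nightly.cmake`.
  set(CTEST_SITE "debian-x64-builder")      # hypothetical machine name
  set(CTEST_BUILD_NAME "gcc-5.3")
  set(CTEST_SOURCE_DIRECTORY "/home/jenkins/mlpack")
  set(CTEST_BINARY_DIRECTORY "/home/jenkins/mlpack-build")
  set(CTEST_CMAKE_GENERATOR "Unix Makefiles")
  # (CDash server settings such as CTEST_DROP_SITE are omitted here.)

  ctest_start(Nightly)   # begin a Nightly dashboard run
  ctest_configure()      # run cmake on the source tree
  ctest_build()          # build mlpack and mlpack_test
  ctest_test()           # run the registered tests
  ctest_submit()         # push the results to the CDash server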

> > how can we utilize the hardware that we already have?
> Any platform that can run cmake can also run ctest. I would suggest
> setting up your machines with a variety of different platforms, each
> of which could build and run the tests on your software every night
> (with different dependency versions too, if you want to check that),
> so that obscure bugs become visible quickly and the results are easy
> to see.
> This means that every night you would build your software on
> different platforms and see which tests passed and failed on each
> platform, as well as any build problems.

Yeah; all I'm saying is, this is what Jenkins already does, so I don't
know how CTest and CDash are going to be helpful here.  Do they work
together?
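
If the dependency-matrix idea were done this way, I would guess that
each dependency version just becomes another configure-build-test-submit
pass in the same script, along these lines (again untested, and the
Armadillo install paths are hypothetical):

  # One dashboard submission per installed Armadillo version.
  foreach(arma /opt/armadillo-4.6.4 /opt/armadillo-5.6.0
               /opt/armadillo-6.5.0)
    set(CTEST_BUILD_NAME "gcc-5.3-${arma}")
    ctest_start(Nightly)
    ctest_configure(OPTIONS "-DARMADILLO_INCLUDE_DIR=${arma}/include")
    # (ARMADILLO_LIBRARY would presumably need to point at the matching
    # library as well.)
    ctest_build()
    ctest_test()
    ctest_submit()
  endforeach()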

> > what changes will need to be made to the mlpack codebase to support
> > your project?
> 
> One of the major changes will be converting the existing testing
> process into a more granular process using ctest. Right now, after a
> pull request, all the tests are run on Travis via the command
> mlpack_test, which executes your binary containing all of the tests.
> With ctest, each of your 617 test cases will be a separate test, and
> they will all be executed via the command "ctest". After the test
> execution, the results will be submitted to your online dashboard,
> where you can easily view test timings, outputs, and reasons for
> failure for each test. This history is saved, so a month from now you
> could go back and see the results. That way it will be very easy for
> a developer to see which test failed, why, and whether that test has
> a history of failure on a particular platform.

I hate to say it, but I'm going to balk at that idea.  Having 617
different test programs is not really something I want to do.  There's a
lot of overhead in that, especially given that we are using the Boost
Unit Test Framework, which is basically optimized for making it as easy
as possible to write simple tests.

However: one issue we have is that the command-line programs mlpack
provides are not currently tested, other than by hand.  All of the
algorithms that these programs use are tested, but the programs
themselves are not.  I have not thought of a good, long-term
maintainable solution for this, but it sounds like CTest might be a
possibility here.

Is it possible to have CTest run each individual test in mlpack_test, or
simply parse the output of the Boost UTF system?
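
If so, I'd hope we could keep the single mlpack_test binary and still
register each suite with CTest separately, something like the sketch
below.  (--run_test is the Boost UTF argument for selecting tests by
name; the suite names here are just examples.)

  # In src/mlpack/tests/CMakeLists.txt: one CTest test per Boost test
  # suite, all backed by the same mlpack_test binary.
  foreach(suite KMeansTest LARSTest LoadSaveTest) # ...and the rest
    add_test(NAME ${suite} COMMAND mlpack_test --run_test=${suite})
  endforeach()

Then something like `ctest -R KMeansTest` would run just that suite, and
CTest would get per-suite results and timings without us splitting
things into 617 separate programs.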

> > how can we present the information gathered by the automatic build
> > system in a concise and manageable way?
> 
> Once the tests have been migrated to ctest, the dashboard submission
> process is automatic. CDash parses the submissions into a standard
> presentation format which shows the build site, environment, test
> passes and failures, test timings, test history, and test output. In
> addition, test results are put into an XML file, which could be
> parsed by other tools to extract whatever data you want.

I know I sound like a broken record at this point, but this also is
something that Jenkins already does, so I'm not certain what the
advantage of switching is.  There is also a good amount of inertia on my
side to stick with Jenkins because it's a tool I already know, and I'd
prefer to spend my time improving mlpack's code instead of learning new
testing tools, unless there is some clear and tangible benefit to
switching.

> > Benchmarking?
> CTest records timing information for each individual test, which is
> reported to CDash and easily available.

Yes, but in this case we are not benchmarking how long the individual
tests take; we are using a much more complex benchmarking system.  See
https://github.com/zoq/benchmarks and
http://www.mlpack.org/benchmarks.html .

Thanks,

Ryan

-- 
Ryan Curtin    | "In honor of the last American hero, to whom speed
ryan at ratml.org | means freedom of the soul."  - Super Soul

