
From: Øystein Schønning-Johansen
Subject: Re: [Bug-gnubg] The little engine that could... Neural Network
Date: Fri, 18 Dec 2015 15:00:33 +0100

Hi Robert!

I would love to see more development on the engine and the neural nets. There are plenty of questions that could be asked about neural net topology and training methods. However, in my experience the quality of a neural net evaluation depends mostly on the training methods. It would of course be interesting to see some research on deep learning (and maybe even recurrent neural net topologies), but I rather think the effort is best spent on the training methods.

(I would love to see it and I would love to be proved wrong!)
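For context, the training method in question is temporal-difference learning from self-play, in the style of Tesauro's TD-Gammon. Here is a toy sketch of the TD(0) update rule on a five-state random walk; the lookup table stands in for a real net and position encoding, so this is illustrative only and is not gnubg code:

```python
import random

# Toy TD(0) value learning on a 5-state random walk. This only illustrates
# the update rule V(s) += alpha * (r + V(s') - V(s)); gnubg's real training
# applies the same idea to a neural net over backgammon positions.
N_STATES = 5          # non-terminal states 0..4; stepping off either end ends the game
ALPHA = 0.1           # learning rate
random.seed(0)

V = [0.5] * N_STATES  # value estimates, initialised to 0.5

def episode(V):
    s = N_STATES // 2                     # start in the middle
    while True:
        s_next = s + random.choice((-1, 1))
        if s_next < 0:                    # left terminal: reward 0
            V[s] += ALPHA * (0.0 - V[s])
            return
        if s_next >= N_STATES:            # right terminal: reward 1
            V[s] += ALPHA * (1.0 - V[s])
            return
        V[s] += ALPHA * (V[s_next] - V[s])  # bootstrap from the successor state
        s = s_next

for _ in range(5000):
    episode(V)

# Learned values; the true values are (i + 1) / 6 for state i.
print([round(v, 2) for v in V])
```

The same self-play/bootstrapping loop is where most of the interesting training-method questions live (eligibility traces, learning-rate schedules, what positions to sample), independent of the net's topology.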

What about other techniques? MCTS? Or some kind of Markov decision model? Can we, based on today's available technology, rethink the whole idea? Huge distributed databases or distributed hash tables? A simplified evaluation function combined with deeper search? The neural network idea is from the 1990s. Today we have much more technology that can provide other solutions: more memory, more storage, more distribution, etc. Can we maybe even take a step back and consider solutions other than a neural-net-based one?
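For readers unfamiliar with MCTS: it grows a search tree by repeating select/expand/rollout/backpropagate iterations. Below is a minimal single-player UCT sketch on a made-up bit-guessing game; it is purely illustrative, and a real backgammon searcher would also need chance nodes for the dice:

```python
import math, random

# Minimal single-player UCT on a toy game: pick 4 bits, and the reward is the
# fraction of bits matching a hidden target. Illustrative only.
random.seed(1)
TARGET = (1, 0, 1, 1)

class Node:
    def __init__(self, state):
        self.state = state     # tuple of bits chosen so far
        self.children = {}     # move (0 or 1) -> Node
        self.visits = 0
        self.total = 0.0

def rollout(state):
    while len(state) < len(TARGET):      # random playout to a leaf
        state = state + (random.randint(0, 1),)
    return sum(a == b for a, b in zip(state, TARGET)) / len(TARGET)

def uct_select(node):
    # UCB1: mean reward plus an exploration bonus for rarely visited children.
    return max(node.children.values(),
               key=lambda c: c.total / c.visits
                             + math.sqrt(2 * math.log(node.visits) / c.visits))

def search(root, iters=2000):
    for _ in range(iters):
        node, path = root, [root]
        # Selection: descend while the node is fully expanded and non-terminal.
        while len(node.state) < len(TARGET) and len(node.children) == 2:
            node = uct_select(node)
            path.append(node)
        # Expansion: add one untried child, if the node is non-terminal.
        if len(node.state) < len(TARGET):
            move = random.choice([m for m in (0, 1) if m not in node.children])
            node.children[move] = Node(node.state + (move,))
            node = node.children[move]
            path.append(node)
        # Rollout and backpropagation.
        reward = rollout(node.state)
        for n in path:
            n.visits += 1
            n.total += reward

root = Node(())
search(root)
best = max(root.children.items(), key=lambda kv: kv[1].visits)[0]
print(best)   # most-visited first move; TARGET starts with 1
```

The exploration constant sqrt(2) is the textbook UCT choice; whether MCTS pays off for backgammon against a strong static evaluator is exactly the kind of question worth researching.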

best regards,

On Thu, Dec 17, 2015 at 3:08 PM, Robert Fontaine <address@hidden> wrote:
I scrolled back through the archives and it seems like it has been a
while since gnubg got smarter.

I have always suspected that the only thing holding gnubg back was an
obscene amount of compute.
A large single model should be able to outperform a bunch of essentially
disconnected specialized ones, à la Snowie,
IFF the learning is deep enough and runs long enough.  Connecting backgames
and containment with early-game play should/might/could happen.

I've been building a little compute node in my basement: a few Xeon
Phi boards and an 8-core Xeon processor. I'm thinking
that if the models were ported to OpenMP or OpenACC and tweaked a bit, we
might find a corporate sponsor with
heavy metal to run them.   Intel is pushing the Xeon Phi architecture
pretty heavily; they might lend us some
CPU time.   Amazon, Google, Baidu... there is bound to be someone with
spare cycles out there.

Who is the neural network guru in residence?  Where do I look for the
docs, pseudocode, and scribblings?
I'm at the "going to take the Andrew Ng course" point in the project, but
I'm pretty good at assembling and porting things.
Hopefully, by the time I'm familiar with the code I will have
some current knowledge that can be applied to the real challenges.

R, numpy, and pymic have all been ported to run on the Intel MIC libraries.
Intel's C/C++ compiler and VTune are available for students, and the Phis
are cheap like borscht for establishing a development environment/sandbox.

I've been scrounging hardware for the better part of a year at this
point, but I'm within spitting distance of booting and getting CentOS running.
I'd like to use this as a stepping stone to doing some Kaggle contests
and generally building my data-analysis and machine-learning chops.

Thanks for any thoughts or suggestions,

