
Re: [Bug-gnubg] Large bearoff databases (was: Huge evaluation difference)


From: Øystein Schønning-Johansen
Subject: Re: [Bug-gnubg] Large bearoff databases (was: Huge evaluation difference)
Date: Sun, 24 Mar 2019 17:14:52 +0100

On Sun, Mar 24, 2019 at 4:33 PM Philippe Michel <address@hidden> wrote:
On Sun, Mar 24, 2019 at 03:17:07PM +0100, Øystein Schønning-Johansen wrote:
> Yes. Really cool. I have earlier seen significant differences between
> one-sided and two-sided race evaluation, but this is not one of the
> positions where it is off.

I suppose it helps that the opponent's position is a few-rolls position
and more balanced long races would not do as well.

Yes! There are actually two ways to do the dynamic programming for two-sided databases.
You can do it top-down: you start at the position you are interested in, recursively calculate all
the possible following positions that can occur, and store each probability in a matrix. In this
case it really helps if the position is lopsided, such that the game is usually over within a few
moves. It is actually necessary.
 
> Some years ago, I used the same algorithm to calculate a full two-sided
> database for 15 checkers on 6 points. I can share it by bittorrent, or the
> generating code. The data file is 11 GB.

Would it be usable as is with the current gnubg ? Would any file larger
than 2 Gb be, anyway ?

Yes, to calculate the full two-sided database of 15 checkers on 6 points, I used a bottom-up
calculation instead. There are (21 choose 6) squared positions, and I store them with float
precision (4 bytes): 54264^2 * 4 = 11778326784, about 11 GB.
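As a quick sanity check on those numbers (a minimal sketch, assuming the one-float-per-position-pair layout described above):

```python
from math import comb

# Ways to distribute up to 15 checkers over 6 points: stars and bars with
# a seventh "borne off" bin gives C(15 + 6, 6) = C(21, 6).
positions_per_side = comb(21, 6)
print(positions_per_side)        # 54264

# Two-sided database: one 4-byte float per pair of positions.
size_bytes = positions_per_side ** 2 * 4
print(size_bytes)                # 11778326784, about 11 GB
```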

The maximum file size depends on the file system you are using, and hence on the
operating system. I more or less always use ext4 on Linux systems and have no
problem opening such big files.

However, I use different techniques to read the data from the file. I usually do not read the file
into memory (although I can if I want to), but rather open a file pointer and use seek to position
the fp at the right address. If the file is on an SSD, this is usually fast enough! I've tried
mmap() as well, but I ran into a technical problem, I think (IIRC).
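For illustration, here is a minimal sketch of that seek-based lookup (the row-major layout, little-endian floats, and the tiny stand-in file are my assumptions, not necessarily how my generator actually lays the data out):

```python
import os
import struct
import tempfile

def lookup(f, i, j, n):
    """Read the 4-byte float for (my position i, opponent position j) from a
    file of n*n little-endian floats, assuming row-major order."""
    f.seek((i * n + j) * 4)  # Python offsets are 64-bit, so this may exceed 2**31
    return struct.unpack("<f", f.read(4))[0]

# For the real database n would be C(21, 6) = 54264; here we demo with a
# tiny stand-in file rather than the 11 GB one.
n = 4
with tempfile.NamedTemporaryFile(delete=False) as f:
    for i in range(n):
        for j in range(n):
            f.write(struct.pack("<f", 10.0 * i + j))
    path = f.name

with open(path, "rb") as db:
    value = lookup(db, 2, 3, n)
print(value)  # 23.0
os.remove(path)
```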

I think the building tools that come with gnubg are limited by the 32-bit file pointer size;
the fp address was overflowing. Maybe these days, when 64-bit architectures and
operating systems are really common, the code can generate even bigger data files.
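The overflow is easy to see from the sizes above: the largest offset in an 11 GB file does not fit in a 32-bit file pointer, signed or unsigned.

```python
# Offset of the last 4-byte float in a 54264^2-entry two-sided database.
max_offset = 54264 ** 2 * 4 - 4
print(max_offset)              # 11778326780
print(max_offset > 2**31 - 1)  # True: overflows a signed 32-bit off_t
print(max_offset > 2**32 - 1)  # True: even an unsigned 32-bit offset overflows
```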

Can my database be used with GNU Backgammon? No, not right out of the box, but
with a few inserted lines of code, I guess things will work perfectly.
 
I would be interested in the generating code, at least as an example
that handles the kind of issues below.

I'll see what I can share. I guess I'll post something on GitHub.
 
Are you, Øystein, or other readers, familiar with gnubg's bearoff
databases format ? (I am not).

Well, I think Gary tried to keep it small, so it's actually using a 2-byte format for the values.
 
Would it be enough to compile gnubg with the appropriate
_FILE_OFFSET_BITS=64 / _LARGEFILE_SOURCE / _LARGEFILE64_SOURCE #defines
and possibly some limited variables type changes in the code to be able
to use bigger databases, or are there more subtle limitations in the
current code ?

I'm really not sure. I think you just have to try it out and see. I would actually guess it works
with the settings you suggest (and make sure you are on a 64-bit system, of course), or will
work with really few tweaks.
 
Similarly, to create a one-sided database for, say, 15 men up the 8
point plus 2 up to the 18 point, or a similar subset of "plausible"
positions, would it be enough to find an adequate indexing scheme ?
 
Yeah, these are issues I've been thinking about as well: can a bearoff database be "parted"
like this? I've not found a good answer to that. The indexing scheme becomes really
complicated.
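For what it's worth, here is the kind of indexing I mean for the unrestricted case: a standard stars-and-bars ranking of up to 15 checkers on 6 points (my own sketch, not gnubg's actual indexing scheme). It is exactly this bijection that gets messy once you restrict to a "plausible" subset of positions.

```python
from math import comb

def position_index(counts):
    """Map (c1, ..., c6), the checker counts on points 1..6 with sum <= 15,
    to a unique index in [0, C(21, 6)) = [0, 54264).

    Encoding: 15 checkers (including those already borne off) plus 6
    separators form a 21-symbol string; rank the 6 separator slots with
    the combinatorial number system.
    """
    assert len(counts) == 6 and sum(counts) <= 15
    seps, slot = [], 0
    for c in counts:
        slot += c          # skip this point's checkers
        seps.append(slot)  # a separator occupies the next slot
        slot += 1
    return sum(comb(p, k + 1) for k, p in enumerate(seps))

print(position_index((0, 0, 0, 0, 0, 0)))   # 0: all 15 checkers borne off
print(position_index((15, 0, 0, 0, 0, 0)))  # 54263: all 15 on the ace point
```

The ranking is a bijection, so the database file can simply store one record per index; a partial database over a restricted set of positions would need a different, denser mapping, which is where it gets complicated.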

-Øystein
