[Bug-gnubg] Bug in sigmoid?


From: Olivier Baur
Subject: [Bug-gnubg] Bug in sigmoid?
Date: Thu, 17 Apr 2003 15:33:34 +0200

I think I've found a bug in sigmoid (neuralnet.c), but I'm not sure about its impact on the evaluation function...

Let's call S the real sigmoid function: S(x) = 1 / (1 + e^x)  (note gnubg's sign convention: e^x rather than e^-x, so S is decreasing).
It seems that sigmoid(x) returns a good approximation of S(x) for -10.0 < x < 10.0 (less than +/-0.01% error), but that for x >= 10.0 it returns the constant S(9.9) (instead of S(10.0)), and for x <= -10.0 the constant S(-9.9) (instead of S(-10.0)). As a result, sigmoid is not even monotonic!

Here are some tests I've run around x=10.0 and x=-10.0:

sig1 is the real sigmoid function S(x)
sig2 is the value returned by sigmoid() in neuralnet.c

x= 9.94  sig1=0.00004820  sig2=0.00004824
x= 9.95  sig1=0.00004772  sig2=0.00004778
x= 9.96  sig1=0.00004725  sig2=0.00004733
x= 9.97  sig1=0.00004678  sig2=0.00004689
x= 9.98  sig1=0.00004631  sig2=0.00004645
x= 9.99  sig1=0.00004585  sig2=0.00004603
x=10.00  sig1=0.00004540  sig2=0.00005017  // we've got a discontinuity here
x=10.01  sig1=0.00004494  sig2=0.00005017
x=10.02  sig1=0.00004450  sig2=0.00005017
x=10.03  sig1=0.00004405  sig2=0.00005017
x=10.04  sig1=0.00004362  sig2=0.00005017

x=-9.94  sig1=0.99995178  sig2=0.99995178
x=-9.95  sig1=0.99995226  sig2=0.99995220
x=-9.96  sig1=0.99995273  sig2=0.99995267
x=-9.97  sig1=0.99995321  sig2=0.99995309
x=-9.98  sig1=0.99995369  sig2=0.99995357
x=-9.99  sig1=0.99995416  sig2=0.99995399
x=-10.00  sig1=0.99995458  sig2=0.99994981  // we've got a discontinuity here
x=-10.01  sig1=0.99995506  sig2=0.99994981
x=-10.02  sig1=0.99995548  sig2=0.99994981
x=-10.03  sig1=0.99995595  sig2=0.99994981
x=-10.04  sig1=0.99995637  sig2=0.99994981
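
A tiny harness along these lines reproduces the tables above (just a sketch; it assumes sigmoid() is made visible outside neuralnet.c, with its float -> float signature):

#include <stdio.h>
#include <math.h>

extern float sigmoid( float x );    /* from neuralnet.c */

/* the real sigmoid: S(x) = 1 / (1 + e^x) */
static double sig1( double x ) {
    return 1.0 / ( 1.0 + exp( x ) );
}

int main( void ) {
    int i;
    /* x = 9.94 .. 10.04 in steps of 0.01; negate x for the second table */
    for( i = 994; i <= 1004; i++ ) {
        double x = i / 100.0;
        printf( "x=%5.2f  sig1=%.8f  sig2=%.8f\n",
                x, sig1( x ), (double) sigmoid( (float) x ) );
    }
    return 0;
}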


By the way, I found a simple way of optimising the current sigmoid function. Instead of keeping a lookup table of precomputed values of exp(X) and returning sigmoid(x) = sigmoid(X+dx) = 1/(1 + exp(X)*(1+dx)), why not keep a lookup table of precomputed values of S(X) and return sigmoid(x) = sigmoid(X+dx) = S(X) + dx*(S(X+1) - S(X)), where S(X+1) is the next table entry? The time-consuming operations here are the table lookups and the reciprocal (1/x). With the second method, you trade one reciprocal and one lookup for two lookups; and since the second entry, S(X+1), sits right next to S(X) in memory, it will almost always already be in the processor cache, so you end up doing essentially one lookup and no reciprocal at all. On my machine, this gave a 60% speed increase in sigmoid.
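
Here is a rough sketch of what I mean (the names and table size are mine, not the ones in neuralnet.c; the fallback for |x| >= 10.0 incidentally also fixes the clamping bug above):

#include <math.h>

#define SIG_Q    10.0f    /* table resolution: one entry per 0.1 of x */
#define SIG_MAX  100      /* table covers 0.0 <= x < 10.0 */

/* SigTable[i] holds a precomputed S( i / SIG_Q ) */
static float SigTable[ SIG_MAX + 1 ];

static void ComputeSigTable( void ) {
    int i;
    for( i = 0; i <= SIG_MAX; i++ )
        SigTable[ i ] = 1.0f / ( 1.0f + (float) exp( i / SIG_Q ) );
}

static float sigmoid( float x ) {
    float x2 = ( x < 0.0f ) ? -x : x;    /* use S(-x) = 1 - S(x) */
    float r;

    if( x2 < 10.0f ) {
        float fi = x2 * SIG_Q;
        int   i  = (int) fi;
        float dx = fi - (float) i;
        /* S(X) + dx * ( S(X + 0.1) - S(X) ); the second lookup sits
           right next to the first in memory, so it is almost always
           already in the cache */
        r = SigTable[ i ] + dx * ( SigTable[ i + 1 ] - SigTable[ i ] );
    } else
        r = 1.0f / ( 1.0f + (float) exp( x2 ) );   /* rare tail case */

    return ( x < 0.0f ) ? 1.0f - r : r;
}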


I found this problem in sigmoid while vectorising the evaluation function: replacing the neural net propagation by a vector-matrix multiply, and the scalar sigmoid by a vector sigmoid. So far this has given me a 60% speed increase in the calibrate utility (from 8700 eval/s to 14000 eval/s) on my 768 MHz Apple G4.
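
To give an idea of the restructuring, here is the shape of the code in scalar form (names are illustrative, not the ones in neuralnet.c; the vectorised version does the inner loop and the sigmoid several floats at a time):

/* one layer of propagation: a matrix-vector multiply followed by a
   vector sigmoid, instead of interleaving the two per neuron */
static void Propagate( const float *W,     /* cHidden x cInput weights */
                       const float *in,    /* cInput inputs */
                       float *hidden,      /* cHidden activities (out) */
                       int cInput, int cHidden ) {
    int i, j;

    /* matrix-vector multiply */
    for( i = 0; i < cHidden; i++ ) {
        float sum = 0.0f;
        for( j = 0; j < cInput; j++ )
            sum += W[ i * cInput + j ] * in[ j ];
        hidden[ i ] = sum;
    }

    /* vector sigmoid: one pass of sigmoid() over the whole vector */
    for( i = 0; i < cHidden; i++ )
        hidden[ i ] = sigmoid( hidden[ i ] );
}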


Olivier




