Subject: Re: [Bug-gsl] ode-initval2 testsuite failure
Date: Mon, 21 Jan 2013 16:42:24 +0100
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:17.0) Gecko/17.0 Thunderbird/17.0
Hi,

32-bit Intel: all tests give exactly the same (presumably correct) results as on 64-bit Intel when the SSE floating-point unit is used instead of the 387 one (-mfpmath=sse -msse2). Using the 387 unit, whose extended-precision intermediates are often more precise, quickly makes the simulation diverge from the expected path.
64-bit PowerPC: at one point in the middle of the simulation, the return value of glibc's pow() on my system (Fedora 16) differs from the Intel result in the least significant bit of the mantissa. The differences grow from that point on and lead to a failed test.
I believe the design of test_extreme_problems() is flawed. Such an unstable combination of equation, solver, and parameters tests only one thing: whether a machine behaves exactly like 64-bit Intel hardware (I am not sure this even matches IEEE 754). What should it actually test? Is there any documentation, explanation, rationale, or guideline? I cannot fix it without understanding what it is supposed to do.
Regarding the msbdf decrease-order-by-2 problem: I was unable to find the bug in a reasonable amount of time and energy, and I cannot reproduce it consistently. Sorry.