
From: Ericksberg, Richard
Subject: RE: [lmi] Problem of the week: testing a testing tool
Date: Wed, 10 Jan 2007 14:42:49 -0500

On 2007-01-09 7:44 Zulu, Greg Chicares wrote:

> Let's focus on this one first:
> 
>> c) On those that don't have the '1.#IO' and appear correct,
>> scientific notation and millisecond counts don't match exactly.
> 
> In the data shown,
>  (i) the LHS is greater than the RHS, and
>  (ii) their difference is exactly one unit in the last place.
> Would exactly those conditions always obtain when a test like
> this is rerun?

Not always; sometimes they match and sometimes they don't. I
ran some twenty consecutive tests, and all but one [the last
one shown here] had at least one value that was off; usually
two were off. 'Overhead:' was the least frequently correct.
Some samples:

  Speed tests...
  Overhead: [3.961e+000] 1 iteration took 3960 milliseconds
  Vector  : [9.221e+000] 1 iteration took 9220 milliseconds
  Write   : [7.738e+000] 1 iteration took 7738 milliseconds

  Speed tests...
  Overhead: [3.986e+000] 1 iteration took 3985 milliseconds
  Vector  : [9.165e+000] 1 iteration took 9165 milliseconds
  Write   : [8.159e+000] 1 iteration took 8158 milliseconds

  Speed tests...
  Overhead: [3.945e+000] 1 iteration took 3944 milliseconds
  Vector  : [9.030e+000] 1 iteration took 9030 milliseconds
  Write   : [7.691e+000] 1 iteration took 7690 milliseconds

  Speed tests...
  Overhead: [4.376e+000] 1 iteration took 4376 milliseconds
  Vector  : [9.697e+000] 1 iteration took 9697 milliseconds
  Write   : [8.414e+000] 1 iteration took 8413 milliseconds

  Speed tests...
  Overhead: [3.966e+000] 1 iteration took 3966 milliseconds
  Vector  : [9.387e+000] 1 iteration took 9387 milliseconds
  Write   : [7.971e+000] 1 iteration took 7971 milliseconds

Such erratic behavior typically points to a rounding problem.

>> For c) Mismatched values are result of differing methods of
>> mathematical manipulation. The scientific notation keeps its
>> floating-point value and is divided by the number of iterations
>> [z] where the milliseconds are multiplied by 1000.0 and cast as
>> an int.
> 
> Is that enough information to answer the new question posed
> under (0) above?

Not completely. Additional testing, displaying intermediate
values, showed that the millisecond value was being truncated,
not rounded. That truncation explains the differences.
 
> Anyway, what would be better?

Round the millisecond value to zero decimals after multiplying
by 1000, instead of casting it to int. See the patch below.

This gives consistently correct output:

  Speed tests...
  Overhead: [3.984e+000] 1 iteration took 3984 milliseconds
  Vector  : [9.105e+000] 1 iteration took 9105 milliseconds
  Write   : [7.814e+000] 1 iteration took 7814 milliseconds

  Speed tests...
  Overhead: [3.976e+000] 1 iteration took 3976 milliseconds
  Vector  : [9.153e+000] 1 iteration took 9153 milliseconds
  Write   : [7.797e+000] 1 iteration took 7797 milliseconds

  Speed tests...
  Overhead: [4.088e+000] 1 iteration took 4088 milliseconds
  Vector  : [9.713e+000] 1 iteration took 9713 milliseconds
  Write   : [8.196e+000] 1 iteration took 8196 milliseconds

  Speed tests...
  Overhead: [5.386e-005] 10 iterations took 1 milliseconds
  Vector  : [2.243e-003] 10 iterations took 22 milliseconds
  Read    : [1.079e-002] 1 iteration took 11 milliseconds
  Write   : [2.770e-003] 10 iterations took 28 milliseconds
  'cns' io: [5.345e-002] 1 iteration took 53 milliseconds
  'ill' io: [1.961e-002] 1 iteration took 20 milliseconds

  Speed tests...
  Overhead: [7.241e-005] 10 iterations took 1 milliseconds
  Vector  : [3.697e-003] 10 iterations took 37 milliseconds
  Read    : [1.751e-002] 1 iteration took 18 milliseconds
  Write   : [1.043e-002] 1 iteration took 10 milliseconds
  'cns' io: [8.790e-002] 1 iteration took 88 milliseconds
  'ill' io: [3.257e-002] 1 iteration took 33 milliseconds

  Speed tests...
  Overhead: [4.290e-005] 100 iterations took 4 milliseconds
  Vector  : [2.158e-003] 100 iterations took 216 milliseconds
  Read    : [7.445e-003] 10 iterations took 74 milliseconds
  Write   : [2.581e-003] 100 iterations took 258 milliseconds
  'cns' io: [5.219e-002] 10 iterations took 522 milliseconds
  'ill' io: [1.946e-002] 10 iterations took 195 milliseconds

Running timer_test:
  [3.245e-003] 100 iterations took 325 milliseconds
  [3.296e-002] 10 iterations took 330 milliseconds
  [4.964e-001] 1 iteration took 496 milliseconds

I ran some thirty tests, and all produced matching output.
 
>>> 3. How could those defects have been prevented?
>> 
>> Classify, standardize, document, disseminate and utilize rules
>> [protocols if you like that better] for various implementation
>> situations. Ex: "Be sure a numeric value is not negative before
>> casting as unsigned."
> 
> Do we have any that would apply here?

"If rounded values are to be compared, they must be the result of
similar rounding methods and precision."
"Be sure a numeric value is not negative before casting as unsigned."
"Always check divisors for zero before they are used in division."
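As a quick illustration of the casting rule (a generic sketch, not
lmi code), converting a negative int to an unsigned type silently
wraps modulo 2^N rather than failing:

```cpp
#include <cassert>
#include <cstdint>

// Why the rule matters: the conversion wraps modulo 2^32,
// turning -1 into the largest 32-bit unsigned value.
std::uint32_t wrapped(int n)
{
    return static_cast<std::uint32_t>(n);
}

// Guarded conversion following the rule: check non-negativity first.
std::uint32_t to_unsigned_checked(int n)
{
    assert(0 <= n);
    return static_cast<std::uint32_t>(n);
}
```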

>> Rigorous unit testing at coding time [e.g. matrix of all possible
>> conditions encountered for that operation.]
> 
> How big would that matrix be in this situation?

Potentially huge. In general, when performing floating-point
math you need to cover at least all precisions possible for
the size of number you are using. For rounding, cover all
digits [0-9] in the most significant decimal place, with
expected results depending on the rounding method. Consider
the results of similar yet not identical rounding methods when
their values will be compared, either visually or
mathematically [e.g., with materially_equal()]. Concerning
signed vs. unsigned, consider the possible outcomes of a
calculation [e.g., positive, zero, negative, overflowed,
undefined] and their effect on subsequent casting.
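One row of that matrix can be sketched mechanically; this assumes
std::lround as the round-to-nearest method and checks each digit 0-9
in the first dropped decimal place:

```cpp
#include <cmath>

// Count the digits d in {0..9} for which truncation and
// round-to-nearest disagree on the value 3960.d milliseconds.
int count_disagreements()
{
    int disagreements = 0;
    for(int d = 0; d < 10; ++d)
        {
        double ms        = 3960.0 + d / 10.0; // 3960.0, 3960.1, ..., 3960.9
        int    truncated = static_cast<int>(ms);
        long   rounded   = std::lround(ms);
        if(truncated != rounded)
            {
            ++disagreements;
            }
        }
    return disagreements; // digits 5-9 all round up, so 5 disagreements
}
```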

>> Use an interactive debugger.
>> http://www.testing.com/writings/reviews/maguire-solid.html
>>> "4. the virtues of stepping through every line of code using
>>> the debugger."
> 
> BTW, he means for every single line of code you write, not
> just lines suspected of being erroneous.
> 
> Would you actually do that?

I have done that in the past, and would try to now if I could
get the debugger to work.
 
>> From a critical review:
> 
> http://accu.org/index.php/book_reviews?url=view.xqy?review=w001915
>> How thorough should your testing be? Maguire talks about
>> 'coverage' and explains that you should step through both arms
>> of each if statement to ensure statement coverage - step through
>> with the debugger, by the way!
> 
>> Code review by others.
> 
> Which (one or more) of those practices should we adopt?

Why not all of them?

Coding rules.
Rigorous unit testing at coding time; stepping through with
the debugger can be coupled with this.
Code review by others.
 
>>> 4. How should those defects be removed?
> 
> I.e., what patch would you propose for this?

[timer.cpp]

+ #include "round_to.hpp"

std::string Timer::elapsed_msec_str() const
{
    std::ostringstream oss;
-   oss << static_cast<int>(1000.0 * elapsed_usec());
+   round_to<double> const RoundToNearestWhole(0, r_to_nearest);
+   oss << RoundToNearestWhole(1000.0 * elapsed_usec());
    oss << " milliseconds";
    return oss.str().c_str();
}
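For readers without lmi's round_to.hpp, the same fix can be sketched
with the standard library alone; here std::lround plays the role of
round_to<double>(0, r_to_nearest), and the free-function form is an
illustration, not the actual member signature:

```cpp
#include <cmath>
#include <sstream>
#include <string>

// Standalone sketch of the patched formatting: round the millisecond
// count to the nearest whole number instead of truncating it.
std::string elapsed_msec_str(double elapsed_seconds)
{
    std::ostringstream oss;
    oss << std::lround(1000.0 * elapsed_seconds);
    oss << " milliseconds";
    return oss.str();
}
```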





