Re: [lmi] Problems with the Cygwin tests

From: Greg Chicares
Subject: Re: [lmi] Problems with the Cygwin tests
Date: Wed, 8 May 2019 23:59:18 +0000
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:60.0) Gecko/20100101 Thunderbird/60.6.1

On 2019-05-08 19:37, Vadim Zeitlin wrote:
> On Wed, 8 May 2019 18:11:09 +0000 Greg Chicares <address@hidden> wrote:
> GC> On 2019-05-07 15:05, Vadim Zeitlin wrote:
> GC> > On Tue, 7 May 2019 12:00:49 +0000 Greg Chicares <address@hidden> wrote:
> GC> > 
> GC> > GC> I'll let you know when I've finished all of that, and I
> GC> 
> GC> I've finished all of that.
> GC> 
> GC> > GC> hope that at that time you'll be able to retest everything in
> GC> > GC> 'INSTALL', along with
> GC> > GC>   ./nychthemeral_test
> GC> > GC>   ./gui_test
> GC> > GC> in cygwin so we'll know cygwin is fully supported.
> GC> 
> GC> Except that you might like first to look through the dozen or so commits
> GC> that I just pushed, because I could have missed something important, or
> GC> in the last commit I may have followed shellcheck's recommendations too
> GC> zealously. But everything builds and tests out perfectly here.
>  It looks good for me too, thanks!

The acid test, of course, is seeing whether it works in cygwin,
with those changes and the others discussed below.

> GC> >  With these caveats, the GUI test passes:
> GC> > 
> GC> >         time=51305ms (for all tests)
> GC> >         SUCCESS: 21 tests successfully completed.
> GC> >         NOTE: 4 tests were skipped
> GC> 
> GC> Excellent. I don't suppose we can make this faster except by making lmi
> GC> itself faster.
>  BTW, the full installation takes ~40 minutes on Ilya's VM.

Interesting. Thanks.

BTW, here are some of my recent notes indicating how long various
tests take, after
  make eviscerate
has been run. Because 'nychthemeral_test' builds some exotic stuff,
I ran it twice: think of the first timing as "build + test", and
the second as "test" only.

2019-05-01 in multiarch branch
   52.487 total gui_test
  5:05.22 total nychthemeral_test, first time after evisceration
  1:48.68 total same, repeat
   49.750 total gui_test
  5:27.54 total nychthemeral_test, first time after evisceration
   59.317 total same, repeat

My 'gui_test' timings are close to your "time=51305ms (for all tests)".
With careful attention, I can run both test scripts in different
terminals at the same time, and they parallelize so well that the
GUI test adds nothing appreciable to the total. Let's try right now:

  [first run after 'make eviscerate; ./install_msw.sh']
./nychthemeral_test.sh  3703.13s user 225.15s system 1389% cpu 4:42.64 total

  [two "repeat" runs]
./nychthemeral_test.sh  302.16s user 78.86s system 325% cpu 1:57.08 total
./nychthemeral_test.sh  302.08s user 78.37s system 326% cpu 1:56.52 total

Now let's see if careful parallel synchronization makes the GUI test
cost nothing extra.

  [this is in terminal number one]
/opt/lmi/src/lmi[0]$ time ./nychthemeral_test.sh
Production system built--ready to start GUI test in another session.
Do not forget to run wx_test.
./nychthemeral_test.sh  302.44s user 76.66s system 323% cpu 1:57.25 total

  [this is in terminal number two]
  wait for the "Production system built" message on terminal number one, then...
/opt/lmi/src/lmi[0]$ time ./gui_test.sh
NOTE: starting the test suite
01e9:err:seh:raise_exception Unhandled exception code c0000005 flags 0 addr 

That always used to work; I must have introduced a cross dependency.
Let's try again, while the test in terminal number one is still busy:

/opt/lmi/src/lmi[130]$ time ./gui_test.sh
NOTE: starting the test suite
SUCCESS: 21 tests successfully completed.
NOTE: 4 tests were skipped
./gui_test.sh  30.74s user 9.81s system 75% cpu 53.526 total

Okay, that worked, and when it finished, terminal number one was still
busy, so the total elapsed time is as shown on terminal number one. Thus:

./nychthemeral_test.sh  302.16s user 78.86s system 325% cpu 1:57.08 total
./nychthemeral_test.sh  302.08s user 78.37s system 326% cpu 1:56.52 total
./nychthemeral_test.sh  302.44s user 76.66s system 323% cpu 1:57.25 total

Of those three runs, the GUI test was running in another terminal only
during the third, so its incremental cost was indeed virtually zero.
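
That manual wait-and-switch could in principle be scripted by polling
the log for the marker line. A minimal sketch, where a stand-in command
takes the place of 'nychthemeral_test.sh' and only the "Production
system built" marker text is taken from the real output above:

```shell
# Sketch: run the long test in the background, tee-style, into a log,
# and start the GUI test as soon as the marker line appears.
# The background subshell stands in for ./nychthemeral_test.sh.
log=$(mktemp)
( echo 'building...'
  sleep 1
  echo 'Production system built--ready to start GUI test in another session.'
  sleep 1
) >"$log" &
builder=$!
# Poll until the marker appears (0.2s granularity assumes GNU sleep).
until grep -q 'Production system built' "$log"; do sleep 0.2; done
gui_started=1
echo 'GUI test can start now'   # in real use: ./gui_test.sh
wait "$builder"
rm -f "$log"
```

In real use the polling loop would replace the human watching terminal
number one, and './gui_test.sh' would run where the echo is.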

I'll need to figure out what went wrong on the first GUI test run
("c0000005" above), but that's a problem I've solved before (it just
didn't stay solved). However, it's dreadfully inconvenient to watch
for a notification on one terminal, then switch to another and hit
Enter; what would really make this slick is:

On 2018-07-30 12:26, Vadim Zeitlin wrote:
| FWIW I think the best is to run the GUI tests in an isolated VM, e.g. I do
| it by ssh-ing into my lmi VM and launching the test from ssh session. Under
| X it should be also possible to use Xnest to run it on its own isolated
| display. Or maybe just open a second (real) X session on a different VT and
| run it there (but I haven't tried this).

I tried looking into that a few weeks ago, but all the other
improvements in the build system were more important, so I never
figured out how to make 'xnest' work with 'gui_test.sh'. Would
you have the time to do that for me?
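
For reference, here is the general shape of that approach (untested:
the display number ':1', the geometry, and the choice of Xnest rather
than Xephyr or Xvfb are all arbitrary, and this needs a machine with a
running X server):

```shell
# Untested sketch: run the GUI test on a nested X display, so it cannot
# be disturbed by (or disturb) the main session.
Xnest :1 -geometry 1280x1024 &
nested=$!
DISPLAY=:1 ./gui_test.sh
status=$?
kill "$nested"
```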

> GC> >  Beyond that, there are some errors in the log, but I don't know if
> GC> > they're expected or not: as I had already complained about the
> GC> > unit_tests target, there is too much output and no summary at the end
> GC> > allowing to see at a glance whether the execution was successful or not.
> GC> 
> GC> The idea is that all unit tests succeed all the time, so filtering out
> GC> everything that worked correctly leaves nothing: "no news is good news".
>  Sorry to insist, but I really don't think it's the best approach. IMO
> normal output should be clearly separated from the error output.

You certainly do have a point. But I don't see any simple way to achieve
what you would like. Would you mind looking in the unit-test logs
and stating what failed? The commands to run them in isolation are just

make $coefficiency unit_tests 2>&1 | tee >(grep '\*\*\*') >(grep '????') >(grep '!!!!' --count | xargs printf '%d tests succeeded\n') >../log

make $coefficiency unit_tests build_type=safestdlib 2>&1 | tee >(grep '\*\*\*') >(grep '????') >(grep '!!!!' --count | xargs printf '%d tests succeeded\n')
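
The filtering can be demonstrated standalone; the log lines below are
invented, but the marker strings follow the '!!!!'/'????' convention in
the commands above:

```shell
# Standalone demo of "no news is good news" filtering: count the
# success markers, and let only anomalies through. Log content is
# invented for illustration.
log=$(mktemp)
printf '%s\n' \
  '!!!! no errors detected' \
  '!!!! no errors detected' \
  '???? 1 test skipped' \
  >"$log"
summary=$(grep '!!!!' --count "$log" | xargs printf '%d tests succeeded\n')
anomalies=$(grep '????' "$log")
echo "$summary"
echo "$anomalies"
rm -f "$log"
```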

and one idea that occurs to me immediately is to change them thus:

- make "$coefficiency"
+ make "$coefficiency" --output-sync=recurse

which might make the output clearer without impairing speed.
Let's just test that: two runs without '--output-sync':

make $coefficiency unit_tests 2>&1  27.54s user 5.16s system 514% cpu 6.359 total
tee >(grep '\*\*\*') >(grep '????')  > ../log  0.01s user 0.00s system 0% cpu 6.356 total

make $coefficiency unit_tests 2>&1  27.79s user 5.32s system 528% cpu 6.271 total
tee >(grep '\*\*\*') >(grep '????')  > ../log  0.01s user 0.00s system 0% cpu 6.269 total

...and two with '--output-sync':

make $coefficiency --output-sync=recurse unit_tests 2>&1  27.72s user 5.13s system 520% cpu 6.315 total
tee >(grep '\*\*\*') >(grep '????')  > ../log  0.00s user 0.00s system 0% cpu 6.312 total

make $coefficiency --output-sync=recurse unit_tests 2>&1  28.14s user 5.35s system 506% cpu 6.605 total
tee >(grep '\*\*\*') >(grep '????')  > ../log  0.00s user 0.00s system 0% cpu 6.603 total

Okay, I'll make that change: the cost might be a third of a second,
but the gain is worth that; or the cost might actually be zero.
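
To illustrate what '--output-sync' buys: under 'make -j', each recipe's
output comes out as one contiguous block instead of interleaving with
other jobs. A toy demonstration (the makefile is invented; it uses
'.RECIPEPREFIX', which needs GNU make 3.82+, and '--output-sync', which
needs 4.0+):

```shell
# Demo: with parallel make and --output-sync, each job's output stays
# together, so 'a-end' always directly follows 'a-start' even though
# target 'b' runs concurrently.
dir=$(mktemp -d)
cat >"$dir/Makefile" <<'EOF'
.RECIPEPREFIX = >
all: a b
a:
>@echo a-start; sleep 1; echo a-end
b:
>@echo b-start; sleep 1; echo b-end
EOF
out=$(make -C "$dir" -j2 --output-sync=recurse)
printf '%s\n' "$out"
rm -r "$dir"
```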

> GC> >  Another one:
> GC> > 
> GC> >         # test all valid emission types
> GC> > 
> GC> >         Unable to parse xml file '/tmp/lmi/tmp/sample.ill': File does
> GC> >         not exist.
> GC> >         [xml_lmi.cpp : 69]


>  Please let me know what do you think and we'll rerun the tests with the
> latest master in the meanwhile.

I think I've addressed everything, so would you please do that now?
