qemu-devel

Re: [Qemu-devel] [PATCH 0/2] tests/acceptance: Update MIPS Malta ssh test


From: Aleksandar Markovic
Subject: Re: [Qemu-devel] [PATCH 0/2] tests/acceptance: Update MIPS Malta ssh test
Date: Thu, 29 Aug 2019 20:20:20 +0200

On 28.08.2019. at 23.24, "Cleber Rosa" <address@hidden> wrote:
>
> On Thu, Aug 22, 2019 at 07:59:07PM +0200, Aleksandar Markovic wrote:
> > On 22.08.2019. at 05.15, "Aleksandar Markovic" <address@hidden> wrote:
> > >
> > >
> > > On 21.08.2019. at 23.00, "Eduardo Habkost" <address@hidden> wrote:
> > > >
> > > > On Wed, Aug 21, 2019 at 10:27:11PM +0200, Aleksandar Markovic wrote:
> > > > > On 02.08.2019. at 17.37, "Aleksandar Markovic" <address@hidden> wrote:
> > > > > >
> > > > > > From: Aleksandar Markovic <address@hidden>
> > > > > >
> > > > > > This little series improves linux_ssh_mips_malta.py, both in the
> > > > > > sense of code organization and in the sense of the quantity of
> > > > > > executed tests.
> > > > > >
> > > > >
> > > > > Hello, all.
> > > > >
> > > > > I am going to send a new version in a few days, and I have a
> > > > > question for the test team:
> > > > > Currently, the outcome of the script execution is either PASS:1
> > > > > FAIL:0 or PASS:0 FAIL:1. But the test actually consists of several
> > > > > subtests. Is there any way for this single Python script to treat
> > > > > these subtests as separate tests (test cases), reporting something
> > > > > like PASS:12 FAIL:7? If yes, what would be the best way to achieve
> > > > > that?
> > > >
> > > > If you are talking about each test_*() method, they are already
> > > > treated like separate tests.  If you mean treating each
> > > > ssh_command_output_contains() call as a separate test, this might
> > > > be difficult.
> > > >
> > >
> > > Yes, I meant the latter one: individual code segments involving an
> > > invocation of ssh_command_output_contains() being treated as
> > > separate tests.
> > >
> >
> > Hello, Cleber,
> >
> > I am willing to revamp the Python file structure if needed.
> >
> > The only thing I feel a little uncomfortable about is having to reboot
> > the virtual machine for each case of ssh_command_output_contains().
> >
>
> Hi Aleksandar,
>
> The short answer is that Avocado provides no way to report "subtest"
> statuses (as a formal concept), nor does the current
> "avocado_qemu" infrastructure allow for management of VMs across
> tests.  The latter is an Avocado-VT feature, and to be honest it
> brings a good deal of problems in itself, which we decided to avoid
> here.
>
> About the lack of subtests, we (the autotest project, then the Avocado
> project) found that this concept, to be applied well, needs more than
> we could deal with initially.  For instance, Avocado has the concept
> of "pre_test" and "post_test" hooks; should those be applied to
> subtests as well?  Also, there's support for capturing system
> information (a feature called sysinfo) before and after the tests...
> again, should it be applied to subtests?  Avocado also keeps a
> well-defined results directory, and we'd have to deal with something
> like that for subtests.  With regards to the variants feature, should
> they also be applied to subtests?  The list of questions goes on and
> on.
>
> The fact that one test should not be able (as much as possible) to
> influence another test also comes into play in our initial decision
> to avoid subtests.
>
> IMO, the best way to handle this is to keep a separate logger
> tracking the test progress:
>
> https://avocado-framework.readthedocs.io/en/71.0/WritingTests.html#advanced-logging-capabilities
>
> With a change similar to:
>
> ---
> diff --git a/tests/acceptance/linux_ssh_mips_malta.py b/tests/acceptance/linux_ssh_mips_malta.py
> index 509ff929cf..0683586c35 100644
> --- a/tests/acceptance/linux_ssh_mips_malta.py
> +++ b/tests/acceptance/linux_ssh_mips_malta.py
> @@ -17,6 +17,7 @@ from avocado_qemu import Test
>  from avocado.utils import process
>  from avocado.utils import archive
>
> +progress_log = logging.getLogger("progress")
>
>  class LinuxSSH(Test):
>
> @@ -149,6 +150,7 @@ class LinuxSSH(Test):
>          stdout, _ = self.ssh_command(cmd)
>          for line in stdout:
>              if exp in line:
> +                progress_log.info('Check successful for "%s"', cmd)
>                  break
>          else:
>              self.fail('"%s" output does not contain "%s"' % (cmd, exp))
> ---
>
> You could run tests with:
>
>   $ ./tests/venv/bin/avocado --show=console,progress run \
>         --store-logging-stream progress -- tests/acceptance/linux_ssh_mips_malta.py
>
> And at the same time:
>
>   $ tail -f ~/avocado/job-results/latest/progress.INFO
>   17:20:44 INFO | Check successful for "uname -a"
>   17:20:44 INFO | Check successful for "cat /proc/cpuinfo"
>   ...
>
> I hope this helps somehow.
>
> Best regards,
> - Cleber.
>

Thanks, Cleber, for your detailed response. I'll use whatever is available,
along the lines you highlighted. I will most likely gradually modify this
test until I find the sweet spot where I am satisfied with the test behavior
and reporting, while everything also fits well into the Avocado framework.

Thanks again, both to you and Eduardo,
Aleksandar

> > Grateful in advance,
> > Aleksandar
> >
> > > > Cleber, is there something already available in the Avocado API
> > > > that would help us report more fine-grained results inside each
> > > > test case?
> > > >
> > >
> > > Thanks, that would be a better way of expressing my question.
> > >
> > > >
> > > > >
> > > > > Thanks in advance,
> > > > > Aleksandar
> > > > >
> > > > > > Aleksandar Markovic (2):
> > > > > >   tests/acceptance: Refactor and improve reporting in
> > > > > >     linux_ssh_mips_malta.py
> > > > > >   tests/acceptance: Add new test cases in linux_ssh_mips_malta.py
> > > > > >
> > > > > >  tests/acceptance/linux_ssh_mips_malta.py | 81 ++++++++++++++++++++++++++------
> > > > > >  1 file changed, 66 insertions(+), 15 deletions(-)
> > > > > >
> > > > > > --
> > > > > > 2.7.4
> > > > > >
> > > > > >
> > > >
> > > > --
> > > > Eduardo
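
The pattern Cleber suggests above (a dedicated "progress" logger that reports
each sub-check as it succeeds, while the test as a whole still yields a single
PASS/FAIL) can be sketched as a standalone snippet outside the avocado_qemu
harness. This is only an illustrative sketch: the fake_ssh_command() helper
below is hypothetical and stands in for the real SSH round-trip to the guest;
check_output_contains() mirrors the role of ssh_command_output_contains() from
the thread.

```python
import logging

# Dedicated logger for per-check progress, separate from the test's
# own pass/fail result (mirrors the "progress" logger in the patch).
progress_log = logging.getLogger("progress")
logging.basicConfig(level=logging.INFO,
                    format="%(asctime)s %(levelname)s | %(message)s")


def fake_ssh_command(cmd):
    """Hypothetical stand-in for an SSH round-trip to the guest.

    Returns the command's stdout as a list of lines, as the real
    ssh_command() in linux_ssh_mips_malta.py does.
    """
    canned_outputs = {
        "uname -a": ["Linux malta 4.19.0 #1 mips GNU/Linux"],
        "cat /proc/cpuinfo": ["cpu model : MIPS 24Kc V0.0"],
    }
    return canned_outputs.get(cmd, [])


def check_output_contains(cmd, exp):
    """Report progress for one sub-check without failing the whole test.

    Logs a success line when `exp` appears in the output of `cmd`;
    returns False (instead of calling self.fail()) so the caller
    decides how to aggregate the results.
    """
    for line in fake_ssh_command(cmd):
        if exp in line:
            progress_log.info('Check successful for "%s"', cmd)
            return True
    return False


if __name__ == "__main__":
    # Each call is one "sub-check"; the overall outcome is still a
    # single boolean, but progress is visible check by check.
    results = [
        check_output_contains("uname -a", "mips"),
        check_output_contains("cat /proc/cpuinfo", "MIPS"),
    ]
    print("PASS:%d FAIL:%d" % (sum(results), len(results) - sum(results)))
```

Tailing the progress stream (or watching the console with
`--show=console,progress`, as in the real Avocado invocation) then shows one
line per successful check, which is as close to "subtest" reporting as the
framework allows without formal subtest support.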

