

Re: [Qemu-devel] Improving QMP test coverage

From: Cleber Rosa
Subject: Re: [Qemu-devel] Improving QMP test coverage
Date: Tue, 25 Jul 2017 21:21:13 -0400
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:52.0) Gecko/20100101 Thunderbird/52.2.1

On 07/24/2017 02:56 AM, Markus Armbruster wrote:
> Test code language is orthogonal to verification method (with code
> vs. with diff).  Except verifying with shell code would be obviously
> nuts[*].
> The existing iotests written in Python verify with code, and the ones
> written in shell verify with diff.  Doesn't mean that we have to port
> from Python to shell to gain "verify with diff".

The fact that they are still subject to "verify with diff" is
interesting.  IMO, exit status makes a lot more sense.

I only raised this point, and gave my opinion, because:

1) Shell-based tests *with* output checking have many use cases and
proven value
2) Python-based tests *without* output checking also have use cases
and value

To me, this shows that language flexibility is a must, and that output
checking can be an unnecessary burden.  If at all possible, it should
be optional.

Implementation-wise, this seems pretty simple.  Tests based on any
language can communicate basic success/failure by exit status code
and/or by recorded output.  If the ".out" file exists, use it; if not,
fall back to the exit status (or the other way around, as long as the
behavior is predictable).  For instance, the current implementation of
iotests.main() discards the exit status, which could be one of the
PASS/FAIL criteria.
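The fallback scheme above could be sketched like this.  This is a
hypothetical helper for illustration only, not the actual qemu-iotests
implementation; the function name and arguments are made up:

```python
import os

def determine_result(test_path, exit_status, actual_output):
    """Decide PASS/FAIL for a test: prefer the recorded ".out" file
    when one exists, otherwise fall back to the exit status.

    Hypothetical sketch of the scheme discussed above."""
    out_file = test_path + ".out"
    if os.path.exists(out_file):
        with open(out_file) as f:
            expected = f.read()
        return "PASS" if actual_output == expected else "FAIL"
    # No recorded output: the exit status is all we have.
    return "PASS" if exit_status == 0 else "FAIL"
```

A harness built this way stays predictable: the presence of the ".out"
file alone selects the verification method.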

> I don't doubt that featureful test frameworks like Avocado provide
> adequate tools to debug tests.  The lure of the shell is its perceived
> simplicity: everybody knows (well, should know) how to write and debug
> simple shell scripts.  Of course, the simplicity evaporates when the
> scripts grow beyond "simple".  Scare quotes, because what's simple for
> Alice may not be so simple for Bob.

Agreed.  That's why I avoided "simple" and "complex" in my previous
messages.  A statement such as "tool XYZ usually scales better" has a
better chance of being agreed upon by both Alice and Bob.

>> BTW, I'll defer the discussion of using an external tool to check the
>> output and determine test success/failure, because it is IMO a
>> complementary topic, and I believe I understand its use cases.
> Yes.  Regardless, I want to tell you *now* how tired of writing code to
> verify test output I am.  Even of reading it.  Output text is easier to
> read than code that tries to verify output text.  Diffs are easier to
> read than stack backtraces.

Point taken.  And truth be told: the current shell-based qemu-iotests
excel at this, with their simple and effective "run_qemu <<<QMP_INPUT"
pattern.  The cons are:

1) Some use cases do not fit this pattern so nicely, and end up
requiring workarounds (such as filters)

2) The expected output is not readily accessible

Considering that, for the specific case of tests targeting QMP, most
of the action comes down to the message exchanges (commands sent,
responses received), how would you feel about a descriptive approach
to the communication?  Something along these lines:

-> { "execute": "change",
     "arguments": { "device": "ide1-cd0",
                    "target": "/srv/images/Fedora-12-x86_64-DVD.iso" } }
<- { "return": {} }

If this looks familiar, that's because it is a snippet from the
qapi-schema.json file.  It's documentation, so it should be easy to
read, *but* there's no reason it cannot be treated as code too.
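As a sketch of how such a transcript could become executable, here is
a hypothetical parser that turns "->"/"<-" blocks into (command,
expected-response) pairs.  Nothing here is existing qemu-iotests code;
the accumulate-until-next-arrow strategy is just one possible way to
handle multi-line JSON messages:

```python
import json

def parse_exchanges(text):
    """Parse a "->"/"<-" style QMP transcript into a list of
    (command, expected_response) tuples.  Hypothetical sketch."""
    exchanges = []
    direction, buf = None, []

    def flush():
        nonlocal direction, buf
        if direction is None:
            return
        msg = json.loads(" ".join(buf))
        if direction == "->":
            exchanges.append([msg, None])
        else:
            exchanges[-1][1] = msg
        direction, buf = None, []

    for line in text.splitlines():
        stripped = line.strip()
        if stripped.startswith(("->", "<-")):
            flush()                      # previous message is complete
            direction, buf = stripped[:2], [stripped[2:]]
        elif stripped:
            buf.append(stripped)         # continuation of a message
    flush()
    return [tuple(e) for e in exchanges]
```

Feeding it the "change" snippet above yields one pair: the command
dict and the expected `{"return": {}}` response.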

This approach is similar to Python's doctest[1], in which a docstring
contains snippets of Python code to be evaluated, along with their
expected output.  Example:

def factorial(n):
    """Return the factorial of n, an exact integer >= 0.

    If the result is small enough to fit in an int, return an int.
    Else return a long.

    >>> [factorial(n) for n in range(6)]
    [1, 1, 2, 6, 24, 120]
    """
    result = 1
    for i in range(2, n + 1):
        result *= i
    return result

The lines that match the interpreter prompt (">>>") are evaluated, and
the subsequent lines are the expected output.

I don't think all QMP tests can be modeled like this, inside literal
strings, but having blocks that can be used when it makes sense seems
logical to me.  Pseudo code:

test_change() {

    -> { "execute": "change",
         "arguments": { "device": "ide1-cd0",
                        "target": "/srv/images/Fedora-12-x86_64-DVD.iso" } }
    <- { "return": {} }
}
IMO, the qmp() function can provide a similar quality of readability
to diff-based verification.
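To make the idea concrete, a runner for such descriptive blocks could
look like the sketch below.  The qmp_send callable is a stand-in for
whatever transport the harness provides (e.g. something like iotests'
qmp()); none of this is actual qemu-iotests API:

```python
def run_exchanges(exchanges, qmp_send):
    """Execute (command, expected_response) pairs against a QMP send
    function and collect any mismatches.  Hypothetical sketch."""
    failures = []
    for command, expected in exchanges:
        actual = qmp_send(command)
        if actual != expected:
            failures.append((command, expected, actual))
    return failures

# Usage against a stub transport that answers every command with an
# empty "return", as a well-behaved "change" command would:
exchanges = [({"execute": "change",
               "arguments": {"device": "ide1-cd0",
                             "target": "/srv/images/Fedora-12-x86_64-DVD.iso"}},
              {"return": {}})]
failures = run_exchanges(exchanges, lambda cmd: {"return": {}})
```

An empty failures list means every exchange matched; a non-empty one
carries enough context to print a readable, diff-like report.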

How does it sound?  As with previous suggestions, if people see value
in this, I can certainly follow up with a PoC.

>> [1] -
>> http://avocado-framework.readthedocs.io/en/52.0/api/utils/avocado.utils.html#avocado.utils.process.run
>> [2] -
>> http://avocado-framework.readthedocs.io/en/52.0/WritingTests.html#advanced-logging-capabilities
>> [3] - https://www.youtube.com/watch?v=htUbOsh8MZI
> [*] Very many nutty things are more obviously nuts in shell.  It's an
> advantage of sorts ;)

We all know of amazing software that gets written using the most
improbable tools and languages.  It makes the world a much more
interesting place!

[1] - https://docs.python.org/2/library/doctest.html

Cleber Rosa
[ Sr Software Engineer - Virtualization Team - Red Hat ]
[ Avocado Test Framework - avocado-framework.github.io ]
[  7ABB 96EB 8B46 B94D 5E0F  E9BB 657E 8D33 A5F2 09F3  ]

