Re: [PATCH v2 2/2] iotests/149: Skip on unsupported ciphers

From: Hanna Reitz
Subject: Re: [PATCH v2 2/2] iotests/149: Skip on unsupported ciphers
Date: Thu, 18 Nov 2021 16:53:02 +0100
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:91.0) Gecko/20100101 Thunderbird/91.3.0

On 17.11.21 16:46, Daniel P. Berrangé wrote:
> On Wed, Nov 17, 2021 at 04:17:07PM +0100, Hanna Reitz wrote:
>> Whenever qemu-img or qemu-io report that some cipher is unsupported,
>> skip the whole test, because that probably means qemu has been
>> configured with the gnutls crypto backend.
>>
>> We could tailor the algorithm list to what gnutls supports, but this is
>> a test that is run rather rarely anyway (because it requires
>> password-less sudo), and so it seems better and easier to skip it.  When
>> this test is intentionally run to check LUKS compatibility, it seems
>> better not to limit the algorithms but to keep the list extensive.
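
For illustration, the kind of check meant here might look like this; the helper name, the error-string match, and the qemu-img options are illustrative, not taken from the actual patch:

import subprocess

import iotests  # qemu-iotests helper module


def create_luks_image(path, cipher_alg):
    # Hypothetical helper: try to create a LUKS image with the given
    # cipher, and skip the whole test if the crypto backend rejects it.
    proc = subprocess.run(
        ['qemu-img', 'create', '-f', 'luks',
         '--object', 'secret,id=sec0,data=123456',
         '-o', 'key-secret=sec0,cipher-alg=' + cipher_alg,
         path, '1M'],
        capture_output=True, text=True)
    if 'unsupported' in proc.stderr.lower():
        # Most likely a build with the gnutls crypto backend, which
        # lacks some of the algorithms this test exercises.
        iotests.notrun('cipher %s unsupported by this build' % cipher_alg)
    return proc
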
> I'd really like to figure out a way to be able to partially run
> this test. When I have hit problems in the past, I needed to
> run specific tests, but then the expected output always contains
> everything.  I've thought of a few options:
>
>   - Split it into many standalone tests - eg
>       tests/qemu-iotests/tests/luks-host-$ALG

I wouldn’t hate it, though we would need some common file from which shared code can be sourced.

>   - Split only the expected output, eg
>       149-$SUBTEST
>
>     and have a way to indicate which of the expected output files
>     we need to concatenate for the set of subtests that we
>     run.

I’d prefer it if the test could verify its own output so that the reference output is basically just the usual unittest output of dots, “Ran XX tests” and “OK”.

(Two reasons: You can then easily disable some tests with the reference output changing only slightly; and it makes reviewing a test much easier because then I don’t need to verify the reference output...)
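
Roughly, such a unittest-style shape could look like this (class and method names are invented for illustration); the reference output would then stay the bare unittest summary no matter what the cases do internally:

import subprocess

import iotests


class TestLuks(iotests.QMPTestCase):
    def test_aes_256_xts(self):
        proc = subprocess.run(
            ['qemu-img', 'create', '-f', 'luks',
             '--object', 'secret,id=sec0,data=123456',
             '-o', 'key-secret=sec0,cipher-alg=aes-256,cipher-mode=xts',
             iotests.file_path('test.img'), '1M'],
            capture_output=True, text=True)
        # The case verifies its own result instead of relying on a
        # reference-output diff.
        self.assertEqual(proc.returncode, 0, proc.stderr)


if __name__ == '__main__':
    iotests.main(supported_fmts=['luks'])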

>   - Introduce some template syntax in the expected output
>     that can be used to munge the output.
>
>   - Stop comparing expected output entirely and just
>     turn this into a normal Python unit test.

That’s something that might indeed be useful for unittest-style iotests.

Then again, we already allow them to skip any test case, and it will be counted as success; is that not sufficient?
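
(For comparison, plain unittest skipping, which the harness accepts as a passing run; the SUPPORTED set here is a made-up stand-in for a real capability probe:)

import unittest

# Hypothetical: algorithms the build supports, probed once up front
# (e.g. with a qemu-img invocation like the one sketched earlier).
SUPPORTED = {'aes-128', 'aes-256'}


class TestLuks(unittest.TestCase):
    @unittest.skipUnless('twofish-256' in SUPPORTED,
                         'twofish-256 not supported by this crypto backend')
    def test_twofish_256(self):
        pass  # the actual checks would go here


if __name__ == '__main__':
    unittest.main()  # prints 's' for the skipped case, but still reports OK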

>   - Insert your idea here?

I personally most prefer unittest-style tests, because with them you can just %s/def test_/def xtest_/, then reverse this change for all the cases you want to run, and then adjust the reference output to match the number of tests run.
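
(That trick works because unittest's loader only collects methods whose names start with its 'test' prefix, so a renamed case simply disappears from the run:)

import unittest


class TestLuks(unittest.TestCase):
    def test_aes_256(self):       # collected: name starts with 'test'
        pass

    def xtest_twofish_256(self):  # ignored by the default loader
        pass


if __name__ == '__main__':
    unittest.main()  # runs only test_aes_256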

So I suppose the best idea I have is to convert this test into unittest style, and then it should be more modular when it comes to what subtests it wants to run.

I mean, it doesn’t have to truly be an iotests.QMPTestCase.  It would be sufficient if the test itself verified the output of every command it invokes (instead of leaving that to a separate reference output file) and then printed something like “OK” afterwards.  Then we could trivially skip some cases just by printing “OK” even if they weren’t run.
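
(A rough sketch of that shape, with every name invented for illustration:)

def do_case(name):
    # Stand-in for the real work (creating images, running qemu-io, ...).
    return 'ok'


def run_case(name, enabled=True):
    # Each case checks its own results; only the terse verdict reaches
    # the reference output, so a disabled case can print the same 'OK'
    # without actually running.
    if enabled:
        assert 'error' not in do_case(name)
    print('%s: OK' % name)


for alg in ['aes-256', 'twofish-256']:
    run_case(alg, enabled=(alg != 'twofish-256'))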

Hanna



