Re: [Qemu-devel] [PATCH 0/2] Two small fixes to the streaming test case.
From: Paolo Bonzini
Subject: Re: [Qemu-devel] [PATCH 0/2] Two small fixes to the streaming test case.
Date: Wed, 06 Jun 2012 14:15:17 +0200
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:12.0) Gecko/20120430 Thunderbird/12.0.1
> A real patch series is preferable, having the patches as part of your
> signature makes quoting them a bit harder with Thunderbird...
Oops. Unintended, sorry.
>> From 644fda4d6da1a5babfc8884f255d87ebaf847616 Mon Sep 17 00:00:00 2001
>> From: Paolo Bonzini <address@hidden>
>> Date: Wed, 23 May 2012 13:07:56 +0200
>> Subject: [PATCH 1/2] qemu-iotests: fill streaming test image with data
>>
>> This avoids that the job completes too fast when the file system
>> reports the hole to QEMU (via FIEMAP or SEEK_HOLE).
>>
>> Signed-off-by: Paolo Bonzini <address@hidden>
>
> Does this really fix the cause or just a symptom? The commit message
> sounds like a race and now we happen to win it again. But maybe it's
> just a bad wording that gives the impression.
No, unfortunately that's exactly the case. The whole TestStreamStop
test case is racy.
If the job completes before we can cancel it, it fails. If we remove
the sleep, the job will be canceled before it has even started; the test
then succeeds, but I'm not sure it is testing anything worthwhile.
But if the image is left sparse, then the job has really nothing to do
except read one L2 table. You're pretty much guaranteed to complete
the job too soon, and the test fails.
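To make the race concrete, the sequence in TestStreamStop boils down to
the following (paraphrasing the test as it looks with these patches;
'drive0' is the device name used throughout 030):

    result = self.vm.qmp('block-stream', device='drive0')
    self.assert_qmp(result, 'return', {})

    time.sleep(1)                                # hope the job is still running
    events = self.vm.get_qmp_events(wait=False)  # no BLOCK_JOB_COMPLETED yet?
    self.assertEqual(events, [], 'unexpected QMP event: %s' % events)

    self.cancel_and_wait()                       # fails if the job already finished

Filling the backing image only makes the job long enough that the cancel
usually wins again; it does not remove the race.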
>> ---
>> tests/qemu-iotests/030 | 13 ++++++++++++-
>> 1 file changed, 12 insertions(+), 1 deletion(-)
>>
>> diff --git a/tests/qemu-iotests/030 b/tests/qemu-iotests/030
>> index eb7bf99..4ab7d62 100755
>> --- a/tests/qemu-iotests/030
>> +++ b/tests/qemu-iotests/030
>> @@ -21,6 +21,7 @@
>> import os
>> import iotests
>> from iotests import qemu_img, qemu_io
>> +import struct
>>
>> backing_img = os.path.join(iotests.test_dir, 'backing.img')
>> mid_img = os.path.join(iotests.test_dir, 'mid.img')
>> @@ -48,11 +49,21 @@ class ImageStreamingTestCase(iotests.QMPTestCase):
>>
>> self.assert_no_active_streams()
>>
>> + def create_image(self, name, size):
>> + file = open(name, 'w')
>> + i = 0
>> + while i < size:
>> + sector = struct.pack('>l504xl', i / 512, i / 512)
>> + file.write(sector)
>> + i = i + 512
>> + file.close()
>> +
>> +
>> class TestSingleDrive(ImageStreamingTestCase):
>> image_len = 1 * 1024 * 1024 # MB
>>
>> def setUp(self):
>> - qemu_img('create', backing_img, str(TestSingleDrive.image_len))
>> + self.create_image(backing_img, TestSingleDrive.image_len)
>
> How about just adding a qemu_io call instead? Looks a bit nicer to me
> than reimplementing it, and would also work if we decided to use a
> different backing file format later.
We do not test the content of the file, but we should, and for that
purpose you need to write a separate pattern to each sector (which is
what the struct.pack call above does).
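To give an idea of what such a content check could look like, here is a
sketch of a hypothetical helper (not part of these patches) that verifies
the per-sector pattern written by create_image() on the raw backing file;
checking the streamed image itself would have to go through qemu-io rather
than a plain read, but the point is the same: each sector must carry its
own number.

    def verify_image(self, name, size):
        # Hypothetical check, not in the patch: every 512-byte sector written
        # by create_image() starts and ends with its own sector number.
        file = open(name, 'rb')
        i = 0
        while i < size:
            first, last = struct.unpack('>l504xl', file.read(512))
            self.assertEqual(first, i // 512)
            self.assertEqual(last, i // 512)
            i = i + 512
        file.close()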
>> qemu_img('create', '-f', iotests.imgfmt, '-o', 'backing_file=%s' %
>> backing_img, mid_img)
>> qemu_img('create', '-f', iotests.imgfmt, '-o', 'backing_file=%s' %
>> mid_img, test_img)
>> self.vm = iotests.VM().add_drive(test_img)
>> --
>> 1.7.10.1
>
>> From 3ba5810860b2eaba1f01c257aa13e28c6f9e2b3f Mon Sep 17 00:00:00 2001
>> From: Paolo Bonzini <address@hidden>
>> Date: Wed, 23 May 2012 12:52:07 +0200
>> Subject: [PATCH 2/2] qemu-iotests: start vms in qtest mode
>>
>> This way, they will not execute any code at all. However, right now
>> one test is "relying" on being slowed down by TCG executing random
>> crap, so change the timeouts there.
>>
>> Signed-off-by: Paolo Bonzini <address@hidden>
>
> BIOS code is "random crap"? :-)
Didn't mean to insult SeaBIOS in any way, but for the purposes of
qemu-iotests it is. :)
> But I like the idea of using the qtest mode here.
>
>> ---
>> tests/qemu-iotests/030 | 2 +-
>> tests/qemu-iotests/iotests.py | 4 +++-
>> 2 files changed, 4 insertions(+), 2 deletions(-)
>>
>> diff --git a/tests/qemu-iotests/030 b/tests/qemu-iotests/030
>> index 4ab7d62..cc671dd 100755
>> --- a/tests/qemu-iotests/030
>> +++ b/tests/qemu-iotests/030
>> @@ -147,7 +147,7 @@ class TestStreamStop(ImageStreamingTestCase):
>> result = self.vm.qmp('block-stream', device='drive0')
>> self.assert_qmp(result, 'return', {})
>>
>> - time.sleep(1)
>> + time.sleep(0.1)
>> events = self.vm.get_qmp_events(wait=False)
>> self.assertEqual(events, [], 'unexpected QMP event: %s' % events)
>
> Why is waiting for too _long_ a problem? I would understand if we waited
> too short so that the QMP event hasn't arrived yet. But shouldn't you
> still get all QMP events if you wait one more second before you fetch them?
If the BLOCK_JOB_COMPLETED event has already arrived, you cannot cancel the
job anymore. The next line in the context is:
self.cancel_and_wait()
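A more race-tolerant variant could check the pending events again right
before cancelling, roughly like this (hypothetical, not what the test
currently does):

    events = self.vm.get_qmp_events(wait=False)
    if any(e['event'] == 'BLOCK_JOB_COMPLETED' for e in events):
        # The stream already finished; there is nothing left to cancel.
        self.skipTest('job completed before it could be cancelled')
    self.cancel_and_wait()

Even then, a small window remains between the check and the cancel, so it
only narrows the race rather than eliminating it.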
Paolo
>> diff --git a/tests/qemu-iotests/iotests.py b/tests/qemu-iotests/iotests.py
>> index e27b40e..e05b1d6 100644
>> --- a/tests/qemu-iotests/iotests.py
>> +++ b/tests/qemu-iotests/iotests.py
>> @@ -54,7 +54,9 @@ class VM(object):
>> self._qemu_log_path = os.path.join(test_dir, 'qemu-log.%d' %
>> os.getpid())
>> self._args = qemu_args + ['-chardev',
>> 'socket,id=mon,path=' + self._monitor_path,
>> - '-mon', 'chardev=mon,mode=control', '-nographic']
>> + '-mon', 'chardev=mon,mode=control',
>> + '-qtest', 'stdio', '-machine', 'accel=qtest',
>> + '-display', 'none', '-vga', 'none']
>> self._num_drives = 0
>>
>> def add_drive(self, path, opts=''):
>> --
>> 1.7.10.1
>
> Kevin