Re: [Qemu-block] [PATCH 42/42] qemu-iotests: Test job-* with block jobs
From: Max Reitz
Subject: Re: [Qemu-block] [PATCH 42/42] qemu-iotests: Test job-* with block jobs
Date: Tue, 15 May 2018 01:44:41 +0200
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:52.0) Gecko/20100101 Thunderbird/52.7.0
On 2018-05-09 18:26, Kevin Wolf wrote:
> This adds a test case that tests the new job-* QMP commands with
> mirror and backup block jobs.
>
> Signed-off-by: Kevin Wolf <address@hidden>
> ---
> tests/qemu-iotests/219     | 201 ++++++++++++++++++++++++++++
> tests/qemu-iotests/219.out | 327 +++++++++++++++++++++++++++++++++++++++++++++
> tests/qemu-iotests/group   |   1 +
> 3 files changed, 529 insertions(+)
> create mode 100755 tests/qemu-iotests/219
> create mode 100644 tests/qemu-iotests/219.out
Test looks good, but it fails for me on tmpfs: at the point of the
first query-jobs, the jobs already have an offset of 65536.
> diff --git a/tests/qemu-iotests/219 b/tests/qemu-iotests/219
> new file mode 100755
> index 0000000000..6cfe54b4db
> --- /dev/null
> +++ b/tests/qemu-iotests/219
> @@ -0,0 +1,201 @@
[...]
> +with iotests.FilePath('disk.img') as disk_path, \
> + iotests.FilePath('copy.img') as copy_path, \
> + iotests.VM() as vm:
> +
> + img_size = '4M'
> + iotests.qemu_img_create('-f', iotests.imgfmt, disk_path, img_size)
> + iotests.qemu_io('-c', 'write 0 %s' % (img_size),
> + '-f', iotests.imgfmt, disk_path)
> +
> + iotests.log('Launching VM...')
> + vm.add_blockdev(vm.qmp_to_opts({
> + 'driver': iotests.imgfmt,
> + 'node-name': 'drive0-node',
> + 'file': {
> + 'driver': 'file',
> + 'filename': disk_path,
> + },
> + }))
> + vm.launch()
> +
> +    # In order to keep things deterministic (especially progress in query-job,
> +    # but related to this also automatic state transitions like job
> +    # completion), but still get pause points often enough to avoid making this
> +    # test veey slow, it's important to have the right ratio between speed and
s/veey/very/
(Although "veey" does have its charm.)
> + # buf_size.
> + #
> +    # For backup, buf_size is hard-coded to the source image cluser size (64k),
s/cluser/cluster/
> + # so we'll pick the same for mirror. The slice time, i.e. the granularity
> + # of the rate limiting is 100ms. With a speed of 256k per second, we can
> + # get four pause points per second. This gives us 250ms per iteration,
> + # which should be enough to stay deterministic.
> +
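For what it's worth, the arithmetic in that comment checks out. A quick
back-of-the-envelope sketch (plain Python, not part of the patch; the
variable names are just mine):

```python
# With backup's hard-coded 64k buffer and a speed limit of 256 KiB/s,
# each copy iteration transfers one buffer and the rate limiter spaces
# the iterations out evenly.

speed = 262144    # rate limit in bytes per second (256 KiB/s), as in job_args
buf_size = 65536  # bytes copied per iteration (64 KiB, the cluster size)

iterations_per_sec = speed / buf_size        # pause points per second
ms_per_iteration = 1000 / iterations_per_sec # time between pause points

print(iterations_per_sec, ms_per_iteration)  # 4.0 250.0
```

So four pause points per second, 250 ms apart, comfortably longer than
the 100 ms ratelimit slice time.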
> + test_job_lifecycle(vm, 'drive-mirror', has_ready=True, job_args={
> + 'device': 'drive0-node',
> + 'target': copy_path,
> + 'sync': 'full',
> + 'speed': 262144,
> + 'buf_size': 65536,
> + })
> +
> + for auto_finalize in [True, False]:
> + for auto_dismiss in [True, False]:
> + test_job_lifecycle(vm, 'drive-backup', job_args={
> + 'device': 'drive0-node',
> + 'target': copy_path,
> + 'sync': 'full',
> + 'speed': 262144,
> + 'auto-finalize': auto_finalize,
> + 'auto-dismiss': auto_dismiss,
> + })
> +
> + vm.shutdown()
> diff --git a/tests/qemu-iotests/219.out b/tests/qemu-iotests/219.out
> new file mode 100644
> index 0000000000..e244be9ce8
> --- /dev/null
> +++ b/tests/qemu-iotests/219.out
> @@ -0,0 +1,327 @@
[...]
> +Pause/resume in READY
> +=== Testing block-job-pause/block-job-resume ===
> +{u'return': {}}
> +{u'timestamp': {u'seconds': 'SECS', u'microseconds': 'USECS'}, u'data':
> {u'status': u'standby', u'id': u'job0'}, u'event': u'JOB_STATUS_CHANGE'}
> +{u'return': [{u'status': u'standby', u'current-progress': 4194304,
> u'total-progress': 4194304, u'id': u'job0', u'type': u'mirror'}]}
> +{u'return': {}}
> +{u'timestamp': {u'seconds': 'SECS', u'microseconds': 'USECS'}, u'data':
> {u'status': u'ready', u'id': u'job0'}, u'event': u'JOB_STATUS_CHANGE'}
> +{u'return': [{u'status': u'ready', u'current-progress': 4194304,
> u'total-progress': 4194304, u'id': u'job0', u'type': u'mirror'}]}
> +=== Testing block-job-pause/job-resume ===
> +{u'return': {}}
> +{u'timestamp': {u'seconds': 'SECS', u'microseconds': 'USECS'}, u'data':
> {u'status': u'standby', u'id': u'job0'}, u'event': u'JOB_STATUS_CHANGE'}
> +{u'return': [{u'status': u'standby', u'current-progress': 4194304,
> u'total-progress': 4194304, u'id': u'job0', u'type': u'mirror'}]}
> +{u'return': {}}
> +{u'timestamp': {u'seconds': 'SECS', u'microseconds': 'USECS'}, u'data':
> {u'status': u'ready', u'id': u'job0'}, u'event': u'JOB_STATUS_CHANGE'}
> +{u'return': [{u'status': u'ready', u'current-progress': 4194304,
> u'total-progress': 4194304, u'id': u'job0', u'type': u'mirror'}]}
> +=== Testing job-pause/block-job-resume ===
> +{u'return': {}}
> +{u'timestamp': {u'seconds': 'SECS', u'microseconds': 'USECS'}, u'data':
> {u'status': u'standby', u'id': u'job0'}, u'event': u'JOB_STATUS_CHANGE'}
> +{u'return': [{u'status': u'standby', u'current-progress': 4194304,
> u'total-progress': 4194304, u'id': u'job0', u'type': u'mirror'}]}
> +{u'return': {}}
> +{u'timestamp': {u'seconds': 'SECS', u'microseconds': 'USECS'}, u'data':
> {u'status': u'ready', u'id': u'job0'}, u'event': u'JOB_STATUS_CHANGE'}
> +{u'return': [{u'status': u'ready', u'current-progress': 4194304,
> u'total-progress': 4194304, u'id': u'job0', u'type': u'mirror'}]}
> +=== Testing job-pause/job-resume ===
> +{u'return': {}}
> +{u'timestamp': {u'seconds': 'SECS', u'microseconds': 'USECS'}, u'data':
> {u'status': u'standby', u'id': u'job0'}, u'event': u'JOB_STATUS_CHANGE'}
> +{u'return': [{u'status': u'standby', u'current-progress': 4194304,
> u'total-progress': 4194304, u'id': u'job0', u'type': u'mirror'}]}
> +{u'return': {}}
> +{u'timestamp': {u'seconds': 'SECS', u'microseconds': 'USECS'}, u'data':
> {u'status': u'ready', u'id': u'job0'}, u'event': u'JOB_STATUS_CHANGE'}
> +{u'return': [{u'status': u'ready', u'current-progress': 4194304,
> u'total-progress': 4194304, u'id': u'job0', u'type': u'mirror'}]}
This is really, really mean. Don't you have any compassion for the
poor little job that just wants to call it a day (Feierabend)?
It worked so hard, and it's always on standby and instantly ready when
you need it. Yet you keep it hanging. That's not nice.
Max
> +{u'error': {u'class': u'GenericError', u'desc': u"Job 'job0' in state
> 'ready' cannot accept command verb 'finalize'"}}
> +{u'error': {u'class': u'GenericError', u'desc': u"Job 'job0' in state
> 'ready' cannot accept command verb 'dismiss'"}}
> +{u'error': {u'class': u'GenericError', u'desc': u"Job 'job0' in state
> 'ready' cannot accept command verb 'finalize'"}}
> +{u'error': {u'class': u'GenericError', u'desc': u"Job 'job0' in state
> 'ready' cannot accept command verb 'dismiss'"}}
> +{u'return': {}}
> +
> +Waiting for PENDING state...
> +{u'timestamp': {u'seconds': 'SECS', u'microseconds': 'USECS'}, u'data':
> {u'status': u'waiting', u'id': u'job0'}, u'event': u'JOB_STATUS_CHANGE'}
> +{u'timestamp': {u'seconds': 'SECS', u'microseconds': 'USECS'}, u'data':
> {u'status': u'pending', u'id': u'job0'}, u'event': u'JOB_STATUS_CHANGE'}
> +{u'timestamp': {u'seconds': 'SECS', u'microseconds': 'USECS'}, u'data':
> {u'status': u'concluded', u'id': u'job0'}, u'event': u'JOB_STATUS_CHANGE'}
> +{u'timestamp': {u'seconds': 'SECS', u'microseconds': 'USECS'}, u'data':
> {u'status': u'null', u'id': u'job0'}, u'event': u'JOB_STATUS_CHANGE'}
> +{u'return': []}