Re: [PULL v3 00/21] Block layer patches
From: Juan Quintela
Subject: Re: [PULL v3 00/21] Block layer patches
Date: Fri, 19 May 2023 22:55:05 +0200
User-agent: Gnus/5.13 (Gnus v5.13) Emacs/28.2 (gnu/linux)
Kevin Wolf <kwolf@redhat.com> wrote:
> Am 19.05.2023 um 20:48 hat Richard Henderson geschrieben:
>> On 5/19/23 10:18, Kevin Wolf wrote:
>> > The following changes since commit
>> > d009607d08d22f91ca399b72828c6693855e7325:
[Adding Peter Xu, who has worked on postcopy lately]
>> >
>> > Revert "arm/kvm: add support for MTE" (2023-05-19 08:01:15 -0700)
>> >
>> > are available in the Git repository at:
>> >
>> > https://repo.or.cz/qemu/kevin.git tags/for-upstream
>> >
>> > for you to fetch changes up to 95fdd8db61848d31fde1d9b32da7f3f76babfa25:
>> >
>> > iotests: Test commit with iothreads and ongoing I/O (2023-05-19
>> > 19:16:53 +0200)
>> >
>> > ----------------------------------------------------------------
>> > Block layer patches
>> >
>> > - qcow2 spec: Rename "zlib" compression to "deflate"
>> > - Honour graph read lock even in the main thread + prerequisite fixes
>> > - aio-posix: do not nest poll handlers (fixes infinite recursion)
>> > - Refactor QMP blockdev transactions
>> > - graph-lock: Disable locking for now
>> > - iotests/245: Check if 'compress' driver is available
>> >
>> > ----------------------------------------------------------------
>> > Akihiro Suda (1):
>> > docs/interop/qcow2.txt: fix description about "zlib" clusters
>> >
>> > Kevin Wolf (12):
>> > block: Call .bdrv_co_create(_opts) unlocked
>> > block/export: Fix null pointer dereference in error path
>> > qcow2: Unlock the graph in qcow2_do_open() where necessary
>> > qemu-img: Take graph lock more selectively
>> > test-bdrv-drain: Take graph lock more selectively
>> > test-bdrv-drain: Call bdrv_co_unref() in coroutine context
>> > blockjob: Adhere to rate limit even when reentered early
>> > graph-lock: Honour read locks even in the main thread
>> > iotests/245: Check if 'compress' driver is available
>> > graph-lock: Disable locking for now
>> > nbd/server: Fix drained_poll to wake coroutine in right AioContext
>> > iotests: Test commit with iothreads and ongoing I/O
>> >
>> > Stefan Hajnoczi (2):
>> > aio-posix: do not nest poll handlers
>> > tested: add test for nested aio_poll() in poll handlers
>> >
>> > Vladimir Sementsov-Ogievskiy (6):
>> > blockdev: refactor transaction to use Transaction API
>> > blockdev: transactions: rename some things
>> > blockdev: qmp_transaction: refactor loop to classic for
>> > blockdev: transaction: refactor handling transaction properties
>> > blockdev: use state.bitmap in block-dirty-bitmap-add action
>> > blockdev: qmp_transaction: drop extra generic layer
>>
>> Test failure:
>>
>> https://gitlab.com/qemu-project/qemu/-/jobs/4317480370#L3347
>>
>> 194 fail [18:42:03] [18:42:05] 1.2s
>> output mismatch (see
>> /builds/qemu-project/qemu/build/tests/qemu-iotests/scratch/raw-file-194/194.out.bad)
>> --- /builds/qemu-project/qemu/tests/qemu-iotests/194.out
>> +++
>> /builds/qemu-project/qemu/build/tests/qemu-iotests/scratch/raw-file-194/194.out.bad
>> @@ -14,7 +14,6 @@
>> {"return": {}}
>> {"data": {"status": "setup"}, "event": "MIGRATION", "timestamp":
>> {"microseconds": "USECS", "seconds": "SECS"}}
>> {"data": {"status": "active"}, "event": "MIGRATION", "timestamp":
>> {"microseconds": "USECS", "seconds": "SECS"}}
>> -{"data": {"status": "postcopy-active"}, "event": "MIGRATION", "timestamp":
>> {"microseconds": "USECS", "seconds": "SECS"}}
>> {"data": {"status": "completed"}, "event": "MIGRATION", "timestamp":
>> {"microseconds": "USECS", "seconds": "SECS"}}
>> Gracefully ending the `drive-mirror` job on source...
>
> You got the same failure on mst's pull request, so this seems to be
> unrelated to the pull request at least.
>
> Maybe it is related to us using different test runners now and the test
> isn't working right there?
>
> I tried to reproduce locally with the same options as the disable-tcg CI
> job uses, but it always passes. Juan, do you have an idea what it could
> mean if on some CI system the "postcopy-active" event is missing?
The only thing that comes to mind is that the machine goes so fast
that migration completes during precopy and never waits for postcopy. But
that is a wild guess; I will try to take a look at the failure later.
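The race described above can be sketched as follows. This is a hypothetical illustration, not the actual code of iotest 194: the helper name `drain_migration_events` and the list-based event model are assumptions for the sake of the example. The point is that if the guest's dirty pages are drained before `migrate-start-postcopy` takes effect, migration completes entirely in precopy and the "postcopy-active" MIGRATION event never fires, so a test should not assert on it unconditionally.

```python
# Hypothetical sketch (not the real iotest 194 harness): collect MIGRATION
# status strings until "completed", treating "postcopy-active" as optional.
def drain_migration_events(events):
    """events: iterable of MIGRATION status strings, in arrival order."""
    seen = []
    for status in events:
        seen.append(status)
        if status == 'completed':
            break
    # Only assert on statuses that are guaranteed to occur;
    # 'postcopy-active' depends on timing and may legitimately be absent.
    required = ['setup', 'active', 'completed']
    return all(s in seen for s in required)

# Fast machine: precopy finishes before postcopy engages.
print(drain_migration_events(['setup', 'active', 'completed']))
# Slower machine: postcopy actually kicks in.
print(drain_migration_events(['setup', 'active', 'postcopy-active',
                              'completed']))
```

Both calls return True under this relaxed check; the reference output of the current test, by contrast, hard-codes the "postcopy-active" line, which is exactly what the diff above shows going missing.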
> Kevin
Regards, Juan.