qemu-block

Re: [PATCH v1 5/9] hw/virtio: introduce virtio_device_should_start


From: Michael S. Tsirkin
Subject: Re: [PATCH v1 5/9] hw/virtio: introduce virtio_device_should_start
Date: Mon, 21 Nov 2022 17:37:11 -0500

On Tue, Nov 15, 2022 at 05:46:58PM +0100, Christian Borntraeger wrote:
> 
> 
> On 15.11.22 at 17:40, Christian Borntraeger wrote:
> > 
> > 
> > On 15.11.22 at 17:05, Alex Bennée wrote:
> > > 
> > > Christian Borntraeger <borntraeger@linux.ibm.com> writes:
> > > 
> > > > On 15.11.22 at 15:31, Alex Bennée wrote:
> > > > > "Michael S. Tsirkin" <mst@redhat.com> writes:
> > > > > 
> > > > > > On Mon, Nov 14, 2022 at 06:15:30PM +0100, Christian Borntraeger wrote:
> > > > > > > 
> > > > > > > 
> > > > > > > On 14.11.22 at 18:10, Michael S. Tsirkin wrote:
> > > > > > > > On Mon, Nov 14, 2022 at 05:55:09PM +0100, Christian Borntraeger wrote:
> > > > > > > > > 
> > > > > > > > > 
> > > > > > > > > On 14.11.22 at 17:37, Michael S. Tsirkin wrote:
> > > > > > > > > > On Mon, Nov 14, 2022 at 05:18:53PM +0100, Christian Borntraeger wrote:
> > > > > > > > > > > On 08.11.22 at 10:23, Alex Bennée wrote:
> > > > > > > > > > > > The previous fix to virtio_device_started revealed a
> > > > > > > > > > > > problem in its use by both the core and the device
> > > > > > > > > > > > code. The core code should be able to handle the
> > > > > > > > > > > > device "starting" while the VM isn't running, so that
> > > > > > > > > > > > migration state can be restored. To resolve this dual
> > > > > > > > > > > > use, introduce a new helper for the vhost-user
> > > > > > > > > > > > backends, which all use it to feed a should_start
> > > > > > > > > > > > variable.
> > > > > > > > > > > > 
> > > > > > > > > > > > We can also pick up a change to
> > > > > > > > > > > > vhost_user_blk_set_status while we are at it, which
> > > > > > > > > > > > follows the same pattern.
> > > > > > > > > > > > 
> > > > > > > > > > > > Fixes: 9f6bcfd99f (hw/virtio: move vm_running check to virtio_device_started)
> > > > > > > > > > > > Fixes: 27ba7b027f (hw/virtio: add boilerplate for vhost-user-gpio device)
> > > > > > > > > > > > Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
> > > > > > > > > > > > Cc: "Michael S. Tsirkin" <mst@redhat.com>
> > > > > > > > > > > 
> > > > > > > > > > > Hmmm, is this
> > > > > > > > > > > commit 259d69c00b67c02a67f3bdbeeea71c2c0af76c35
> > > > > > > > > > > Author:     Alex Bennée <alex.bennee@linaro.org>
> > > > > > > > > > > AuthorDate: Mon Nov 7 12:14:07 2022 +0000
> > > > > > > > > > > Commit:     Michael S. Tsirkin <mst@redhat.com>
> > > > > > > > > > > CommitDate: Mon Nov 7 14:08:18 2022 -0500
> > > > > > > > > > > 
> > > > > > > > > > >         hw/virtio: introduce virtio_device_should_start
> > > > > > > > > > > 
> > > > > > > > > > > an older version?
> > > > > > > > > > 
> > > > > > > > > > This is what got merged:
> > > > > > > > > > https://lore.kernel.org/r/20221107121407.1010913-1-alex.bennee%40linaro.org
> > > > > > > > > > This patch was sent after I merged the RFC.
> > > > > > > > > > I think the only difference is the commit log but I might be missing something.
> > > > > > > > > > 
> > > > > > > > > > > This does not seem to fix the regression that I have reported.
> > > > > > > > > > 
> > > > > > > > > > This was applied on top of 9f6bcfd99f which IIUC does, right?
> > > > > > > > > > 
> > > > > > > > > > 
> > > > > > > > > 
> > > > > > > > > QEMU master still fails for me for suspend/resume to disk:
> > > > > > > > > 
> > > > > > > > > #0  0x000003ff8e3980a6 in __pthread_kill_implementation () at /lib64/libc.so.6
> > > > > > > > > #1  0x000003ff8e348580 in raise () at /lib64/libc.so.6
> > > > > > > > > #2  0x000003ff8e32b5c0 in abort () at /lib64/libc.so.6
> > > > > > > > > #3  0x000003ff8e3409da in __assert_fail_base () at /lib64/libc.so.6
> > > > > > > > > #4  0x000003ff8e340a4e in  () at /lib64/libc.so.6
> > > > > > > > > #5  0x000002aa1ffa8966 in vhost_vsock_common_pre_save (opaque=<optimized out>) at ../hw/virtio/vhost-vsock-common.c:203
> > > > > > > > > #6  0x000002aa1fe5e0ee in vmstate_save_state_v (f=f@entry=0x2aa21bdc170, vmsd=0x2aa204ac5f0 <vmstate_virtio_vhost_vsock>, opaque=0x2aa21bac9f8, vmdesc=vmdesc@entry=0x3fddc08eb30, version_id=version_id@entry=0) at ../migration/vmstate.c:329
> > > > > > > > > #7  0x000002aa1fe5ebf8 in vmstate_save_state (f=f@entry=0x2aa21bdc170, vmsd=<optimized out>, opaque=<optimized out>, vmdesc_id=vmdesc_id@entry=0x3fddc08eb30) at ../migration/vmstate.c:317
> > > > > > > > > #8  0x000002aa1fe75bd0 in vmstate_save (f=f@entry=0x2aa21bdc170, se=se@entry=0x2aa21bdbe90, vmdesc=vmdesc@entry=0x3fddc08eb30) at ../migration/savevm.c:908
> > > > > > > > > #9  0x000002aa1fe79584 in qemu_savevm_state_complete_precopy_non_iterable (f=f@entry=0x2aa21bdc170, in_postcopy=in_postcopy@entry=false, inactivate_disks=inactivate_disks@entry=true) at ../migration/savevm.c:1393
> > > > > > > > > #10 0x000002aa1fe79a96 in qemu_savevm_state_complete_precopy (f=0x2aa21bdc170, iterable_only=iterable_only@entry=false, inactivate_disks=inactivate_disks@entry=true) at ../migration/savevm.c:1459
> > > > > > > > > #11 0x000002aa1fe6d6ee in migration_completion (s=0x2aa218ef600) at ../migration/migration.c:3314
> > > > > > > > > #12 migration_iteration_run (s=0x2aa218ef600) at ../migration/migration.c:3761
> > > > > > > > > #13 migration_thread (opaque=opaque@entry=0x2aa218ef600) at ../migration/migration.c:3989
> > > > > > > > > #14 0x000002aa201f0b8c in qemu_thread_start (args=<optimized out>) at ../util/qemu-thread-posix.c:505
> > > > > > > > > #15 0x000003ff8e396248 in start_thread () at /lib64/libc.so.6
> > > > > > > > > #16 0x000003ff8e41183e in thread_start () at /lib64/libc.so.6
> > > > > > > > > 
> > > > > > > > > Michael, your previous branch did work if I recall correctly.
> > > > > > > > 
> > > > > > > > That one was failing under GitHub CI though (for reasons we
> > > > > > > > didn't really address, such as a disconnect during stop
> > > > > > > > causing a recursive call to stop, but there you are).
> > > > > > > Even the double revert of everything?
> > > > > > 
> > > > > > I don't remember at this point.
> > > > > > 
> > > > > > > So how do we proceed now?
> > > > > > 
> > > > > > I'm hopeful Alex will come up with a fix.
> > > > > I need to replicate the failing test for that. Which test is failing?
> > > > 
> > > > 
> > > > Pretty much the same as before: guest with vsock, managedsave and restore.
> > > 
> > > If this isn't in our test suite I'm going to need exact steps.
> > 
> > Just get any libvirt guest, add
> >      <vsock model='virtio'>
> >        <cid auto='yes'/>
> >      </vsock>
> > 
> > to your libvirt XML. Start the guest (with the new XML).
> > Run virsh managedsave: qemu crashes, on both x86 and s390.
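For reference, the steps above amount to the following sketch (the guest name "f36" is illustrative, taken from the libvirt log below; any defined libvirt guest with the vsock device will do, and the host needs a QEMU build containing the commits under discussion):

```shell
# Reproduction sketch -- requires a libvirt host, not runnable standalone.
virsh edit f36            # add the <vsock> stanza from above inside <devices>
virsh start f36           # boot the guest with the new XML
virsh managedsave f36     # QEMU aborts in vhost_vsock_common_pre_save
```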
> 
> 
> the libvirt log:
> 
> /home/cborntra/REPOS/qemu/build/x86_64-softmmu/qemu-system-x86_64 \
> -name guest=f36,debug-threads=on \
> -S \
> -object '{"qom-type":"secret","id":"masterKey0","format":"raw","file":"/var/lib/libvirt/qemu/domain-1-f36/master-key.aes"}' \
> -machine pc-i440fx-7.2,usb=off,dump-guest-core=off,memory-backend=pc.ram \
> -accel kvm \
> -cpu Cooperlake,ss=on,pdcm=on,hypervisor=on,tsc-adjust=on,avx512ifma=on,sha-ni=on,avx512vbmi=on,umip=on,avx512vbmi2=on,gfni=on,vaes=on,vpclmulqdq=on,avx512bitalg=on,avx512-vpopcntdq=on,rdpid=on,movdiri=on,movdir64b=on,fsrm=on,md-clear=on,xsaves=on,ibpb=on,ibrs=on,amd-stibp=on,amd-ssbd=on,hle=off,rtm=off,avx512-bf16=off,taa-no=off \
> -m 2048 \
> -object '{"qom-type":"memory-backend-ram","id":"pc.ram","size":2147483648}' \
> -overcommit mem-lock=off \
> -smp 2,sockets=2,cores=1,threads=1 \
> -uuid 712590b2-fbd8-4a2f-a8e9-be33cb9ee0da \
> -display none \
> -no-user-config \
> -nodefaults \
> -chardev socket,id=charmonitor,fd=39,server=on,wait=off \
> -mon chardev=charmonitor,id=monitor,mode=control \
> -rtc base=utc,driftfix=slew \
> -global kvm-pit.lost_tick_policy=delay \
> -no-hpet \
> -no-shutdown \
> -global PIIX4_PM.disable_s3=1 \
> -global PIIX4_PM.disable_s4=1 \
> -boot strict=on \
> -device ich9-usb-ehci1,id=usb,bus=pci.0,addr=0x3.0x7 \
> -device ich9-usb-uhci1,masterbus=usb.0,firstport=0,bus=pci.0,multifunction=on,addr=0x3 \
> -device ich9-usb-uhci2,masterbus=usb.0,firstport=2,bus=pci.0,addr=0x3.0x1 \
> -device ich9-usb-uhci3,masterbus=usb.0,firstport=4,bus=pci.0,addr=0x3.0x2 \
> -blockdev '{"driver":"file","filename":"/var/lib/libvirt/images/f36.qcow2","node-name":"libvirt-1-storage","auto-read-only":true,"discard":"unmap"}' \
> -blockdev '{"node-name":"libvirt-1-format","read-only":false,"driver":"qcow2","file":"libvirt-1-storage","backing":null}' \
> -device ide-hd,bus=ide.0,unit=0,drive=libvirt-1-format,id=ide0-0-0,bootindex=1 \
> -netdev user,id=hostnet0 \
> -device e1000,netdev=hostnet0,id=net0,mac=52:54:00:20:ba:4a,bus=pci.0,addr=0x2 \
> -chardev pty,id=charserial0 \
> -device isa-serial,chardev=charserial0,id=serial0 \
> -audiodev '{"id":"audio1","driver":"none"}' \
> -device virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x4 \
> -sandbox on,obsolete=deny,elevateprivileges=deny,spawn=deny,resourcecontrol=deny \
> -device vhost-vsock-pci,id=vsock0,guest-cid=3,vhostfd=35,bus=pci.0,addr=0x5 \
> -msg timestamp=on
> char device redirected to /dev/pts/1 (label charserial0)
> qemu-system-x86_64: ../hw/virtio/vhost-vsock-common.c:203: vhost_vsock_common_pre_save: Assertion `!vhost_dev_is_started(&vvc->vhost_dev)' failed.
> 2022-11-15 16:38:46.096+0000: shutting down, reason=crashed

Alex, were you able to replicate? Just curious.


-- 
MST



