On Thu, Jul 28, 2022 at 2:14 PM Lei Yang <leiyang@redhat.com> wrote:
>
> I tried manually changing this line and then tested this branch on my local host. After the migration succeeds, a QEMU core dump occurs when the guest is shut down.
>
> Steps to compile QEMU:
> # git clone https://gitlab.com/eperezmartin/qemu-kvm.git
> # cd qemu-kvm/
> # mkdir build
> # cd build/
> # git checkout bd85496c2a8c1ebf34f908fca2be2ab9852fd0e9
I got this:
fatal: reference is not a tree: bd85496c2a8c1ebf34f908fca2be2ab9852fd0e9
and my HEAD is:
commit 7b17a1a841fc2336eba53afade9cadb14bd3dd9a (HEAD -> master, tag: v7.1.0-rc0, origin/master, origin/HEAD)
Author: Richard Henderson <richard.henderson@linaro.org>
Date:   Tue Jul 26 18:03:16 2022 -0700

    Update version for v7.1.0-rc0 release

    Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
I tried to recompile it using the commit you mentioned, but the problem is reproduced again:
# git clone git://git.qemu.org/qemu.git
# cd qemu/
# git log
# mkdir build
# cd build/
# vim /root/qemu/hw/virtio/vhost-vdpa.c
# ../configure --target-list=x86_64-softmmu --enable-debug
# make
Latest commit:
commit 7b17a1a841fc2336eba53afade9cadb14bd3dd9a (HEAD -> master, tag: v7.1.0-rc0, origin/master, origin/HEAD)
Author: Richard Henderson <richard.henderson@linaro.org>
Date:   Tue Jul 26 18:03:16 2022 -0700

    Update version for v7.1.0-rc0 release

    Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
> # vim /root/qemu-kvm/hw/virtio/vhost-vdpa.c
> (Change "vhost_iova_tree_remove(v->iova_tree, &mem_region);" to "vhost_iova_tree_remove(v->iova_tree, result);")
Is there any reason you need to change the line manually, since it has already been merged?
> # ../configure --target-list=x86_64-softmmu --enable-debug
> # make
So if I understand correctly, you mean the issue is not fixed?
From my side, this is a new issue, because the guest can boot up normally and complete the migration. It is only after the migration succeeds, when the guest is shut down, that the core dump occurs.
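Judging from frame #0 of the backtrace quoted below, result is 0x0 in vhost_vdpa_listener_region_del() while mem_region still has .iova = 0, so it looks like vhost_iova_tree_find_iova() returned NULL for that section and the following "iova = result->iova;" dereferences a NULL pointer. Just to illustrate the failure mode, a minimal defensive sketch on top of the merged change could look like this (my suggestion only, not code that is in the tree):

    result = vhost_iova_tree_find_iova(v->iova_tree, &mem_region);
    if (!result) {
        /* Hypothetical hardening: the lookup can apparently fail here
         * (the backtrace below shows result = 0x0), so skip the unmap
         * instead of dereferencing a NULL pointer. */
        error_report("%s: memory region to unmap not found in the IOVA tree",
                     __func__);
        return;
    }
    iova = result->iova;
    vhost_iova_tree_remove(v->iova_tree, result);

Whether silently skipping the unmap is the right behaviour is a separate question; the point is only that the lookup result needs to be checked before it is used.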
Thanks
>
> Core dump messages:
> # gdb /root/qemu-kvm/build/x86_64-softmmu/qemu-system-x86_64 core.qemu-system-x86.7419
> (gdb) bt full
> #0 0x000056107c19afa9 in vhost_vdpa_listener_region_del (listener=0x7ff9a9c691a0, section=0x7ffd3889ad20)
> at ../hw/virtio/vhost-vdpa.c:290
> result = 0x0
> vaddr = 0x7ff29be00000
> mem_region = {iova = 0, translated_addr = 140679973961728, size = 30064771071, perm = IOMMU_NONE}
> v = 0x7ff9a9c69190
> iova = 4294967296
> llend = 34359738368
> llsize = 30064771072
> ret = 32765
> __func__ = "vhost_vdpa_listener_region_del"
> #1 0x000056107c1ca915 in listener_del_address_space (listener=0x7ff9a9c691a0, as=0x56107cccbc00 <address_space_memory>)
> at ../softmmu/memory.c:2939
> section =
> {size = 30064771072, mr = 0x56107e116270, fv = 0x7ff1e02a4090, offset_within_region = 2147483648, offset_within_address_space = 4294967296, readonly = false, nonvolatile = false}
> view = 0x7ff1e02a4090
> fr = 0x7ff1e04027f0
> #2 0x000056107c1cac39 in memory_listener_unregister (listener=0x7ff9a9c691a0) at ../softmmu/memory.c:2989
> #3 0x000056107c19d007 in vhost_vdpa_dev_start (dev=0x56107e126ea0, started=false) at ../hw/virtio/vhost-vdpa.c:1134
> v = 0x7ff9a9c69190
> ok = true
> #4 0x000056107c190252 in vhost_dev_stop (hdev=0x56107e126ea0, vdev=0x56107f40cb50) at ../hw/virtio/vhost.c:1828
> i = 32761
> __PRETTY_FUNCTION__ = "vhost_dev_stop"
> #5 0x000056107bebe26c in vhost_net_stop_one (net=0x56107e126ea0, dev=0x56107f40cb50) at ../hw/net/vhost_net.c:315
> file = {index = 0, fd = -1}
> __PRETTY_FUNCTION__ = "vhost_net_stop_one"
> #6 0x000056107bebe6bf in vhost_net_stop (dev=0x56107f40cb50, ncs=0x56107f421850, data_queue_pairs=1, cvq=0)
> at ../hw/net/vhost_net.c:425
> qbus = 0x56107f40cac8
> vbus = 0x56107f40cac8
> k = 0x56107df1a220
> n = 0x56107f40cb50
> peer = 0x7ff9a9c69010
> total_notifiers = 2
> nvhosts = 1
> i = 0
> --Type <RET> for more, q to quit, c to continue without paging--
> r = 32765
> __PRETTY_FUNCTION__ = "vhost_net_stop"
> #7 0x000056107c14af24 in virtio_net_vhost_status (n=0x56107f40cb50, status=15 '\017') at ../hw/net/virtio-net.c:298
> vdev = 0x56107f40cb50
> nc = 0x56107f421850
> queue_pairs = 1
> cvq = 0
> #8 0x000056107c14b17e in virtio_net_set_status (vdev=0x56107f40cb50, status=15 '\017') at ../hw/net/virtio-net.c:372
> n = 0x56107f40cb50
> q = 0x56107f40cb50
> i = 32765
> queue_status = 137 '\211'
> #9 0x000056107c185af2 in virtio_set_status (vdev=0x56107f40cb50, val=15 '\017') at ../hw/virtio/virtio.c:1947
> k = 0x56107dfe2c60
> #10 0x000056107c188cbb in virtio_vmstate_change (opaque=0x56107f40cb50, running=false, state=RUN_STATE_SHUTDOWN)
> at ../hw/virtio/virtio.c:3195
> vdev = 0x56107f40cb50
> qbus = 0x56107f40cac8
> k = 0x56107df1a220
> backend_run = false
> #11 0x000056107bfdca5e in vm_state_notify (running=false, state=RUN_STATE_SHUTDOWN) at ../softmmu/runstate.c:334
> e = 0x56107f419950
> next = 0x56107f224b80
> #12 0x000056107bfd43e6 in do_vm_stop (state=RUN_STATE_SHUTDOWN, send_stop=false) at ../softmmu/cpus.c:263
> ret = 0
> #13 0x000056107bfd4420 in vm_shutdown () at ../softmmu/cpus.c:281
> #14 0x000056107bfdd584 in qemu_cleanup () at ../softmmu/runstate.c:813
> #15 0x000056107bd81a5b in main (argc=65, argv=0x7ffd3889b178, envp=0x7ffd3889b388) at ../softmmu/main.c:51
>
>
> Thanks
> Lei
>
> On Tue, Jul 26, 2022 at 4:51 PM, Jason Wang <jasowang@redhat.com> wrote:
>>
>> From: Eugenio Pérez <eperezma@redhat.com>
>>
>> vhost_vdpa_listener_region_del is always deleting the first iova entry
>> of the tree, since it's using the needle iova instead of the result's
>> one.
>>
>> This was detected using a VGA virtual device in a VM using vdpa SVQ.
>> That device does some extra memory adding and deleting, so the wrong
>> entry was mapped / unmapped. This went undetected before because,
>> without that device, all of the memory was mapped and unmapped in one
>> piece, but other conditions could trigger it too:
>>
>> * mem_region was built with .iova = 0 and .translated_addr = (the correct GPA).
>> * iova_tree_find_iova returned the right result, but it does not update
>> mem_region.
>> * iova_tree_remove therefore always removed the region with .iova = 0,
>> while the right iova was sent to the device.
>> * If the next action is a map, it will fill the first region with
>> .iova = 0, resulting in a mapping with a duplicate iova, and the device
>> complains.
>> * If the next action is an unmap, it will try to unmap iova = 0 again,
>> and the device complains that no region is mapped at iova = 0.
>>
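To make the failure mode described in the list above concrete, here is a minimal standalone sketch (plain C, all names and addresses made up, not QEMU code): the lookup returns the stored entry, but removing by the needle, whose .iova is still 0, deletes the wrong mapping.

    #include <stdio.h>

    struct map {
        unsigned long iova;
        unsigned long gpa;
        int used;
    };

    /* Two mappings: a small one at iova 0 and the above-4G guest RAM chunk. */
    static struct map tree[2] = {
        { .iova = 0x0,         .gpa = 0xfe000000,  .used = 1 },
        { .iova = 0x100000000, .gpa = 0x100000000, .used = 1 },
    };

    /* Lookup by guest address: returns the stored entry, never updates the needle. */
    static struct map *find_by_gpa(unsigned long gpa)
    {
        for (int i = 0; i < 2; i++) {
            if (tree[i].used && tree[i].gpa == gpa) {
                return &tree[i];
            }
        }
        return NULL;
    }

    /* Removal keys off ->iova only, like the real tree removal does. */
    static void remove_entry(const struct map *m)
    {
        for (int i = 0; i < 2; i++) {
            if (tree[i].used && tree[i].iova == m->iova) {
                tree[i].used = 0;
                return;
            }
        }
    }

    int main(void)
    {
        /* The needle only has the guest address filled in; .iova stays 0. */
        struct map needle = { .gpa = 0x100000000 };
        struct map *result = find_by_gpa(needle.gpa);

        remove_entry(&needle);      /* BUG: deletes the entry at iova 0      */
        /* remove_entry(result); */ /* FIX: deletes the entry that was found */

        printf("iova 0 used=%d, iova 4G used=%d, found iova=0x%lx\n",
               tree[0].used, tree[1].used, result->iova);
        return 0;
    }

With the buggy call the entry at iova 0 disappears while the above-4G entry stays mapped, which matches the symptoms listed above; the one-line change below passes the lookup result instead.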
>> Fixes: 34e3c94edaef ("vdpa: Add custom IOTLB translations to SVQ")
>> Reported-by: Lei Yang <leiyang@redhat.com>
>> Signed-off-by: Eugenio Pérez <eperezma@redhat.com>
>> Signed-off-by: Jason Wang <jasowang@redhat.com>
>> ---
>> hw/virtio/vhost-vdpa.c | 2 +-
>> 1 file changed, 1 insertion(+), 1 deletion(-)
>>
>> diff --git a/hw/virtio/vhost-vdpa.c b/hw/virtio/vhost-vdpa.c
>> index bce64f4..3ff9ce3 100644
>> --- a/hw/virtio/vhost-vdpa.c
>> +++ b/hw/virtio/vhost-vdpa.c
>> @@ -290,7 +290,7 @@ static void vhost_vdpa_listener_region_del(MemoryListener *listener,
>>
>> result = vhost_iova_tree_find_iova(v->iova_tree, &mem_region);
>> iova = result->iova;
>> - vhost_iova_tree_remove(v->iova_tree, &mem_region);
>> + vhost_iova_tree_remove(v->iova_tree, result);
>> }
>> vhost_vdpa_iotlb_batch_begin_once(v);
>> ret = vhost_vdpa_dma_unmap(v, iova, int128_get64(llsize));
>> --
>> 2.7.4
>>