[Qemu-devel] VM memory caching model


From: Matwey V. Kornilov
Subject: [Qemu-devel] VM memory caching model
Date: Sat, 24 Feb 2018 18:07:15 +0300
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:52.0) Gecko/20100101 Thunderbird/52.6.0

Hi,

Apologies in advance if this is the wrong mailing list.

Where can I find a comprehensive description of how the CPU virtualization
extensions (such as VMX) interact with the CPU data caches, as well as the
corresponding QEMU implementation details? I've looked through
memory_ldst.inc.c with little success.

I am trying to debug virtio_blk issues found on a Xeon X5675. I run a nested
guest under qemu-kvm inside a guest hosted on ESXi 5.5; this is quite an odd
setup, but still. Currently (master QEMU; host and guest on a 4.15 kernel),
virtio is broken in this setup due to some kind of memory synchronization
issue. Let me recall that the main virtio communication abstraction is a
queue. The queue consists of three parts: the descriptor table, the avail
ring, and the used ring. These structures are meant to be memory shared
between the guest and the hypervisor. All of them are allocated by the guest
device driver, and their guest-physical-address pointers are handed to the
hypervisor through the PCI MMIO configuration BAR. This is the so-called
modern PCI virtio.
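For reference, the split-ring layout of those three parts (as documented in
the virtio 1.0 spec and mirrored in Linux's include/uapi/linux/virtio_ring.h)
looks roughly like this; I am using plain fixed-width integer types instead of
the kernel's __virtio* typedefs, and for a modern device all fields are
little-endian:

#include <stdint.h>

struct vring_desc {            /* descriptor table entry */
        uint64_t addr;         /* guest-physical address of the buffer */
        uint32_t len;          /* buffer length in bytes */
        uint16_t flags;        /* NEXT / WRITE / INDIRECT */
        uint16_t next;         /* index of the next chained descriptor */
};

struct vring_avail {           /* driver -> device */
        uint16_t flags;
        uint16_t idx;          /* the "avail index" the guest increments */
        uint16_t ring[];       /* heads of available descriptor chains */
};

struct vring_used_elem {
        uint32_t id;           /* head of the consumed descriptor chain */
        uint32_t len;          /* bytes written by the device */
};

struct vring_used {            /* device -> driver */
        uint16_t flags;
        uint16_t idx;
        struct vring_used_elem ring[];
};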

When the guest wants to notify the hypervisor about an update, the guest
device driver writes to the PCI BAR. On the hypervisor side, the notification
is implemented through an eventfd and the KVM_IOEVENTFD ioctl. That part works
as expected, since I can see QEMU receiving a lot of notifications for the
queue. However, QEMU sees no updates in the avail ring, where the guest driver
increments the so-called "avail index" on each transfer. On the hypervisor
side the avail ring index is always 1, so virtio_queue_empty_rcu() always
reports that the incoming queue is empty.
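To make the failure mode concrete, here is a simplified sketch of the
device-side emptiness check; the real code is virtio_queue_empty_rcu() /
vring_avail_idx() in hw/virtio/virtio.c, and read_guest_le16() below is a
hypothetical stand-in for QEMU's cached guest-memory accessors (the
address_space_lduw_le() family generated from memory_ldst.inc.c):

#include <stdint.h>
#include <stdbool.h>

/* Hypothetical, trimmed-down view of the device-side queue state;
 * QEMU's real VirtQueue carries much more than this. */
typedef struct {
        uint64_t avail_gpa;       /* guest-physical address of the avail ring */
        uint16_t last_avail_idx;  /* last avail index the device has consumed */
} VirtQueueState;

/* Hypothetical stand-in for the cached guest-memory read. */
uint16_t read_guest_le16(uint64_t gpa);

/* Sketch of what the emptiness check effectively does. */
static bool vq_empty(const VirtQueueState *vq)
{
        /* avail->idx sits 2 bytes into the avail ring, after 'flags' */
        uint16_t avail_idx = read_guest_le16(vq->avail_gpa + 2);

        /* If the guest's increments of avail->idx were visible here,
         * this would move past last_avail_idx; in my nested setup the
         * value read stays at 1, so the queue always looks empty. */
        return avail_idx == vq->last_avail_idx;
}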
