qemu-devel

Re: [PATCH v1 5/6] hw/arm/virt: Enable backup bitmap for dirty ring


From: Gavin Shan
Subject: Re: [PATCH v1 5/6] hw/arm/virt: Enable backup bitmap for dirty ring
Date: Thu, 23 Feb 2023 11:52:43 +1100
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101 Thunderbird/68.2.0

On 2/23/23 2:54 AM, Peter Maydell wrote:
> On Wed, 22 Feb 2023 at 04:36, Gavin Shan <gshan@redhat.com> wrote:
>> On 2/22/23 3:27 AM, Peter Maydell wrote:
>>> Why does this need to be board-specific code? Is there
>>> some way we can just do the right thing automatically?
>>> Why does the GIC/ITS matter?
>>>
>>> The kernel should already know whether we have asked it
>>> to do something that needs this extra extension, so
>>> I think we ought to be able, in the generic "enable the
>>> dirty ring" code, to say "if the kernel says we need this
>>> extra thing, also enable this extra thing". Or if that's
>>> too early, we can do the extra part in a generic hook a
>>> bit later.
>>>
>>> In the future there might be other things, presumably,
>>> that need the backup bitmap, so it would be more future
>>> proof not to need to also change QEMU to add extra
>>> logic checks that duplicate the logic the kernel already has.


>> When the dirty ring is enabled, a per-vcpu buffer is used to track the
>> dirty pages. The prerequisite for using the per-vcpu buffer is a running
>> vCPU context. As far as we know, there are two cases where no running
>> vCPU context exists and the backup bitmap extension is needed:
>> (a) save/restore of the GICv3 tables; (b) save/restore of the ITS tables.
>> These two cases are tied to the KVM devices "kvm-arm-gicv3" and
>> "arm-its-kvm", which are only used by the virt machine at present. So we
>> don't need the backup bitmap extension for other boards.
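[The decision described above, i.e. wanting the backup bitmap exactly when an in-kernel device whose state is saved/restored without a running vCPU context is in use, could be sketched as a small helper. This is an illustrative model only: the struct and function names below are hypothetical and do not exist in QEMU.]

```c
#include <stdbool.h>

/* Hypothetical model of the board-specific decision described above.
 * None of these names are real QEMU code; they only illustrate the
 * logic: the backup bitmap is wanted only when an in-kernel device
 * that is saved/restored without a running vCPU context is in use. */
typedef struct {
    bool uses_kvm_arm_gicv3;  /* in-kernel GICv3 ("kvm-arm-gicv3") */
    bool uses_arm_its_kvm;    /* in-kernel ITS ("arm-its-kvm") */
} board_kvm_devices;

/* True when save/restore will run without a vCPU context, so the
 * per-vcpu dirty ring alone cannot track those writes. */
static bool need_backup_bitmap(const board_kvm_devices *devs)
{
    return devs->uses_kvm_arm_gicv3 || devs->uses_arm_its_kvm;
}
```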

> But we might have to for other boards we add later. We shouldn't
> put code in per-board if it's not really board specific.
>
> Moreover, I think "we need the backup bitmap if the kernel is
> using its GICv3 or ITS implementation" is a kernel implementation
> detail. It seems to me that it would be cleaner if QEMU didn't
> have to hardcode "we happen to know that these are the situations
> when we need to do that". A better API would be "ask the kernel
> 'do we need this?' and enable it if it says 'yes'". The kernel
> knows what its implementations of ITS and GICv3 (and perhaps
> future in-kernel memory-using devices) require, after all.


Well, as far as we know, the backup bitmap extension is only required by the
'kvm-arm-gicv3' and 'arm-its-kvm' devices, and those two devices are only
used by the virt machine at present. So it's a board-specific requirement
for now. I'm not sure about the future: we may need to enable the extension
for other devices and other boards, and at that point the requirement won't
be board-specific any more. However, the future is uncertain.

To cover the future case where the extension is needed by other boards, the
best way I can figure out is to enable the extension in the generic path, in
kvm_init(), whenever it is supported by the host kernel. This introduces
some unnecessary overhead for boards where 'kvm-arm-gicv3' and 'arm-its-kvm'
aren't used, but the overhead should be very small and acceptable. Note that
in kvm_init(), which is the generic path, the host kernel doesn't yet know
whether the board needs a 'kvm-arm-gicv3' or 'arm-its-kvm' device.

The 'kvm-arm-gicv3' and 'arm-its-kvm' devices are created in
machvirt_init(), where the memory slots are also added. Before that function
runs, the host kernel doesn't know whether QEMU needs the extension. That
means we have to enable the extension in machvirt_init(), which is exactly
what we're doing. The difference is that QEMU decides to enable the
extension instead of being told to do so by the host kernel: the host kernel
can't answer "do we need to enable the extension?" until machvirt_init(),
where the devices are created. Besides, machvirt_init() isn't a generic path
if we want to enable the extension for all possible boards. Furthermore, the
extension can't be enabled once memory slots have been added.
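[The ordering constraint in the last sentence, namely that the extension cannot be enabled once memory slots exist, can be sketched as a simplified model. The struct and functions below are illustrative only, not real QEMU or KVM code.]

```c
#include <stdbool.h>

/* Simplified model of the ordering rule described above: enabling the
 * backup bitmap is refused once any memory slot has been registered.
 * These names are hypothetical, for illustration only. */
typedef struct {
    int nr_memslots;
    bool backup_bitmap_enabled;
} vm_model;

/* Returns true on success; fails if called after a slot was added,
 * which is why the capability must be enabled no later than the
 * point where machvirt_init() starts adding memory slots. */
static bool enable_backup_bitmap(vm_model *vm)
{
    if (vm->nr_memslots > 0) {
        return false;  /* too late: slots already exist */
    }
    vm->backup_bitmap_enabled = true;
    return true;
}

static void add_memslot(vm_model *vm)
{
    vm->nr_memslots++;
}
```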

In summary, the best way I can figure out is to enable the extension in
kvm_init(), whenever the host kernel supports it, so that all possible
boards are covered in the future. Otherwise, we keep what we're doing and
enable the extension in machvirt_init(). Please let me know your thoughts,
Peter :)

Thanks,
Gavin



