qemu-devel

Re: [RFC PATCH v11bis 00/26] Emulated XenStore and PV backend support


From: Juan Quintela
Subject: Re: [RFC PATCH v11bis 00/26] Emulated XenStore and PV backend support
Date: Thu, 16 Feb 2023 15:02:46 +0100
User-agent: Gnus/5.13 (Gnus v5.13) Emacs/28.2 (gnu/linux)

David Woodhouse <dwmw2@infradead.org> wrote:
>
> On Thu, 2023-02-16 at 11:49 +0100, Juan Quintela wrote:
>> David Woodhouse <dwmw2@infradead.org> wrote:
>> > The non-RFC patch submission¹ is just the basic platform support for Xen
>> > on KVM. This RFC series is phase 2, adding an internal XenStore and
>> > hooking up the PV back end drivers to that and the emulated grant tables
>> > and event channels.
>> >
>> > With this, we can boot a Xen guest with PV disk, under KVM. Full support
>> > for migration isn't there yet because it's actually missing in the PV
>> > back end drivers in the first place (perhaps because upstream Xen doesn't
>> > yet have guest transparent live migration support anyway). I'm assuming
>> > that when the first round is merged and we drop the [RFC] from this set,
>> > that won't be a showstopper for now?
>> >
>> > I'd be particularly interested in opinions on the way I implemented
>> > serialization for the XenStore, by creating a GByteArray and then dumping
>> > it with VMSTATE_VARRAY_UINT32_ALLOC().
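
A minimal sketch of how that can be wired up, assuming the serialized blob
lives in a uint8_t *impl_state field with its length in impl_state_size
(only impl_state_size is visible in the dump quoted below); the
xs_impl_serialize() helper is hypothetical, standing in for whatever
flattens the store:

/*
 * Sketch only: flatten the store into a GByteArray in pre_save, then
 * describe the resulting buffer to vmstate as a length-prefixed byte
 * array that the destination allocates on load (VMS_ALLOC).
 */
static int xen_xenstore_pre_save(void *opaque)
{
    XenXenstoreState *s = opaque;
    GByteArray *save = xs_impl_serialize(s->impl);   /* hypothetical */

    s->impl_state_size = save->len;
    g_free(s->impl_state);
    s->impl_state = g_byte_array_free(save, FALSE);  /* keep the data */
    return 0;
}

static const VMStateDescription xen_xenstore_vmstate = {
    .name = "xen_xenstore",
    .version_id = 1,
    .minimum_version_id = 1,
    .pre_save = xen_xenstore_pre_save,
    .fields = (VMStateField[]) {
        VMSTATE_UINT32(impl_state_size, XenXenstoreState),
        VMSTATE_VARRAY_UINT32_ALLOC(impl_state, XenXenstoreState,
                                    impl_state_size, 0,
                                    vmstate_info_uint8, uint8_t),
        VMSTATE_END_OF_LIST()
    },
};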
>>
>> And I was wondering why I was CC'd in the whole series O:-)
>>
>
> Indeed, Philippe M-D added you to Cc when discussing migrations on the
> first RFC submission back in December, and you've been included ever
> since.
>
>
>> How big is the xenstore?
>> I mean typical size and maximum size.
>>
>
> Booting a simple instance with a single disk:
>
> $ scripts/analyze-migration.py -f foo | grep impl_state_size
>         "impl_state_size": "0x00000634",
>
> Theoretical maximum is about 1000 nodes @2KiB, so around 2MiB.
>
>> If it is sufficiently small (i.e. in the single-digit megabytes), you can
>> send it as a normal device at the end of migration.
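
Sending it "as a normal device" just means pointing the device class at
that VMStateDescription: anything hung off dc->vmsd is only transmitted in
the final device-state phase, after the iterative RAM stage, once the
guest is stopped. A sketch (type and symbol names assumed, matching the
vmstate sketch above):

static void xen_xenstore_class_init(ObjectClass *klass, void *data)
{
    DeviceClass *dc = DEVICE_CLASS(klass);

    /* Device state described here goes out at the end of migration. */
    dc->vmsd = &xen_xenstore_vmstate;
}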
>>
>
> Right now it's part of the xen_xenstore device. Most of that is fairly
> simple and it's just the impl_state that's slightly different.
>
>
>> If it is bigger, I think you are going to have to enter the migration
>> iteration stage, and keep some kind of dirty bitmap to know which entries
>> are already on the target and which are not.
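
For comparison, the iterative route would mean registering SaveVMHandlers
for the store and walking a dirty bitmap on every pass; a rough, purely
illustrative sketch (the dirty_nodes bitmap field and the xs_send_node()
helper are made up):

/*
 * Would be hooked up as .save_live_iterate in a SaveVMHandlers
 * structure registered with register_savevm_live().
 */
static int xenstore_save_live_iterate(QEMUFile *f, void *opaque)
{
    XenXenstoreState *s = opaque;
    unsigned long node;

    /* Send every node dirtied since the last pass, then clear its bit. */
    for (node = find_first_bit(s->dirty_nodes, s->nr_nodes);
         node < s->nr_nodes;
         node = find_next_bit(s->dirty_nodes, s->nr_nodes, node + 1)) {
        xs_send_node(f, s, node);                  /* hypothetical */
        clear_bit(node, s->dirty_nodes);
    }

    /* A positive return tells the core there is nothing more to send now. */
    return 1;
}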
>>
>
> We have COW and transactions, so that isn't an impossibility; I think we
> can avoid that complexity, though.

It is relatively small, so I would go with migrating it at the end of
migration for now.  Later we can measure whether we need to improve
performance there.

Later, Juan.



