From: David Woodhouse
Subject: Re: [RFC PATCH v11bis 00/26] Emulated XenStore and PV backend support
Date: Thu, 16 Feb 2023 16:33:44 +0100
User-agent: Evolution 3.44.4-0ubuntu1
On Thu, 2023-02-16 at 15:02 +0100, Juan Quintela wrote:
> David Woodhouse <dwmw2@infradead.org> wrote:
> >
> > On Thu, 2023-02-16 at 11:49 +0100, Juan Quintela wrote:
> > > David Woodhouse <dwmw2@infradead.org> wrote:
> > > > The non-RFC patch submission¹ is just the basic platform support for Xen
> > > > on KVM. This RFC series is phase 2, adding an internal XenStore and
> > > > hooking up the PV back end drivers to that and the emulated grant tables
> > > > and event channels.
> > > >
> > > > With this, we can boot a Xen guest with PV disk, under KVM. Full support
> > > > for migration isn't there yet because it's actually missing in the PV
> > > > back end drivers in the first place (perhaps because upstream Xen doesn't
> > > > yet have guest-transparent live migration support anyway). I'm assuming
> > > > that when the first round is merged and we drop the [RFC] from this set,
> > > > that won't be a showstopper for now?
> > > >
> > > > I'd be particularly interested in opinions on the way I implemented
> > > > serialization for the XenStore, by creating a GByteArray and then dumping
> > > > it with VMSTATE_VARRAY_UINT32_ALLOC().
> > >
> > > And I was wondering why I was CC'd in the whole series O:-)
> > >
> >
> > Indeed, Philippe M-D added you to Cc when discussing migrations on the
> > first RFC submission back in December, and you've been included ever
> > since.
> >
> >
> > > How big is the xenstore?
> > > I mean typical size and maximum size.
> > >
> >
> > Booting a simple instance with a single disk:
> >
> > $ scripts/analyze-migration.py -f foo | grep impl_state_size
> > "impl_state_size": "0x00000634",
> >
> > Theoretical maximum is about 1000 nodes @2KiB, so around 2MiB.
> >
> > > If it is sufficiently small (i.e. in the single-digit megabytes), you can
> > > send it as a normal device at the end of migration.
> > >
> >
> > Right now it's part of the xen_xenstore device. Most of that is fairly
> > simple and it's just the impl_state that's slightly different.
> >
> >
> > > If it is bigger, I think that you are going to have to enter the migration
> > > iteration stage, and have some kind of dirty bitmap to know which entries
> > > are already on the target and which are not.
> > >
> >
> > We have COW and transactions, so that isn't an impossibility; I think
> > we can avoid that complexity, though.
>
> It is relatively small. I will go with migrating at the end of
> migration for now. Later we can measure if we need to improve
> performance there.
Yeah, that much I was relatively OK with. The bit I thought might
attract heckling is how I actually store the byte stream, in
https://git.infradead.org/users/dwmw2/qemu.git/commitdiff/45e7e645080#patch1
--- a/hw/i386/kvm/xen_xenstore.c
+++ b/hw/i386/kvm/xen_xenstore.c
@@ -66,6 +66,9 @@ struct XenXenstoreState {
     evtchn_port_t guest_port;
     evtchn_port_t be_port;
     struct xenevtchn_handle *eh;
+
+    uint8_t *impl_state;
+    uint32_t impl_state_size;
 };
 
 struct XenXenstoreState *xen_xenstore_singleton;
@@ -109,16 +112,26 @@ static bool xen_xenstore_is_needed(void *opaque)
 static int xen_xenstore_pre_save(void *opaque)
 {
     XenXenstoreState *s = opaque;
+    GByteArray *save;
 
     if (s->eh) {
         s->guest_port = xen_be_evtchn_get_guest_port(s->eh);
     }
+
+    g_free(s->impl_state);
+    save = xs_impl_serialize(s->impl);
+    s->impl_state = save->data;
+    s->impl_state_size = save->len;
+    g_byte_array_free(save, false);
+
     return 0;
 }
 
 static int xen_xenstore_post_load(void *opaque, int ver)
 {
     XenXenstoreState *s = opaque;
+    GByteArray *save;
+    int ret;
 
     /*
      * As qemu/dom0, rebind to the guest's port. The Windows drivers may
@@ -134,7 +147,13 @@ static int xen_xenstore_post_load(void *opaque, int ver)
         }
         s->be_port = be_port;
     }
-    return 0;
+
+    save = g_byte_array_new_take(s->impl_state, s->impl_state_size);
+    s->impl_state = NULL;
+    s->impl_state_size = 0;
+
+    ret = xs_impl_deserialize(s->impl, save, xen_domid, fire_watch_cb, s);
+    return ret;
 }
 
 static const VMStateDescription xen_xenstore_vmstate = {
@@ -152,6 +171,10 @@ static const VMStateDescription xen_xenstore_vmstate = {
         VMSTATE_BOOL(rsp_pending, XenXenstoreState),
         VMSTATE_UINT32(guest_port, XenXenstoreState),
         VMSTATE_BOOL(fatal_error, XenXenstoreState),
+        VMSTATE_UINT32(impl_state_size, XenXenstoreState),
+        VMSTATE_VARRAY_UINT32_ALLOC(impl_state, XenXenstoreState,
+                                    impl_state_size, 0,
+                                    vmstate_info_uint8, uint8_t),
         VMSTATE_END_OF_LIST()
     }
 };
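
For context on the trick being used there: g_byte_array_free(save, false)
frees only the GByteArray wrapper and leaves the underlying buffer alive,
so pre_save can hand ownership of it to the plain (impl_state,
impl_state_size) fields, which VMSTATE_VARRAY_UINT32_ALLOC then streams as
impl_state_size uint8_t elements (the _ALLOC variant also allocates the
destination buffer on the incoming side). post_load re-wraps that buffer
with g_byte_array_new_take(), again without copying, so the serialized
image lives in a single allocation for the whole save/stream/load cycle.
A minimal standalone sketch of that ownership round-trip, outside QEMU
(illustrative only, not part of the patch; build against GLib >= 2.32):

/*
 * gcc sketch.c $(pkg-config --cflags --libs glib-2.0)
 */
#include <glib.h>
#include <stdio.h>

int main(void)
{
    /* "pre_save" side: serialize into a GByteArray, then steal its buffer */
    GByteArray *save = g_byte_array_new();
    g_byte_array_append(save, (const guint8 *)"xenstore state", 14);

    guint8 *impl_state = save->data;
    guint32 impl_state_size = save->len;
    /* FALSE: free only the wrapper, keep the data segment alive */
    g_byte_array_free(save, FALSE);

    /* ... here the vmstate code would stream impl_state as
     * impl_state_size uint8_t elements ... */

    /* "post_load" side: re-wrap the buffer without copying; the new
     * array owns it again and frees it for us */
    GByteArray *load = g_byte_array_new_take(impl_state, impl_state_size);
    printf("%.*s (%u bytes)\n", (int)load->len,
           (const char *)load->data, load->len);
    g_byte_array_free(load, TRUE);
    return 0;
}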