qemu-devel

Re: [Qemu-devel] [RFC] qemu-ga: Introduce guest-hibernate command


From: Luiz Capitulino
Subject: Re: [Qemu-devel] [RFC] qemu-ga: Introduce guest-hibernate command
Date: Fri, 9 Dec 2011 10:22:25 -0200

On Thu, 08 Dec 2011 21:18:00 -0600
Michael Roth <address@hidden> wrote:

> On 12/08/2011 12:52 PM, Luiz Capitulino wrote:
> > This is basically suspend to disk on a Linux guest.
> >
> > Signed-off-by: Luiz Capitulino <address@hidden>
> > ---
> >
> > This is an RFC because I kept it as simple as possible, and I'm open to
> > suggestions...
> >
> > Now, while testing this, or even "echo disk > /sys/power/state", I get several
> > funny results. Sometimes qemu just dies after printing this message:
> >
> >   "Guest moved used index from 20151 to 1"
> >
> > Sometimes it doesn't die, but I'm unable to log into the guest: I type the
> > username & password but the terminal kind of locks up (the shell doesn't run).
> >
> > Sometimes it works...
> >
> 
> Here's the tail-end of the trace...
> 
> virtio_queue_notify 237.880 vdev=0x7f11f4c8f5e0 n=0x0 vq=0x7f11f4cb4d40
> virtqueue_pop 3.701 vq=0x7f11f4cb4d40 elem=0x7f11f5966118 in_num=0x2 out_num=0x1
> virtio_blk_rw_complete 51.613 req=0x7f11f5966110 ret=0x0
> virtio_blk_req_complete 1.327 req=0x7f11f5966110 status=0x0
> virtqueue_fill 1.187 vq=0x7f11f4cb4d40 elem=0x7f11f5966118 len=0x1001 idx=0x0
> virtqueue_flush 1.746 vq=0x7f11f4cb4d40 count=0x1
> virtio_notify 1.537 vdev=0x7f11f4c8f5e0 vq=0x7f11f4cb4d40
> virtio_queue_notify 374.978 vdev=0x7f11f4c8f5e0 n=0x0 vq=0x7f11f4cb4d40
> virtqueue_pop 3.631 vq=0x7f11f4cb4d40 elem=0x7f11f5faec58 in_num=0x2 out_num=0x1
> virtio_blk_rw_complete 49.029 req=0x7f11f5faec50 ret=0x0
> virtio_blk_req_complete 1.327 req=0x7f11f5faec50 status=0x0
> virtqueue_fill 1.327 vq=0x7f11f4cb4d40 elem=0x7f11f5faec58 len=0x1001 idx=0x0
> virtqueue_flush 1.746 vq=0x7f11f4cb4d40 count=0x1
> virtio_notify 1.397 vdev=0x7f11f4c8f5e0 vq=0x7f11f4cb4d40
> virtio_queue_notify 245.073 vdev=0x7f11f4c8f5e0 n=0x0 vq=0x7f11f4cb4d40
> virtqueue_pop 3.631 vq=0x7f11f4cb4d40 elem=0x7f11f5966118 in_num=0x2 out_num=0x1
> virtio_blk_rw_complete 47.702 req=0x7f11f5966110 ret=0x0
> virtio_blk_req_complete 1.257 req=0x7f11f5966110 status=0x0
> virtqueue_fill 1.327 vq=0x7f11f4cb4d40 elem=0x7f11f5966118 len=0x1001 idx=0x0
> virtqueue_flush 1.816 vq=0x7f11f4cb4d40 count=0x1
> virtio_notify 1.327 vdev=0x7f11f4c8f5e0 vq=0x7f11f4cb4d40
> virtio_queue_notify 450.616 vdev=0x7f11f4c8f5e0 n=0x0 vq=0x7f11f4cb4d40
> virtqueue_pop 4.051 vq=0x7f11f4cb4d40 elem=0x7f11f5faec58 in_num=0x2 out_num=0x1
> virtio_blk_rw_complete 67.885 req=0x7f11f5faec50 ret=0x0
> virtio_blk_req_complete 1.327 req=0x7f11f5faec50 status=0x0
> virtqueue_fill 1.327 vq=0x7f11f4cb4d40 elem=0x7f11f5faec58 len=0x1001 idx=0x0
> virtqueue_flush 1.607 vq=0x7f11f4cb4d40 count=0x1
> virtio_notify 1.257 vdev=0x7f11f4c8f5e0 vq=0x7f11f4cb4d40
> virtio_queue_notify 196.813 vdev=0x7f11f4c8f5e0 n=0x0 vq=0x7f11f4cb4d40
> virtqueue_pop 3.562 vq=0x7f11f4cb4d40 elem=0x7f11f5966118 in_num=0x2 out_num=0x1
> virtio_blk_rw_complete 47.492 req=0x7f11f5966110 ret=0x0
> virtio_blk_req_complete 1.327 req=0x7f11f5966110 status=0x0
> virtqueue_fill 1.327 vq=0x7f11f4cb4d40 elem=0x7f11f5966118 len=0x1001 idx=0x0
> virtqueue_flush 1.676 vq=0x7f11f4cb4d40 count=0x1
> virtio_notify 1.397 vdev=0x7f11f4c8f5e0 vq=0x7f11f4cb4d40
> virtio_queue_notify 882289.570 vdev=0x7f11f4c8f5e0 n=0x0 vq=0x7f11f4cb4d40
> 
> It doesn't seem to tell us much... but there's a bunch of successful 
> reads before the final virtio_queue_notify, and that notify takes quite 
> a bit longer than the previous ones. I can only speculate at this point, 
> but I would guess this is when the guest has finished loading the saved 
> memory image from disk and is attempting to restore the previous state.
> 
> In the kernel there's a virtio_pci_suspend() PM callback that seems to 
> get called around this time, with the saved PCI config then restored by 
> virtio_pci_resume(). Could that be switching us to an older vring and 
> throwing the QEMU side out of whack?

Not sure. But Amit has confirmed that it's a virtio bug and he's working
on it. Right Amit?

> 
> 
> >   qapi-schema-guest.json     |   11 +++++++++++
> >   qga/guest-agent-commands.c |   19 +++++++++++++++++++
> >   2 files changed, 30 insertions(+), 0 deletions(-)
> >
> > diff --git a/qapi-schema-guest.json b/qapi-schema-guest.json
> > index fde5971..2c5bbcf 100644
> > --- a/qapi-schema-guest.json
> > +++ b/qapi-schema-guest.json
> > @@ -215,3 +215,14 @@
> >   ##
> >   { 'command': 'guest-fsfreeze-thaw',
> >     'returns': 'int' }
> > +
> > +##
> > +# @guest-hibernate
> > +#
> > +# Save RAM contents to disk and power down the guest.
> > +#
> > +# Notes: This command doesn't return on success.
> > +#
> > +# Since: 1.1
> > +##
> > +{ 'command': 'guest-hibernate' }
> > diff --git a/qga/guest-agent-commands.c b/qga/guest-agent-commands.c
> > index 6da9904..9dd4060 100644
> > --- a/qga/guest-agent-commands.c
> > +++ b/qga/guest-agent-commands.c
> > @@ -550,6 +550,25 @@ int64_t qmp_guest_fsfreeze_thaw(Error **err)
> >   }
> >   #endif
> >
> > +#define LINUX_SYS_STATE_FILE "/sys/power/state"
> > +
> > +void qmp_guest_hibernate(Error **err)
> > +{
> > +    int fd;
> > +
> > +    fd = open(LINUX_SYS_STATE_FILE, O_WRONLY);
> > +    if (fd < 0) {
> > +        error_set(err, QERR_OPEN_FILE_FAILED, LINUX_SYS_STATE_FILE);
> > +        return;
> > +    }
> > +
> > +    if (write(fd, "disk", 4) < 0) {
> > +        error_set(err, QERR_UNDEFINED_ERROR);
> > +    }
> > +
> > +    close(fd);
> > +}
> > +
> >   /* register init/cleanup routines for stateful command groups */
> >   void ga_command_state_init(GAState *s, GACommandState *cs)
> >   {
> 



