
Re: [Qemu-devel] [RFC PATCH 0/4] savevm: save vmstate with fixed size


From: Wenchao Xia
Subject: Re: [Qemu-devel] [RFC PATCH 0/4] savevm: save vmstate with fixed size
Date: Fri, 01 Mar 2013 10:35:28 +0800
User-agent: Mozilla/5.0 (Windows NT 5.1; rv:17.0) Gecko/20130215 Thunderbird/17.0.3

On 2013-2-28 18:50, Kevin Wolf wrote:
On 28.02.2013 at 09:09, Wenchao Xia wrote:
   This patch adds a new way to savevm: save the vmstate as flat
contents instead of as a stream.

So the idea is that when we introduce internal live snapshots, you don't
keep old state in the saved VM state, but you just overwrite it, right?
Or actually, the same works (could work?) for migration to file as well.

  Exactly. It will overwrite the contents when a page becomes dirty
again, so the vmstate size will not grow without limit. It works for
migration to file as well, with a modification to the migration code.

You probably get some improvement in the file size when the migration
takes a while, depending on how much of the memory actually has to be
saved. You might, however, end up with a lot more small writes instead
of a few big ones as before, which might hurt performance.

Do you have any data about the resulting performance and file size?

  Ah, an important issue I haven't tested, thanks for pointing it out.
Let me add code for migration to file and run a test. It can also be
optimized a bit in qemu_fseek(), but IMHO the optimization for small
writes would better go into the block layer, either in qemu or in
underlying components of the system.
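
To make the small-write concern concrete, here is a hedged sketch of
the kind of coalescing such a lower layer could do, merging adjacent
small writes into one larger write; CoalescingWriter and its functions
are invented for this example and assume each write fits in the buffer:

#include <stdio.h>
#include <stdint.h>
#include <string.h>

#define BUF_CAP (64 * 1024)

typedef struct {
    FILE *f;
    long base;              /* file offset of buf[0]; -1 when empty */
    size_t len;             /* bytes currently buffered */
    uint8_t buf[BUF_CAP];
} CoalescingWriter;

static void cw_flush(CoalescingWriter *w)
{
    if (w->base >= 0 && w->len > 0) {
        fseek(w->f, w->base, SEEK_SET);
        fwrite(w->buf, 1, w->len, w->f);   /* one big write */
    }
    w->base = -1;
    w->len = 0;
}

static void cw_write(CoalescingWriter *w, long offset,
                     const uint8_t *data, size_t size)
{
    /* Coalesce only if this write extends the buffered run. */
    if (w->base < 0 || offset != w->base + (long)w->len ||
        w->len + size > BUF_CAP) {
        cw_flush(w);
        w->base = offset;
    }
    memcpy(w->buf + w->len, data, size);
    w->len += size;
}

int main(void)
{
    CoalescingWriter w = { .f = fopen("out.bin", "w+b"), .base = -1 };
    uint8_t page[4096];

    if (!w.f) {
        return 1;
    }
    memset(page, 0, sizeof(page));
    /* Two adjacent 4K writes reach the disk as a single 8K write. */
    cw_write(&w, 0, page, sizeof(page));
    cw_write(&w, 4096, page, sizeof(page));
    cw_flush(&w);
    fclose(w.f);
    return 0;
}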

This version has the following limitations:
   1. In patch 3 only dirty pages get written; clean pages are not
touched, so it will have trouble when doing savevm onto an old internal
snapshot. This will be fixed later if this approach seems OK.

Basically you need a bdrv_zero_vmstate(), right? I think this would
actually be a bug fix, because snapshots might today get references to
unused VM state clusters that are just leftovers from the last snapshot.

  Yes, an API to initialize the data at the beginning, or just write
4K of zeros as we go....

  In a qcow2 file that has snapA, if the user types "savevm snapA",
then qemu will delete the old snapA and then create a new snapA.
  Do you mean that the new snapA and the old snapA may use the same
clusters, which are not cleaned up as zeros? I guess this brings no
trouble to the old stream savevm, but will bring trouble to the flat
savevm in this patch. If so, then yes, I think this bug fix can solve
the problem.
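
As a rough sketch of what such an initialization could look like
(bdrv_zero_vmstate() does not exist yet; this stand-alone function is
only an illustration of zeroing the whole area once up front so stale
clusters from an old snapshot cannot show through):

#include <stdio.h>

#define CHUNK 4096

static int zero_vmstate_area(FILE *f, long size)
{
    static const char zeros[CHUNK];     /* zero-initialized */
    long done = 0;

    if (fseek(f, 0, SEEK_SET) != 0) {
        return -1;
    }
    while (done < size) {
        size_t n = (size - done) < CHUNK ? (size_t)(size - done) : CHUNK;

        if (fwrite(zeros, 1, n, f) != n) {
            return -1;
        }
        done += n;
    }
    return 0;
}

int main(void)
{
    FILE *f = fopen("vmstate.img", "w+b");

    if (!f) {
        return 1;
    }
    /* E.g. reserve and zero space for a 128MB guest's pages. */
    zero_vmstate_area(f, 128L * 1024 * 1024);
    fclose(f);
    return 0;
}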

   2. In patch 3 it saves contents according to address, regardless of
zero pages, so the size of the saved vmstate grows. In my test a 128MB
guest took about a 21MB internal snapshot before, and always takes
137MB this way.

   Although it has the above issues, I'd like to send the RFC first
to see if this is a good approach. The next steps will be to make
savevm live and to save vmstate to image files.

   About issue 2, it will be OK if we save vmstate to external image
files, such as a qcow2 file, which may handle the duplicated zeros (I
guess so). In the internal snapshot case, the qcow2 internal snapshot
needs to be enhanced to allow storing zeros with little space.

Yes, we can use the qcow2 zero flag for this. It works at qcow2
cluster granularity (i.e. 64k by default), which I hope should be
sufficient.

Kevin
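
For what the detection side could look like: a simple sketch of
checking whether a cluster-sized region is all zeros, so the writer
could mark it with the qcow2 zero flag instead of storing 64K of zero
bytes (QEMU has a buffer_is_zero() helper for this kind of check; the
code below is a plain illustration, not that implementation):

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define CLUSTER_SIZE (64 * 1024)    /* qcow2 default cluster size */

static bool cluster_is_zero(const uint8_t *buf)
{
    size_t i;

    for (i = 0; i < CLUSTER_SIZE; i++) {
        if (buf[i] != 0) {
            return false;
        }
    }
    return true;
}

int main(void)
{
    static uint8_t cluster[CLUSTER_SIZE];  /* zero-initialized */

    printf("cluster is zero: %d\n", cluster_is_zero(cluster));
    return 0;
}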



--
Best Regards

Wenchao Xia



