From: Juan Quintela
Subject: Re: [Qemu-devel] [RFC] Split migration bitmaps by ramblock
Date: Wed, 29 Mar 2017 10:51:47 +0200
User-agent: Gnus/5.13 (Gnus v5.13) Emacs/25.1 (gnu/linux)

"Dr. David Alan Gilbert" <address@hidden> wrote:
> * Juan Quintela (address@hidden) wrote:
>> Note that there are two reasons for this: ARM and PPC do things like
>> running guests with 4kb pages on hosts with 16/64kb pages, and then
>> we have HugePages.  Note all the workarounds that postcopy has to do
>> to work at HugePage size.
>
> There are some fun problems with changing the bitmap page size;
> off the top of my head, the ones I can remember include:
>     a) I'm sure I've seen rare cases where a target page is marked as
>        dirty inside a host page; I'm guessing that was qemu's doing, but
>        there are more subtle cases, e.g. running a 4kb guest on a 64kb
>        host; it's legal - and 4kb Power guests used to exist; I think
>        in those cases you see KVM only marking one target page as dirty.

        /*
         * bitmap-traveling is faster than memory-traveling (for addr...)
         * especially when most of the memory is not dirty.
         */
        for (i = 0; i < len; i++) {
            if (bitmap[i] != 0) {
                c = leul_to_cpu(bitmap[i]);
                do {
                    j = ctzl(c);          /* lowest set bit in this word */
                    c &= ~(1ul << j);
                    /*
                     * hpratio = host page size / TARGET_PAGE_SIZE, so one
                     * host-page bit dirties hpratio target pages at once.
                     */
                    page_number = (i * HOST_LONG_BITS + j) * hpratio;
                    addr = page_number * TARGET_PAGE_SIZE;
                    ram_addr = start + addr;
                    cpu_physical_memory_set_dirty_range(ram_addr,
                                       TARGET_PAGE_SIZE * hpratio, clients);
                } while (c != 0);
            }
        }


This is the code that we end up using when we are synchronizing from
KVM, so if we don't have all target pages of a host page set to one (or
zero), I think we are doing something wrong, no?  Or am I
misunderstanding the code?
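
To make that fan-out concrete, here is a tiny standalone sketch (not
QEMU code; mark_target_pages_dirty() and the page sizes are made up for
illustration): each set bit from KVM covers one host page, and we
expand it into hpratio target-page entries, assuming 4kb target pages
on a 64kb host.

    #include <stdio.h>

    #define TARGET_PAGE_SIZE 4096UL    /* 4kb guest pages  */
    #define HOST_PAGE_SIZE   65536UL   /* 64kb host pages  */
    #define HPRATIO          (HOST_PAGE_SIZE / TARGET_PAGE_SIZE)

    /* Hypothetical helper: mark 'count' target pages dirty from 'first'. */
    static void mark_target_pages_dirty(unsigned long first,
                                        unsigned long count)
    {
        printf("dirty target pages %lu..%lu\n", first, first + count - 1);
    }

    int main(void)
    {
        unsigned long kvm_bitmap = 0x5;    /* host pages 0 and 2 dirty */
        unsigned long c = kvm_bitmap;

        while (c != 0) {
            int j = __builtin_ctzl(c);     /* lowest set (dirty) bit   */
            c &= ~(1UL << j);
            /* One host-page bit fans out to HPRATIO target pages. */
            mark_target_pages_dirty(j * HPRATIO, HPRATIO);
        }
        return 0;
    }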


>     b) Are we required to support migration across hosts of different
>        page sizes, and if we do that, what size should a bit represent?
>        People asked about it during postcopy, but I think it's
>        restricted to matching sizes.  I don't think precopy has any
>        requirement for matching host page sizes at the moment.  64bit
>        ARM does 4k and 64k, and I think 16k was added later.

With current precopy, we should work independently of the host page size
(famous last words), and as a first step I will only send pages of
TARGET_PAGE_SIZE.  I will only change the bitmaps; we can add bigger
pages later.

>     c) Hugepages have similar issues; precopy doesn't currently have any
>        requirement for the hugepage selection on the two hosts to match,
>        but postcopy does.  Also you don't want a single dirty bit for a
>        1GB host hugepage if you can detect changes at a finer
>        granularity.

I agree here; I was thinking more about the Power/ARM case than the
HugePage case.  For the 2MB page we could think about doing it; for the
1GB case it is not gonna work.
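
To put rough numbers on it (assuming 4kb target pages; this is just
back-of-the-envelope, not QEMU code):

    #include <stdio.h>

    int main(void)
    {
        unsigned long target_page = 4096UL;
        unsigned long huge_2mb = 2UL << 20;
        unsigned long huge_1gb = 1UL << 30;

        /* With one dirty bit per host hugepage, a single dirtied byte
         * forces the whole hugepage to be resent. */
        printf("2MB granularity: %lu target pages per dirty bit\n",
               huge_2mb / target_page);    /* 512 */
        printf("1GB granularity: %lu target pages per dirty bit\n",
               huge_1gb / target_page);    /* 262144 */
        return 0;
    }

512 target pages per bit might be tolerable; a quarter of a million per
bit is not.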

Later, Juan.


