From: David Hildenbrand
Subject: Re: [PATCH v4 8/9] migration/ram: Factor out populating pages readable in ram_block_populate_pages()
Date: Fri, 3 Sep 2021 21:40:58 +0200
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101 Thunderbird/78.11.0

On 03.09.21 21:20, Peter Xu wrote:
> On Fri, Sep 03, 2021 at 09:58:06AM +0200, David Hildenbrand wrote:
>>> That'll be good enough for live snapshots, as uffd-wp works for zero pages;
>>> however, I'm just afraid it may stop working for some new users of it when
>>> zero pages won't suffice.

>> I thought about that as well. But snapshots/migration will read all
>> memory either way and consume real memory when there is no shared zero
>> page. So it's just shifting the point in time when we allocate all these
>> pages I guess.

>> ... thinking again, even when populating on shmem and friends there is
>> nothing stopping pages from getting mapped out again.
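
A minimal sketch (illustrative only, not the ram_block_populate_pages() code
from this series) of populating a range readable from userspace: one read
access per host page suffices, and on untouched private anonymous memory that
read maps the shared zero page, so no real memory is consumed until the first
write. The helper name populate_read_range() is made up for this example.

    #include <stddef.h>
    #include <unistd.h>

    /* Touch one byte per host page so every page is mapped readable.
     * On private anonymous memory this installs the shared zero page;
     * a later write (or a pagecache drop on shmem) can change that. */
    static void populate_read_range(void *start, size_t len)
    {
        const size_t pagesize = sysconf(_SC_PAGESIZE);
        volatile const char *p = start;

        for (size_t off = 0; off < len; off += pagesize) {
            (void)p[off];   /* read-only access, no write fault */
        }
    }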

>> What would happen when trying uffd-wp protection on a pte_none() in your
>> current shmem implementation? Will it look up whether there is something in
>> the page cache (not a hole) and set a PTE marker? Or will it simply skip, as
>> there is currently nothing in the page table? Or will it unconditionally
>> install a PTE marker, even if there is a hole?

> It (will - I haven't rebased and posted yet) sets a pte marker.  So uffd-wp
> will always work on read prefault regardless of memory type in the future.
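
For readers following along, a rough userspace sketch of issuing uffd-wp
protection over a range (assuming a kernel with userfaultfd write-protect
support; error handling omitted, and the helper name is made up). Whether a
pte_none() hole inside such a range ends up tracked is exactly the question
discussed above.

    #include <fcntl.h>
    #include <linux/userfaultfd.h>
    #include <sys/ioctl.h>
    #include <sys/syscall.h>
    #include <unistd.h>

    /* Register a range for uffd-wp and write-protect it.
     * Sketch only: every call here can fail and should be checked. */
    static int uffd_wp_range(void *addr, unsigned long len)
    {
        int uffd = syscall(__NR_userfaultfd, O_CLOEXEC | O_NONBLOCK);
        struct uffdio_api api = { .api = UFFD_API };
        struct uffdio_register reg = {
            .range = { .start = (unsigned long)addr, .len = len },
            .mode = UFFDIO_REGISTER_MODE_WP,
        };
        struct uffdio_writeprotect wp = {
            .range = { .start = (unsigned long)addr, .len = len },
            .mode = UFFDIO_WRITEPROTECT_MODE_WP,
        };

        ioctl(uffd, UFFDIO_API, &api);
        ioctl(uffd, UFFDIO_REGISTER, &reg);
        /* Writes to the range now raise write-protect fault events;
         * how pte_none() holes are handled depends on the kernel side. */
        return ioctl(uffd, UFFDIO_WRITEPROTECT, &wp);
    }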


>> Having an uffd-wp mode that doesn't require pre-population would really be
>> great. I remember you shared prototypes.

> Yes, I planned to do that after the shmem bits, because they have some
> conflicts. I don't want to add more churn to the current series either; it is
> already hard to push, which is very unfortunate.


Yeah ... alternatively, we could simply populate the shared zeropage on private
anonymous memory when trying to protect a pte_none(). That might actually be a
very elegant solution.
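
To make that concrete in terms of the sketches above (helper names are
illustrative): today the caller has to populate before protecting, whereas the
suggested behavior would let the protect step handle pte_none() holes on
private anonymous memory by itself.

    /* Current flow (sketch): ensure nothing in the range is pte_none(),
     * then write-protect it. With the suggested kernel behavior, the
     * populate step could be dropped for private anonymous memory. */
    populate_read_range(buf, len);
    uffd_wp_range(buf, len);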

--
Thanks,

David / dhildenb



