From: Dr. David Alan Gilbert
Subject: Re: [Qemu-devel] [PATCH 14/17] migration: Create thread infrastructure for multifd recv side
Date: Tue, 14 Feb 2017 11:34:22 +0000
User-agent: Mutt/1.7.1 (2016-10-04)

* Juan Quintela (address@hidden) wrote:
> "Dr. David Alan Gilbert" <address@hidden> wrote:
> > * Juan Quintela (address@hidden) wrote:
> >> We make the locking and the transfer of information specific, even if we
> >> are still receiving things through the main thread.
> >> 
> >> Signed-off-by: Juan Quintela <address@hidden>
> >> ---
> >>  migration/ram.c | 77 +++++++++++++++++++++++++++++++++++++++++++++++++--------
> >>  1 file changed, 67 insertions(+), 10 deletions(-)
> >> 
> >> diff --git a/migration/ram.c b/migration/ram.c
> >> index ca94704..4e530ea 100644
> >> --- a/migration/ram.c
> >> +++ b/migration/ram.c
> >> @@ -523,7 +523,7 @@ void migrate_multifd_send_threads_create(void)
> >>      }
> >>  }
> >> 
> >> -static int multifd_send_page(uint8_t *address)
> >> +static uint16_t multifd_send_page(uint8_t *address, bool last_page)
> >>  {
> >>      int i, j, thread_count;
> >>      bool found = false;
> >> @@ -538,8 +538,10 @@ static int multifd_send_page(uint8_t *address)
> >>      pages.address[pages.num] = address;
> >>      pages.num++;
> >> 
> >> -    if (pages.num < (pages.size - 1)) {
> >> -        return UINT16_MAX;
> >> +    if (!last_page) {
> >> +        if (pages.num < (pages.size - 1)) {
> >> +            return UINT16_MAX;
> >> +        }
> >>      }
> >
> > This should be in the previous patch?
> > (and the place that adds the last_page parameter below)?
> 
> ok.
> 
> >> @@ -2920,10 +2980,7 @@ static int ram_load(QEMUFile *f, void *opaque, int version_id)
> >> 
> >>          case RAM_SAVE_FLAG_MULTIFD_PAGE:
> >>              fd_num = qemu_get_be16(f);
> >> -            if (fd_num != 0) {
> >> -                /* this is yet an unused variable, changed later */
> >> -                fd_num = fd_num;
> >> -            }
> >> +            multifd_recv_page(host, fd_num);
> >
> > This is going to be quite tricky to fit into ram_load_postcopy in this
> > form; somehow it's going to have to find addresses to use for place page,
> > and with anything with a page size != target page size it gets messy.
> 
> What do you have in mind?

The problem is that for postcopy we read the data into a temporary buffer
and then make a system call to 'place' the page atomically in memory.
At the moment there's a single temporary buffer; for x86 this is easy -
read a page into the buffer, then place it.  For Power/ARM or hugepages we
read consecutive target pages into the temporary buffer and only at the end
of the host page do we place the whole host/huge page at once.
If you're reading multiple pages in parallel then you're going to need
to take care with multiple temporary buffers; having one hugepage/hostpage
per fd would probably be the easiest way.
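
[A rough sketch of that "one hugepage/hostpage buffer per fd" idea, in C.
Every name here (MultiFDPostcopyBuf, recv_target_page, place_page_atomically)
is made up for illustration and is not in the patch; in QEMU the actual
placement ends up as a UFFDIO_COPY on the userfaultfd.]

#include <stdint.h>
#include <string.h>

/* Hypothetical stand-in for the atomic page-placement wrapper. */
int place_page_atomically(void *host_addr, void *from, size_t size);

/* One staging buffer per receiving fd, sized to a whole host/huge page,
 * so channels reading in parallel never mix target pages in the same
 * temporary buffer. */
typedef struct {
    uint8_t *tmp_buf;         /* staging area: one host/huge page      */
    size_t   host_page_size;  /* 4K on x86, larger on Power/ARM/huge   */
    size_t   filled;          /* bytes accumulated so far              */
    void    *host_addr;       /* destination once the page is complete */
} MultiFDPostcopyBuf;

static int recv_target_page(MultiFDPostcopyBuf *b, const uint8_t *data,
                            size_t target_page_size)
{
    memcpy(b->tmp_buf + b->filled, data, target_page_size);
    b->filled += target_page_size;

    if (b->filled < b->host_page_size) {
        return 0;             /* host page not complete yet */
    }

    b->filled = 0;
    return place_page_atomically(b->host_addr, b->tmp_buf,
                                 b->host_page_size);
}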

A related thing to take care of is that when switching to postcopy mode
we probably need to sync all of the fds to make sure any outstanding RAM
load has completed before we start doing any postcopy magic.
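
[Equally illustrative: what such a "sync all the fds" barrier could look
like, written with plain pthreads and made-up names rather than QEMU's
QemuMutex/QemuCond and the real multifd recv state.]

#include <pthread.h>

/* Assumes each recv thread decrements *pending_pages under 'lock' and
 * signals 'done_cond' when it finishes a page; this waits until nothing
 * is left in flight before postcopy handling starts. */
static void multifd_recv_sync_all(pthread_mutex_t *lock,
                                  pthread_cond_t *done_cond,
                                  int *pending_pages)
{
    pthread_mutex_lock(lock);
    while (*pending_pages > 0) {
        pthread_cond_wait(done_cond, lock);
    }
    pthread_mutex_unlock(lock);
}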

Dave

> Later, Juan.
--
Dr. David Alan Gilbert / address@hidden / Manchester, UK


