From: Dr. David Alan Gilbert
Subject: Re: [Qemu-devel] [PATCH v7 32/42] Page request: Consume pages off the post-copy queue
Date: Wed, 16 Sep 2015 19:48:27 +0100
User-agent: Mutt/1.5.23 (2014-03-12)
* Amit Shah (address@hidden) wrote:
> On (Tue) 16 Jun 2015 [11:26:45], Dr. David Alan Gilbert (git) wrote:
> > From: "Dr. David Alan Gilbert" <address@hidden>
> >
> > When transmitting RAM pages, consume pages that have been queued by
> > MIG_RPCOMM_REQPAGE commands and send them ahead of normal page scanning.
>
> It's slightly confusing with 'consume': we're /servicing/ requests from
> the dest at the src here rather than /consuming/ pages sent by src at
> the dest. If you find 'service' better than 'consume', please update
> the commit msg+log.
'consume' is a fairly normal term for taking an item off a queue and
processing it.
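For readers skimming the archive: 'consume' here is the standard pop-and-process pattern, where the source holds a queue of page requests from the destination and detaches one entry each time it services one. A minimal stand-alone sketch; the struct and function names below are hypothetical stand-ins, not the series' actual request-list API:

```c
#include <stdlib.h>

/* Hypothetical stand-in for one queued page request from the destination. */
struct page_req {
    unsigned long offset;
    struct page_req *next;
};

/* Append a request at the tail so requests are serviced in FIFO order. */
static void queue_page_request(struct page_req **head, unsigned long offset)
{
    struct page_req *req = malloc(sizeof(*req));
    struct page_req **p = head;

    req->offset = offset;
    req->next = NULL;
    while (*p) {
        p = &(*p)->next;
    }
    *p = req;
}

/* "Consume": detach the head entry, hand its offset to the caller and free
 * the node, so each request is processed exactly once.  Returns 0 when the
 * queue is empty and the caller should fall back to the linear scan. */
static int unqueue_page_request(struct page_req **head, unsigned long *offset)
{
    struct page_req *req = *head;

    if (!req) {
        return 0;
    }
    *head = req->next;
    *offset = req->offset;
    free(req);
    return 1;
}
```

The 0-return mirrors how the quoted ram_find_and_save_block falls back to the background search when ram_save_unqueue_page finds nothing queued.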
> > Note:
> > a) After a queued page the linear walk carries on from after the
> > unqueued page; there is a reasonable chance that the destination
> > was about to ask for other nearby pages anyway.
> >
> > b) We have to be careful of any assumptions that the page walking
> > code makes; in particular it takes some shortcuts on its first linear
> > walk that break as soon as we do a queued page.
> >
> > c) We have to be careful to not break up host-page size chunks, since
> > this makes it harder to place the pages on the destination.
> >
> > Signed-off-by: Dr. David Alan Gilbert <address@hidden>
>
> Reviewed-by: Amit Shah <address@hidden>
>
> > +static int ram_save_host_page(MigrationState *ms, QEMUFile *f, RAMBlock *block,
> > + ram_addr_t *offset, bool last_stage,
> > + uint64_t *bytes_transferred,
> > + ram_addr_t dirty_ram_abs)
> > +{
> > + int tmppages, pages = 0;
> > + do {
> > + /* Check if the page is dirty and, if it is, send it */
> > + if (migration_bitmap_clear_dirty(dirty_ram_abs)) {
> > + if (compression_switch && migrate_use_compression()) {
> > + tmppages = ram_save_compressed_page(f, block, *offset,
> > + last_stage,
> > + bytes_transferred);
> > + } else {
> > + tmppages = ram_save_page(f, block, *offset, last_stage,
> > + bytes_transferred);
> > + }
>
> Something for the future: we should just have ram_save_page which does
> compression (or not); and even encryption (or not), and so on.
Yep, in my current world that's now a 'ram_save_host_page' function
that has that buried in it.
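The "one save routine that hides the choice" shape Amit describes could look like the sketch below; the names and the stub senders are hypothetical, not the functions QEMU ended up with:

```c
#include <stdbool.h>

/* Stub senders standing in for the plain / compressed page writers. */
static int send_plain(unsigned long offset)      { (void)offset; return 1; }
static int send_compressed(unsigned long offset) { (void)offset; return 2; }

/* Single entry point: callers just ask for "save this page" and the
 * transport details (compression today, perhaps encryption later) stay
 * buried here rather than in every caller. */
static int save_page_unified(bool use_compression, unsigned long offset)
{
    return use_compression ? send_compressed(offset) : send_plain(offset);
}
```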
> > +
> > + if (tmppages < 0) {
> > + return tmppages;
> > + } else {
> > + if (ms->sentmap) {
> > + set_bit(dirty_ram_abs >> TARGET_PAGE_BITS, ms->sentmap);
> > + }
> > + }
>
> This else could be dropped as the if stmt returns.
Done
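For completeness, the flattening Amit asked for is the usual early-return cleanup: once the error branch returns, the success code needs no 'else'. A stand-alone sketch with stand-in names (a plain unsigned long array plays the role of the sentmap):

```c
#include <limits.h>

/* On error, return immediately; the success path that follows used to be
 * wrapped in an 'else' block and can now sit at the outer level. */
static int record_sent(int tmppages, unsigned long *sentmap, unsigned long bit)
{
    if (tmppages < 0) {
        return tmppages;                /* propagate the error */
    }
    /* Success path: mark the page as sent in the bitmap. */
    sentmap[bit / (sizeof(unsigned long) * CHAR_BIT)] |=
        1UL << (bit % (sizeof(unsigned long) * CHAR_BIT));
    return tmppages;
}
```

The behaviour is identical to the if/else form; only the nesting changes.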
> > + pages += tmppages;
> > + }
> > + *offset += TARGET_PAGE_SIZE;
> > + dirty_ram_abs += TARGET_PAGE_SIZE;
> > + } while (*offset & (qemu_host_page_size - 1));
> > +
> > + /* The offset we leave with is the last one we looked at */
> > + *offset -= TARGET_PAGE_SIZE;
> > + return pages;
> > +}
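The do/while condition in the quoted code is what keeps host-page chunks intact: assuming qemu_host_page_size is a power of two, `*offset & (qemu_host_page_size - 1)` stays non-zero until the walk reaches the next host-page boundary. A sketch with fixed illustrative sizes (4 KiB target pages, 64 KiB host pages; these are examples, not QEMU constants) shows the iteration count:

```c
#define TARGET_PAGE_SIZE 4096UL
#define HOST_PAGE_SIZE   65536UL   /* illustrative 64 KiB host page */

/* Count how many target pages the do/while in ram_save_host_page would
 * visit starting from 'offset': it always stops exactly at the next
 * host-page boundary, never crossing into the following host page. */
static unsigned long target_pages_in_host_page(unsigned long offset)
{
    unsigned long pages = 0;

    do {
        pages++;
        offset += TARGET_PAGE_SIZE;
    } while (offset & (HOST_PAGE_SIZE - 1));
    return pages;
}
```

Starting mid-host-page, the loop only runs to the boundary, which is also why the caller then rewinds *offset by one TARGET_PAGE_SIZE so it records the last page actually examined.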
> > +
> > +/**
> > * ram_find_and_save_block: Finds a dirty page and sends it to f
> > *
> > * Called within an RCU critical section.
> > @@ -997,65 +1094,102 @@ err:
> > * @f: QEMUFile where to send the data
> > * @last_stage: if we are at the completion stage
> > * @bytes_transferred: increase it with the number of transferred bytes
> > + *
> > + * On systems where host-page-size > target-page-size it will send all the
> > + * pages in a host page that are dirty.
> > */
> >
> > static int ram_find_and_save_block(QEMUFile *f, bool last_stage,
> > uint64_t *bytes_transferred)
> > {
> > + MigrationState *ms = migrate_get_current();
> > RAMBlock *block = last_seen_block;
> > + RAMBlock *tmpblock;
> > ram_addr_t offset = last_offset;
> > + ram_addr_t tmpoffset;
> > bool complete_round = false;
> > int pages = 0;
> > - MemoryRegion *mr;
> > ram_addr_t dirty_ram_abs; /* Address of the start of the dirty page in
> > ram_addr_t space */
> >
> > - if (!block)
> > + if (!block) {
> > block = QLIST_FIRST_RCU(&ram_list.blocks);
> > + last_was_from_queue = false;
> > + }
> >
> > - while (true) {
> > - mr = block->mr;
> > - offset = migration_bitmap_find_and_reset_dirty(mr, offset,
> > - &dirty_ram_abs);
> > - if (complete_round && block == last_seen_block &&
> > - offset >= last_offset) {
> > - break;
> > - }
> > - if (offset >= block->used_length) {
> > - offset = 0;
> > - block = QLIST_NEXT_RCU(block, next);
> > - if (!block) {
> > - block = QLIST_FIRST_RCU(&ram_list.blocks);
> > - complete_round = true;
> > - ram_bulk_stage = false;
> > - if (migrate_use_xbzrle()) {
> > - /* If xbzrle is on, stop using the data compression at this
> > - * point. In theory, xbzrle can do better than compression.
> > - */
> > - flush_compressed_data(f);
> > - compression_switch = false;
> > - }
> > + while (true) { /* Until we send a block or run out of stuff to send */
> > + tmpblock = ram_save_unqueue_page(ms, &tmpoffset, &dirty_ram_abs);
> > +
> > + if (tmpblock) {
> > + /* We've got a block from the postcopy queue */
> > + trace_ram_find_and_save_block_postcopy(tmpblock->idstr,
> > + (uint64_t)tmpoffset,
> > + (uint64_t)dirty_ram_abs);
> > + /*
> > + * We're sending this page, and since it's postcopy nothing else
> > + * will dirty it, and we must make sure it doesn't get sent again
> > + * even if this queue request was received after the background
> > + * search already sent it.
> > + */
> > + if (!test_bit(dirty_ram_abs >> TARGET_PAGE_BITS,
> > + migration_bitmap)) {
> > + trace_ram_find_and_save_block_postcopy_not_dirty(
> > + tmpblock->idstr, (uint64_t)tmpoffset,
> > + (uint64_t)dirty_ram_abs,
> > + test_bit(dirty_ram_abs >> TARGET_PAGE_BITS, ms->sentmap));
> > +
> > + continue;
> > }
> > + /*
> > + * As soon as we start servicing pages out of order, we have
> > + * to kill the bulk stage, since the bulk stage assumes
> > + * (in migration_bitmap_find_and_reset_dirty) that every page
> > + * is dirty, and that's no longer true.
> > + */
> > + ram_bulk_stage = false;
> > + /*
> > + * We want the background search to continue from the queued page
> > + * since the guest is likely to want other pages near to the page
> > + * it just requested.
> > + */
> > + block = tmpblock;
> > + offset = tmpoffset;
> > } else {
> > - if (compression_switch && migrate_use_compression()) {
> > - pages = ram_save_compressed_page(f, block, offset,
> > last_stage,
> > - bytes_transferred);
> > - } else {
> > - pages = ram_save_page(f, block, offset, last_stage,
> > - bytes_transferred);
> > + MemoryRegion *mr;
> > + /* priority queue empty, so just search for something dirty */
> > + mr = block->mr;
> > + offset = migration_bitmap_find_dirty(mr, offset, &dirty_ram_abs);
> > + if (complete_round && block == last_seen_block &&
> > + offset >= last_offset) {
> > + break;
> > }
> > -
> > - /* if page is unmodified, continue to the next */
> > - if (pages > 0) {
> > - MigrationState *ms = migrate_get_current();
> > - if (ms->sentmap) {
> > - set_bit(dirty_ram_abs >> TARGET_PAGE_BITS, ms->sentmap);
> > + if (offset >= block->used_length) {
> > + offset = 0;
> > + block = QLIST_NEXT_RCU(block, next);
> > + if (!block) {
> > + block = QLIST_FIRST_RCU(&ram_list.blocks);
> > + complete_round = true;
> > + ram_bulk_stage = false;
> > + if (migrate_use_xbzrle()) {
> > + /* If xbzrle is on, stop using the data compression at
> > + * this point. In theory, xbzrle can do better than
> > + * compression.
> > + */
> > + flush_compressed_data(f);
> > + compression_switch = false;
> > + }
> > }
> > -
> > - last_sent_block = block;
> > - break;
> > + continue; /* pick an offset in the new block */
> > }
> > }
> > +
> > + pages = ram_save_host_page(ms, f, block, &offset, last_stage,
> > + bytes_transferred, dirty_ram_abs);
> > +
> > + /* if page is unmodified, continue to the next */
> > + if (pages > 0) {
> > + break;
> > + }
>
> This function could use splitting into multiple ones.
Done, a separate pair of patches is on list to do that split; please review.
Dave
>
>
> Amit
--
Dr. David Alan Gilbert / address@hidden / Manchester, UK