Re: [PATCH 1/5] migration: Rework migrate_send_rp_req_pages() function
From: Dr. David Alan Gilbert
Subject: Re: [PATCH 1/5] migration: Rework migrate_send_rp_req_pages() function
Date: Tue, 8 Sep 2020 10:18:05 +0100
User-agent: Mutt/1.14.6 (2020-07-11)
* Peter Xu (peterx@redhat.com) wrote:
> We duplicated the logic of maintaining the last_rb variable in both callers of
> this function.  Pass the *rb pointer into the function so that we can avoid
> duplicating that logic.  Once we have the rb pointer, it is also easy to drop
> the original 2nd & 4th parameters, because both of them (the name of the
> ramblock when needed, and the page size) can be fetched from the ramblock
> pointer.
>
> Signed-off-by: Peter Xu <peterx@redhat.com>
Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
> ---
> migration/migration.c | 26 ++++++++++++++++++--------
> migration/migration.h | 4 ++--
> migration/postcopy-ram.c | 24 ++----------------------
> 3 files changed, 22 insertions(+), 32 deletions(-)
>
> diff --git a/migration/migration.c b/migration/migration.c
> index 58a5452471..6761e3f233 100644
> --- a/migration/migration.c
> +++ b/migration/migration.c
> @@ -311,25 +311,35 @@ error:
> return ret;
> }
>
> -/* Request a range of pages from the source VM at the given
> - * start address.
> - * rbname: Name of the RAMBlock to request the page in, if NULL it's the same
> - *         as the last request (a name must have been given previously)
> +/* Request one page from the source VM at the given start address.
> + * rb: the RAMBlock to request the page in
> * Start: Address offset within the RB
> * Len: Length in bytes required - must be a multiple of pagesize
> */
> -int migrate_send_rp_req_pages(MigrationIncomingState *mis, const char *rbname,
> -                              ram_addr_t start, size_t len)
> +int migrate_send_rp_req_pages(MigrationIncomingState *mis, RAMBlock *rb,
> +                              ram_addr_t start)
> {
> uint8_t bufc[12 + 1 + 255]; /* start (8), len (4), rbname up to 256 */
> size_t msglen = 12; /* start + len */
> + size_t len = qemu_ram_pagesize(rb);
> enum mig_rp_message_type msg_type;
> + const char *rbname;
> + int rbname_len;
>
> *(uint64_t *)bufc = cpu_to_be64((uint64_t)start);
> *(uint32_t *)(bufc + 8) = cpu_to_be32((uint32_t)len);
>
> - if (rbname) {
> - int rbname_len = strlen(rbname);
> + /*
> +  * We maintain the last ramblock that we requested for page. Note that we
> +  * don't need locking because this function will only be called within the
> +  * postcopy ram fault thread.
> +  */
> + if (rb != mis->last_rb) {
> + mis->last_rb = rb;
> +
> + rbname = qemu_ram_get_idstr(rb);
> + rbname_len = strlen(rbname);
> +
> assert(rbname_len < 256);
>
> bufc[msglen++] = rbname_len;
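
To read the change in one piece: the hunk above ends at its last context line, so here is a sketch of the reworked function as it should look once the patch is applied. Everything after the bufc[msglen++] = rbname_len; line (copying the name into the buffer, picking MIG_RP_MSG_REQ_PAGES_ID versus MIG_RP_MSG_REQ_PAGES, and the final migrate_send_rp_message() call) is untouched by this patch and reconstructed from context, so treat that tail as an approximation rather than a verbatim copy of the tree:

int migrate_send_rp_req_pages(MigrationIncomingState *mis, RAMBlock *rb,
                              ram_addr_t start)
{
    uint8_t bufc[12 + 1 + 255]; /* start (8), len (4), rbname up to 256 */
    size_t msglen = 12; /* start + len */
    size_t len = qemu_ram_pagesize(rb);
    enum mig_rp_message_type msg_type;
    const char *rbname;
    int rbname_len;

    *(uint64_t *)bufc = cpu_to_be64((uint64_t)start);
    *(uint32_t *)(bufc + 8) = cpu_to_be32((uint32_t)len);

    /*
     * We maintain the last ramblock that we requested for page. Note that we
     * don't need locking because this function will only be called within the
     * postcopy ram fault thread.
     */
    if (rb != mis->last_rb) {
        mis->last_rb = rb;

        rbname = qemu_ram_get_idstr(rb);
        rbname_len = strlen(rbname);

        assert(rbname_len < 256);

        bufc[msglen++] = rbname_len;
        /* reconstructed from pre-existing code below this point */
        memcpy(bufc + msglen, rbname, rbname_len);
        msglen += rbname_len;
        msg_type = MIG_RP_MSG_REQ_PAGES_ID;
    } else {
        msg_type = MIG_RP_MSG_REQ_PAGES;
    }

    return migrate_send_rp_message(mis, msg_type, msglen, bufc);
}
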
> diff --git a/migration/migration.h b/migration/migration.h
> index ae497bd45a..ca8dc4c773 100644
> --- a/migration/migration.h
> +++ b/migration/migration.h
> @@ -328,8 +328,8 @@ void migrate_send_rp_shut(MigrationIncomingState *mis,
> uint32_t value);
> void migrate_send_rp_pong(MigrationIncomingState *mis,
> uint32_t value);
> -int migrate_send_rp_req_pages(MigrationIncomingState *mis, const char* rbname,
> -                              ram_addr_t start, size_t len);
> +int migrate_send_rp_req_pages(MigrationIncomingState *mis, RAMBlock *rb,
> +                              ram_addr_t start);
> void migrate_send_rp_recv_bitmap(MigrationIncomingState *mis,
> char *block_name);
> void migrate_send_rp_resume_ack(MigrationIncomingState *mis, uint32_t value);
> diff --git a/migration/postcopy-ram.c b/migration/postcopy-ram.c
> index 1bb22f2b6c..11a70441a6 100644
> --- a/migration/postcopy-ram.c
> +++ b/migration/postcopy-ram.c
> @@ -684,14 +684,7 @@ int postcopy_request_shared_page(struct PostCopyFD *pcfd, RAMBlock *rb,
> qemu_ram_get_idstr(rb), rb_offset);
> return postcopy_wake_shared(pcfd, client_addr, rb);
> }
> - if (rb != mis->last_rb) {
> - mis->last_rb = rb;
> - migrate_send_rp_req_pages(mis, qemu_ram_get_idstr(rb),
> - aligned_rbo, pagesize);
> - } else {
> - /* Save some space */
> - migrate_send_rp_req_pages(mis, NULL, aligned_rbo, pagesize);
> - }
> + migrate_send_rp_req_pages(mis, rb, aligned_rbo);
> return 0;
> }
>
> @@ -986,20 +979,7 @@ retry:
> * Send the request to the source - we want to request one
> * of our host page sizes (which is >= TPS)
> */
> - if (rb != mis->last_rb) {
> - mis->last_rb = rb;
> - ret = migrate_send_rp_req_pages(mis,
> - qemu_ram_get_idstr(rb),
> - rb_offset,
> - qemu_ram_pagesize(rb));
> - } else {
> - /* Save some space */
> - ret = migrate_send_rp_req_pages(mis,
> - NULL,
> - rb_offset,
> - qemu_ram_pagesize(rb));
> - }
> -
> + ret = migrate_send_rp_req_pages(mis, rb, rb_offset);
> if (ret) {
> /* May be network failure, try to wait for recovery */
> if (ret == -EIO && postcopy_pause_fault_thread(mis)) {
> --
> 2.26.2
>
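A side note on the buffer arithmetic in the hunk further up (the bufc[12 + 1 + 255] sizing and the fixed offsets 0, 8 and 12): the page request sent back to the source is just an 8-byte big-endian start offset, a 4-byte big-endian length, and, only when the ramblock differs from the previous request, a 1-byte name length followed by the ramblock idstr. Below is a stand-alone sketch of that layout; pack_req_pages() is a hypothetical helper invented purely for illustration, and plain shifts stand in for QEMU's cpu_to_be64()/cpu_to_be32():

#include <assert.h>
#include <stddef.h>
#include <stdint.h>
#include <string.h>

/*
 * Illustration only, not QEMU code.  Packs a page-request message with the
 * same layout as the bufc[] buffer above: start (8 bytes, big-endian),
 * len (4 bytes, big-endian), then optionally a 1-byte name length and the
 * ramblock name when the block differs from the last request.
 * Returns the number of bytes written (the msglen handed to the send path).
 */
static size_t pack_req_pages(uint8_t *buf, uint64_t start, uint32_t len,
                             const char *rbname /* NULL: same block as last time */)
{
    size_t msglen = 12; /* start + len */

    for (int i = 0; i < 8; i++) {        /* start, big-endian */
        buf[i] = (uint8_t)(start >> (56 - 8 * i));
    }
    for (int i = 0; i < 4; i++) {        /* len, big-endian */
        buf[8 + i] = (uint8_t)(len >> (24 - 8 * i));
    }
    if (rbname) {                        /* the "..._ID" flavour of the request */
        size_t namelen = strlen(rbname);

        assert(namelen < 256);           /* 1-byte length field, name at most 255 */
        buf[msglen++] = (uint8_t)namelen;
        memcpy(buf + msglen, rbname, namelen);
        msglen += namelen;
    }
    return msglen;
}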
--
Dr. David Alan Gilbert / dgilbert@redhat.com / Manchester, UK
- [PATCH 0/5] migration/postcopy: Sync faulted addresses after network recovered, Peter Xu, 2020/09/03
- [PATCH 1/5] migration: Rework migrate_send_rp_req_pages() function, Peter Xu, 2020/09/03
- Re: [PATCH 1/5] migration: Rework migrate_send_rp_req_pages() function, Dr. David Alan Gilbert, 2020/09/08
- [PATCH 2/5] migration: Introduce migrate_send_rp_message_req_pages(), Peter Xu, 2020/09/03
- [PATCH 3/5] migration: Pass incoming state into qemu_ufd_copy_ioctl(), Peter Xu, 2020/09/03
- [PATCH 4/5] migration: Maintain postcopy faulted addresses, Peter Xu, 2020/09/03
- [PATCH 5/5] migration: Sync requested pages after postcopy recovery, Peter Xu, 2020/09/03