Re: [Qemu-devel] [PATCH 6/8] migration: implementation of hook_ram_sync
From: Dr. David Alan Gilbert
Subject: Re: [Qemu-devel] [PATCH 6/8] migration: implementation of hook_ram_sync
Date: Wed, 7 Oct 2015 15:03:41 +0100
User-agent: Mutt/1.5.24 (2015-08-30)
* Denis V. Lunev (address@hidden) wrote:
> From: Igor Redko <address@hidden>
>
> The key feature of the test transport is receiving information
> about dirty memory. The qemu_test_sync_hook() lets us reuse the
> migration infrastructure (code) for this purpose.
>
> All calls of this hook will be from ram_save_pending().
>
> At the first call of this hook we need to save the initial
> size of the VM's memory and put the migration thread to sleep
> for a decent period (the downtime, for example). During this
> period the guest will dirty memory.
>
> On the second and last call we make our estimate of the dirty
> bytes rate, assuming that the time between two synchronizations
> of the dirty bitmap differs negligibly from the downtime.
>
> An alternative to this approach is receiving information about
> the size of the data “transmitted” through the transport. However,
> that way creates large time and memory overheads:
> 1/ Transmitted guest memory pages are copied to the QEMUFile buffer
> (~8 sec per 4GB VM)
> 2/ Dirty memory pages are processed one by one (~60 msec per 4GB VM)
That's not true, for two reasons:
1) As long as you register a writev_buffer method on the QEMUFile,
RAM pages get added using add_to_iovec rather than actually
copying the data; all the other stuff does still go the copy
path (as do the page headers).
2) If you make it look like the RDMA transport and register the
'save_page' hook, I think the overhead is even smaller.
Dave
>
> Signed-off-by: Igor Redko <address@hidden>
> Reviewed-by: Anna Melekhova <address@hidden>
> Signed-off-by: Denis V. Lunev <address@hidden>
> ---
> migration/migration.c | 8 ++++++++
> migration/test.c | 36 ++++++++++++++++++++++++++++++++++++
> 2 files changed, 44 insertions(+)
>
> diff --git a/migration/migration.c b/migration/migration.c
> index d6cb3e2..3182e15 100644
> --- a/migration/migration.c
> +++ b/migration/migration.c
> @@ -1058,6 +1058,14 @@ static void *migration_thread(void *opaque)
> MIGRATION_STATUS_FAILED);
> break;
> }
> +
> + if (migrate_is_test()) {
> + /* Since no data is transferred during estimation,
> + all the measurements below would be incorrect;
> + likewise there is no need for delays. */
> + continue;
> + }
> +
> current_time = qemu_clock_get_ms(QEMU_CLOCK_REALTIME);
> if (current_time >= initial_time + BUFFER_DELAY) {
> uint64_t transferred_bytes = qemu_ftell(s->file) - initial_bytes;
> diff --git a/migration/test.c b/migration/test.c
> index 8d06988..b4d0761 100644
> --- a/migration/test.c
> +++ b/migration/test.c
> @@ -18,6 +18,7 @@ typedef struct QEMUFileTest {
>
> static uint64_t transfered_bytes;
> static uint64_t initial_bytes;
> +static int sync_cnt;
>
> static ssize_t qemu_test_put_buffer(void *opaque, const uint8_t *buf,
> int64_t pos, size_t size)
> @@ -31,7 +32,41 @@ static int qemu_test_close(void *opaque)
> return 0;
> }
>
> +static int qemu_test_sync_hook(QEMUFile *f, void *opaque,
> + uint64_t flags, void *data)
> +{
> + static uint64_t dirtied_bytes;
> + static uint64_t sleeptime_mcs;
> + int64_t time_delta;
> + uint64_t remaining_bytes = *((uint64_t *) data);
> + MigrationState *s = (MigrationState *) opaque;
> + switch (sync_cnt++) {
> + case 0:
> + /* First call will be from ram_save_begin
> + * so we need to save initial size of VM memory
> + * and sleep for decent period (downtime for example). */
> + sleeptime_mcs = migrate_max_downtime()/1000;
> + initial_bytes = remaining_bytes;
> + usleep(sleeptime_mcs);
> + break;
> + case 1:
> + /* Second and last call.
> + * We assume that time between two synchronizations of
> + * dirty bitmap differs from downtime negligibly and
> + * make our estimation of dirty bytes rate. */
> + dirtied_bytes = remaining_bytes;
> + time_delta = sleeptime_mcs / 1000;
> + s->dirty_bytes_rate = dirtied_bytes * 1000 / time_delta;
> + return -42;
> + default:
> + /* All calls after second are errors */
> + return -1;
> + }
> + return 0;
> +}
> +
> static const QEMUFileOps test_write_ops = {
> + .hook_ram_sync = qemu_test_sync_hook,
> .put_buffer = qemu_test_put_buffer,
> .close = qemu_test_close,
> };
> @@ -41,6 +76,7 @@ static void *qemu_fopen_test(MigrationState *s, const char *mode)
> QEMUFileTest *t;
> transfered_bytes = 0;
> initial_bytes = 0;
> + sync_cnt = 0;
> if (qemu_file_mode_is_not_valid(mode)) {
> return NULL;
> }
> --
> 2.1.4
>
>
--
Dr. David Alan Gilbert / address@hidden / Manchester, UK