From: Juan Quintela
Subject: Re: [Qemu-devel] [PATCH 1/1] migration: fix expected_downtime
Date: Fri, 09 Oct 2015 11:08:00 +0200
User-agent: Gnus/5.13 (Gnus v5.13) Emacs/24.5 (gnu/linux)

"Dr. David Alan Gilbert" <address@hidden> wrote:
> * Igor Redko (address@hidden) wrote:
>> On 28.09.2015 22:22, Dr. David Alan Gilbert wrote:
>> >* Denis V. Lunev (address@hidden) wrote:
>> >>From: Igor Redko <address@hidden>
>> >>
>> >>To get this estimate, we must divide pending_size by bandwidth,
>> >>according to the description of expected-downtime ("qmp-commands.hx:3246"):
>> >>   "expected-downtime": only present while migration is active
>> >>               total amount in ms for downtime that was calculated on
>> >>               the last bitmap round (json-int)
>> >>
>> >>The previous version was just wrong because dirty_bytes_rate and
>> >>bandwidth are both measured in Bytes/ms, so dividing the first by the
>> >>second gives a dimensionless quantity.
>> >>As said in the description above, this value is shown during the
>> >>active migration phase and recalculated only after transferring all
>> >>memory, and only if that process took more than 1 sec.  So maybe
>> >>nobody just noticed the bug.
>> >
>> >While I agree the existing code looks wrong, I don't see how this is
>> >any more correct.
>> 
>> This patch aims to fix the units of expected_downtime. It is reasonable
>> that expected_downtime should be measured in milliseconds, while the
>> existing implementation lacks any units.
>
> I agree your units are correct where the old ones aren't; and I agree
> it needs fixing.
> However I'm worried about whether the value in your fix is correct.


The code (and the calculation) is as clear as mud.  I will try to
explain the reasoning behind the current code.


First of all, notice that we call this code under:

        if (current_time >= initial_time + BUFFER_DELAY) {
            /* ... the recalculation discussed below happens here ... */
        }

        i.e. we haven't recalculated things for a while, so we need to
        recalculate.  (This is not the place where we decide to move from
        the iterating stage to the completion stage; we have already
        decided that above.)

        So, we have already decided that migration is still going to
        take some time.
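
For reference, here is the surrounding block in migration_thread()
(migration.c), trimmed for clarity, so take it as a sketch rather than
the exact code:

    if (current_time >= initial_time + BUFFER_DELAY) {
        uint64_t transferred_bytes = qemu_ftell(s->file) - initial_bytes;
        uint64_t time_spent = current_time - initial_time;    /* in ms */
        double bandwidth = (double)transferred_bytes / time_spent;
                                                        /* in bytes/ms */
        /* max bytes we could send within max_downtime */
        max_size = bandwidth * migrate_max_downtime() / 1000000;

        /* if we haven't sent anything, we don't want to recalculate;
           10000 is a small enough number for our purposes */
        if (s->dirty_bytes_rate && transferred_bytes > 10000) {
            s->expected_downtime = s->dirty_bytes_rate / bandwidth;
        }

        qemu_file_reset_rate_limit(s->file);
        initial_time = current_time;
        initial_bytes = qemu_ftell(s->file);
    }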

check for s->dirty_bytes_rate:

  If this value is zero, we haven't done a whole round of migration yet,
  so we don't really know how much memory is dirty.
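
For reference, that value gets set inside migration_bitmap_sync()
(migration/ram.c), roughly like this (trimmed, so again a sketch):

    /* the rates are only recomputed once at least a second has passed */
    if (end_time > start_time + 1000) {
        s->dirty_pages_rate = num_dirty_pages_period * 1000
            / (end_time - start_time);
        s->dirty_bytes_rate = s->dirty_pages_rate * TARGET_PAGE_SIZE;
        start_time = end_time;
        num_dirty_pages_period = 0;
    }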

check for transferred_bytes > 10000:

  An arbitrary value; we were fixing the case where we were only able to
  transmit very little data for whatever reason (emphasis on whatever).
  If the number of bytes and/or the elapsed time is very small, we could
  get really, really big values here.


So, we decided to recalculate the expected_downtime value because:
- we have decided that it is still not time to finish;
- we have transferred some chunk of data since last time.


Now the current code:
    s->expected_downtime = s->dirty_bytes_rate / bandwidth;

And the proposed code is:
    s->expected_downtime = pending_size / bandwidth;

Let's start with why the proposed code is a bad idea:

    pending_size: how much data we know is dirty, but there can be more
    dirty data than that.  We would only know by doing a
    migration_bitmap_sync(), and we don't want to do that here because
    it is a very costly operation when our guest has hundreds of
    gigabytes of RAM.

    I.e. if we are at the end of a walk over all of RAM, the pending
    size can be 4MB, but there can be another 100MB of dirtied pages
    that we would only know about when we do a migration_bitmap_sync().

So, what is the best value that we have?  The best one is the value that
was there the last time we did a migration_bitmap_sync().  And what was
that value?  The amount of memory that was dirtied at that precise
moment; s->dirty_bytes_rate is that number (in bytes).

So, talking about the units: dirty_bytes_rate is the number of bytes
that got dirtied in one second.  In particular, it is also just a
number of bytes, so:

    number_bytes / (number_bytes/second) = seconds

So the units are right.  It could be a good idea to put some more
comments around the code, but I think this is the best value that we
can get at that point.
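
To make the arithmetic concrete, here is a small standalone program
with invented numbers (100MB dirtied during the last sync interval, and
a bandwidth of 125000 bytes/ms, about 1Gbit/s, measured the way the
migration code measures it):

    #include <stdio.h>
    #include <stdint.h>

    int main(void)
    {
        /* bytes dirtied during the last ~1 second sync interval */
        uint64_t dirty_bytes_rate = 100 * 1000 * 1000;  /* 100MB */
        /* current transfer rate, in bytes/ms (~1Gbit/s) */
        double bandwidth = 125000.0;

        /* bytes / (bytes/ms) = ms */
        printf("expected downtime: %.0f ms\n",
               dirty_bytes_rate / bandwidth);           /* prints 800 */
        return 0;
    }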

pending_size is *too* optimistic in general.

We recalculate it here, rather than keeping a fixed value since the
last migration_bitmap_sync(), because the network bandwidth can change.

Does that make it a bit clearer?


>> >  I think 'pending_size' is an estimate of the number of bytes left
>> >to transfer, the intention being that most of those are transferred
>> >prior to pausing the machine; if those are transferred before pausing
>> >then they aren't part of the downtime.
>> >
>> Yes, 'pending_size' is an estimate of the number of bytes left to transfer,
>> indeed.

It is the "minimum" number of bytes left to transfer.  It can be
greater.

I can be convinced to calculate the right value when the user does an
"info migrate", and then we wouldn't do any estimate.

>> But the condition:
>> >    if (s->dirty_bytes_rate && transferred_bytes > 10000) {
>> slightly modifies the meaning of pending_size correspondingly.
>> dirty_bytes_rate is set in migration_sync(), which is called when
>> pending_size < max_downtime * bandwidth.  This estimate is higher than
>> max_downtime by design

No, it is bigger than max_downtime with the bandwidth that we had at
that time.  With the current bandwidth, that may or may not still be
true.


static uint64_t ram_save_pending(QEMUFile *f, void *opaque, uint64_t max_size)
{
    uint64_t remaining_size;

    remaining_size = ram_save_remaining() * TARGET_PAGE_SIZE;

    if (remaining_size < max_size) {
        /* The estimate says we could finish inside max_downtime, so pay
           the cost of a bitmap sync to get an exact value. */
        qemu_mutex_lock_iothread();
        rcu_read_lock();
        migration_bitmap_sync();
        rcu_read_unlock();
        qemu_mutex_unlock_iothread();
        remaining_size = ram_save_remaining() * TARGET_PAGE_SIZE;
    }
    return remaining_size;
}

I can be convinced to change this function to return whether the
pending_size value is an estimate or the real one.
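
A minimal sketch of what that could look like (hypothetical, not in the
tree): add an out-parameter saying whether we just synced:

    /* Hypothetical variant: also report whether the returned value is
       exact (we just synced) or only a lower-bound estimate. */
    static uint64_t ram_save_pending(QEMUFile *f, void *opaque,
                                     uint64_t max_size, bool *is_exact)
    {
        uint64_t remaining_size;

        remaining_size = ram_save_remaining() * TARGET_PAGE_SIZE;
        *is_exact = false;

        if (remaining_size < max_size) {
            qemu_mutex_lock_iothread();
            rcu_read_lock();
            migration_bitmap_sync();
            rcu_read_unlock();
            qemu_mutex_unlock_iothread();
            remaining_size = ram_save_remaining() * TARGET_PAGE_SIZE;
            *is_exact = true;   /* freshly synced, so this is current */
        }
        return remaining_size;
    }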


> I don't think that check really modifies the meaning of pending_size;
> it's just a sanity check that means we don't start trying to predict downtime
> when we've not transmitted much yet.

The problem is that pending_size is only a real value when we have just
done a migration_bitmap_sync(); otherwise, it is merely "big enough".
If we know that we can transmit 1MB of data in max_downtime, and we
know that the amount of dirty data is bigger than that, why search for
the exact value (which, as said, is costly)?

>> >It feels that:
>> >    * If the guest wasn't dirtying pages, then you wouldn't have to
>> >      pause the guest; if it was just dirtying them a little then you
>> >      wouldn't have much to transfer after the pages you'd already
>> >      sent; so if the guest dirties pages fast then the estimate
>> >      should be larger; so 'dirty_bytes_rate' being on top of the
>> >      fraction feels right.

If we are here, it is because pending_size > max_size, so we *know*
that the guest is dirtying too many pages.  This value only exists to
let management apps know how big we guess the expected downtime is
going to be; we don't use it ourselves.
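
For context, the decision that got us here lives earlier in
migration_thread(); heavily trimmed, it looks roughly like:

    pending_size = qemu_savevm_state_pending(s->file, max_size);
    if (pending_size && pending_size >= max_size) {
        /* too much dirty data to fit in max_downtime: keep iterating */
        qemu_savevm_state_iterate(s->file);
    } else {
        /* small enough: stop the guest and send what is left */
        vm_stop_force_state(RUN_STATE_FINISH_MIGRATE);
        qemu_savevm_state_complete(s->file);
    }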


>> >
>> >    * If the bandwidth is higher, then the estimate should be smaller; so
>> >      'bandwidth' being on the bottom of the fraction feels right.

I think I explained this before.  I can try to improve the explanation
if it is not clear yet.


>> The 'expected_downtime' in the existing code takes two types of values:
>>   * positive - dirty_bytes_rate is higher than bandwidth. In this
>>     case migration doesn't complete.
>>   * zero - bandwidth is higher than dirty_bytes_rate. In this case
>>     migration is possible, but we don’t have the downtime value.

No, the real meanings are:
- positive: this is the best guess we have, using the dirty pages from
  the previous migration_bitmap_sync() and the current bandwidth.

- zero: we haven't yet completed a whole round over RAM, so we can only
  guess, really.

I hope I have shed some light here.

Later, Juan.


