From: Juan Quintela
Subject: Re: [Qemu-devel] [PATCH 30/51] ram: Move src_page_req* to RAMState
Date: Tue, 04 Apr 2017 19:42:46 +0200
User-agent: Gnus/5.13 (Gnus v5.13) Emacs/25.1 (gnu/linux)

"Dr. David Alan Gilbert" <address@hidden> wrote:
> * Juan Quintela (address@hidden) wrote:

Hi

>> @@ -1191,19 +1204,18 @@ static bool get_queued_page(RAMState *rs, MigrationState *ms,
>>   *
>>   * It should be empty at the end anyway, but in error cases there may
>>   * be some left.
>> - *
>> - * @ms: current migration state
>>   */
>> -void flush_page_queue(MigrationState *ms)
>> +void flush_page_queue(void)
>
> I'm not sure this is safe; it's called from migrate_fd_cleanup right at
> the end. If you do any finalisation/cleanup of the RAMState in
> ram_save_complete, then when is it safe to run this?

But, looking into it, I think we should be able to move this into
ram_save_cleanup(), no?

We don't need it after that, and as an added bonus we can remove the
export of it.
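Something like this, as a quick sketch only (it assumes ram_save_cleanup()
ends up wired as a save_cleanup hook that gets the RAMState; that wiring
isn't in this series yet):

    /* No longer exported; only the RAM cleanup path needs it.  The queue
     * should be empty at the end anyway, but in error cases there may be
     * some entries left to drop. */
    static void flush_page_queue(RAMState *rs)
    {
        struct RAMSrcPageRequest *mspr, *next_mspr;

        rcu_read_lock();
        QSIMPLEQ_FOREACH_SAFE(mspr, &rs->src_page_requests, next_req, next_mspr) {
            memory_region_unref(mspr->rb->mr);
            QSIMPLEQ_REMOVE_HEAD(&rs->src_page_requests, next_req);
            g_free(mspr);
        }
        rcu_read_unlock();
    }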

>> @@ -1260,16 +1272,16 @@ int ram_save_queue_pages(MigrationState *ms, const char *rbname,
>>          goto err;
>>      }
>>  
>> -    struct MigrationSrcPageRequest *new_entry =
>> -        g_malloc0(sizeof(struct MigrationSrcPageRequest));
>> +    struct RAMSrcPageRequest *new_entry =
>> +        g_malloc0(sizeof(struct RAMSrcPageRequest));
>>      new_entry->rb = ramblock;
>>      new_entry->offset = start;
>>      new_entry->len = len;
>>  
>>      memory_region_ref(ramblock->mr);
>> -    qemu_mutex_lock(&ms->src_page_req_mutex);
>> -    QSIMPLEQ_INSERT_TAIL(&ms->src_page_requests, new_entry, next_req);
>> -    qemu_mutex_unlock(&ms->src_page_req_mutex);
>> +    qemu_mutex_lock(&rs->src_page_req_mutex);
>> +    QSIMPLEQ_INSERT_TAIL(&rs->src_page_requests, new_entry, next_req);
>> +    qemu_mutex_unlock(&rs->src_page_req_mutex);
>
> Hmm, OK, where did it get its rs from?
> Anyway, the thing I needed to convince myself of was that there was any
> guarantee that RAMState would exist by the time the first request came
> in, something that we now need to be careful of.
> I think we're mostly OK; we call qemu_savevm_state_begin() at the top
> of migration_thread, so ram_save_setup should be done and allocate the
> RAMState before we get into the main loop, and thus before we ever look
> at the 'start_postcopy' flag, and thus before we ever ask the
> destination to send us stuff.
>
>>      rcu_read_unlock();
>>  
>>      return 0;
>> @@ -1408,7 +1420,7 @@ static int ram_find_and_save_block(RAMState *rs, QEMUFile *f, bool last_stage)
>>  
>>      do {
>>          again = true;
>> -        found = get_queued_page(rs, ms, &pss, &dirty_ram_abs);
>> +        found = get_queued_page(rs, &pss, &dirty_ram_abs);
>>  
>>          if (!found) {
>>              /* priority queue empty, so just search for something dirty */
>> @@ -1968,6 +1980,8 @@ static int ram_state_init(RAMState *rs)
>>  
>>      memset(rs, 0, sizeof(*rs));
>>      qemu_mutex_init(&rs->bitmap_mutex);
>> +    qemu_mutex_init(&rs->src_page_req_mutex);
>> +    QSIMPLEQ_INIT(&rs->src_page_requests);
>
> Similar question to above; that mutex is going to get reinit'd on a new
> migration, and it shouldn't be reinitialised without first being
> destroyed. Maybe make it a once.

Good catch. I think the easiest way is to just move it into RAMState
proper, init it here, and destroy it at cleanup?
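
I.e., roughly this (sketch only; using ram_save_cleanup() as the teardown
point is my assumption here, same as above):

    static int ram_state_init(RAMState *rs)
    {
        memset(rs, 0, sizeof(*rs));
        qemu_mutex_init(&rs->bitmap_mutex);
        qemu_mutex_init(&rs->src_page_req_mutex);
        QSIMPLEQ_INIT(&rs->src_page_requests);
        /* ... rest of the existing setup ... */
        return 0;
    }

    static void ram_save_cleanup(RAMState *rs)
    {
        flush_page_queue(rs);
        /* Paired with the init above, so a second migration starts from
         * freshly initialised mutexes instead of reinit'ing live ones. */
        qemu_mutex_destroy(&rs->src_page_req_mutex);
        qemu_mutex_destroy(&rs->bitmap_mutex);
    }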

Later, Juan.


