From: Li, Liang Z
Subject: Re: [Qemu-devel] [v6 08/14] migration: Add the core code of multi-thread compression
Date: Thu, 26 Mar 2015 02:37:59 +0000

> > --- a/arch_init.c
> > +++ b/arch_init.c
> > @@ -355,12 +355,33 @@ static DecompressParam *decomp_param;
> >  static QemuThread *decompress_threads;
> >  static uint8_t *compressed_data_buf;
> >
> > +static int do_compress_ram_page(CompressParam *param);
> > +
> >  static void *do_data_compress(void *opaque)
> >  {
> > -    while (!quit_comp_thread) {
> > +    CompressParam *param = opaque;
> >
> > -    /* To be done */
> What is the difference with changing this loop to:
> 
> 
> > +    while (!quit_comp_thread) {
> 
> Here we don't have quit_comp_thread protected by anything.

Yes, adding a lock to protect quit_comp_thread is not hard and would make the code
cleaner, but the lock hurts performance. That is why I chose not to protect it with
a lock or anything else.

Is there any problem with accessing quit_comp_thread without protection?
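
For reference, a mutex is not the only option here; below is a rough sketch of a plain
atomic quit flag, written with C11 atomics rather than QEMU's own helpers (the names only
mirror the patch, this is an illustration and not the actual code):

#include <stdatomic.h>
#include <stdbool.h>

/* Shared quit flag: atomic loads/stores keep the unlocked accesses
 * well-defined without taking a mutex on every loop iteration. */
static atomic_bool quit_comp_thread;

static void *do_data_compress_sketch(void *opaque)
{
    (void)opaque;
    /* Relaxed ordering is enough for a plain "please stop" flag; the
     * per-thread mutex/cond handshake still orders the actual work. */
    while (!atomic_load_explicit(&quit_comp_thread, memory_order_relaxed)) {
        /* ... wait for param->start and compress one page ... */
    }
    return NULL;
}

static void terminate_compression_threads_sketch(void)
{
    atomic_store_explicit(&quit_comp_thread, true, memory_order_relaxed);
    /* then wake every compression thread via its condition variable,
     * as the patch already does */
}

A relaxed atomic load is essentially free compared to a lock/unlock pair per iteration.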

> 
> > +        qemu_mutex_lock(&param->mutex);
> > +        /* Re-check the quit_comp_thread in case of
> > +         * terminate_compression_threads is called just before
> > +         * qemu_mutex_lock(&param->mutex) and after
> > +         * while(!quit_comp_thread), re-check it here can make
> > +         * sure the compression thread terminate as expected.
> > +         */
> > +        while (!param->start && !quit_comp_thread) {
> 
> Here and in the next use it is protected by param->mutex, but param is per
> compression thread, so it is not really protected.
> 
> > +            qemu_cond_wait(&param->cond, &param->mutex);
> > +        }
> > +        if (!quit_comp_thread) {
> > +            do_compress_ram_page(param);
> > +        }
> > +        param->start = false;
> 
> param->start is protected by param->mutex everywhere
> 
> > +        qemu_mutex_unlock(&param->mutex);
> >
> > +        qemu_mutex_lock(comp_done_lock);
> > +        param->done = true;
> 
> param->done protected by comp_done_lock
> 
> > +        qemu_cond_signal(comp_done_cond);
> > +        qemu_mutex_unlock(comp_done_lock);
> >      }
> >
> >      return NULL;
> > @@ -368,9 +389,15 @@ static void *do_data_compress(void *opaque)
> >
> >  static inline void terminate_compression_threads(void)
> >  {
> > -    quit_comp_thread = true;
> > +    int idx, thread_count;
> >
> > -    /* To be done */
> > +    thread_count = migrate_compress_threads();
> > +    quit_comp_thread = true;
> 
> quit_comp_thread not protected again.
> 
> > +    for (idx = 0; idx < thread_count; idx++) {
> > +        qemu_mutex_lock(&comp_param[idx].mutex);
> > +        qemu_cond_signal(&comp_param[idx].cond);
> > +        qemu_mutex_unlock(&comp_param[idx].mutex);
> > +    }
> >  }
> >
> >  void migrate_compress_threads_join(void)
> > @@ -420,6 +447,7 @@ void migrate_compress_threads_create(void)
> >           * it's ops to empty.
> >           */
> >          comp_param[i].file = qemu_fopen_ops(NULL, &empty_ops);
> > +        comp_param[i].done = true;
> >          qemu_mutex_init(&comp_param[i].mutex);
> >          qemu_cond_init(&comp_param[i].cond);
> >          qemu_thread_create(compress_threads + i, "compress",
> > @@ -829,6 +857,97 @@ static int ram_save_page(QEMUFile *f, RAMBlock* block, ram_addr_t offset,
> >      return pages;
> >  }
> >
> > +static int do_compress_ram_page(CompressParam *param)
> > +{
> > +    int bytes_sent, blen;
> > +    uint8_t *p;
> > +    RAMBlock *block = param->block;
> > +    ram_addr_t offset = param->offset;
> > +
> > +    p = memory_region_get_ram_ptr(block->mr) + (offset & TARGET_PAGE_MASK);
> > +
> > +    bytes_sent = save_page_header(param->file, block, offset |
> > +                                  RAM_SAVE_FLAG_COMPRESS_PAGE);
> > +    blen = qemu_put_compression_data(param->file, p, TARGET_PAGE_SIZE,
> > +                                     migrate_compress_level());
> > +    bytes_sent += blen;
> > +    atomic_inc(&acct_info.norm_pages);
> > +
> > +    return bytes_sent;
> > +}
> > +
> > +static inline void start_compression(CompressParam *param)
> > +{
> > +    param->done = false;
> 
> Not protected (well, its caller has protected it with comp_done_lock).
> 
> > +    qemu_mutex_lock(&param->mutex);
> > +    param->start = true;
> > +    qemu_cond_signal(&param->cond);
> > +    qemu_mutex_unlock(&param->mutex);
> > +}
> > +
> > +
> > +static uint64_t bytes_transferred;
> > +
> > +static void flush_compressed_data(QEMUFile *f)
> > +{
> > +    int idx, len, thread_count;
> > +
> > +    if (!migrate_use_compression()) {
> > +        return;
> > +    }
> > +    thread_count = migrate_compress_threads();
> > +    for (idx = 0; idx < thread_count; idx++) {
> > +        if (!comp_param[idx].done) {
> 
> done is not protected here.

My intention is an optimization: check done without the lock first, and only take
comp_done_lock to wait when the thread still looks busy.
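
A minimal sketch of that fast-path shape, reduced to plain pthreads (the names are
illustrative only, not the patch's API):

#include <pthread.h>
#include <stdbool.h>

static pthread_mutex_t comp_done_lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t comp_done_cond = PTHREAD_COND_INITIALIZER;

struct comp_param_sketch {
    bool done;          /* set by the worker once the page is compressed */
};

/* Fast path: an unlocked peek at 'done'.  The peek may read a stale
 * value, which is why 'done' is re-checked under comp_done_lock before
 * (and while) waiting on the condition variable. */
static void wait_for_done_sketch(struct comp_param_sketch *p)
{
    if (!p->done) {
        pthread_mutex_lock(&comp_done_lock);
        while (!p->done) {
            pthread_cond_wait(&comp_done_cond, &comp_done_lock);
        }
        pthread_mutex_unlock(&comp_done_lock);
    }
}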

> 
> > +            qemu_mutex_lock(comp_done_lock);
> > +            while (!comp_param[idx].done && !quit_comp_thread) {
> 
> 
> Now, it is under comp_done_lock.  But none of its other uses is protected by
> it.
> 
> And here done is protected by comp_done_cond
> 
> 
> > +                qemu_cond_wait(comp_done_cond, comp_done_lock);
> > +            }
> > +            qemu_mutex_unlock(comp_done_lock);
> > +        }
> > +        if (!quit_comp_thread) {
> 
> Here, it is unprotected again.

I have tried the approach you suggested in your comments on the version 5 patch, but the
performance dropped a lot: with my version the migration finishes in about 20s, while
after the change it takes 30+s. Maybe I did something incorrectly there.

So in my implementation I tried to avoid taking the lock as much as possible.
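
For comparison, the fully locked shape, with no unlocked peek, would be roughly the sketch
below (reusing the declarations from the previous snippet); I can't say for sure this alone
explains the 20s vs. 30+s difference:

/* No fast path: always take comp_done_lock before looking at 'done',
 * even when the worker finished long ago. */
static void wait_for_done_locked_sketch(struct comp_param_sketch *p)
{
    pthread_mutex_lock(&comp_done_lock);
    while (!p->done) {
        pthread_cond_wait(&comp_done_cond, &comp_done_lock);
    }
    pthread_mutex_unlock(&comp_done_lock);
}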


