qemu-devel
From: Li, Liang Z
Subject: Re: [Qemu-devel] [v2 2/2] migration: Implement multiple compression threads
Date: Fri, 21 Nov 2014 07:01:29 +0000

> > +static void migrate_put_be32(MigBuf *f, unsigned int v)
> > +{
> > +    migrate_put_byte(f, v >> 24);
> > +    migrate_put_byte(f, v >> 16);
> > +    migrate_put_byte(f, v >> 8);
> > +    migrate_put_byte(f, v);
> > +}
> > +
> > +static void migrate_put_be64(MigBuf *f, uint64_t v)
> > +{
> > +    migrate_put_be32(f, v >> 32);
> > +    migrate_put_be32(f, v);
> > +}
> > +

> This feels like you're doing something very similar to 
> the buffered file code that recently went in; could you
> reuse qemu_bufopen or the QEMUSizedBuffer?
> I think if you could use the qemu_buf somehow (maybe with
> modifications?) then you could avoid a lot of the 'if'd
> code below, because you'd always be working with a QEMUFile,
> it would just be a different QEMUFile.

I will do that in the next version of the patch.

> > +static void *do_data_compress(void *opaque)
> > +{
> > +    compress_param *param = opaque;
> > +    while (!quit_thread) {
> > +        if (param->state == COM_START) {
> > +            save_compress_ram_page(param);
> > +            param->state = COM_DONE;
> > +         } else {
> > +             g_usleep(1);
> 
> There has to be a better way than having your thread spin
> with sleeps; qemu_event or semaphore or something?

I will use QemuCond and QemuMutex instead.

> > +static int save_compress_ram_page(compress_param *param)
> > +{
> > +    int bytes_sent = param->bytes_sent;
> > +    int blen = COMPRESS_BUF_SIZE;
> > +    int cont = param->cont;
> > +    uint8_t *p = param->p;
> > +    int ret = param->ret;
> > +    RAMBlock *block = param->block;
> > +    ram_addr_t offset = param->offset;
> > +    bool last_stage = param->last_stage;
> > +    /* In doubt sent page as normal */
> > +    XBZRLE_cache_lock();
> > +    ram_addr_t current_addr = block->offset + offset;
> > +    if (ret != RAM_SAVE_CONTROL_NOT_SUPP) {
> > +        if (ret != RAM_SAVE_CONTROL_DELAYED) {
> > +            if (bytes_sent > 0) {
> > +                atomic_inc(&acct_info.norm_pages);
> > +             } else if (bytes_sent == 0) {
> > +                atomic_inc(&acct_info.dup_pages);
> > +             }
> > +        }
> > +    } else if (is_zero_range(p, TARGET_PAGE_SIZE)) {
> > +        atomic_inc(&acct_info.dup_pages);
> > +        bytes_sent = migrate_save_block_hdr(&param->migbuf, block, offset, 
> > cont,
> > +                             RAM_SAVE_FLAG_COMPRESS);
> > +        migrate_put_byte(&param->migbuf, 0);
> > +        bytes_sent++;
> > +        /* Must let xbzrle know, otherwise a previous (now 0'd) cached
> > +         * page would be stale
> > +         */
> > +        xbzrle_cache_zero_page(current_addr);
> > +    } else if (!param->bulk_stage && migrate_use_xbzrle()) {
> > +        bytes_sent = save_xbzrle_page(&param->migbuf, &p, current_addr, 
> > block,
> > +                              offset, cont, last_stage, true);
> > +    }
> > +    XBZRLE_cache_unlock();
> > +    /* XBZRLE overflow or normal page */

> I wonder if it's worth the complexity of doing the zero check
> and the xbzrle if you're already doing compression?  I assume
> zlib is going to handle a zero page reasonably well anyway?

Yes, the tests show it is worth it: with the zero check the total migration time is 
shorter. The reason for keeping the xbzrle path is that I want compression to 
co-work with xbzrle, since xbzrle reduces the amount of data transferred.

> > +static uint64_t bytes_transferred;
> > +
> > +static void flush_compressed_data(QEMUFile *f)
> > +{
> > +    int idx;
> > +    if (!migrate_use_compress()) {
> > +        return;
> > +    }
> > +
> > +    for (idx = 0; idx < compress_thread_count; idx++) {
> > +        while (comp_param[idx].state != COM_DONE) {
> > +            g_usleep(0);
> > +        }

> Again, some type of event/semaphore rather than busy sleeping;
> and also I don't understand how the different threads keep everything
> in order - can you add some comments (or maybe notes in the docs)
> that explain how it all works?

I will add some comments in the next version.

> >              }
> >          } else {
> > -            bytes_sent = ram_save_page(f, block, offset, last_stage);
> > -
> > -            /* if page is unmodified, continue to the next */
> > -            if (bytes_sent > 0) {
> > -                last_sent_block = block;
> > -                break;
> > +            if (!migrate_use_compress()) {
> > +                bytes_sent = ram_save_page(f, block, offset, last_stage);
> > +                /* if page is unmodified, continue to the next */
> > +                if (bytes_sent > 0) {
> > +                    last_sent_block = block;
> > +                    break;
> > +                }
> > +            } else {
> > +                cont = (block == last_sent_block) ? RAM_SAVE_FLAG_CONTINUE 
> > : 0;
> > +                p = memory_region_get_ram_ptr(block->mr) + offset;
> > +                ret = ram_control_save_page(f, block->offset,
> > +                           offset, TARGET_PAGE_SIZE, &len);
> > +                if ((!ram_bulk_stage && migrate_use_xbzrle()) || cont == 
> > 0) {
> > +                    if (cont == 0) {
> > +                        flush_compressed_data(f);
> > +                    }
> > +                    set_common_compress_params(&comp_param[0],
> > +                        ret, len, block, offset, last_stage, cont,
> > +                        p, ram_bulk_stage);
> > +                    bytes_sent = save_compress_ram_page(&comp_param[0]);
> > +                    if (bytes_sent > 0) {
> > +                        qemu_put_buffer(f, comp_param[0].migbuf.buf,
> > +                            comp_param[0].migbuf.buf_index);
> > +                        comp_param[0].migbuf.buf_index = 0;
> > +                        last_sent_block = block;
> > +                        break;
> > +                    }

> Is there no way to move this down into your save_compress_ram_page?
> When I split the code into ram_find_and_save_block and ram_save_page
> a few months ago, it meant that 'ram_find_and_save_block' only really
> did the work of finding what to send, and 'ram_save_page' figured out
> everything to do with sending it; it would be nice to keep all
> the details of sending it separate still.

I will rewrite the code.

> Since ram_bulk_stage is a static global in this file, why bother passing
> it into the 'compress_params'?  I think you could probably avoid a lot
> of things like that.

Passing ram_bulk_stage into compress_params is needed for correctness: the global 
can change while a compression thread is still working on a queued page, so each 
thread has to use the value captured when the page was handed over.

Liang


