Re: [Qemu-devel] [PATCH] migration: vectorize is_dup_page

From: Paolo Bonzini
Subject: Re: [Qemu-devel] [PATCH] migration: vectorize is_dup_page
Date: Tue, 20 Dec 2011 16:45:58 +0100
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:7.0.1) Gecko/20110930 Thunderbird/7.0.1

On 12/20/2011 04:24 PM, Avi Kivity wrote:
> On 12/06/2011 07:25 PM, Paolo Bonzini wrote:
>> is_dup_page is already proceeding in 32-bit chunks.  Changing it to 16
>> bytes using Altivec or SSE is easy, and provides a noticeable improvement.
>> Pierre Riteau measured 30->25 seconds on a 16GB guest, I measured 4.6->3.9
>> seconds on a 6GB guest (best of three runs for me; dunno for Pierre).
>> Both represent roughly a 15% improvement.

>> I tried playing with non-temporal prefetches, but I did not get any
>> improvement (though I did get fewer cache misses, so the patch was doing
>> its job).

> It's worthwhile anyway IMO.

The problem is that if the page is not a dup (the common case), you get all the cache misses anyway when you send it over the socket. So what I did was add a 4k bounce buffer (shared by all pages) and make is_dup_page copy each page into it. Because the prefetches are non-temporal, only 4k of cache is used. But the code is more complex and less reusable; it incurs an extra copy, and it cannot leave is_dup_page early.
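For illustration, the bounce-buffer variant described above might look like the following minimal sketch (SSE2 assumed; the function name is_dup_page_copy, the bounce array, and the prefetch distance are all hypothetical here, not code from the actual experiment):

```c
#include <emmintrin.h>  /* SSE2 intrinsics; _mm_prefetch comes via xmmintrin.h */
#include <stdint.h>

#define PAGE_SIZE 4096

/* Shared 4k bounce buffer: every page is copied into the same spot,
 * so with non-temporal prefetches only ~4k of cache is consumed. */
static uint8_t bounce[PAGE_SIZE] __attribute__((aligned(16)));

static int is_dup_page_copy(const uint8_t *page)
{
    const __m128i *p = (const __m128i *)page;
    __m128i *b = (__m128i *)bounce;
    /* broadcast the first byte into every lane as the reference value */
    __m128i val = _mm_set1_epi8(*(const char *)page);
    int dup = 1;

    for (int i = 0; i < PAGE_SIZE / 16; i++) {
        /* non-temporal prefetch a few chunks ahead */
        _mm_prefetch((const char *)(p + i + 4), _MM_HINT_NTA);
        __m128i v = _mm_load_si128(p + i);
        _mm_store_si128(b + i, v);  /* unconditional copy for the sender */
        if (_mm_movemask_epi8(_mm_cmpeq_epi8(v, val)) != 0xffff) {
            dup = 0;                /* note: no break -- copy must finish */
        }
    }
    return dup;
}
```

Note how the unconditional copy forces the loop to run to completion even once a mismatch is found, which is exactly the lost-early-exit cost mentioned above.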

>> +static int is_dup_page(uint8_t *page)
>> -    uint32_t val = ch << 24 | ch << 16 | ch << 8 | ch;
>> -    uint32_t *array = (uint32_t *)page;
>> +    VECTYPE *p = (VECTYPE *)page;
>> +    VECTYPE val = SPLAT(p);
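For reference, the VECTYPE/SPLAT macros in the hunk above could be instantiated for SSE2 roughly as follows (a sketch: the names match the hunk, but these exact definitions, including the ALL_EQ helper, are an assumption here; the patch also targets Altivec with vector types):

```c
#include <emmintrin.h>  /* SSE2 intrinsics */

/* Possible SSE2 instantiation of the patch's vector macros.  SPLAT
 * broadcasts the page's first byte into every lane, so comparing each
 * 16-byte chunk against it detects a page filled with one byte value. */
#define VECTYPE        __m128i
#define SPLAT(p)       _mm_set1_epi8(*(const char *)(p))
#define ALL_EQ(v1, v2) (_mm_movemask_epi8(_mm_cmpeq_epi8(v1, v2)) == 0xffff)
```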

> I think you can drop the SPLAT and just compare against zero.  Full page
> repeats of anything but zero are unlikely, so we can simplify the code a
> bit here.  If we do go with non-temporal loads, it saves an additional miss.

Yeah, with non-temporal loads that would make sense.
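A zero-only check along the lines Avi suggests could be sketched as follows (again assuming SSE2; is_zero_page is an illustrative name here, not the patch's code):

```c
#include <emmintrin.h>  /* SSE2 intrinsics */
#include <stdint.h>

#define PAGE_SIZE 4096

/* Detect only the all-zero page, the overwhelmingly common duplicate.
 * No SPLAT needed: OR all chunks together and test the result once. */
static int is_zero_page(const uint8_t *page)
{
    const __m128i *p = (const __m128i *)page;
    __m128i acc = _mm_setzero_si128();

    for (int i = 0; i < PAGE_SIZE / 16; i++) {
        acc = _mm_or_si128(acc, _mm_load_si128(p + i));
    }
    /* the page is zero iff the OR of all chunks is zero in every byte */
    return _mm_movemask_epi8(_mm_cmpeq_epi8(acc, _mm_setzero_si128())) == 0xffff;
}
```

Accumulating with OR trades the early exit for a branch-free loop body; an early-exit version would instead test each chunk inside the loop and return as soon as a nonzero byte appears.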

