
Re: [Qemu-devel] [PATCH 0/4] introduce cpu_physical_memory_map_fast


From: Anthony Liguori
Subject: Re: [Qemu-devel] [PATCH 0/4] introduce cpu_physical_memory_map_fast
Date: Mon, 06 Jun 2011 10:44:15 -0500
User-agent: Mozilla/5.0 (X11; U; Linux x86_64; en-US; rv:1.9.2.17) Gecko/20110424 Lightning/1.0b2 Thunderbird/3.1.10

On 06/06/2011 08:09 AM, Paolo Bonzini wrote:
> On 06/06/2011 02:56 PM, Anthony Liguori wrote:
>>
>> Oh, the patch series basically died for me when I saw:
>>
>> Avi> What performance benefit does this bring?
>>
>> Paolo> Zero
>>
>> :)
>>
>> Especially given Avi's efforts to introduce a new RAM API, I don't want
>> yet another special case to handle.
>
> This is not a special case; the existing functions are all mapped onto
> the new cpu_physical_memory_map_internal. I don't think this is in any
> way related to Avi's RAM API, which is (mostly) for MMIO.
>
>> You're just trying to avoid having to handle map failures, right?
>
> Not just that. If you had a memory block at, say, 1 GB - 2 GB, and another
> at 2 GB - 3 GB, a DMA operation that crosses the two could be
> implemented with cpu_physical_memory_map_fast; you would simply build a
> two-element iovec in two steps, something the current API does not allow.
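
A rough sketch of that two-step iovec construction, for illustration only: it
assumes cpu_physical_memory_map_fast() takes the same arguments as the existing
cpu_physical_memory_map() and may return a shorter mapping than requested (the
real prototype is whatever the patch series defines), and map_dma_to_iov() is a
made-up helper name.

    #include <sys/uio.h>
    #include "cpu-common.h"   /* cpu_physical_memory_* declarations */

    /* Map [addr, addr + len) into up to max_iov iovec entries, one entry
     * per contiguous chunk the mapping function can hand back. */
    static int map_dma_to_iov(target_phys_addr_t addr, target_phys_addr_t len,
                              struct iovec *iov, int max_iov, int is_write)
    {
        int n = 0;

        while (len > 0 && n < max_iov) {
            target_phys_addr_t plen = len;
            void *p = cpu_physical_memory_map_fast(addr, &plen, is_write);
            if (!p || plen == 0) {
                return -1;              /* not RAM, nothing could be mapped */
            }
            iov[n].iov_base = p;
            iov[n].iov_len  = plen;     /* may be shorter than requested if the
                                           mapping stops at a RAM block boundary */
            n++;
            addr += plen;
            len  -= plen;
        }
        return len == 0 ? n : -1;       /* -1 if we ran out of iovec slots */
    }

A DMA crossing the 2 GB boundary in the example above would then come back as
two entries: one for the tail of the 1 GB - 2 GB block and one for the head of
the 2 GB - 3 GB block.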

You cannot assume RAM blocks are contiguous. This has nothing to do with PV vs. non-PV; it's a consequence of how the RAM API works today.


> The patch does not change virtio to do the split, but it is possible to
> do that. The reason I'm not doing the virtio change is that I know mst
> has pending changes to virtio and I'd rather avoid the conflicts for
> now. However, for vmw_pvscsi I'm going to handle it using the new
> functions.

Virtio can handle all of this today because it uses cpu_physical_memory_rw for ring access and then calls map for SG elements. SG elements are usually 4k, so it's never really an issue to get a partial mapping. We could be more robust about it, but in practice it's not a problem.
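
A rough sketch of that split, using the cpu_physical_memory_rw() /
cpu_physical_memory_map() / cpu_physical_memory_unmap() interfaces as they
exist today; struct desc and map_sg_element() are simplified stand-ins, not
virtio's real descriptor layout or code.

    #include <stdint.h>
    #include "cpu-common.h"   /* cpu_physical_memory_* declarations */

    struct desc {
        uint64_t addr;        /* guest-physical address of the SG element */
        uint32_t len;         /* element length, usually <= 4k            */
    };

    /* Read one descriptor out of the ring by copying, then map the data
     * buffer it points to directly. */
    static void *map_sg_element(target_phys_addr_t desc_addr, int is_write,
                                target_phys_addr_t *mapped_len)
    {
        struct desc d;
        void *p;

        /* Ring (descriptor) access goes through the copying interface. */
        cpu_physical_memory_rw(desc_addr, (uint8_t *)&d, sizeof(d), 0);

        /* The SG element itself is mapped. */
        *mapped_len = d.len;
        p = cpu_physical_memory_map(d.addr, mapped_len, is_write);
        if (!p || *mapped_len < d.len) {
            /* Partial mapping: rare for page-sized elements, but a more
             * robust caller would retry or fall back to a bounce buffer. */
            if (p) {
                cpu_physical_memory_unmap(p, *mapped_len, is_write, 0);
            }
            return NULL;
        }
        return p;
    }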

Regards,

Anthony Liguori


> Paolo




