
[Qemu-devel] Re: [PATCH v5 4/5] Inter-VM shared memory PCI device


From: Anthony Liguori
Subject: [Qemu-devel] Re: [PATCH v5 4/5] Inter-VM shared memory PCI device
Date: Tue, 11 May 2010 08:10:03 -0500
User-agent: Mozilla/5.0 (X11; U; Linux x86_64; en-US; rv:1.9.1.5) Gecko/20091209 Fedora/3.0-4.fc12 Lightning/1.0pre Thunderbird/3.0

On 05/11/2010 02:59 AM, Avi Kivity wrote:
(Replying again to list)

What data structure would you use? For a lockless ring queue, you can only support a single producer and consumer. To achieve bidirectional communication in virtio, we always use two queues.
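
(For reference, a minimal single-producer/single-consumer ring in C11. This is only an illustration of why one lockless ring gives you one direction of traffic, so bidirectional communication needs a second queue; it is not code from the patch or from virtio, and all names are made up:)

  /* Illustrative SPSC ring: safe only with exactly one producer and one
   * consumer, which is why bidirectional traffic needs two of these. */
  #include <stdatomic.h>
  #include <stdbool.h>
  #include <stdint.h>

  #define RING_SIZE 256                   /* must be a power of two */

  struct spsc_ring {
      _Atomic uint32_t head;              /* written only by the producer */
      _Atomic uint32_t tail;              /* written only by the consumer */
      uint64_t slots[RING_SIZE];
  };

  static bool ring_push(struct spsc_ring *r, uint64_t v)
  {
      uint32_t head = atomic_load_explicit(&r->head, memory_order_relaxed);
      uint32_t tail = atomic_load_explicit(&r->tail, memory_order_acquire);

      if (head - tail == RING_SIZE) {
          return false;                   /* full */
      }
      r->slots[head & (RING_SIZE - 1)] = v;
      atomic_store_explicit(&r->head, head + 1, memory_order_release);
      return true;
  }

  static bool ring_pop(struct spsc_ring *r, uint64_t *v)
  {
      uint32_t tail = atomic_load_explicit(&r->tail, memory_order_relaxed);
      uint32_t head = atomic_load_explicit(&r->head, memory_order_acquire);

      if (head == tail) {
          return false;                   /* empty */
      }
      *v = r->slots[tail & (RING_SIZE - 1)];
      atomic_store_explicit(&r->tail, tail + 1, memory_order_release);
      return true;
  }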


You don't have to use a lockless ring queue. You can use locks (spinlocks without interrupt support, full mutexes with interrupts) and any data structure you like. Say a hash table + LRU for a shared cache.
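
(A rough sketch of what "locks in shared memory" means here, with a spinlock placed at the start of the shared region; the structure and names are invented for illustration and are not part of the ivshmem patch:)

  /* Illustrative spinlock living inside the shared region; any guest that
   * maps the region takes it before touching the shared data structure
   * (e.g. the hash table + LRU mentioned above). */
  #include <stdatomic.h>

  struct shm_region {
      _Atomic int lock;                   /* 0 = free, 1 = held */
      /* ... shared hash table + LRU lists would follow ... */
  };

  static void shm_lock(struct shm_region *s)
  {
      while (atomic_exchange_explicit(&s->lock, 1, memory_order_acquire)) {
          /* spin; a real implementation would pause or back off */
      }
  }

  static void shm_unlock(struct shm_region *s)
  {
      atomic_store_explicit(&s->lock, 0, memory_order_release);
  }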

Yeah, the mailslot enables this.

I think the question boils down to whether we can support transparent peer connections and disconnections. I think that's important in order to support transparent live migration.

If you have two peers that are disconnected and then connect to each other, there's simply no way to choose whose content gets preserved. It's necessary to designate one peer as a master in order to break the tie.

So this could simply involve an additional option to the shared memory driver: role=master|peer. If role=master, when a new shared memory segment is mapped, the contents of the BAR RAM are memcpy()'d to the shared memory segment. In either case, the contents of the shared memory segment should be memcpy()'d back to the BAR RAM whenever the shared memory segment is disconnected.
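
(To make the proposed direction of the copies concrete, a sketch; the option name, enum and helper functions are hypothetical, not the actual patch:)

  /* Sketch of the proposed role=master|peer semantics: the master's BAR
   * contents win when a segment is attached, and everyone copies the
   * shared contents back into BAR RAM on disconnect. */
  #include <stddef.h>
  #include <string.h>

  typedef enum { ROLE_MASTER, ROLE_PEER } IVShmemRole;

  static void shm_segment_mapped(void *bar_ram, void *shm, size_t size,
                                 IVShmemRole role)
  {
      if (role == ROLE_MASTER) {
          memcpy(shm, bar_ram, size);     /* master seeds the segment */
      }
  }

  static void shm_segment_disconnected(void *bar_ram, void *shm, size_t size)
  {
      memcpy(bar_ram, shm, size);         /* either role preserves contents */
  }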

I believe role=master should be the default because I think a master/slave relationship is going to be much more common than peering.


If you're adding additional queues to support other levels of communication, you can always use different areas of shared memory.

You'll need O(n^2) shared memory areas (n = peer count), and it is a lot less flexible than real shared memory. Consider using threading where the only communication among threads is a pipe (Erlang?).
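(As a back-of-the-envelope count, assuming one dedicated area per peer pair: 8 peers already need 8 * 7 / 2 = 28 pairwise regions, versus a single region that every peer simply maps.)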

I can't think of a use of multiple peers via shared memory today with virtualization. I know lots of master/slave uses of shared memory though. I agree that it's useful to support from an academic perspective but I don't believe it's going to be the common use.

Regards,

Anthony Liguori
