[Qemu-devel] Re: [PATCH] Inter-VM shared memory PCI device


From: Cam Macdonell
Subject: [Qemu-devel] Re: [PATCH] Inter-VM shared memory PCI device
Date: Wed, 10 Mar 2010 09:36:55 -0700

On Wed, Mar 10, 2010 at 2:21 AM, Avi Kivity <address@hidden> wrote:
> On 03/09/2010 08:34 PM, Cam Macdonell wrote:
>>
>> On Tue, Mar 9, 2010 at 10:28 AM, Avi Kivity <address@hidden> wrote:
>>
>>>
>>> On 03/09/2010 05:27 PM, Cam Macdonell wrote:
>>>
>>>>>>
>>>>>>  Registers are used
>>>>>> for synchronization between guests sharing the same memory object when
>>>>>> interrupts are supported (this requires using the shared memory
>>>>>> server).
>>>>>
>>>>> How does the driver detect whether interrupts are supported or not?
>>>>
>>>> At the moment, the VM ID is set to -1 if interrupts aren't supported,
>>>> but that may not be the clearest way to do things.  With UIO is there
>>>> a way to detect if the interrupt pin is on?
>>>
>>> I suggest not designing the device to uio.  Make it a good
>>> guest-independent device, and if uio doesn't fit it, change it.
>>>
>>> Why not support interrupts unconditionally?  Is the device useful without
>>> interrupts?
>>>
>>
>> Currently my patch works with or without the shared memory server.  If
>> you give the parameter
>>
>> -ivshmem 256,foo
>>
>> then this will create (if necessary) and map /dev/shm/foo as the
>> shared region without interrupt support.  Some users of shared memory
>> are using it this way.
>>
>> Going forward we can require the shared memory server and always have
>> interrupts enabled.
>>
>
> Can you explain how they synchronize?  Polling?  Using the network?  Using
> it as a shared cache?
>
> If it's a reasonable use case it makes sense to keep it.
>

Do you mean how they synchronize without interrupts?  One project I've
been contacted about uses the shared region directly to synchronize
simulations that run in different VMs and share data in the memory
region.  In my tests, spinlocks placed in the shared region work
between guests.

If we want to keep the serverless implementation, do we need to
support shm_open with -chardev somehow? Something like -chardev
shm,name=foo.  Right now my qdev implementation just passes the name
to the -device option and opens it.

> Another thing comes to mind - a shared memory ID, in case a guest has
> multiple cards.

Sure, a number that can be passed on the command line and stored in a register?

Cam



