From: Nikolay Nikolaev
Subject: Re: [Qemu-devel] the userspace process vapp mmap filed // [PULL 13/37] vhost-user: fix regions provied with VHOST_USER_SET_MEM_TABLE message
Date: Wed, 10 Sep 2014 14:01:46 +0300

Hello,

see my answer inline:

On Wed, Sep 10, 2014 at 6:00 AM, Linhaifeng <address@hidden> wrote:
> Hi,
>
> Thank you for your answer. I think the problem is not how to publish the
> patch; the problem is that there is no standard vhost-user module.
>
> I just used vapp to test the new vhost-user backend. I found that the kernel
> has a module named vhost-net for the vhost backend, but there is no
> vhost-user module for the vhost-user backend. Who will supply a standard
> vhost-user library for user processes? If everybody implements it themselves,
> I think QEMU will be hard to maintain. So I think some questions must be
> answered:
>
> 1. Who will supply the standard vhost-user module that talks to the QEMU
> backend? The kernel maintains vhost-net, so there must be an organization to
> maintain a vhost-user module as well.

AFAIK there is no such module. vhost-user is a protocol. It is
described in the relevant document in the QEMU tree. If a project
wants to use it, it will implement the protocol.
The way we use vhost-user in snabbswitch (http://snabb.co) is that we
implemented the slave side of the protocol in Lua.

> 2. The vhost-user module should be something of common use. I think it could
> be a shared library with an interface like open, close, send, and recv, so
> that user processes can use it easily, rather than just supplying a test
> program.
We have discussed this possibility internally; however, our current resource
prioritization does not allow us to commit to a fixed implementation plan.

> 3. QEMU supports multiple net devices, so the vhost-user module should
> support multiple net devices too.
I am not sure what you mean by multi net; if you mean multiple vhost-user
backed NICs, I think this should already be working.

Is there a bug in the vhost-user implementation in QEMU related to shared
memory? We have not seen one. We are working on a commercial deployment of
qemu/vhost-user/snabbswitch, and it works well in our tests.


regards,
Nikolay Nikolaev
Virtual Open Systems

> -----Original Message-----
> From: Nikolay Nikolaev [mailto:address@hidden
> Sent: Wednesday, September 10, 2014 1:54 AM
> To: Linhaifeng; Daniel Raho
> Cc: qemu-devel; address@hidden >> Michael S. Tsirkin; Lilijun (Jerry); Paolo 
> Bonzini; Damjan Marion; VirtualOpenSystems Technical Team
> Subject: Re: Re: the userspace process vapp mmap filed //[Qemu-devel] [PULL 
> 13/37] vhost-user: fix regions provied with VHOST_USER_SET_MEM_TABLE message
>
> Hello,
>
> Vapp is a VOSYS application and is currently not meant to be part of QEMU;
> as such, your proposed patch might not be meaningful if pushed to the QEMU
> devel list. As the current Vapp implementation has not been updated since
> last March, custom support and any related design work for a software switch
> implementation can be discussed at a commercial level.
>
> regards,
> Nikolay Nikolaev
> Virtual Open Systems
>
>
> On Tue, Sep 9, 2014 at 3:28 PM, linhafieng <address@hidden> wrote:
>>
>>
>>
>> -------- Forwarded Message --------
>> Subject: Re: the userspace process vapp mmap filed //[Qemu-devel] [PULL 
>> 13/37] vhost-user: fix regions provied with VHOST_USER_SET_MEM_TABLE message
>> Date: Tue, 09 Sep 2014 19:45:08 +0800
>> From: linhafieng <address@hidden>
>> To: Michael S. Tsirkin <address@hidden>
>> CC: address@hidden, address@hidden, address@hidden, address@hidden, 
>> address@hidden, address@hidden
>>
>> On 2014/9/3 15:08, Michael S. Tsirkin wrote:
>>> On Wed, Sep 03, 2014 at 02:26:03PM +0800, linhafieng wrote:
>>>> I ran the userspace process vapp to test the VHOST_USER_SET_MEM_TABLE
>>>> message and found that the userspace mmap failed.
>>>
>>> Why off-list?
>>> pls copy qemu mailing list and address@hidden
>>>
>>>
>>
>>
>> I wrote a patch for vapp to test the fix for broken mem regions. With it,
>> vapp can receive data from the VM, but there is an mmap failure.
>>
>> I have some questions about the patch and vhost-user:
>> 1. Can I mmap all the fds of the mem regions? Why do some regions fail, and
>> does that have any impact?
>> 2. Why has vapp not been updated with the patch for broken mem regions?
>> 3. Would a vhost-user test program that exercises the vring memory be more
>> meaningful?
>> 4. How does a switch port find the vhost-user device? By the socket path?
>> 5. Should one vhost-user process manage all backend socket fds, or is there
>> a better approach?
>>
>>
>> My patch for vapp is:
>>
>> diff -uNr vapp/vhost_server.c vapp-for-broken-mem-region//vhost_server.c
>> --- vapp/vhost_server.c 2014-08-30 09:39:20.000000000 +0000
>> +++ vapp-for-broken-mem-region//vhost_server.c  2014-09-09 11:36:50.000000000 +0000
>> @@ -147,18 +147,22 @@
>>
>>      for (idx = 0; idx < msg->msg.memory.nregions; idx++) {
>>          if (msg->fds[idx] > 0) {
>> +            size_t size;
>> +            uint64_t *guest_mem;
>>              VhostServerMemoryRegion *region = &vhost_server->memory.regions[idx];
>>
>>              region->guest_phys_addr = msg->msg.memory.regions[idx].guest_phys_addr;
>>              region->memory_size = msg->msg.memory.regions[idx].memory_size;
>>              region->userspace_addr = msg->msg.memory.regions[idx].userspace_addr;
>> -
>> +            region->mmap_offset = msg->msg.memory.regions[idx].mmap_offset;
>> +
>>              assert(idx < msg->fd_num);
>>              assert(msg->fds[idx] > 0);
>>
>> -            region->mmap_addr =
>> -                    (uintptr_t) init_shm_from_fd(msg->fds[idx], region->memory_size);
>> -
>> +            size = region->memory_size + region->mmap_offset;
>> +            guest_mem = init_shm_from_fd(msg->fds[idx], size);
>> +            guest_mem += (region->mmap_offset / sizeof(*guest_mem));
>> +            region->mmap_addr = (uint64_t)guest_mem;
>>              vhost_server->memory.nregions++;
>>          }
>>      }
>> diff -uNr vapp/vhost_server.h vapp-for-broken-mem-region//vhost_server.h
>> --- vapp/vhost_server.h 2014-08-30 09:39:20.000000000 +0000
>> +++ vapp-for-broken-mem-region//vhost_server.h  2014-09-05 01:41:27.000000000 +0000
>> @@ -13,7 +13,9 @@
>>      uint64_t guest_phys_addr;
>>      uint64_t memory_size;
>>      uint64_t userspace_addr;
>> +    uint64_t mmap_offset;
>>      uint64_t mmap_addr;
>> +
>>  } VhostServerMemoryRegion;
>>
>>  typedef struct VhostServerMemory {
>> diff -uNr vapp/vhost_user.h vapp-for-broken-mem-region//vhost_user.h
>> --- vapp/vhost_user.h   2014-08-30 09:39:20.000000000 +0000
>> +++ vapp-for-broken-mem-region//vhost_user.h    2014-09-05 01:40:20.000000000 +0000
>> @@ -13,6 +13,7 @@
>>      uint64_t guest_phys_addr;
>>      uint64_t memory_size;
>>      uint64_t userspace_addr;
>> +    uint64_t mmap_offset;
>>  } VhostUserMemoryRegion;
>>
>>  typedef struct VhostUserMemory {
>>
>>
>> The result of running vapp with my patch:
>> ................................................................................
>> Processing message: VHOST_USER_SET_OWNER
>> _set_owner
>> Cmd: VHOST_USER_GET_FEATURES (0x1)
>> Flags: 0x1
>> u64: 0x0
>> ................................................................................
>> Processing message: VHOST_USER_GET_FEATURES
>> _get_features
>> Cmd: VHOST_USER_SET_VRING_CALL (0xd)
>> Flags: 0x1
>> u64: 0x0
>> ................................................................................
>> Processing message: VHOST_USER_SET_VRING_CALL
>> _set_vring_call
>> Got callfd 0x5
>> Cmd: VHOST_USER_SET_VRING_CALL (0xd)
>> Flags: 0x1
>> u64: 0x1
>> ................................................................................
>> Processing message: VHOST_USER_SET_VRING_CALL
>> _set_vring_call
>> Got callfd 0x6
>> Cmd: VHOST_USER_SET_FEATURES (0x2)
>> Flags: 0x1
>> u64: 0x0
>> ................................................................................
>> Processing message: VHOST_USER_SET_FEATURES
>> _set_features
>> Cmd: VHOST_USER_SET_MEM_TABLE (0x5)
>> Flags: 0x1
>> nregions: 2
>> region:
>>         gpa = 0x0
>>         size = 655360
>>         ua = 0x7f76c0000000 [0]
>> region:
>>         gpa = 0xC0000
>>         size = 2146697216
>>         ua = 0x7f76c00c0000 [1]
>> ................................................................................
>> Processing message: VHOST_USER_SET_MEM_TABLE
>> _set_mem_table
>> mmap: Invalid argument    // region 0 mmap failed!
>> Got memory.nregions 2
>> Cmd: VHOST_USER_SET_VRING_NUM (0x8)
>> Flags: 0x1
>> state: 0 256
>> ................................................................................
>> Processing message: VHOST_USER_SET_VRING_NUM
>> _set_vring_num
>> Cmd: VHOST_USER_SET_VRING_BASE (0xa)
>> Flags: 0x1
>> state: 0 0
>> ................................................................................
>> Processing message: VHOST_USER_SET_VRING_BASE
>> _set_vring_base
>> Cmd: VHOST_USER_SET_VRING_ADDR (0x9)
>> Flags: 0x1
>> addr:
>>         idx = 0
>>         flags = 0x0
>>         dua = 0x7f76f7f54000
>>         uua = 0x7f76f7f56000
>>         aua = 0x7f76f7f55000
>>         lga = 0x37f56000
>> ................................................................................
>> Processing message: VHOST_USER_SET_VRING_ADDR
>> _set_vring_addr
>> Cmd: VHOST_USER_SET_VRING_KICK (0xc)
>> Flags: 0x1
>> u64: 0x0
>> ................................................................................
>> Processing message: VHOST_USER_SET_VRING_KICK
>> _set_vring_kick
>> Got kickfd 0x9
>> Cmd: VHOST_USER_SET_VRING_NUM (0x8)
>> Flags: 0x1
>> state: 1 256
>> ................................................................................
>> Processing message: VHOST_USER_SET_VRING_NUM
>> _set_vring_num
>> Cmd: VHOST_USER_SET_VRING_BASE (0xa)
>> Flags: 0x1
>> state: 1 0
>> ................................................................................
>> Processing message: VHOST_USER_SET_VRING_BASE
>> _set_vring_base
>> Cmd: VHOST_USER_SET_VRING_ADDR (0x9)
>> Flags: 0x1
>> addr:
>>         idx = 1
>>         flags = 0x0
>>         dua = 0x7f7739834000
>>         uua = 0x7f7739836000
>>         aua = 0x7f7739835000
>>         lga = 0x79836000
>> ................................................................................
>> Processing message: VHOST_USER_SET_VRING_ADDR
>> _set_vring_addr
>> Cmd: VHOST_USER_SET_VRING_KICK (0xc)
>> Flags: 0x1
>> u64: 0x1
>> ................................................................................
>> Processing message: VHOST_USER_SET_VRING_KICK
>> _set_vring_kick
>> Got kickfd 0xa
>> Listening for kicks on 0xa
>> Cmd: VHOST_USER_SET_VRING_CALL (0xd)
>> Flags: 0x1
>> u64: 0x0
>> ................................................................................
>> Processing message: VHOST_USER_SET_VRING_CALL
>> _set_vring_call
>> Got callfd 0xb
>> Cmd: VHOST_USER_SET_VRING_CALL (0xd)
>> Flags: 0x1
>> u64: 0x1
>> ................................................................................
>> Processing message: VHOST_USER_SET_VRING_CALL
>> _set_vring_call
>> Got callfd 0xc
>> chunks: 10 90
>> ................................................................................
>> 33 33 00 00 00 16 52 54 00 12 34 56 86 dd 60 00
>> 00 00 00 24 00 01 00 00 00 00 00 00 00 00 00 00
>> 00 00 00 00 00 00 ff 02 00 00 00 00 00 00 00 00
>> 00 00 00 00 00 16 3a 00 05 02 00 00 01 00 8f 00
>> 3b 22 00 00 00 01 04 00 00 00 ff 02 00 00 00 00
>> 00 00 00 00 00 01 ff 12 34 56
>> chunks: 10 78
>> ................................................................................
>> 33 33 ff 12 34 56 52 54 00 12 34 56 86 dd 60 00
>> 00 00 00 18 3a ff 00 00 00 00 00 00 00 00 00 00
>> 00 00 00 00 00 00 ff 02 00 00 00 00 00 00 00 00
>> 00 01 ff 12 34 56 87 00 c4 02 00 00 00 00 fe 80
>> 00 00 00 00 00 00 50 54 00 ff fe 12 34 56
>> chunks: 10 70
>> ................................................................................
>> 33 33 00 00 00 02 52 54 00 12 34 56 86 dd 60 00
>> 00 00 00 10 3a ff fe 80 00 00 00 00 00 00 50 54
>> 00 ff fe 12 34 56 ff 02 00 00 00 00 00 00 00 00
>> 00 00 00 00 00 02 85 00 71 b5 00 00 00 00 01 01
>> 52 54 00 12 34 56
>> chunks: 10 90
>> ................................................................................


