
Re: [Qemu-devel] [PATCH v3 6/9] rocker: add new rocker switch device


From: Jason Wang
Subject: Re: [Qemu-devel] [PATCH v3 6/9] rocker: add new rocker switch device
Date: Wed, 21 Jan 2015 03:44:46 +0800



On Fri, Jan 16, 2015 at 5:48 PM, Scott Feldman <address@hidden> wrote:
On Fri, Jan 16, 2015 at 1:15 AM, Jason Wang <address@hidden> wrote:

 On 01/11/2015 11:57 AM, address@hidden wrote:
Each port is a netdev and can be paired with a backend using -netdev id=<port name>.

 Signed-off-by: Scott Feldman <address@hidden>
 Signed-off-by: Jiri Pirko <address@hidden>
 ---

Looks like the device does not support state saving. How about tagging
 it as unmigratable first?

State saving would be nice to add in the future.  How do we tag it as
unmigratable for now?

E.g. for VFIO it does:

static const VMStateDescription vfio_pci_vmstate = {
   .name = "vfio-pci",
   .unmigratable = 1,
};
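So for rocker, a minimal sketch would be a similarly tagged
VMStateDescription hooked up in class_init (type and function names
assumed, not taken from the patch):

static const VMStateDescription rocker_vmstate = {
    .name = "rocker",
    .unmigratable = 1,
};

static void rocker_class_init(ObjectClass *klass, void *data)
{
    DeviceClass *dc = DEVICE_CLASS(klass);

    /* refuse migration until real state saving is implemented */
    dc->vmsd = &rocker_vmstate;
}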


 +    uint64_t switch_id;          /* switch id */

 Looks like the functions of name and switch_id are duplicated?

Don't follow...

I mean, it looks to me that name is used to find a rocker through QMP. Can't we just do this through switch_id (many devices or netdevs just have an 'id' property for this)?
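For context on the overlap: assuming both are exposed as plain qdev
properties (as the QMP-lookup usage suggests), the declaration would
look roughly like this, i.e. two user-visible handles for one device:

static Property rocker_properties[] = {
    DEFINE_PROP_STRING("name", Rocker, name),
    DEFINE_PROP_UINT64("switch_id", Rocker, switch_id, 0),
    DEFINE_PROP_END_OF_LIST(),
};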


 +    if (iovcnt) {
 +        /* XXX perform Tx offloads */
 +        /* XXX   silence compiler for now */

Need to add TSO here (e1000 is a good reference) or disable TSO in the driver.

TSO is not enabled in the driver, currently.  This is lower priority now
since the I/O path for traffic to/from the guest CPU is not
performance critical.  The forwarding path offloaded by the device is
the hot path, but that doesn't involve I/O to the guest CPU, so Tx/Rx
offloads aren't in play.

But, someday, it would be nice to enable Rx/Tx offloads.

I see, thanks.
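For background on what "perform Tx offloads" would involve: TSO
emulation means taking one oversized TCP packet from the guest and
re-emitting it as MSS-sized segments, patching headers per segment
(this is the part e1000 implements).  A standalone sketch of the
segmentation loop, not rocker code:

#include <stddef.h>
#include <stdint.h>

/* Split a large TCP payload into mss-sized segments.  A real device
 * model would also prepend a copy of the Ethernet/IP/TCP headers to
 * each segment, fix up the IP total length, clear FIN/PSH on all but
 * the last segment, and recompute the IP/TCP checksums. */
static void tso_segment(const uint8_t *payload, size_t payload_len,
                        size_t mss, uint32_t seq,
                        void (*emit)(const uint8_t *seg, size_t len,
                                     uint32_t seq))
{
    size_t off = 0;

    while (off < payload_len) {
        size_t chunk = payload_len - off < mss ? payload_len - off : mss;

        emit(payload + off, chunk, seq);
        seq += chunk;   /* next segment continues the TCP sequence */
        off += chunk;
    }
}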


 +    rocker_tlv_put_le32(buf, &pos, ROCKER_TLV_EVENT_TYPE,
 +                        ROCKER_TLV_EVENT_TYPE_LINK_CHANGED);
 +    nest = rocker_tlv_nest_start(buf, &pos, ROCKER_TLV_EVENT_INFO);
 +    rocker_tlv_put_le32(buf, &pos, ROCKER_TLV_EVENT_LINK_CHANGED_PPORT,
 +                        pport);
 +    rocker_tlv_put_u8(buf, &pos, ROCKER_TLV_EVENT_LINK_CHANGED_LINKUP,
 +                      link_up ? 1 : 0);

 Looks like those types are not documented.

Ok, I'll double-check the spec to make sure we're covered for the items you found.
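For readers following along: these TLVs are netlink-attribute style, a
small length/type header followed by a payload, where a "nest" is just
an attribute whose payload is more attributes and whose length is
patched in when the nest is closed.  A generic sketch (header layout
assumed for illustration, not the spec's exact encoding):

#include <stddef.h>
#include <stdint.h>
#include <string.h>

/* Assumed TLV header layout, for illustration only. */
struct tlv_hdr {
    uint32_t len;   /* header + payload length */
    uint32_t type;
};

/* Append one attribute (host-endian stores, for brevity). */
static void tlv_put(uint8_t *buf, size_t *pos, uint32_t type,
                    const void *data, uint32_t data_len)
{
    struct tlv_hdr hdr = { sizeof(hdr) + data_len, type };

    memcpy(buf + *pos, &hdr, sizeof(hdr));
    memcpy(buf + *pos + sizeof(hdr), data, data_len);
    *pos += sizeof(hdr) + data_len;
}

/* Open a nest: write a header now, remember where it is. */
static size_t tlv_nest_start(uint8_t *buf, size_t *pos, uint32_t type)
{
    size_t start = *pos;
    struct tlv_hdr hdr = { 0, type };   /* len patched on close */

    memcpy(buf + *pos, &hdr, sizeof(hdr));
    *pos += sizeof(hdr);
    return start;
}

/* Close a nest: patch its len to cover everything written inside. */
static void tlv_nest_end(uint8_t *buf, size_t *pos, size_t start)
{
    uint32_t len = *pos - start;

    memcpy(buf + start, &len, sizeof(len));
}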

 +    rocker_tlv_put_le16(buf, &pos, ROCKER_TLV_RX_FLAGS, rx_flags);
 +    rocker_tlv_put_le16(buf, &pos, ROCKER_TLV_RX_CSUM, rx_csum);

Note: some backends (e.g. tap) can do offloading; may consider adding
 that support in the future.

Yup, on TODO list for Rx/Tx offloads.
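When that TODO item happens, the usual QEMU pattern is to let a capable
backend do the work instead of computing in the device model: check for
vnet header support on the peer and advertise the offloads.  Roughly
(standard net-layer calls; whether rocker can use them depends on its
descriptor format):

/* Sketch: advertise offloads to a capable backend (e.g. tap). */
static void enable_backend_offloads(NetClientState *nc)
{
    if (qemu_has_vnet_hdr(nc)) {
        qemu_using_vnet_hdr(nc, true);
        qemu_set_offload(nc, 1 /* csum */, 1 /* tso4 */, 1 /* tso6 */,
                         0 /* ecn */, 0 /* ufo */);
    }
}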

 Using only 1 queue is ok for a multiqueue backend. In the future, may
 consider using all of them.

When we get QoS working, I think we'll want to enable these
multiqueues.  QoS support is not implemented in rocker currently, but
there is support defined in the OF-DPA spec as part of ACL processing.
So this is future work.

Yes.

Note: some netdev backends (e.g. tap) can be multiqueue devices. But since performance is not a major consideration now, it's ok to just use one queue.
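Concretely, "just use one queue" means the device always transmits via
queue 0 of its NIC; QEMU's net layer exposes this directly.  A sketch,
not the patch's actual code:

#include "net/net.h"

/* Transmit on queue 0 only; 'nic' comes from qemu_new_nic().
 * A multiqueue-aware device would pick per-queue state with
 * qemu_get_subqueue(nic, queue_index) instead. */
static ssize_t port_tx(NICState *nic, const struct iovec *iov, int iovcnt)
{
    NetClientState *nc = qemu_get_queue(nic);

    return qemu_sendv_packet(nc, iov, iovcnt);
}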


 Thanks

Thanks for reviewing.



