From: zhanghailiang
Subject: Re: [Qemu-devel] [PATCH 2/3] net: Flush queues when runstate changes back to running
Date: Mon, 18 Aug 2014 08:45:53 +0800
User-agent: Mozilla/5.0 (Windows NT 6.1; rv:11.0) Gecko/20120327 Thunderbird/11.0.1

On 2014/8/14 18:05, Michael S. Tsirkin wrote:
On Thu, Aug 14, 2014 at 02:13:57PM +0800, zhanghailiang wrote:
When the runstate changes back to running, we definitely need to flush
queues to get packets flowing again.

Here we implement this in the net layer:
(1) add a member 'VMChangeStateEntry *vmstate' to struct NICState,
which will listen for VM runstate changes.
(2) register a handler function for VM state changes.
When the VM changes back to running, we flush all queues in the callback function.

OK, but squash this together with patch 1; otherwise,
after patch 1 things are broken, which breaks
git bisect.


Hmm, I will put them together into one patch. Thanks for your review.

Signed-off-by: zhanghailiang <address@hidden>
---
  include/net/net.h |  1 +
  net/net.c         | 26 ++++++++++++++++++++++++++
  2 files changed, 27 insertions(+)

diff --git a/include/net/net.h b/include/net/net.h
index 312f728..a294277 100644
--- a/include/net/net.h
+++ b/include/net/net.h
@@ -97,6 +97,7 @@ typedef struct NICState {
      NICConf *conf;
      void *opaque;
      bool peer_deleted;
+    VMChangeStateEntry *vmstate;
  } NICState;

  NetClientState *qemu_find_netdev(const char *id);
diff --git a/net/net.c b/net/net.c
index 5bb2821..506e58f 100644
--- a/net/net.c
+++ b/net/net.c
@@ -242,6 +242,29 @@ NetClientState *qemu_new_net_client(NetClientInfo *info,
      return nc;
  }

+static void nic_vmstate_change_handler(void *opaque,
+                                       int running,
+                                       RunState state)
+{
+    NICState *nic = opaque;
+    NetClientState *nc;
+    int i, queues;
+
+    if (!running) {
+        return;
+    }
+
+    queues = MAX(1, nic->conf->peers.queues);
+    for (i = 0; i < queues; i++) {
+        nc = &nic->ncs[i];
+        if (nc->receive_disabled
+            || (nc->info->can_receive && !nc->info->can_receive(nc))) {
+            continue;
+        }
+        qemu_flush_queued_packets(nc);
+    }
+}
+
  NICState *qemu_new_nic(NetClientInfo *info,
                         NICConf *conf,
                         const char *model,
@@ -259,6 +282,8 @@ NICState *qemu_new_nic(NetClientInfo *info,
      nic->ncs = (void *)nic + info->size;
      nic->conf = conf;
      nic->opaque = opaque;
+    nic->vmstate = qemu_add_vm_change_state_handler(nic_vmstate_change_handler,
+                                                    nic);

      for (i = 0; i < queues; i++) {
          qemu_net_client_setup(&nic->ncs[i], info, peers[i], model, name,
@@ -379,6 +404,7 @@ void qemu_del_nic(NICState *nic)
          qemu_free_net_client(nc);
      }

+    qemu_del_vm_change_state_handler(nic->vmstate);
      g_free(nic);
  }

--
1.7.12.4
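
For readers less familiar with the VM change-state callbacks, below is a minimal standalone sketch of the pattern the patch applies: register a per-NIC handler, and on the transition back to running, flush every queue that can still receive. This is not QEMU code; all names here (add_vm_change_state_handler, vm_state_notify, nic_change_state, flush_queue, QUEUES) are illustrative assumptions, not the actual QEMU API.

/*
 * Standalone sketch of the "flush queues when the VM resumes" pattern.
 * Illustrative only; not QEMU code.
 */
#include <stdbool.h>
#include <stdio.h>

#define QUEUES 2

typedef void (*vm_change_state_handler)(void *opaque, int running);

struct nic_state {
    bool receive_disabled[QUEUES];
    int  queued_packets[QUEUES];
};

/* One registered handler is enough for this sketch. */
static vm_change_state_handler registered_handler;
static void *registered_opaque;

static void add_vm_change_state_handler(vm_change_state_handler cb, void *opaque)
{
    registered_handler = cb;
    registered_opaque = opaque;
}

static void vm_state_notify(int running)
{
    if (registered_handler) {
        registered_handler(registered_opaque, running);
    }
}

static void flush_queue(struct nic_state *nic, int i)
{
    printf("queue %d: flushed %d packet(s)\n", i, nic->queued_packets[i]);
    nic->queued_packets[i] = 0;
}

/* Mirrors nic_vmstate_change_handler(): act only on the transition to
 * running, and skip queues that currently cannot receive. */
static void nic_change_state(void *opaque, int running)
{
    struct nic_state *nic = opaque;
    int i;

    if (!running) {
        return;
    }
    for (i = 0; i < QUEUES; i++) {
        if (nic->receive_disabled[i]) {
            continue;
        }
        flush_queue(nic, i);
    }
}

int main(void)
{
    struct nic_state nic = {
        .receive_disabled = { false, true },
        .queued_packets   = { 3, 5 },
    };

    add_vm_change_state_handler(nic_change_state, &nic);

    vm_state_notify(0);   /* VM stopped: nothing is flushed */
    vm_state_notify(1);   /* VM running again: queue 0 flushed, queue 1 skipped */
    return 0;
}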