Re: [PATCH] add keepalive for qemu-nbd


From: Vladimir Sementsov-Ogievskiy
Subject: Re: [PATCH] add keepalive for qemu-nbd
Date: Fri, 23 Sep 2022 12:55:52 +0300
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:91.0) Gecko/20100101 Thunderbird/91.8.1

On 9/21/22 18:34, Denis V. Lunev wrote:
On 9/21/22 10:36, songlinfeng wrote:
From: songlinfeng <songlinfeng@inspur.com>

We want to export an image with qemu-nbd as the server; on the client
side we use libnbd to connect to qemu-nbd. But when the client powers
down, the server keeps running. qemu-nbd exits when the last client
disconnects, so we also want the server to exit when a client powers
down. Maybe QMP can handle this, but I don't know how.

Please wrap the commit message to 72 characters per line.


Signed-off-by: songlinfeng <songlinfeng@inspur.com>
---
  qemu-nbd.c | 23 ++++++++++++++++++++++-
  1 file changed, 22 insertions(+), 1 deletion(-)

diff --git a/qemu-nbd.c b/qemu-nbd.c
index 0cd5aa6..115ef2b 100644
--- a/qemu-nbd.c
+++ b/qemu-nbd.c
@@ -20,7 +20,8 @@
  #include <getopt.h>
  #include <libgen.h>
  #include <pthread.h>
-
+#include <sys/types.h>
+#include <sys/socket.h>
  #include "qemu/help-texts.h"
  #include "qapi/error.h"
  #include "qemu/cutils.h"
@@ -365,6 +366,26 @@ static void nbd_accept(QIONetListener *listener, QIOChannelSocket *cioc,
      nb_fds++;
      nbd_update_server_watch();
      nbd_client_new(cioc, tlscreds, tlsauthz, nbd_client_closed);
+    int tcp_keepalive_intvl = 5;
+    int tcp_keepalive_probes = 5;
+    int tcp_keepalive_time = 60;
+    int tcp_keepalive_on = 1;
+    if (setsockopt(cioc->fd, SOL_TCP, TCP_KEEPINTVL,
+                   &tcp_keepalive_intvl, sizeof(tcp_keepalive_intvl)) < 0) {
+        perror("setsockopt failed");
+    }
+    if (setsockopt(cioc->fd, SOL_TCP, TCP_KEEPCNT,
+                   &tcp_keepalive_probes, sizeof(tcp_keepalive_probes)) < 0) {
+        perror("setsockopt failed");
+    }
+    if (setsockopt(cioc->fd, SOL_TCP, TCP_KEEPIDLE,
+                   &tcp_keepalive_time, sizeof(tcp_keepalive_time)) < 0) {
+        perror("setsockopt failed");
+    }
+    if (setsockopt(cioc->fd, SOL_SOCKET, SO_KEEPALIVE,
+                   &tcp_keepalive_on, sizeof(tcp_keepalive_on)) < 0) {
+        perror("setsockopt failed");
+    }
  }
  static void nbd_update_server_watch(void)
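
For reference, here is what the added logic amounts to as a standalone
sketch, assuming a Linux TCP socket fd (SOL_TCP and the TCP_KEEP*
options are Linux-specific and come from <netinet/tcp.h>, which the
patch itself does not add):

#include <stdio.h>
#include <sys/socket.h>
#include <netinet/tcp.h>  /* SOL_TCP, TCP_KEEPIDLE, TCP_KEEPINTVL, TCP_KEEPCNT */

/* Arm TCP keepalive on fd so that a silently dead peer is detected
 * after roughly idle + intvl * probes seconds. */
static void arm_keepalive(int fd)
{
    int idle = 60;   /* idle seconds before the first probe */
    int intvl = 5;   /* seconds between unanswered probes */
    int probes = 5;  /* unanswered probes before the connection dies */
    int on = 1;

    /* SO_KEEPALIVE must be enabled for the TCP_KEEP* tunables to matter. */
    if (setsockopt(fd, SOL_SOCKET, SO_KEEPALIVE, &on, sizeof(on)) < 0) {
        perror("setsockopt SO_KEEPALIVE");  /* perror appends the errno text */
    }
    if (setsockopt(fd, SOL_TCP, TCP_KEEPIDLE, &idle, sizeof(idle)) < 0) {
        perror("setsockopt TCP_KEEPIDLE");
    }
    if (setsockopt(fd, SOL_TCP, TCP_KEEPINTVL, &intvl, sizeof(intvl)) < 0) {
        perror("setsockopt TCP_KEEPINTVL");
    }
    if (setsockopt(fd, SOL_TCP, TCP_KEEPCNT, &probes, sizeof(probes)) < 0) {
        perror("setsockopt TCP_KEEPCNT");
    }
}

With the patch's values, a powered-off client would be detected after
roughly 60 + 5 * 5 = 85 seconds, at which point pending I/O on the
socket fails and qemu-nbd's last-client-exit logic can take effect.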
I tend to say no. Not in this exact form.

In general we have an NBD server which can be created and configured
in several different ways: through qemu-nbd and through QMP.

At the moment we set KEEP_ALIVE for outgoing connections only,
and that is configurable; please refer to

int inet_parse(InetSocketAddress *addr, const char *str, Error **errp)

which I believe should be called for any real configuration
setting with an IP address specified. This option gets
InetSocketAddress->keep_alive/has_keep_alive set.
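
As a sketch of that existing syntax (assuming the address string is
routed through inet_parse(), as it is e.g. for the client-side
"nbd:host:port" form; the port is just the NBD default):

    127.0.0.1:10809,keep-alive=on    accepted for outgoing connections
    0.0.0.0:10809,keep-alive=on      rejected today for listen sockets,
                                     see inet_listen_saddr() below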

Though this option has no effect for the listen socket, and that is
exactly the problem in your case. You are absolutely right, as can
be seen in
static int inet_listen_saddr(InetSocketAddress *saddr,
                              int port_offset,
                              int num,
                              Error **errp)
{
     struct addrinfo ai,*res,*e;
     char port[33];
     char uaddr[INET6_ADDRSTRLEN+1];
     char uport[33];
     int rc, port_min, port_max, p;
     int slisten = -1;
     int saved_errno = 0;
     bool socket_created = false;
     Error *err = NULL;

     if (saddr->keep_alive) {
         error_setg(errp, "keep-alive option is not supported for passive "
                    "sockets");
         return -1;
     }

thus you should technically remove this restriction and set keep_alive
in the generic accept code when it is specified for the listen
socket.

This seems consistent to me.
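
In other words, something along these lines (a rough sketch only; the
helper name and hook point are illustrative, not existing QEMU code):

#include <stdbool.h>
#include <stdio.h>
#include <sys/socket.h>

/* Hypothetical accept-path hook: if the listen address was configured
 * with keep-alive=on, apply SO_KEEPALIVE to every accepted fd. */
static void apply_listener_keepalive(bool keep_alive, int client_fd)
{
    int on = 1;

    if (keep_alive &&
        setsockopt(client_fd, SOL_SOCKET, SO_KEEPALIVE,
                   &on, sizeof(on)) < 0) {
        perror("setsockopt SO_KEEPALIVE");
    }
}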

I agree.

Adding Vladimir to the loop as
the submitter of the original keep-alive code (if my memory
does not fail me :).

Yes, please keep me in CC for NBD patches; you just need to run
./scripts/get_maintainer.pl /path/to/patch to get all involved
maintainers.

--
Best regards,
Vladimir


