Re: [Qemu-devel] [PATCH 6/6] RFH: We lost "connect" events


From: Daniel P. Berrangé
Subject: Re: [Qemu-devel] [PATCH 6/6] RFH: We lost "connect" events
Date: Mon, 19 Aug 2019 10:52:28 +0100
User-agent: Mutt/1.12.0 (2019-05-25)

On Wed, Aug 14, 2019 at 04:02:18AM +0200, Juan Quintela wrote:
> When we have lots of channels, sometimes multifd migration fails
> with the following error:
> 
> (qemu) migrate -d tcp:0:4444
> (qemu) qemu-system-x86_64: multifd_send_pages: channel 17 has already quit!
> qemu-system-x86_64: multifd_send_pages: channel 17 has already quit!
> qemu-system-x86_64: multifd_send_sync_main: multifd_send_pages fail
> qemu-system-x86_64: Unable to write to socket: Connection reset by peer
> info migrate
> globals:
> store-global-state: on
> only-migratable: off
> send-configuration: on
> send-section-footer: on
> decompress-error-check: on
> clear-bitmap-shift: 18
> capabilities: xbzrle: off rdma-pin-all: off auto-converge: off zero-blocks: 
> off compress: off events: off postcopy-ram: off x-colo: off release-ram: off 
> block: off return-path: off pause-before-switchover: off multifd: on 
> dirty-bitmaps: off postcopy-blocktime: off late-block-activate: off 
> x-ignore-shared: off
> Migration status: failed (Unable to write to socket: Connection reset by peer)
> total time: 0 milliseconds
> 
> In this particular example I am using 100 channels.  The bigger the
> number of channels, the easier it is to reproduce.  That doesn't mean
> that it is a good idea to use so many channels.
> 
> With the previous patches in this series, I can run "reliably" on my
> hardware with up to 10 channels.  Most of the time.  Until it fails.
> With 100 channels, it fails almost always.
> 
> I thought that the problem was on the send side, so I tried to debug
> there.  As you can see from the delay, if you add any
> printf()/error_report()/trace, the error goes away; it is very timing
> sensitive.  With a delay of 10000 microseconds, it only works
> sometimes.
> 
> What have I discovered so far:
> 
> - the send side calls qemu_socket() on all the channels, so they appear
>   to get created correctly.
> - on the destination side, it appears that "somehow" some of the
>   connections are lost by the listener.  This error happens when the
>   destination side socket hasn't been "accepted" and is not properly
>   created.  As far as I can see, we have several options:
> 
>   1- I don't know how to use qio asynchronously properly
>      (this is one big possibility).
> 
>   2- glib has a bug in this case?  Or in how the qio listener is
>      implemented on top of glib.  I put in lots of printf() and other
>      instrumentation, and it appears that the listener io_func is not
>      called at all for the connections that are missing.
> 
>   3- it is always possible that we are missing some g_main_loop_run()
>      somewhere.  Notice how test/test-io-channel-socket.c calls it
>      "creatively".
> 
>   4- It is entirely possible that I should be using the sockets as
>      blocking instead of non-blocking, but I am not sure about that
>      one yet.
> 
> - on the sending side, what happens is:
> 
>   eventually it calls socket_connect() after all the async dance with
>   thread creation, etc.  The source side creates all the channels; it
>   is the destination side which is missing some of them.
> 
>   the sending side sends the first packet over that channel; it
>   "succeeds" and doesn't give any error.
> 
>   after some time, the sending side decides to send another packet
>   through that channel, and that is when we get the above error.
> 
> Any good ideas?

In inet_listen_saddr() we call

    if (!listen(slisten, 1)) {

Note that the second parameter sets the socket backlog, which is the
maximum number of pending socket connections we allow. My guess is that
the target QEMU is not accepting incoming connections quickly enough, so
you hit the limit and the kernel starts dropping the incoming
connections.
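
As a standalone illustration of what that limit means (plain POSIX C,
not QEMU code), the listening side is essentially doing the following.
The second argument to listen() caps how many completed connections the
kernel will queue while the process is busy elsewhere; anything beyond
that is liable to be dropped or reset, depending on the kernel:

    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>
    #include <arpa/inet.h>
    #include <netinet/in.h>
    #include <sys/socket.h>

    int main(void)
    {
        int slisten = socket(AF_INET, SOCK_STREAM, 0);
        struct sockaddr_in addr;

        if (slisten < 0) {
            perror("socket");
            return 1;
        }

        memset(&addr, 0, sizeof(addr));
        addr.sin_family = AF_INET;
        addr.sin_addr.s_addr = htonl(INADDR_LOOPBACK);
        addr.sin_port = htons(4444);

        if (bind(slisten, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
            perror("bind");
            return 1;
        }

        /* backlog of 1, as inet_listen_saddr() passes today */
        if (listen(slisten, 1) < 0) {
            perror("listen");
            return 1;
        }

        sleep(5);  /* stand-in for a main loop that is slow to accept() */

        /* with ~100 clients connecting during the sleep above, only a
         * couple are queued; the rest are at the kernel's mercy */
        for (;;) {
            int fd = accept(slisten, NULL, NULL);
            if (fd < 0) {
                perror("accept");
                break;
            }
            close(fd);
        }
        close(slisten);
        return 0;
    }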

As a quick test, just hack this code to pass a value of 100 and see
if it makes your test reliable. If it does, then we'll need to figure
out a nice way to handle backlog instead of hardcoding it at 1.
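
(For clarity, the quick test is literally just the one-line change to
the call quoted above, in inet_listen_saddr(), which I believe lives in
util/qemu-sockets.c:

    if (!listen(slisten, 100)) {

with everything else left as-is.  100 is an arbitrary "big enough"
value for the experiment, not a proposed final number.)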


Regards,
Daniel
-- 
|: https://berrange.com      -o-    https://www.flickr.com/photos/dberrange :|
|: https://libvirt.org         -o-            https://fstop138.berrange.com :|
|: https://entangle-photo.org    -o-    https://www.instagram.com/dberrange :|


