Re: [Qemu-devel] [PATCH v1 11/13] ui: place a hard cap on VNC server output buffer size
From: Daniel P. Berrange
Subject: Re: [Qemu-devel] [PATCH v1 11/13] ui: place a hard cap on VNC server output buffer size
Date: Wed, 20 Dec 2017 11:38:30 +0000
User-agent: Mutt/1.9.1 (2017-09-22)
On Wed, Dec 20, 2017 at 12:32:51PM +0100, Marc-André Lureau wrote:
> Hi
>
> On Mon, Dec 18, 2017 at 8:12 PM, Daniel P. Berrange <address@hidden> wrote:
> > The previous patches fix problems with throttling of forced framebuffer
> > updates and audio data capture that would cause the QEMU output buffer
> > size to grow without bound. Those fixes are graceful in that once the
> > client catches up with reading data from the server, everything
> > continues operating normally.
> >
> > There is some data which the server sends to the client that is
> > impractical to throttle. Specifically there are various pseudo
> > framebuffer update encodings to inform the client of things like
> > desktop resizes, pointer changes, audio playback start/stop, LED state
> > and so on. These generally only involve sending a very small amount of
> > data to the client, but a malicious guest might be able to do things
> > that trigger these changes at a very high rate. Throttling them is not
> > practical, as missed or delayed events would cause broken behaviour
> > for the client.
> >
> > This patch thus takes a more forceful approach of setting an absolute
> > upper bound on the amount of data we permit to be present in the
> > output buffer at any time. The previous patch set a threshold for
> > throttling the output buffer by allowing an amount of data equivalent
> > to one complete framebuffer update and one second's worth of audio
> > data. On top of this it allowed for one further forced framebuffer
> > update to be queued.
> >
> > To be conservative, we thus take that throttling threshold and
> > multiply it by 5 to form an absolute upper bound. If this bound is hit
> > during vnc_write() we forcibly disconnect the client, refusing to
> > queue further data. This limit is high enough that it should never be
> > hit unless a malicious client is trying to exploit the server, or the
> > network is completely saturated, preventing any sending of data on the
> > socket.
> >
> > This completes the fix for CVE-2017-15124 started in the previous patches.
> >
> > Signed-off-by: Daniel P. Berrange <address@hidden>
> > ---
> > ui/vnc.c | 29 +++++++++++++++++++++++++++++
> > 1 file changed, 29 insertions(+)
> >
> > diff --git a/ui/vnc.c b/ui/vnc.c
> > index 4021c0118c..a4f0279cdc 100644
> > --- a/ui/vnc.c
> > +++ b/ui/vnc.c
> > @@ -1521,8 +1521,37 @@ gboolean vnc_client_io(QIOChannel *ioc G_GNUC_UNUSED,
> > }
> >
> >
> > +/*
> > + * Scale factor to apply to vs->throttle_output_offset when checking for
> > + * hard limit. Worst case normal usage could be x2, if we have a complete
> > + * incremental update and complete forced update in the output buffer.
> > + * So x3 should be good enough, but we pick x5 to be conservative and thus
> > + * (hopefully) never trigger incorrectly.
> > + */
> > +#define VNC_THROTTLE_OUTPUT_LIMIT_SCALE 5
> > +
> > void vnc_write(VncState *vs, const void *data, size_t len)
> > {
> > + if (vs->disconnecting) {
> > + return;
> > + }
> > + /* Protection against malicious client/guest to prevent our output
> > + * buffer growing without bound if client stops reading data. This
> > + * should rarely trigger, because we have earlier throttling code
> > + * which stops issuing framebuffer updates and drops audio data
> > + * if the throttle_output_offset value is exceeded. So we only reach
> > + * this higher level if a huge number of pseudo-encodings get
> > + * triggered while data can't be sent on the socket.
> > + *
> > + * NB throttle_output_offset can be zero during early protocol
> > + * handshake, or from the job thread's VncState clone
> > + */
> > + if (vs->throttle_output_offset != 0 &&
> > + vs->output.offset > (vs->throttle_output_offset *
> > + VNC_THROTTLE_OUTPUT_LIMIT_SCALE)) {
> > + vnc_disconnect_start(vs);
>
> It seems to me that the main source of data, the display, bypasses this check.
>
> The vnc_worker_thread_loop() uses a local VncState & buffer. The
> result is moved to vs->jobs_buffer, which is later moved in
> vnc_jobs_consume_buffer() to vs->output in a bottom-half.
>
> So in theory, it seems it would be possible for a client to make
> several update requests (assuming guest display content changed), and
> have several vnc jobs queued. In the unlikely event they were
> consumed together, they would not respect the hard cap. I am not sure
> how the vnc-job queueing should be improved, just wanted to raise some
> concerns around that code and the fact that it apparently doesn't
> really respect the hard limits. Am I wrong?
>
> Perhaps the hard limit should also be put in vnc_jobs_consume_buffer()?
> Then I can imagine synchronization issues if the hard limit changes
> before the job buffers are consumed.
>
> Maybe we should limit the number of jobs that can be queued? If we
> can estimate the max result buffer size of a job buffer,
> vnc_should_update() could take that into account?
The vnc_should_update() already prevents there being more than one
queued job for the worker thread. So even if the client requests
more updates, we won't start processing them until the worker
thread has copied the job_buffer into output in the vnc_jobs_consume_buffer
bottom half. So no matter what the client requests, and how frequently
the guest display updates, we're still limiting output buffer size in
the vnc_update_client method. This vnc_write protection only needs to
cope with non-display updates, for things like pseudo-encoding messages.
Regards,
Daniel
--
|: https://berrange.com -o- https://www.flickr.com/photos/dberrange :|
|: https://libvirt.org -o- https://fstop138.berrange.com :|
|: https://entangle-photo.org -o- https://www.instagram.com/dberrange :|