From: Amos Kong
Subject: Re: [Qemu-devel] [PATCH RFC 1/2] rng-egd: improve egd backend performance
Date: Wed, 8 Jan 2014 17:14:41 +0800
User-agent: Mutt/1.5.21 (2010-09-15)

On Wed, Dec 18, 2013 at 11:05:14AM +0100, Giuseppe Scrivano wrote:
> Markus Armbruster <address@hidden> writes:
> 
> > Amos Kong <address@hidden> writes:
> >
> >> Bugzilla: https://bugs.launchpad.net/qemu/+bug/1253563
> >>
> >> We have a request queue to cache the random data, but the second
> >> request only comes in when the first has been returned, so we always
> >> have only one item in the queue. This hurts performance.
> >>
> >> This patch changes the IOThread to fill a fixed buffer with
> >> random data from the egd socket; request_entropy() returns
> >> data to the virtio queue whenever the buffer has data available.
> >>
> >> (tested with a fast source standing in for the egd socket)
> >>  # cat /dev/urandom | nc -l localhost 8003
> >>  # qemu .. -chardev socket,host=localhost,port=8003,id=chr0 \
> >>         -object rng-egd,chardev=chr0,id=rng0,buf_size=1024 \
> >>         -device virtio-rng-pci,rng=rng0
> >>
> >>   bytes     kB/s
> >>   ------    ----
> >>   131072 ->  835
> >>    65536 ->  652
> >>    32768 ->  356
> >>    16384 ->  182
> >>     8192 ->   99
> >>     4096 ->   52
> >>     2048 ->   30
> >>     1024 ->   15
> >>      512 ->    8
> >>      256 ->    4
> >>      128 ->    3
> >>       64 ->    2
> >
> > I'm not familiar with the rng-egd code, but perhaps my question has
> > value anyway: could aggressive reading ahead on a source of randomness
> > cause trouble by depleting the source?
> >
> > Consider a server restarting a few dozen guests after reboot, where each
> > guest's QEMU then tries to slurp in a couple of KiB of randomness.  How
> > does this behave?

Hi Giuseppe,
 
> I hit this performance problem while I was working on RNG device
> support in virt-manager, and I also noticed that the bottleneck is in
> the egd backend, which responds slowly to requests.

o Current situation:
  The rng-random backend reads data from a non-blocking character device.
  A new entropy request is only sent from the guest after the last request
  has been processed, so the request queue only ever caches one request.
  Almost all requests are 64 bytes, and the egd socket responds to them
  slowly.

o Solution 1: pre-reading; performance improves, but it costs a lot of memory
  In my V1 patch, I tried to add a configurable buffer that pre-reads data
  from the egd socket. The performance improved, but it used a large amount
  of memory for the buffer (see the sketch after this list).

o Solution 2: pre-sending requests to the egd socket; the improvement is trivial
  I did another test where we only pre-send entropy requests to the egd
  socket, without actually reading the data into a buffer.

o Solution 3: blind polling; not good
  Always return a fixed positive value from rng_egd_chr_can_read();
  performance can be improved to 120 kB/s because it reduces the delay
  caused by the poll mechanism.

o Solution 4: new message type
  Try to use the new message type to improve the response speed of the
  egd socket.

o Solution 5: a non-blocking read?
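
To make Solution 1 concrete, here is a minimal sketch of the buffering
idea. It is only an illustration of the approach: the names (EgdBuffer,
egd_buf_fill, egd_buf_take, EGD_BUF_SIZE) are mine and are not the actual
fields or helpers from the RFC patch.

  /* Sketch only: the chardev read handler appends incoming egd data to a
   * fixed buffer, and the request_entropy() path is served from it. */
  #include <stddef.h>
  #include <stdint.h>
  #include <string.h>

  #define EGD_BUF_SIZE 1024                 /* matches buf_size=1024 above */
  #define MIN(a, b) ((a) < (b) ? (a) : (b))

  typedef struct {
      uint8_t buf[EGD_BUF_SIZE];   /* entropy pre-read from the egd socket */
      size_t len;                  /* valid bytes currently buffered */
  } EgdBuffer;

  /* called from the chardev read callback: stash whatever egd sent */
  static void egd_buf_fill(EgdBuffer *b, const uint8_t *data, size_t size)
  {
      size_t n = MIN(size, EGD_BUF_SIZE - b->len);

      memcpy(b->buf + b->len, data, n);
      b->len += n;
  }

  /* called from request_entropy(): serve the guest from the buffer */
  static size_t egd_buf_take(EgdBuffer *b, uint8_t *out, size_t want)
  {
      size_t n = MIN(want, b->len);

      memcpy(out, b->buf, n);
      memmove(b->buf, b->buf + n, b->len - n);  /* keep the rest in front */
      b->len -= n;
      return n;
  }

The memory cost is simply EGD_BUF_SIZE bytes per device, which is why a
large buffer helps throughput but wastes memory when the guest is idle.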

> I thought as well about
> adding a buffer, but handling it through a new message type in the EGD
> protocol.  The new message type informs the EGD daemon of the buffer
> size, and that the buffer has a lower priority: the daemon

Lower priority or higher priority? We need the daemon to respond to our
requests quickly.

> should fill it when there are no other queued requests.  Could such an
> approach solve the scenario you've described?

I will try. Do you know the name of the new message type? Can you show me
an example?

QEMU code:
  uint8_t header[2];
  header[0] = 0x02;  /* 0x01: returns len + data, 0x02: only returns data */
  header[1] = len;
  qemu_chr_fe_write(s->chr, header, sizeof(header));
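
For reference, the two message types in that comment differ in how the
reply is framed: with 0x01 (non-blocking read) the daemon replies with one
length byte followed by that many bytes (possibly fewer than requested),
while with 0x02 (blocking read) it replies with exactly the requested
number of bytes and no length prefix. A rough sketch of parsing both
replies, using plain POSIX read(2) only for illustration (QEMU goes
through the chardev layer instead, and these helper names are mine):

  #include <stddef.h>
  #include <stdint.h>
  #include <unistd.h>

  /* read exactly n bytes; 0 on success, -1 on error or EOF */
  static int read_full(int fd, uint8_t *buf, size_t n)
  {
      while (n > 0) {
          ssize_t r = read(fd, buf, n);
          if (r <= 0) {
              return -1;
          }
          buf += r;
          n -= (size_t)r;
      }
      return 0;
  }

  /* 0x01 reply: <len byte> + <len bytes of data>, len may be < requested */
  static int egd_reply_nonblocking(int fd, uint8_t *out, size_t *out_len)
  {
      uint8_t len;

      if (read_full(fd, &len, 1) < 0 || read_full(fd, out, len) < 0) {
          return -1;
      }
      *out_len = len;
      return 0;
  }

  /* 0x02 reply: exactly the requested number of bytes, no length prefix */
  static int egd_reply_blocking(int fd, uint8_t *out, size_t len)
  {
      return read_full(fd, out, len);
  }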
 
> Cheers,
> Giuseppe

-- 
                        Amos.


