qemu-devel

Re: [Qemu-devel] [PATCH RFC 1/2] rng-egd: improve egd backend performance


From: Varad Gautam
Subject: Re: [Qemu-devel] [PATCH RFC 1/2] rng-egd: improve egd backend performance
Date: Tue, 24 Dec 2013 15:28:36 +0530

On Tue, Dec 17, 2013 at 12:33 PM, Amos Kong <address@hidden> wrote:
>
> On my test host, the egd socket is very slow when I use it.
> So I used a quick source, /dev/urandom, and ignored the egd
> protocol here, which might be wrong.
>
> > Can you suggest a way to test this the right way?
>
> It seems we should still use egd.pl to set up a daemon socket.
> But how do we make it very quick? We can't verify the performance
> improvement if the source is too slow.
>
> Can we use the "--bottomless" option for egd.pl? It does not
> decrement the entropy count. When I use this option, the speed
> (without my patches) is about 13 kB/s.

Is egd more likely to be found running *as a substitute* on host machines
without a /dev/random device? If so, speed becomes a major issue when it is
not paired with a hardware source, since it gathers entropy from the output
of the various programs it calls.

In that case, instead of having egd running on the host, would it be better
to have the guests run their own copy of egd if needed? This would keep
each guest's available entropy independent of the others, and remove the
issue of a single guest overusing and depleting the host's entropy for
everyone else.

Otherwise, we could use the `--bottomless` option to make it fast for
testing, but in practice, as the README notes, it won't be good enough for
generating keys. Since egd communicates through sockets, we can build the
qemu back-end against it this way.
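For a fast test source that still speaks the protocol, one option is a tiny
server that answers EGD read requests from /dev/urandom, instead of a raw
`nc` pipe. A minimal sketch in Python, assuming the commonly documented EGD
commands (0x01 = nonblocking read, reply is one length byte plus data;
0x02 = blocking read, reply is the data only); the function name is
illustrative:

```python
import os
import socket

def serve_egd(conn):
    """Answer EGD-style read requests with bytes from os.urandom.

    Assumes the commonly documented EGD commands:
      0x01 <n>: nonblocking read, reply is 1 length byte + data
      0x02 <n>: blocking read, reply is exactly n bytes
    """
    while True:
        hdr = conn.recv(2)
        if len(hdr) < 2:              # peer closed (or short read): stop
            return
        cmd, n = hdr[0], hdr[1]
        data = os.urandom(n)          # effectively "bottomless", but fast
        if cmd == 0x01:               # nonblocking read
            conn.sendall(bytes([len(data)]) + data)
        elif cmd == 0x02:             # blocking read (what rng-egd issues)
            conn.sendall(data)
        else:                         # other commands not needed for testing
            return
```

Bound to a listening socket, the `-chardev socket,...` line from the test
setup in the patch could point at it unchanged.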

Theoretically, would mixing entropy from egd (software-generated) with
/dev/random (triggered by hardware events) produce a better entropy source
than either of them individually? I know that /dev/random is pretty good,
but if it can be mixed with other sources and still be useful, it can be
made to last longer.
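On the mixing question: XOR-combining two independent streams yields output
at least as unpredictable as the stronger of the two, so mixing egd output
with /dev/random cannot make things worse. A sketch of the general principle
(not the kernel's actual hash-based pool mixing):

```python
import os

def xor_mix(a: bytes, b: bytes) -> bytes:
    """XOR two equal-length byte strings from independent sources.

    The result is at least as hard to predict as the stronger input,
    so adding a weaker source cannot hurt -- it can only help.
    """
    assert len(a) == len(b)
    return bytes(x ^ y for x, y in zip(a, b))

# e.g. combine a software source with a hardware-backed one
# (both stand-ins here, for a self-contained example)
mixed = xor_mix(os.urandom(32), os.urandom(32))
```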

Varad

On Wed, Dec 18, 2013 at 3:35 PM, Giuseppe Scrivano <address@hidden> wrote:
> Markus Armbruster <address@hidden> writes:
>
>> Amos Kong <address@hidden> writes:
>>
>>> Bugzilla: https://bugs.launchpad.net/qemu/+bug/1253563
>>>
>>> We have a request queue to cache the random data, but the second
>>> request only comes in after the first one returns, so there is
>>> always only one item in the queue. This hurts performance.
>>>
>>> This patch changes the IOthread to fill a fixed buffer with
>>> random data from the egd socket; request_entropy() then returns
>>> data to the virtio queue whenever the buffer has data available.
>>>
>>> (test with a fast source, disguised egd socket)
>>>  # cat /dev/urandom | nc -l localhost 8003
>>>  # qemu .. -chardev socket,host=localhost,port=8003,id=chr0 \
>>>         -object rng-egd,chardev=chr0,id=rng0,buf_size=1024 \
>>>         -device virtio-rng-pci,rng=rng0
>>>
>>>   bytes     kb/s
>>>   ------    ----
>>>   131072 ->  835
>>>    65536 ->  652
>>>    32768 ->  356
>>>    16384 ->  182
>>>     8192 ->   99
>>>     4096 ->   52
>>>     2048 ->   30
>>>     1024 ->   15
>>>      512 ->    8
>>>      256 ->    4
>>>      128 ->    3
>>>       64 ->    2
>>
>> I'm not familiar with the rng-egd code, but perhaps my question has
>> value anyway: could aggressive reading ahead on a source of randomness
>> cause trouble by depleting the source?
>>
>> Consider a server restarting a few dozen guests after reboot, where each
>> guest's QEMU then tries to slurp in a couple of KiB of randomness.  How
>> does this behave?
>
> I hit this performance problem while I was working on RNG device
> support in virt-manager, and I also noticed that the bottleneck is the
> egd backend, which responds slowly to requests.  I also thought about
> adding a buffer, but handling it through a new message type in the EGD
> protocol.  The new message type informs the EGD daemon of the buffer
> size and marks the buffer data as lower priority, to be filled by the
> daemon when there are no other queued requests.  Could such an
> approach solve the scenario you've described?
>
> Cheers,
> Giuseppe
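For reference, the buffering scheme the patch describes (an IOthread keeping
a fixed buffer topped up from the egd socket, with request_entropy() served
from whatever is already buffered) can be sketched roughly as follows. This
is an illustrative Python model of the pattern, not the actual C code in
QEMU's rng-egd backend:

```python
import os
import threading
import time

class BufferedRng:
    """Model of the patch's idea: read ahead into a fixed buffer.

    fill() runs on an I/O thread and keeps the buffer topped up;
    request_entropy() hands out already-buffered bytes, so each
    request no longer pays a full socket round-trip.
    """
    def __init__(self, source, buf_size=1024):
        self.source = source              # callable: n -> up to n bytes
        self.buf_size = buf_size          # cf. the buf_size=1024 option
        self.buf = bytearray()
        self.lock = threading.Lock()
        self.need = threading.Event()
        self.need.set()                   # buffer starts empty: needs filling

    def fill(self):                       # I/O thread body
        while True:
            self.need.wait()
            chunk = self.source(self.buf_size)
            with self.lock:
                self.buf += chunk[:self.buf_size - len(self.buf)]
                if len(self.buf) >= self.buf_size:
                    self.need.clear()     # full: sleep until drained

    def request_entropy(self, n):
        with self.lock:
            out = bytes(self.buf[:n])     # may be short if buffer is low
            del self.buf[:n]
            self.need.set()               # wake the I/O thread to refill
        return out

# usage sketch with a stand-in fast source
rng = BufferedRng(os.urandom, buf_size=64)
threading.Thread(target=rng.fill, daemon=True).start()
time.sleep(0.1)                           # let the I/O thread prefill
sample = rng.request_entropy(16)
```

This also makes Markus's concern concrete: every instance prefills buf_size
bytes up front, so dozens of guests restarting at once would each draw that
much from the shared source immediately.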


