
Re: [Qemu-devel] [PATCH 2/2] VirtIO RNG


From: Ian Molton
Subject: Re: [Qemu-devel] [PATCH 2/2] VirtIO RNG
Date: Sat, 24 Apr 2010 09:58:31 +0100
User-agent: Mozilla/5.0 (X11; U; Linux x86_64; en-US; rv:1.9.1.9) Gecko/20100411 Icedove/3.0.4

On 24/04/10 02:37, Jamie Lokier wrote:
> Ian Molton wrote:
>> Jamie Lokier wrote:
>>> First of all: Why are your egd daemons with open connections dying
>>> anyway?  Buggy egd?

>> No, they aren't buggy, but occasionally, the sysadmin of said server
>> farms may want to, you know, update the daemon?

> Many daemons don't kill active connections on upgrade.  For example
> sshd, telnetd, ftpd, rsyncd...  Only new connections get the new daemon.

That's actually completely irrelevant, but OK.

> But let's approach this from a different angle:

Oh god, not again...

> What do _other_ long-lived EGD clients do?  Is it:
>
>     1. When egd is upgraded, the clients break.

I'm sure some do this.

>     3. Active connections aren't killed on egd upgrade.

I don't know of any egd servers that are that nice.

>     2. They keep trying to reconnect, as you've implemented in qemu.

Some do.

>     4. Whenever they want entropy, they're expected to open a
>        connection, request what they want, read it, and close.  Each time.

Some do that too.

> Whatever other long-lived clients do, that's probably best for qemu
> too.

Well, clearly that's going to be inconclusive.

> 4 is interesting because it's an alternative approach to rate-limiting
> the byte stream: Instead, fetch a batch of bytes in a single
> open/read/close transaction when needed.  Rate-limit _that_, and you
> don't need separate reconnection code.

You're effectively talking about idle-disconnect. It's not actually a bad idea, but we still need reconnect support in some form, in case the server goes away mid-fetch.
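
For reference, a one-shot transaction in the style of option 4 above is small enough to sketch. This is purely illustrative - not code from the patch - and it assumes the de-facto EGD command set, where request 0x02 blocks until the daemon can return the full count; the function name and socket-path argument are made up:

/* Illustrative one-shot EGD fetch: connect, ask for 'len' bytes with the
 * blocking command (0x02), read them, and close.  No long-lived connection,
 * so there is nothing to lose when the daemon is restarted. */
#include <string.h>
#include <unistd.h>
#include <sys/socket.h>
#include <sys/un.h>

static ssize_t egd_fetch_once(const char *sock_path, unsigned char *buf,
                              unsigned char len)
{
    struct sockaddr_un addr;
    unsigned char req[2] = { 0x02, len };       /* blocking entropy request */
    ssize_t got = 0;
    int fd = socket(AF_UNIX, SOCK_STREAM, 0);

    if (fd < 0)
        return -1;

    memset(&addr, 0, sizeof(addr));
    addr.sun_family = AF_UNIX;
    strncpy(addr.sun_path, sock_path, sizeof(addr.sun_path) - 1);

    if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0 ||
        write(fd, req, sizeof(req)) != (ssize_t)sizeof(req)) {
        close(fd);
        return -1;
    }
    while (got < len) {
        ssize_t n = read(fd, buf + got, len - got);
        if (n <= 0)
            break;                              /* server went away mid-fetch */
        got += n;
    }
    close(fd);
    return got;                                 /* may be short on error */
}

Rate-limiting then only has to decide how often this gets called, which is the point being made above about not needing separate reconnection code.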

> So I tried checking if egd kills connections when upgraded, and found...

> No 'egd' package for my Debian and Ubuntu systems, nor anything which
> looks obvious.  There are several other approaches to gathering
> entropy from hardware sources, for example rng-tools, haveged, ekeyd, and
> ekeyd-egd-linux (aha... it's a client).

ekeyd is an EGD server.

> All of those have in common: they fetch entropy from something, and
> write it to the kernel's /dev/random pool.  Applications are expected
> to read from that pool.

Um, no. Applications *can* read that pool. It's not the only way, and designing apps that can ONLY do that is forcing policy.

> In particular, if you do have a hardware or network EGD entropy
> source, you can run ekeyd-egd-linux which is an EGD client, which
> transfers from EGD -> the kernel, so that applications can read from
> /dev/random.

You *can* - but what if you have two entropy sources and you don't want the guests sucking down entropy from the host's source, only their shared source?
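
For background on the "write it to the kernel's /dev/random pool" step discussed above: on Linux that is done with the RNDADDENTROPY ioctl, which both mixes the bytes in and credits the pool. A rough sketch, with an invented function name, trimmed error handling, and a caller-supplied credit figure:

/* Sketch of how a userspace daemon can feed and credit the kernel pool.
 * Requires CAP_SYS_ADMIN; entropy_bits is however much credit the caller
 * believes the data deserves. */
#include <fcntl.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <linux/random.h>

static int add_to_kernel_pool(const unsigned char *data, int nbytes,
                              int entropy_bits)
{
    struct rand_pool_info *info;
    int fd, ret;

    info = malloc(sizeof(*info) + nbytes);
    if (!info)
        return -1;
    info->entropy_count = entropy_bits;     /* credit claimed, in bits */
    info->buf_size = nbytes;
    memcpy(info->buf, data, nbytes);

    fd = open("/dev/random", O_WRONLY);
    if (fd < 0) {
        free(info);
        return -1;
    }
    ret = ioctl(fd, RNDADDENTROPY, info);   /* mix in and credit the pool */
    close(fd);
    free(info);
    return ret;
}

Merely writing the bytes to /dev/random mixes them in but credits no entropy, which is why the daemons go through the ioctl.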

> That means, on Debian & Ubuntu Linux at least, there is no need for
> applications to talk EGD protocol themselves, even to get network or
> hardware entropy - it's better left to egd-linux, rng-tools etc. to
> manage.

And what if you don't use Debian or Ubuntu, or you install your own package? (You can use non-packaged software, after all...)

> But the situation is no doubt different on non-Linux hosts.

Indeed. /dev/random doesn't even exist on many hosts.

> By the way, ekeyd-egd-linux is a bit thoughtful: For example it has a
> "shannons-per-byte" option, and it doesn't drain the EGD server at all
> when the local pool is sufficiently full.

Indeed it is. I happen to be working with the folks who wrote it.

> Does your EGD client + virtio-rng support do that - avoid draining the
> source when the guest's pool is full enough?

Actually, yes - although that involves trusting the guest, which is why my implementation has its own rate limiting: to prevent guest abuse of the host's pool.
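
To make the shape of such a host-side limit concrete (this is not lifted from the patch; the structure, names and windowing scheme are invented for illustration): the host simply caps how much entropy it will pull from its source per interval on the guest's behalf, however often the guest asks.

/* Illustrative host-side limiter: the guest may receive at most
 * 'bytes_per_window' bytes of entropy per 'window_ms' milliseconds,
 * whatever it requests. */
#include <stdint.h>

typedef struct {
    uint64_t window_ms;        /* length of each accounting window */
    uint64_t bytes_per_window; /* budget per window */
    uint64_t window_start_ms;  /* when the current window began */
    uint64_t used;             /* bytes already granted this window */
} rng_rate_limit;

/* Returns how many of 'wanted' bytes may be fetched from the backend now;
 * 0 means the guest must wait for the next window. */
static uint64_t rate_limit_grant(rng_rate_limit *rl, uint64_t now_ms,
                                 uint64_t wanted)
{
    if (now_ms - rl->window_start_ms >= rl->window_ms) {
        rl->window_start_ms = now_ms;   /* new window: reset the budget */
        rl->used = 0;
    }
    if (rl->used >= rl->bytes_per_window)
        return 0;
    if (wanted > rl->bytes_per_window - rl->used)
        wanted = rl->bytes_per_window - rl->used;
    rl->used += wanted;
    return wanted;
}

A "don't fetch while the guest's pool is full enough" check, where the guest can report that, would naturally sit at the same point.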

>>> If guests need a _reliable_ source of data for security, silently not
>>> complaining when it's gone away and hoping it comes back isn't good
>>> enough.

>> Why? It's not like the guest:
>>
>> a) Has a choice in the matter
>> b) Would carry on without the entropy (it knows it has no entropy)

> Because one might prefer a big red light, a halted machine removed
> from the cluster which can resume its work when ready, and an email to
> warn you that the machine isn't able to operate normally _without_
> having to configure each guest's email, rather than a working machine
> with increasing numbers of stuck crypto processes waiting on
> /dev/random, which runs out of memory and, after getting into swap
> hell, has to be rebooted, losing the other work that it was in the
> middle of doing.
>
> Well, you personally might not prefer that.  But that's why we
> separate policy from mechanism...

That's something of a doomsday scenario, but hey - if you like having QMP support, feel free to add it to the patch - that's what open source is all about, right?

> virtio-serial isn't emulating a normal serial port.  It supports apps
> like "send machine status blobs regularly", without having to be
> robust against half a blob being delivered.
>
> You can design packets so that doesn't matter, but virtio-serial
> supports not needing to do that, making the apps simpler.

Have you actually read the code? virtio-rng is FAR simpler than virtio-serial.

>>> I don't think it'll happen.  I think egd is a rather unusual
>>> If another backend ever needs it, it's easy to move code around.

>> *bangs head on wall*
>>
>> That was the exact same argument I made about the rate limiting code.
>> Why is that apparently only valid if it's not me that says it?

> Because you're talking to multiple people who hold different opinions,
> and opinions change as more is learned and thought about.

Round in circles, apparently. This is getting on for the fourth time round...

> Ah, that's not quite what I meant.  I meant I wasn't convinced it is
> needed for egd, not I don't think anyone should use egd.  (But now I
> see that egd-linux has a "reconnect time" option, perhaps reconnecting
> _is_ de facto part of EGD protocol.)

Actually, EGD is very much an ad-hoc standard, I'm afraid. But it does exist, and my implementation both works and is compliant with the standard, such as it is.
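
For anyone following along, the "standard, such as it is" boils down to the command set established by the original egd.pl and mirrored by clients such as OpenSSL's RAND_egd(). To the best of my knowledge it amounts to the following; the enum names below are invented for readability:

/* De-facto EGD wire protocol: one command byte per request, as implemented
 * by the original egd.pl.  Not an official specification. */
enum egd_command {
    EGD_GET_ENTROPY_COUNT = 0x00, /* reply: 4-byte big-endian count of available bits */
    EGD_READ_NONBLOCKING  = 0x01, /* send 1 length byte; reply: 1 length byte + that many bytes */
    EGD_READ_BLOCKING     = 0x02, /* send 1 length byte; reply: exactly that many bytes */
    EGD_WRITE_ENTROPY     = 0x03, /* send 2-byte bit count, 1 length byte, then the bytes */
    EGD_REPORT_PID        = 0x04, /* reply: 1 length byte + PID string */
};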

> But now that we've confirmed that on Debian & Ubuntu, all hardware
> entropy sources are injected into /dev/random by userspace daemons
> rather than serving EGD protocol,

No, we know that they *can be*, not that they are - and not even by default; you actually have to elect to install a package to do that.

> and if you do have an EGD server you
> can run egd-linux and apps can read /dev/random, and egd-linux won't
> drain the EGD server unnecessarily...

If you want the entropy to enter the *host's* pool, then sure - but I thought we weren't about forcing policy on the users?

> are you sure EGD support is appropriate?

Yes. It's what the users want. It's not broken or inefficient. Therefore it's appropriate.

> Is it different on, say, Fedora?  Or are you thinking of
> other hosts?

There do exist hosts with no /dev/random, as it happens.

-Ian



