
From: Michael S. Tsirkin
Subject: Re: [Qemu-devel] [PATCH] virtio-rng: Bump up quota value only when guest requests entropy
Date: Mon, 13 Jul 2015 10:55:55 +0300

On Mon, Jul 13, 2015 at 01:04:45PM +0530, Amit Shah wrote:
> On (Mon) 13 Jul 2015 [02:53:36], Pankaj Gupta wrote:
> > Hi Amit,
> > 
> > Thanks for the review.
> > 
> > > 
> > > On (Fri) 10 Jul 2015 [15:04:00], Pankaj Gupta wrote:
> > > >    Timer was added in virtio-rng to rate limit the
> > > > entropy. It used to trigger at regular intervals to
> > > > bump up the quota value. The value of quota and timer
> > > > slice is decided based on entropy source rate in host.
> > > 
> > > It doesn't necessarily depend on the source rate in the host - all we
> > > want the quota+timer to do is to limit the amount of data a guest can
> > > take from the host - to ensure one (potentially rogue) guest does not
> > > use up all the entropy from the host.
> > 
> > Sorry for not being clear on this. By rate limit I meant the same;
> > I used a broader term.
> 
> My comment was to the 'value of quota and timer slice is decided based
> on entropy source rate in host' - admins will usually not decide based
> on what sources the host has - they will typically decide how much a
> guest is supposed to consume.
> 
> > > > This resulted in the timer triggering even when the quota
> > > > was not exhausted at all, causing extra processing.
> > > > 
> > > > This patch triggers the timer only when the guest requests
> > > > entropy. As soon as the first request for entropy comes from
> > > > the guest, we set the timer. The timer bumps up the quota value
> > > > when it fires.
> > > 
> > > Can you say how you tested this?
> > > 
> > > Mainly interested in seeing the results in these cases:
> > > 
> > > * No quota/timer specified on command line
> >     Tested this scenario. I am setting the timer when the first
> >     request comes, so the timer gets fired after (1 << 16) ms.
> 
> But in case a quota or a timer isn't specified, the timer shouldn't be
> armed in the first place.
> 
> > > * Quota+timer specified on command line, and guest keeps asking host
> > >   for unlimited entropy, e.g. by doing 'dd if=/dev/hwrng of=/dev/null'
> > >   in the guest.
> > 
> >     I did not do 'dd if=/dev/hwrng of=/dev/null'.
> >     I did 'cat /dev/hwrng' and 'cat /dev/random'.
> 
> OK - that's similar.  I like dd because when dd is stopped, it gives
> the rate at which data was received, so it helps in seeing whether
> we're getting a higher rate than what was specified on the cmdline.
> 
> > > * Ensure quota restrictions are maintained, and we're not giving more
> > >   data than configured.
> >     Ensured. We are giving <= the requested data.
> > > 
> > > For these tests, it's helpful to use the host's /dev/urandom as the
> > > source, since that can give data faster to the guest than the default
> > > /dev/random.  (Otherwise, if the host itself blocks on /dev/random,
> > > the guest may not get entropy due to that reason vs it not getting
> > > entropy due to rate-limiting.)
> > 
> >   Agree.
> >   Will test this as well.
> > 
> > > 
> > > I tested one scenario using the trace events.  With some quota and a
> > > timer value specified on the cmdline, before patch, I get tons of
> > > trace events before the guest is even up.  After applying the patch, I
> > > don't get any trace events.  So that's progress!
> > 
> > Thanks.
> > > 
> > > I have one question:
> > > 
> > > > Signed-off-by: Pankaj Gupta <address@hidden>
> > > > ---
> > > >  hw/virtio/virtio-rng.c         | 15 ++++++++-------
> > > >  include/hw/virtio/virtio-rng.h |  1 +
> > > >  2 files changed, 9 insertions(+), 7 deletions(-)
> > > > 
> > > > diff --git a/hw/virtio/virtio-rng.c b/hw/virtio/virtio-rng.c
> > > > index 22b1d87..8774a0c 100644
> > > > --- a/hw/virtio/virtio-rng.c
> > > > +++ b/hw/virtio/virtio-rng.c
> > > > @@ -78,6 +78,12 @@ static void virtio_rng_process(VirtIORNG *vrng)
> > > >          return;
> > > >      }
> > > >  
> > > > +    if (vrng->activate_timer) {
> > > > +    if (vrng->activate_timer) {
> > > > +        timer_mod(vrng->rate_limit_timer,
> > > > +                   qemu_clock_get_ms(QEMU_CLOCK_VIRTUAL) + vrng->conf.period_ms);
> > > > +        vrng->activate_timer = false;
> > > > +    }
> > > > +
> > > >      if (vrng->quota_remaining < 0) {
> > > >          quota = 0;
> > > >      } else {
> > > > @@ -139,8 +145,7 @@ static void check_rate_limit(void *opaque)
> > > >  
> > > >      vrng->quota_remaining = vrng->conf.max_bytes;
> > > >      virtio_rng_process(vrng);
> > > > -    timer_mod(vrng->rate_limit_timer,
> > > > -                   qemu_clock_get_ms(QEMU_CLOCK_VIRTUAL) + vrng->conf.period_ms);
> > > > +    vrng->activate_timer = true;
> > > >  }
> > > 
> > > We're processing an older request first, and then firing the timer.
> > > What's the use of doing it this way?  Why even do this?
> > 
> > I also had this query. If we don't call this after resetting
> > 'vrng->quota_remaining', further requests do not work. It looks to me
> > like a limitation in the earlier code when 'vrng->quota_remaining'
> > goes to <= 0: a self-request is needed to reset things.
> > 
> > I will try to find the answer.
> 
> OK so I actually read through the thing, and this is useful for such a
> scenario:
> 
> assume our rate-limit is at 4KB/s.
> 
> * guest queues up multiple requests, say 4KB, 8KB, 12KB.
> * we will serve the first request in the queue, which is 4KB.
> * then, check_rate_limit() is triggered, and we serve the 2nd request.
> * since the 2nd request is for 8KB, but we can only give 4KB in 1
>   second, we only give the guest 4KB.  We then re-arm the timer so
>   that we can get to the next request in the list.  Without this
>   re-arming, the already-queued request will not get attention (till
>   the next time the guest writes something to the queue).
> * we then serve the 3rd request for 12KB, again with 4KB of data.
> 
> One thing to observe here is that we just service the minimum data we
> can without waiting to service the entire request (i.e. we give the
> guest 4KB of data, and not 8 or 12 KB).  We could do that by waiting
> for the timer to expire, and then servicing the entire request, but
> the current way is simpler and better: if the guest isn't happy to
> receive just 4KB when it asked for 12, it can ask again.
> 
> That also saves us the trouble with live migration: if we hold onto a
> buffer, that becomes state, and we'll have to migrate it as well.
> This current model saves us from doing that.

Timers hold state and have to be migrated; this seems to be missing
here.


> And now that I type all this, I recall having thought about these
> things initially.  Don't know if I wrote it up somewhere, or if there
> are email conversations on this subject...
> 
> So now with your changes, here is what we can do: instead of just
> activating the timer in check_rate_limit(), we can issue a
> get_request_size() call.  If the return value is > 0, it means we have
> a buffer queued up by the guest, and we can then call
> virtio_rng_process() and set activated to true.  Else, no need to call
> virtio_rng_process at all, and the job for the timer is done, since
> there are no more outstanding requests from the guest.
> 
> Makes sense?
> 
>               Amit


