Re: [Qemu-devel] [PATCH] Simple performance logging and network limiting
From: Harald Schieche
Subject: Re: [Qemu-devel] [PATCH] Simple performance logging and network limiting based on trace option
Date: Thu, 30 Oct 2014 15:05:11 +0100
> Missing commit description:
>
> What problem are you trying to solve?
>
I want to log storage throughput (I/O operations per second) and
network speed (packets and bandwidth per second), and
I want to limit network traffic to a specific bandwidth.
>
> The "Signed-off-by" goes here.
>
> > diff --git a/block/raw-posix.c b/block/raw-posix.c
> > index 475cf74..3c5cc71 100644
> > --- a/block/raw-posix.c
> > +++ b/block/raw-posix.c
> > @@ -1031,6 +1031,27 @@ static int aio_worker(void *arg)
> > return ret;
> > }
> >
> > +static void log_guest_storage_performance(void)
> > +{
> > + /*
> > + * Performance logging isn't specified yet.
> > + * Therefore we're using existing tracing.
> > + */
> > + static int64_t logged_clock;
> > + static int64_t counter;
>
> There can be multiple threads, static variables will not work.
OK, I have to fix that.
>
> > + int64_t clock = get_clock();
> > +
> > + counter++;
> > + if (clock - logged_clock >= 1000000000LL) {
>
> You are trying to identify calls to log_guest_storage_performance() that
> are more than 1 second apart? I'm not sure what you're trying to
> measure.
I want to measure calls per second, and I want to log only while I/O is active.
>
> It is simplest to have unconditional trace events and calculate
> latencies during trace file analysis. That way no arbitrary constants
> like 1 second are hard-coded into QEMU.
We already have an unconditional trace event (paio_submit), but maybe there
are too many calls to it.
>
> > diff --git a/net/queue.c b/net/queue.c
> > index f948318..2b0fef7 100644
> > --- a/net/queue.c
> > +++ b/net/queue.c
> > @@ -23,7 +23,9 @@
> >
> > #include "net/queue.h"
> > #include "qemu/queue.h"
> > +#include "qemu/timer.h"
> > #include "net/net.h"
> > +#include "trace.h"
> >
> > /* The delivery handler may only return zero if it will call
> > * qemu_net_queue_flush() when it determines that it is once again able
> > @@ -58,6 +60,15 @@ struct NetQueue {
> > unsigned delivering : 1;
> > };
> >
> > +static int64_t bandwidth_limit; /* maximum number of bits per second */
>
> Throttling should be per-device, not global.
Maybe that would be better, but this patch is meant to be as simple as possible.
>
> > +
> > +void qemu_net_set_bandwidth_limit(int64_t limit)
> > +{
> > + bandwidth_limit = limit;
> > + trace_qemu_net_set_bandwidth_limit(limit);
> > +}
> > +
> > +
> > NetQueue *qemu_new_net_queue(void *opaque)
> > {
> > NetQueue *queue;
> > @@ -175,6 +186,48 @@ static ssize_t qemu_net_queue_deliver_iov(NetQueue
> > *queue,
> > return ret;
> > }
> >
> > +static int64_t limit_network_performance(int64_t start_clock,
> > + int64_t bytes)
> > +{
> > + int64_t clock = get_clock();
> > + int64_t sleep_usecs = 0;
> > + if (bandwidth_limit > 0) {
> > + sleep_usecs = (bytes * 8 * 1000000LL) / bandwidth_limit -
> > + (clock - start_clock) / 1000LL;
> > + }
> > + if (sleep_usecs > 0) {
> > + usleep(sleep_usecs);
>
> This does more than limit the network performance, it can also freeze
> the guest.
>
> QEMU is event-driven. The event loop thread is not allowed to block or
> sleep - otherwise the vcpu threads will block when they try to acquire
> the QEMU global mutex.
>
Yes, it freezes the guest. That's not good, but it is simple.