qemu-devel

Re: [Qemu-devel] [PATCH] Simple performance logging and network limiting


From: Stefan Hajnoczi
Subject: Re: [Qemu-devel] [PATCH] Simple performance logging and network limiting based on trace option
Date: Wed, 29 Oct 2014 15:09:34 +0000
User-agent: Mutt/1.5.23 (2014-03-12)

On Tue, Oct 28, 2014 at 04:33:42PM +0100, Harald Schieche wrote:

Missing commit description:

What problem are you trying to solve?

The Signed-off-by line also belongs here, in the commit description.

> diff --git a/block/raw-posix.c b/block/raw-posix.c
> index 475cf74..3c5cc71 100644
> --- a/block/raw-posix.c
> +++ b/block/raw-posix.c
> @@ -1031,6 +1031,27 @@ static int aio_worker(void *arg)
>      return ret;
>  }
>  
> +static void log_guest_storage_performance(void)
> +{
> +    /*
> +     * Performance logging isn't specified yet.
> +     * Therefore we're using existing tracing.
> +     */
> +    static int64_t logged_clock;
> +    static int64_t counter;

There can be multiple threads, so static variables will not work.

> +    int64_t clock = get_clock();
> +
> +    counter++;
> +    if (clock - logged_clock >= 1000000000LL) {

You are trying to identify calls to log_guest_storage_performance() that
are more than 1 second apart?  I'm not sure what you're trying to
measure.

It is simplest to have unconditional trace events and calculate
latencies during trace file analysis.  That way no arbitrary constants
like 1 second are hard-coded into QEMU.

> diff --git a/net/queue.c b/net/queue.c
> index f948318..2b0fef7 100644
> --- a/net/queue.c
> +++ b/net/queue.c
> @@ -23,7 +23,9 @@
>  
>  #include "net/queue.h"
>  #include "qemu/queue.h"
> +#include "qemu/timer.h"
>  #include "net/net.h"
> +#include "trace.h"
>  
>  /* The delivery handler may only return zero if it will call
>   * qemu_net_queue_flush() when it determines that it is once again able
> @@ -58,6 +60,15 @@ struct NetQueue {
>      unsigned delivering : 1;
>  };
>  
> +static int64_t bandwidth_limit;     /* maximum number of bits per second */

Throttling should be per-device, not global.

> +
> +void qemu_net_set_bandwidth_limit(int64_t limit)
> +{
> +    bandwidth_limit = limit;
> +    trace_qemu_net_set_bandwidth_limit(limit);
> +}
> +
> +
>  NetQueue *qemu_new_net_queue(void *opaque)
>  {
>      NetQueue *queue;
> @@ -175,6 +186,48 @@ static ssize_t qemu_net_queue_deliver_iov(NetQueue *queue,
>      return ret;
>  }
>  
> +static int64_t limit_network_performance(int64_t start_clock,
> +                                         int64_t bytes)
> +{
> +    int64_t clock = get_clock();
> +    int64_t sleep_usecs = 0;
> +    if (bandwidth_limit > 0) {
> +        sleep_usecs = (bytes * 8 * 1000000LL) / bandwidth_limit -
> +                      (clock - start_clock) / 1000LL;
> +    }
> +    if (sleep_usecs > 0) {
> +        usleep(sleep_usecs);

This does more than limit network performance: it can also freeze the
guest.

QEMU is event-driven.  The event loop thread is not allowed to block or
sleep - otherwise the vcpu threads will block when they try to acquire
the QEMU global mutex.


