Re: [Qemu-devel] Tracking the VM making an IO request

From: Aarian P. Aleahmad
Subject: Re: [Qemu-devel] Tracking the VM making an IO request
Date: Sat, 12 Mar 2016 16:17:58 +0330

Thanks for helping me. What should I do when using KVM?

On Wed, Feb 10, 2016 at 4:10 PM, Paolo Bonzini <address@hidden> wrote:

On 10/02/2016 11:23, Stefan Hajnoczi wrote:
> On Wed, Feb 10, 2016 at 12:35:54PM +0330, Aarian P. Aleahmad
> wrote:
>> I'm a student working on a project in which QEMU is a candidate
>> for studying IO usage. I need to track the IO requests made to
>> the block devices (e.g. HDD, SSD, etc.). I checked the source
>> code but I was confused. What I want to know is: when an IO
>> request is made, which of the VMs made that request? I would
>> appreciate any help with this issue.
> There are trace events that you can use.  See docs/tracing.txt and
> trace-events.
> virtio_blk_handle_write and virtio_blk_handle_read can be used if
> your guest has virtio-blk.
> The QEMU block layer also has trace events named bdrv_aio_*.
> Or you could use blktrace(8) in the guest or on the host, depending
> on how you've set up storage.

It's the third time I've gotten this question recently, which makes me
believe the others were friends of Aarian's...

Each QEMU process represents a single VM.  Therefore, it is simple to
answer the question "which VM is making the request"; the answer is
"the one for the QEMU process you are tracing".
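Concretely, once a trace gives you a pid, you can map it back to a VM by inspecting that QEMU process's command line, e.g. the value passed to -name. This is only a sketch: the read_cmdline() helper and the /proc path handling are illustrative, not part of QEMU or its tooling.

```python
# Sketch: map a traced QEMU pid back to a VM name by inspecting its
# command line. Assumes the VM was started with "-name <vmname>" (or
# "-name guest=<vmname>,..."); read_cmdline() is a hypothetical helper
# standing in for reading /proc/<pid>/cmdline on Linux.

def vm_name_from_cmdline(argv):
    """Return the value of the -name option, or None if absent."""
    for i, arg in enumerate(argv):
        if arg == "-name" and i + 1 < len(argv):
            value = argv[i + 1]
            # QEMU also accepts "-name guest=foo,debug-threads=on";
            # keep only the guest name in that case.
            if value.startswith("guest="):
                value = value.split(",", 1)[0][len("guest="):]
            return value
    return None

def read_cmdline(pid):
    """Hypothetical helper: /proc/<pid>/cmdline is NUL-separated."""
    with open("/proc/%d/cmdline" % pid, "rb") as f:
        return f.read().decode().split("\0")

if __name__ == "__main__":
    argv = ["qemu-system-x86_64",
            "-name", "guest=testvm,debug-threads=on", "-m", "1G"]
    print(vm_name_from_cmdline(argv))  # testvm
```

With that mapping in hand, every event in a per-process trace can be labeled with the VM it belongs to.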

You probably want to use blktrace if you care about multiple VMs.
Alternatively, you can use tracing as mentioned by Stefan.  If you
compile QEMU with --enable-trace-backend=simple, the resulting files
can be parsed with Python programs (see scripts/simpletrace.py).  The
trace files include the pid and a timestamp based on CLOCK_MONOTONIC,
so it should be easy to merge the traces together.
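Merging the per-VM traces by their CLOCK_MONOTONIC timestamps then reduces to a k-way merge. A minimal sketch, assuming each trace has already been parsed (e.g. with scripts/simpletrace.py) into time-sorted (timestamp_ns, pid, event_name) tuples; that tuple layout is an assumption for illustration, not the actual simpletrace record format:

```python
import heapq

# Sketch: merge already-parsed trace streams from several QEMU
# processes into a single timeline. Each stream is assumed (for
# illustration) to be a time-sorted list of
# (timestamp_ns, pid, event_name) tuples; CLOCK_MONOTONIC provides
# a common clock for all processes on the same host.

def merge_traces(*streams):
    """k-way merge of per-VM event streams, ordered by timestamp."""
    # Tuples compare element-wise, so sorting by timestamp is implicit.
    return list(heapq.merge(*streams))

vm_a = [(100, 4242, "virtio_blk_handle_write"),
        (300, 4242, "virtio_blk_handle_read")]
vm_b = [(200, 5151, "virtio_blk_handle_write")]

for ts, pid, name in merge_traces(vm_a, vm_b):
    print(ts, pid, name)
```

The pid field is what ties each merged event back to the QEMU process, and hence the VM, that generated it.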

