
Re: qcow2 performance: read-only IO on the guest generates high write IO on the host


From: Kevin Wolf
Subject: Re: qcow2 performance: read-only IO on the guest generates high write IO on the host
Date: Tue, 24 Aug 2021 17:37:53 +0200

[ Cc: qemu-block ]

On 11.08.2021 at 13:36, Christopher Pereira wrote:
> Hi,
> 
> I'm reading a directory with 5,000,000 files (2.4 GB) inside a guest using
> "find | grep -c".
> 
> On the host I saw high write IO (40 MB/s!) for over an hour using
> virt-top.
> 
> I later repeated the read-only operation inside the guest and no additional
> data was written on the host. The operation took only a few seconds.
> 
> I believe QEMU was creating some kind of cache or metadata map the first
> time I accessed the inodes.

No, at least in theory, QEMU shouldn't allocate anything when you're
just reading.
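
One way to check, assuming the active image sits at a path like
/var/lib/libvirt/images/snap1.qcow2 (illustrative), is to compare the
allocated size before and after the scan:

    # 'disk size' is the space actually allocated on the host
    qemu-img info /var/lib/libvirt/images/snap1.qcow2
    # ... run the find inside the guest, then compare:
    qemu-img info /var/lib/libvirt/images/snap1.qcow2

If QEMU were allocating clusters during the read, 'disk size' would
grow between the two invocations.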

Are you sure that this isn't activity coming from your guest OS?
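
A common guest-side culprit for a scan like this is atime updates:
touching 5 million inodes can dirty most of them if access times are
recorded, and with relatime (the usual default) the first scan after a
long idle period rewrites the stale atimes while an immediate rerun
does not, which would match what you describe. One way to rule this
out, with /data standing in for the scanned filesystem, is:

    # inside the guest: check which atime mode the mount uses
    grep ' /data ' /proc/mounts
    # rerun the find with access-time updates disabled
    mount -o remount,noatime /data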

> But I wonder why the cache or metadata map wasn't available the first time
> and why QEMU had to recreate it?
> 
> The VM has "compressed base <- snap 1", and the base was converted without
> preallocation.
> 
> Is it because we created the base using convert without metadata prealloc
> and so the metadata map got lost?
> 
> I will do some experiments soon using convert + metadata prealloc and
> probably find out myself, but I will be happy to read your comments and gain
> some additional insights.
> If the problem persists, I would try again without compression.

What were the results of your experiments? Is the behaviour related to
any of these options?
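
For reference, a conversion with metadata preallocation would look
something like this (filenames illustrative):

    # preallocation=metadata writes the qcow2 metadata (L1/L2 tables)
    # up front instead of allocating it on demand later
    qemu-img convert -O qcow2 -o preallocation=metadata \
        base.qcow2 base-prealloc.qcow2

Comparing a run with and without this option should make it clear
whether on-demand metadata allocation is involved at all.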

> Additional info:
> 
>  * Guest fs is xfs.
>  * (I believe the snapshot didn't significantly increase in size, but I
>    would need to double check)
>  * This is a production host with old QEMU emulator version 2.3.0
>    (qemu-kvm-ev-2.3.0-31.el7_2.10.1)

Discussing the most recent version is generally easier, but the expected
behaviour has always been the same, so it probably doesn't matter much
in this case.
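
For the overlay size question above: comparing apparent and allocated
sizes on the host (path illustrative) is enough to double-check it:

    # apparent file length vs. blocks actually allocated
    du -h --apparent-size /var/lib/libvirt/images/snap1.qcow2
    du -h /var/lib/libvirt/images/snap1.qcow2

The allocated figure is the one that grows if the scan caused
copy-on-write into the snapshot.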

Kevin



