

From: BALATON Zoltan
Subject: Re: [Qemu-ppc] qemu-system-ppc video artifacts since "tcg: drop global lock during TCG code execution"
Date: Tue, 14 Mar 2017 18:34:48 +0100 (CET)
User-agent: Alpine 2.20 (BSF 67 2015-01-07)

On Tue, 14 Mar 2017, Alex Bennée wrote:
So from a single-threaded -smp guest case there should be no difference
in behaviour. However, cross-vCPU flushes are queued up using the async
work queue and are dealt with in the target vCPU's context. In the
single-threaded case it shouldn't matter, as this work will get executed
as soon as the round-robin scheduler gets to it:

 while (cpu && !cpu->queued_work_first && !cpu->exit_request) {

When converting a target to MTTCG it's certainly something that needs
attention. For example, some cross-vCPU TLB flushes need to be complete
from the source vCPU's point of view. In this case you call
the tlb_flush_*_synced() variants and exit the execution loop. This
ensures all vCPUs have completed flushes before we continue. See
a67cf2772733e for what I did on ARM. However this shouldn't affect
anything in the single-threaded world.
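The queued-work behaviour described above can be sketched as a minimal model. To be clear, CPUModel, WorkItem and every function name below are illustrative, not QEMU's actual structures; only the loop condition mirrors the quoted code. The point is that the round-robin loop declines to execute further translated blocks while work is pending, so a queued cross-vCPU flush runs before the target vCPU's next block:

```c
#include <stddef.h>

typedef struct WorkItem {
    void (*func)(void *data);    /* e.g. a deferred tlb flush */
    void *data;
    struct WorkItem *next;
} WorkItem;

typedef struct CPUModel {
    WorkItem *queued_work_first; /* mirrors cpu->queued_work_first */
    int exit_request;            /* mirrors cpu->exit_request */
    int executed_blocks;
    int work_done;
} CPUModel;

/* Queue work (such as a cross-vCPU flush) for the target vCPU. */
void queue_work(CPUModel *cpu, WorkItem *item)
{
    item->next = cpu->queued_work_first;
    cpu->queued_work_first = item;
}

/* Run all pending work items in the target vCPU's context. */
void drain_work(CPUModel *cpu)
{
    while (cpu->queued_work_first) {
        WorkItem *w = cpu->queued_work_first;
        cpu->queued_work_first = w->next;
        if (w->func) {
            w->func(w->data);
        }
        cpu->work_done++;
    }
}

/* One scheduler slice: execute blocks only while no work is queued,
 * then deal with whatever was queued for this vCPU. */
void round_robin_step(CPUModel *cpu)
{
    while (cpu && !cpu->queued_work_first && !cpu->exit_request) {
        cpu->executed_blocks++;
        if (cpu->executed_blocks >= 3) {
            cpu->exit_request = 1; /* pretend an event ends the slice */
        }
    }
    drain_work(cpu);
}
```

With work already queued the loop body never runs, so the flush is handled before any further guest code on that vCPU; this is why the single-threaded case should be unaffected.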

I think we have a single CPU and thread for these ppc machines here, so I'm not sure how this could be relevant.

However, delaying tlb_flush() calls could certainly expose/hide stuff
that is accessing the dirty mechanism. tlb_flush() itself now takes the
tb_lock() to avoid racing with the TB invalidation logic. The act of the
flush will certainly wipe all existing SoftMMU entries and force a
reload on each memory access.
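That wipe-and-reload behaviour can be modelled in a few lines (a toy model with made-up names, not QEMU's SoftMMU): once a fast-path entry is installed, writes bypass the slow path that does the dirty bookkeeping, and a flush is what forces them back onto it.

```c
#include <string.h>

#define PAGES 4

typedef struct TinyTLB {
    int entry_valid[PAGES]; /* fast-path write entry installed? */
    int dirty[PAGES];       /* dirty-memory bitmap */
    int slow_path_writes;
} TinyTLB;

/* The flush wipes every cached entry... */
void tlb_flush_model(TinyTLB *t)
{
    memset(t->entry_valid, 0, sizeof(t->entry_valid));
}

/* ...so the next access to each page is forced through the slow
 * path, which is where the dirty bit gets set. */
void guest_write(TinyTLB *t, int page)
{
    if (!t->entry_valid[page]) {
        t->slow_path_writes++;
        t->dirty[page] = 1;
        t->entry_valid[page] = 1;
    }
    /* fast-path write: no dirty bookkeeping at all */
}
```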

So is the dirty status of memory being read from outside a vCPU
execution context?

Like from the display controller models that use memory_region_get_dirty() to check if the framebuffer needs to be updated? But all display adaptors seem to do this, and the problem was only seen on ppc, so it may be related to something ppc-specific.
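For illustration only — this is a hypothetical failure mode, not a diagnosis of this bug — here is how a display model that polls a dirty bitmap can silently miss updates if clearing a page's dirty bit is not paired with (or races with a delayed flush of) the invalidation of the fast-path write entry for that page. All names below are made up:

```c
typedef struct FbPage {
    int tlb_fast_write; /* fast-path write entry installed */
    int dirty;          /* as reported by the dirty bitmap */
} FbPage;

/* A guest store: only the slow path marks the page dirty. */
void guest_store(FbPage *p)
{
    if (!p->tlb_fast_write) {
        p->dirty = 1;
        p->tlb_fast_write = 1;
    }
}

/* Correct refresh: clear the dirty bit AND drop the fast-path entry,
 * so the next store is seen again. Returns 1 if a repaint happened. */
int refresh_ok(FbPage *p)
{
    int repaint = p->dirty;
    if (repaint) {
        p->dirty = 0;
        p->tlb_fast_write = 0;
    }
    return repaint;
}

/* If the TLB side is skipped or its flush is delayed, the next store
 * takes the fast path and the repaint is silently missed. */
int refresh_stale(FbPage *p)
{
    int repaint = p->dirty;
    p->dirty = 0;
    return repaint;
}
```

In the stale variant, guest writes that land after the dirty bit is cleared but before the flush takes effect never show up in the bitmap, which is the kind of window that could produce stale regions on screen.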

