qemu-devel

Re: [RFC PATCH 1/1] QEMU plugin interface extension


From: Florian Hauschild
Subject: Re: [RFC PATCH 1/1] QEMU plugin interface extension
Date: Thu, 26 Aug 2021 16:12:04 +0200
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101 Thunderbird/78.11.0


Am 24.08.21 um 16:47 schrieb Peter Maydell:
> On Tue, 24 Aug 2021 at 15:34, Florian Hauschild
> <florian.hauschild@fs.ei.tum.de> wrote:
>>
>>
>>
>> Am 21.08.21 um 15:18 schrieb Peter Maydell:
>>> On Sat, 21 Aug 2021 at 10:48, Florian Hauschild
>>> <florian.hauschild@fs.ei.tum.de> wrote:
>>>>
>>>> This extension covers functions:
>>>>   * to read and write guest memory
>>>>   * to read and write guest registers
>>>>   * to flush tb cache
>>>>   * to control single stepping of qemu from plugin
>>>>
>>>> These changes allow the user to
>>>>   * collect more information about the behaviour of the system
>>>>   * change the guest state with a plugin during execution
>>>>   * control cache of tcg
>>>>   * allow for precise instrumentation in execution flow
>>>
>>>> +
>>>> +static int plugin_read_register(CPUState *cpu, GByteArray *buf, int reg)
>>>> +{
>>>> +    CPUClass *cc = CPU_GET_CLASS(cpu);
>>>> +    if (reg < cc->gdb_num_core_regs) {
>>>> +        return cc->gdb_read_register(cpu, buf, reg);
>>>> +    }
>>>> +    return 0;
>>>> +}
>>>
>>> At the point where these functions execute is the emulation
>>> definitely stopped (ie no register values currently held
>>> live in TCG locals) ?
> 
>> I am not sure if it is definitely stopped.
>> I call them during tb_exec_cb and insn_exec_cb.
>> I have used the extension on ARM and RISC-V single-CPU guests, and the
>> data collected matches what I would expect during normal execution on
>> real hardware. I have not yet tested how this behaves on a multi-CPU/core
>> system.
> 
> Multicore isn't relevant here. What you want to check for
> is what happens when the TB covers multiple guest instructions
> such that a later insn in the TB uses a register that is
> set by an earlier insn in the TB, eg:
> 
>     mov x0, 0
>     add x0, x0, 1
>     add x0, x0, 1
> 
> In this case TCG is likely to generate code which does not
> write back the intermediate 0 and 1 values of x0 to the CPUState
> struct, and so reading x0 via the gdb_read_register interface
> before the execution of the 3rd insn will continue to return
> whatever value x0 had before execution of the TB started.
> 
> For the gdbstub's use of the gdb_read_register API, this
> can't happen, because we always completely stop the CPU
> (which means it is not inside a TB at all) before handling
> gdbstub packets requesting register information.
> 
> I don't know whether the TCG plugin infrastructure takes steps
> with its various callbacks to ensure that intermediate values
> get written back to the CPU state before the callback is
> invoked: it's possible that this is safe, or can be made to
> be safe.
> 
> thanks
> -- PMM
> 

Sorry, I misunderstood your question.

From my observation, all three insn_cb would see x0 == 2. They are
executed at the end of the TB's execution.

During my testing these changes were stable, and I assume they are safe.
But that's why I chose RFC: I am new to QEMU and might be overlooking
something important.

Please correct me if I am wrong:
When the TB is executed, first the TB cb runs, then the various
instruction cbs. If you would like to see x0 in between instructions
(e.g. between the mov and the first add), QEMU needs to be in
single-step mode.
The plugin infrastructure does have a mechanism to tell TCG whether a
callback reads or writes registers, but it apparently does not use it.
The register values seem to be written back before the various cbs are
called.
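
To make that ordering concrete, here is a minimal sketch (not part of the
patch, and using only the existing qemu-plugin.h API) of a plugin that
registers a TB-level and a per-instruction execution callback with the
QEMU_PLUGIN_CB_R_REGS flag. The proposed register-read helper is not
called here, and whether that flag actually forces a write-back of
register state before the callback is exactly the open question above:

#include <qemu-plugin.h>
#include <stddef.h>

QEMU_PLUGIN_EXPORT int qemu_plugin_version = QEMU_PLUGIN_VERSION;

/* Runs once per executed instruction. The CB_R_REGS flag declares that
 * the callback wants to read registers; whether the plugin core acts on
 * it is the point under discussion. */
static void insn_exec_cb(unsigned int vcpu_index, void *udata)
{
    /* The proposed register-read helper would be called here. */
}

/* Runs once per executed TB, before the per-instruction callbacks. */
static void tb_exec_cb(unsigned int vcpu_index, void *udata)
{
}

/* Translation-time hook: attach the execution callbacks. */
static void tb_trans_cb(qemu_plugin_id_t id, struct qemu_plugin_tb *tb)
{
    size_t n = qemu_plugin_tb_n_insns(tb);

    qemu_plugin_register_vcpu_tb_exec_cb(tb, tb_exec_cb,
                                         QEMU_PLUGIN_CB_R_REGS, NULL);
    for (size_t i = 0; i < n; i++) {
        struct qemu_plugin_insn *insn = qemu_plugin_tb_get_insn(tb, i);
        qemu_plugin_register_vcpu_insn_exec_cb(insn, insn_exec_cb,
                                               QEMU_PLUGIN_CB_R_REGS, NULL);
    }
}

QEMU_PLUGIN_EXPORT int qemu_plugin_install(qemu_plugin_id_t id,
                                           const qemu_info_t *info,
                                           int argc, char **argv)
{
    qemu_plugin_register_vcpu_tb_trans_cb(id, tb_trans_cb);
    return 0;
}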

Regards
Florian


