From: Alex Bennée
Subject: Re: [RFC PATCH 6/6] softmmu: Restrict watchpoint handlers to TCG accelerator
Date: Mon, 15 Feb 2021 12:05:09 +0000
User-agent: mu4e 1.5.8; emacs 28.0.50

Claudio Fontana <cfontana@suse.de> writes:

> On 1/18/21 10:36 AM, Philippe Mathieu-Daudé wrote:
>> On 1/18/21 10:10 AM, Claudio Fontana wrote:
>>> On 1/17/21 5:48 PM, Philippe Mathieu-Daudé wrote:
>>>> Watchpoint functions use cpu_restore_state(), which is only
>>>> available when the TCG accelerator is built. Restrict them
>>>> to TCG.
>>>>
>>>> Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
>>>
>>> I am doing some of this in my series, and I did not notice that
>>> cpu_watchpoint_insert was also TCG only.
>>>
>>> Probably we should merge this somehow.
>>>
>>> I thought it was used by gdbstub.c as well, passing the BP_GDB flag.
>> 
>> BP_MEM_ACCESS for watchpoint.
>> 
>>> I noticed that gdbstub does something else entirely for kvm_enabled(), i.e.
>>> kvm_insert_breakpoint, but what about the other accels? It seems that the
>>> code flows to cpu_breakpoint_insert and watchpoint_insert..?

For KVM (and I guess other accelerators) kvm_insert_breakpoint is
really the entry point for all debug. SW breakpoints are dealt with
separately from HW breakpoints and watchpoints, which involve more than
just planting code in the guest's RAM.
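
Roughly, the shape is the following (a simplified sketch from memory, not
the exact accel/kvm/kvm-all.c code): a GDB_BREAKPOINT_SW request is
tracked in a software breakpoint list and planted by patching guest
memory, while every other type is forwarded to the arch-specific HW
debug register handling, and the updated debug state is then pushed to
each vCPU.

    int kvm_insert_breakpoint(CPUState *cpu, target_ulong addr,
                              target_ulong len, int type)
    {
        int err;

        if (type == GDB_BREAKPOINT_SW) {
            /* Track the SW breakpoint and patch the guest's code. */
            struct kvm_sw_breakpoint *bp = kvm_find_sw_breakpoint(cpu, addr);
            if (bp) {
                bp->use_count++;
                return 0;
            }
            bp = g_new0(struct kvm_sw_breakpoint, 1);
            bp->pc = addr;
            bp->use_count = 1;
            err = kvm_arch_insert_sw_breakpoint(cpu, bp);
            if (err) {
                g_free(bp);
                return err;
            }
            QTAILQ_INSERT_HEAD(&cpu->kvm_state->kvm_sw_breakpoints, bp, entry);
        } else {
            /* HW breakpoints and watchpoints go via the debug registers. */
            err = kvm_arch_insert_hw_breakpoint(addr, len, type);
            if (err) {
                return err;
            }
        }

        /* Push the new debug state to every vCPU (KVM_SET_GUEST_DEBUG). */
        CPU_FOREACH(cpu) {
            err = kvm_update_guest_debug(cpu, 0);
            if (err) {
                return err;
            }
        }
        return 0;
    }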

>>>
>>> should cpu_breakpoint_insert have the same fate then?
>>>
>>> And is this really all TCG specific?
>>>
>>> From gdbstub.c:1020:
>>>
>>> static int gdb_breakpoint_insert(int type, target_ulong addr, target_ulong len)
>>> {
>>>     CPUState *cpu;
>>>     int err = 0;
>>>
>>>     if (kvm_enabled()) {
>>>         return kvm_insert_breakpoint(gdbserver_state.c_cpu, addr, len, type);
>> 
>> Doh, I missed that. I remember Peter and Richard explained it once,
>> but I forgot and couldn't find it on the list; maybe it was on IRC.
>> 
>> See e.g. target/arm/kvm64.c:312:
>> 
>> int kvm_arch_insert_hw_breakpoint(target_ulong addr,
>>                                   target_ulong len, int type)
>> {
>>     switch (type) {
>>     case GDB_BREAKPOINT_HW:
>>         return insert_hw_breakpoint(addr);
>>         break;
>>     case GDB_WATCHPOINT_READ:
>>     case GDB_WATCHPOINT_WRITE:
>>     case GDB_WATCHPOINT_ACCESS:
>>         return insert_hw_watchpoint(addr, len, type);
>>     default:
>>         return -ENOSYS;
>>     }
>> }
>> 
>>>
>>>> ---
>>>> RFC because we could keep that code by adding an empty
>>>>     stub for cpu_restore_state(), but it is unclear whether
>>>>     that is appropriate, as the function is named generically.
>>>> ---
>>>>  include/hw/core/cpu.h | 4 ++--
>>>>  softmmu/physmem.c     | 4 ++++
>>>>  2 files changed, 6 insertions(+), 2 deletions(-)
>> 
>
> Hello Philippe,
>
> I have hit this issue again while working on the ARM part of the cleanup:
> did we reach a conclusion on whether cpu_watchpoint_insert is TCG-specific
> or not,
>
> and, more generally, about breakpoints and watchpoints?
>
> This time I ran into it during the KVM/TCG split in target/arm.
>
> There are calls in target/arm/cpu.c and machine.c of the kind:
>
> hw_breakpoint_update_all()
> hw_watchpoint_update_all()

This is the third case: using TCG's internal breakpoint/watchpoint
structures to simulate the hardware's HW breakpoint/watchpoint support
under emulation.
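
For target/arm that mapping lives in hw_watchpoint_update() and friends
in target/arm/helper.c. The sketch below is simplified and not the exact
code (the length/BAS decoding in particular is elided and the 8-byte
range is an assumption), but it shows the idea: the guest-visible
DBGWVR/DBGWCR pair is turned into a core CPUWatchpoint with BP_CPU set,
so the generic machinery raises a guest debug exception instead of
stopping for gdb.

    static void hw_watchpoint_update_sketch(ARMCPU *cpu, int n)
    {
        CPUARMState *env = &cpu->env;
        vaddr addr = env->cp15.dbgwvr[n];    /* value register: address */
        uint64_t wcr = env->cp15.dbgwcr[n];  /* control register */
        int flags = BP_CPU | BP_STOP_BEFORE_ACCESS;

        /* Drop any core watchpoint previously backing this register pair. */
        if (env->cpu_watchpoint[n]) {
            cpu_watchpoint_remove_by_ref(CPU(cpu), env->cpu_watchpoint[n]);
            env->cpu_watchpoint[n] = NULL;
        }

        if (!extract64(wcr, 0, 1)) {
            return;                  /* E bit clear: watchpoint disabled */
        }

        /* LSC field: load, store or both (other decoding omitted here). */
        switch (extract64(wcr, 3, 2)) {
        case 1:  flags |= BP_MEM_READ;   break;
        case 2:  flags |= BP_MEM_WRITE;  break;
        default: flags |= BP_MEM_ACCESS; break;
        }

        /* Assume an 8-byte range; the real code derives it from BAS/MASK. */
        cpu_watchpoint_insert(CPU(cpu), addr, 8, flags,
                              &env->cpu_watchpoint[n]);
    }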

> Is this all TCG-specific,
>
> including hw_watchpoint_update() and hw_breakpoint_update()?
>
> kvm64.c seems to have its own handling of breakpoints, including arrays:
>
> GArray *hw_breakpoints, *hw_watchpoints;

Correct. KVM and other HW accelerators really just do the accounting in
some arch-specific way so that the debug registers are set up and the
resulting traps are properly interpreted.
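
As a very rough sketch of that accounting (names loosely following
target/arm/kvm64.c, details simplified and partly assumed): the arch
code records each architected watchpoint register pair in a GArray, and
kvm_update_guest_debug() later hands the whole set to the kernel via the
KVM_SET_GUEST_DEBUG ioctl.

    typedef struct {
        uint64_t wvr;    /* DBGWVR: address to watch */
        uint64_t wcr;    /* DBGWCR: enable, access type, byte select */
    } HWWatchpointSketch;

    static GArray *hw_watchpoints;   /* created with g_array_new() at init */
    static int max_hw_wps;           /* limit reported by the KVM capability */

    static int insert_hw_watchpoint_sketch(uint64_t addr, uint64_t len, int type)
    {
        HWWatchpointSketch wp = {
            .wvr = addr,
            /* .wcr would be built from len and type (read/write/access). */
        };

        if (hw_watchpoints->len >= max_hw_wps) {
            return -ENOBUFS;         /* all HW debug registers are in use */
        }
        g_array_append_val(hw_watchpoints, wp);
        return 0;
    }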

>
> while the TCG stuff relies on cpu->watchpoints in softmmu/physmem.c:

Because we need core TCG support to detect the triggering accesses, both
for the gdbstub and for emulating real HW.

>
> $ gid watchpoints
> include/hw/core/cpu.h:139: * address before attempting to match it against watchpoints.
> include/hw/core/cpu.h:388:    QTAILQ_HEAD(, CPUWatchpoint) watchpoints;
> linux-user/main.c:210:    /* Clone all break/watchpoints.
> linux-user/main.c:212:       BP_CPU break/watchpoints are handled correctly on clone. */
> linux-user/main.c:214:    QTAILQ_INIT(&new_cpu->watchpoints);
> linux-user/main.c:218:    QTAILQ_FOREACH(wp, &cpu->watchpoints, entry) {
> softmmu/physmem.c:791:    /* keep all GDB-injected watchpoints in front */
> softmmu/physmem.c:793:        QTAILQ_INSERT_HEAD(&cpu->watchpoints, wp, entry);
> softmmu/physmem.c:795:        QTAILQ_INSERT_TAIL(&cpu->watchpoints, wp, entry);
> softmmu/physmem.c:816:    QTAILQ_FOREACH(wp, &cpu->watchpoints, entry) {
> softmmu/physmem.c:829:    QTAILQ_REMOVE(&cpu->watchpoints, watchpoint, entry);
> softmmu/physmem.c:836:/* Remove all matching watchpoints.  */
> softmmu/physmem.c:841:    QTAILQ_FOREACH_SAFE(wp, &cpu->watchpoints, entry, next) {
> softmmu/physmem.c:868:/* Return flags for watchpoints that match addr + prot.  */
> softmmu/physmem.c:874:    QTAILQ_FOREACH(wp, &cpu->watchpoints, entry) {
> softmmu/physmem.c:906:    QTAILQ_FOREACH(wp, &cpu->watchpoints, entry) {
> softmmu/physmem.c:911:                 * Don't process the watchpoints when we are
> accel/tcg/cpu-exec.c:511:        QTAILQ_FOREACH(wp, &cpu->watchpoints, entry) {
> accel/tcg/cpu-exec.c:822:               after-access watchpoints.  Since this request should never
> hw/core/cpu.c:361:    QTAILQ_INIT(&cpu->watchpoints);

Also, we need softmmu for watchpoints, because otherwise there is no way
to mark memory so that accesses to it trigger a check. One day we might
solve this with the oft-discussed softmmu-for-user-mode combination.
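
A minimal sketch of what that marking looks like on the TCG softmmu side
(simplified from accel/tcg/cputlb.c, not the exact code): when a TLB
entry is filled for a page that overlaps a watchpoint, the entry is
tagged with TLB_WATCHPOINT so every access takes the slow path, and the
slow path then checks the concrete access against cpu->watchpoints.

    /* When filling a TLB entry for vaddr_page: */
    static void tag_tlb_entry_sketch(CPUState *cpu, vaddr vaddr_page,
                                     target_ulong *address)
    {
        if (cpu_watchpoint_address_matches(cpu, vaddr_page, TARGET_PAGE_SIZE)) {
            *address |= TLB_WATCHPOINT;      /* force the slow path */
        }
    }

    /* In the slow-path store helper: */
    static void slow_path_store_check_sketch(CPUState *cpu, vaddr addr, int size,
                                             target_ulong tlb_addr, uintptr_t ra)
    {
        if (unlikely(tlb_addr & TLB_WATCHPOINT)) {
            /* The real code passes the access's MemTxAttrs, not unspecified. */
            cpu_check_watchpoint(cpu, addr, size, MEMTXATTRS_UNSPECIFIED,
                                 BP_MEM_WRITE, ra);
        }
    }

In user mode there is no TLB entry to tag in this way, which is why
watchpoints currently need the softmmu path.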

> Even on i386 there is still a bit of confusion, and I think sorting this
> out could improve the i386 code as well.
>
> Thanks for any comment,
>
> Ciao,
>
> Claudio


-- 
Alex Bennée


