qemu-s390x

Re: [RFC PATCH v1 2/2] s390x/kvm: Pass SIGP Stop flags


From: David Hildenbrand
Subject: Re: [RFC PATCH v1 2/2] s390x/kvm: Pass SIGP Stop flags
Date: Mon, 11 Oct 2021 20:07:16 +0200
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101 Thunderbird/78.11.0

On 11.10.21 19:58, Eric Farman wrote:
On Mon, 2021-10-11 at 11:21 +0200, David Hildenbrand wrote:
On 11.10.21 10:40, Christian Borntraeger wrote:

On 11.10.21 09:09, David Hildenbrand wrote:
On 08.10.21 22:38, Eric Farman wrote:
When building a Stop IRQ to pass to KVM, we should incorporate
the flags if handling the SIGP Stop and Store Status order.
With that, KVM can reject other orders that are submitted for
the same CPU until the operation has been fully processed.

Signed-off-by: Eric Farman <farman@linux.ibm.com>
Acked-by: Janosch Frank <frankja@linux.ibm.com>
---
    target/s390x/kvm/kvm.c | 4 ++++
    1 file changed, 4 insertions(+)

diff --git a/target/s390x/kvm/kvm.c b/target/s390x/kvm/kvm.c
index 5b1fdb55c4..701b9ddc88 100644
--- a/target/s390x/kvm/kvm.c
+++ b/target/s390x/kvm/kvm.c
@@ -2555,6 +2555,10 @@ void kvm_s390_stop_interrupt(S390CPU *cpu)
         .type = KVM_S390_SIGP_STOP,
     };
 
+    if (cpu->env.sigp_order == SIGP_STOP_STORE_STATUS) {
+        irq.u.stop.flags = KVM_S390_STOP_FLAG_STORE_STATUS;
+    }
+

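(For reference, with the hunk applied the function would look roughly like
this; the trailing kvm_s390_vcpu_interrupt() call is assumed from the
surrounding QEMU code and is not part of the hunk quoted above:)

    void kvm_s390_stop_interrupt(S390CPU *cpu)
    {
        struct kvm_s390_irq irq = {
            .type = KVM_S390_SIGP_STOP,
        };

        /* Ask KVM to also store status when handling a SIGP STOP AND
         * STORE STATUS order, instead of only stopping the CPU. */
        if (cpu->env.sigp_order == SIGP_STOP_STORE_STATUS) {
            irq.u.stop.flags = KVM_S390_STOP_FLAG_STORE_STATUS;
        }

        kvm_s390_vcpu_interrupt(cpu, &irq);
    }
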
KVM_S390_STOP_FLAG_STORE_STATUS tells KVM to perform the store
status as well ... is that really what we want?
At least it should not hurt I guess. QEMU then does it again?

The thing is that before we have officially completed the action in user space (and let other SIGP actions actually succeed in user space on the CPU), the target CPU will already be reported as !busy in the kernel. And before we even inject the stop interrupt, the CPU will be detected as !busy in the kernel. I guess it will fix some cases where we poll via SENSE whether the stop and store happened, because the store *did* happen in the kernel and we'll simply store again in user space.

However, I wonder if we want to handle it more generically: properly flag a CPU as busy for SIGP from when we start processing the order until we have completed processing it. That would allow us to handle other SIGP operations in user space cleanly, without any chance for races with the SENSE code running in the kernel.


I think a generic solution would be ideal, but I'm wrestling with the race with the kernel's SENSE code. Today, handle_sigp_single_dst already checks to see if a CPU is currently processing an order and returns a CC2 when it does, but of course the kernel's SENSE code doesn't know that. We could flag the CPU as busy in the kernel when sending a SIGP to userspace, so that the SENSE code indicates BUSY, but then how do we know when userspace is finished and the CPU is no longer BUSY?
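
(For reference, the user-space check mentioned above is conceptually something like the following -- a sketch based on the fields visible in the patch, not the literal code from target/s390x/sigp.c:)

    /* Sketch: the order currently being processed is tracked per CPU in
     * user space, and a new order for a CPU that is still processing one
     * is rejected with CC2 (busy). In-kernel SENSE handling never sees
     * this per-CPU state, hence the race discussed here. */
    if (qatomic_read(&dst_cpu->env.sigp_order) != 0) {
        return SIGP_CC_BUSY;    /* CC2 */
    }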

I'd just add a new IOCTL for marking a CPU busy/!busy for SIGP from user space. You can then either let user space perform both actions (set+unset), or let the kernel automatically set "busy" and have user space only clear it. You can define a new capability to enable the "automatically set busy when going to user space on SIGP" behavior -- that might require some thought on corner cases.

There might be other scenarios in the future where we want to set a CPU busy for SIGP without that CPU triggering a SIGP action itself (e.g., an externally triggered reset of a CPU? Simulation of check-stop? Store status?), so at least having a way to set/clear a CPU's busy-for-SIGP state might be valuable on its own.

Once we go to user space to process a SIGP, we usually don't care too much about the additional overhead of one or two ioctls, IMHO.
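
(A rough sketch of how such a hypothetical interface could be used from QEMU; KVM_S390_SIGP_SET_BUSY and the helper name are made up here purely to illustrate the set/clear flow, no such ioctl exists today:)

    /* Hypothetical: tell the kernel that this VCPU is (or is no longer)
     * busy processing a SIGP order, so that in-kernel SENSE handling
     * reports BUSY (CC2) while user space is still working on it. */
    static void kvm_s390_sigp_set_busy(S390CPU *cpu, bool busy)
    {
        /* KVM_S390_SIGP_SET_BUSY is a made-up ioctl for this sketch */
        kvm_vcpu_ioctl(CPU(cpu), KVM_S390_SIGP_SET_BUSY, (unsigned long)busy);
    }

    /*
     * Usage around order processing in user space:
     *
     *   kvm_s390_sigp_set_busy(dst_cpu, true);
     *   ...process the order (STOP, STORE STATUS, ...)...
     *   kvm_s390_sigp_set_busy(dst_cpu, false);
     */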

--
Thanks,

David / dhildenb



