[PATCH RFCv3 6/9] s390x/diag: subcode to query device memory region

From: David Hildenbrand
Subject: [PATCH RFCv3 6/9] s390x/diag: subcode to query device memory region
Date: Fri, 24 Jul 2020 16:37:47 +0200

A guest OS that is aware of memory devices (placed into the device
memory region located in guest physical address space) has to know at least
the end address of the device memory region during boot, for example, to
prepare the kernel virtual address space accordingly (e.g., select page
table hierarchy). The device memory region is located above the SCLP
maximum storage increment.

Let's provide a new diag500 subcode to query the location of the device
memory region under QEMU/KVM. This way, especially Linux guests that want
to support virtio-based memory devices can query the location of this
region and derive the maximum possible PFN.

Let's use a specification exception in case no such memory region
exists (e.g., maxmem wasn't specified, or on old QEMU machines). We'll
unlock this with future patches that prepare and instantiate the device
memory region.

Memory managed by memory devices should never be detected and used
without having proper support for them in the guest (IOW, a driver that
detects and handles the devices). It's not exposed via other HW/firmware
interfaces (e.g., SCLP, diag260). In the near future, the focus is on
supporting virtio-based memory devices like virtio-mem. Other memory devices
are imaginable in the future (e.g., expose DIMMs via a KVM-specific
interface to s390x guests).

Note: We don't want to include the device memory region within the
SCLP-defined maximum storage increment, because especially older
guests will sense (via tprot) accessible memory within this range.
If an unmodified guest would detect and use device memory, it could end
badly. The memory might have different semantics (e.g., a disk provided
via virtio-pmem a.k.a. DAX) and might require a handshake first (e.g.,
unplugged memory part of virtio-mem in some cases), before memory that
might look accessible can actually be used without surprises.

Signed-off-by: David Hildenbrand <david@redhat.com>
---
 hw/s390x/s390-hypercall.c | 18 ++++++++++++++++++
 hw/s390x/s390-hypercall.h |  1 +
 2 files changed, 19 insertions(+)

diff --git a/hw/s390x/s390-hypercall.c b/hw/s390x/s390-hypercall.c
index 20d4f6e159..ac21f4576e 100644
--- a/hw/s390x/s390-hypercall.c
+++ b/hw/s390x/s390-hypercall.c
@@ -11,6 +11,7 @@
 #include "qemu/osdep.h"
 #include "cpu.h"
+#include "hw/boards.h"
 #include "hw/s390x/s390-hypercall.h"
 #include "hw/s390x/ioinst.h"
 #include "hw/s390x/css.h"
@@ -44,6 +45,20 @@ static int handle_virtio_ccw_notify(uint64_t subch_id, uint64_t queue)
     return 0;
 }
 
+static void handle_device_memory_region(CPUS390XState *env, uintptr_t ra)
+{
+    MachineState *machine = MACHINE(qdev_get_machine());
+
+    if (!machine->device_memory ||
+        !memory_region_size(&machine->device_memory->mr)) {
+        s390_program_interrupt(env, PGM_SPECIFICATION, ra);
+        return;
+    }
+    env->regs[2] = machine->device_memory->base;
+    env->regs[3] = machine->device_memory->base +
+                   memory_region_size(&machine->device_memory->mr) - 1;
+}
+
 void handle_diag_500(CPUS390XState *env, uintptr_t ra)
 {
     const uint64_t subcode = env->regs[1];
@@ -55,6 +70,9 @@ void handle_diag_500(CPUS390XState *env, uintptr_t ra)
     case DIAG500_VIRTIO_CCW_NOTIFY:
         env->regs[2] = handle_virtio_ccw_notify(env->regs[2], env->regs[3]);
         break;
+    case DIAG500_DEVICE_MEMORY_REGION:
+        handle_device_memory_region(env, ra);
+        break;
     default:
         s390_program_interrupt(env, PGM_SPECIFICATION, ra);
     }
diff --git a/hw/s390x/s390-hypercall.h b/hw/s390x/s390-hypercall.h
index e6b958db41..1b179d7d99 100644
--- a/hw/s390x/s390-hypercall.h
+++ b/hw/s390x/s390-hypercall.h
@@ -16,6 +16,7 @@
 #define DIAG500_VIRTIO_RESET           1 /* legacy */
 #define DIAG500_VIRTIO_SET_STATUS      2 /* legacy */
+#define DIAG500_DEVICE_MEMORY_REGION   4
 
 void handle_diag_500(CPUS390XState *env, uintptr_t ra);
 
 #endif /* HW_S390_HYPERCALL_H */
