Re: [Qemu-ppc] [PATCH 1/3] powerpc iommu: multiple TCE requests enabled


From: Alexey Kardashevskiy
Subject: Re: [Qemu-ppc] [PATCH 1/3] powerpc iommu: multiple TCE requests enabled
Date: Fri, 22 Feb 2013 12:03:49 +1100
User-agent: Mozilla/5.0 (X11; Linux i686 on x86_64; rv:17.0) Gecko/20130107 Thunderbird/17.0.2

On 22/02/13 09:52, David Gibson wrote:
> On Tue, Feb 19, 2013 at 06:43:35PM +1100, Alexey Kardashevskiy wrote:
>> Currently only a single TCE entry per request is supported (H_PUT_TCE).
>> However, the PAPR+ specification allows multiple-entry requests such as
>> H_PUT_TCE_INDIRECT and H_STUFF_TCE. By requiring fewer transitions to the
>> host kernel via ioctls, support for these calls can accelerate IOMMU
>> operations.
>>
>> The patch adds a check for the KVM_CAP_PPC_MULTITCE capability and,
>> if it is supported, QEMU adds the "hcall-multi-tce" property to the
>> hypertas list, which triggers the guest to use H_PUT_TCE_INDIRECT and
>> H_STUFF_TCE instead of H_PUT_TCE.
>>
>> Signed-off-by: Alexey Kardashevskiy <address@hidden>
>>
>> Conflicts:
>>         hw/spapr_iommu.c
>>         linux-headers/linux/kvm.h
>
> Try to remember to remove the conflict messages before you send out.

>> ---
>>  hw/spapr.c                |   12 ++++++--
>>  hw/spapr_iommu.c          |   71 +++++++++++++++++++++++++++++++++++++++++++++
>>  linux-headers/linux/kvm.h |    1 +
>>  3 files changed, 82 insertions(+), 2 deletions(-)
>>
>> diff --git a/hw/spapr.c b/hw/spapr.c
>> index 2ec0cd0..231a7b6 100644
>> --- a/hw/spapr.c
>> +++ b/hw/spapr.c
>> @@ -233,6 +233,9 @@ static void *spapr_create_fdt_skel(const char *cpu_model,
>>      CPUPPCState *env;
>>      uint32_t start_prop = cpu_to_be32(initrd_base);
>>      uint32_t end_prop = cpu_to_be32(initrd_base + initrd_size);
>> +    char hypertas_propm[] = "hcall-pft\0hcall-term\0hcall-dabr\0hcall-interrupt"
>> +        "\0hcall-tce\0hcall-vio\0hcall-splpar\0hcall-bulk"
>> +        "\0hcall-multi-tce";
>>      char hypertas_prop[] = "hcall-pft\0hcall-term\0hcall-dabr\0hcall-interrupt"
>>          "\0hcall-tce\0hcall-vio\0hcall-splpar\0hcall-bulk";
>>      char qemu_hypertas_prop[] = "hcall-memop1";
>> @@ -406,8 +409,13 @@ static void *spapr_create_fdt_skel(const char *cpu_model,
>>      /* RTAS */
>>      _FDT((fdt_begin_node(fdt, "rtas")));
>>
>> -    _FDT((fdt_property(fdt, "ibm,hypertas-functions", hypertas_prop,
>> -                       sizeof(hypertas_prop))));
>> +    if (kvm_check_extension(kvm_state, KVM_CAP_PPC_MULTITCE)) {
>> +        _FDT((fdt_property(fdt, "ibm,hypertas-functions", hypertas_propm,
>> +                           sizeof(hypertas_propm))));
>> +    } else {
>> +        _FDT((fdt_property(fdt, "ibm,hypertas-functions", hypertas_prop,
>> +                           sizeof(hypertas_prop))));
>> +    }

> You've implemented the multitce hypercalls in qemu, but because of the
> kvm capability check, you'll never advertise them under full emulation
> (TCG). Instead you should always advertise them as available, and the
> kvm capability will just be a question of whether they go fast (through
> kvm) or slow (through qemu).

So we do not need the KVM_CAP_PPC_MULTITCE capability check at all, since we are not going to support real mode without multi-tce support in the host kernel. Is that correct?



--
Alexey


