From: Richard Henderson
Subject: Re: [Qemu-devel] [PATCH v1 2/3] target/s390x: implement mvcos instruction
Date: Tue, 13 Jun 2017 21:41:30 -0700
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:52.0) Gecko/20100101 Thunderbird/52.1.0

On 06/13/2017 02:47 PM, David Hildenbrand wrote:
> +static inline bool psw_key_valid(CPUS390XState *env, uint8_t psw_key)
> +{
> +    uint16_t pkm = ((env->cregs[3] & CR3_PKM) >> 16);
> +
> +    if (env->psw.mask & PSW_MASK_PSTATE) {
> +        /* PSW key has range 0..15, it is valid if the bit is 1 in the PKM */
> +        return pkm & (1 << (psw_key & 0xff));

Did you intend to write & 0xf?  Otherwise this mask is pointless...
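For reference, a standalone sketch of the check with the mask corrected to 0xf (hypothetical helper; the CR3_PKM extraction from the patch is elided here):

```c
#include <stdint.h>

/* The PSW key fits in bits 0..3, so only the low nibble may index the
 * 16-bit PKM.  & 0xf keeps the shift in range 0..15; & 0xff would
 * permit shifts far past the 16 PKM bits. */
static int key_allowed(uint16_t pkm, uint8_t psw_key)
{
    return (pkm & (1 << (psw_key & 0xf))) != 0;
}
```

With & 0xf, any stray high bits in psw_key are discarded, e.g. key_allowed(1u << 3, 0x13) tests PKM bit 3.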


> +        switch (src_as) {
> +        case 0x0:
> +            x = cpu_ldub_primary_ra(env, src, ra);
> +            break;
> +        case 0x2:
> +            x = cpu_ldub_secondary_ra(env, src, ra);
> +            break;
> +        case 0x3:
> +            x = cpu_ldub_home_ra(env, src, ra);
> +            break;
> +        }
> +        switch (dest_as) {
> +        case 0x0:
> +            cpu_stb_primary_ra(env, dest, x, ra);
> +            break;
> +        case 0x2:
> +            cpu_stb_secondary_ra(env, dest, x, ra);
> +            break;
> +        case 0x3:
> +            cpu_stb_home_ra(env, dest, x, ra);
> +            break;
> +        }

Rather than these switches, you can use helper_ret_ldub_mmu. Of course, that will only work for SOFTMMU. But for CONFIG_USER_ONLY, there's surely only one address space that's legal, so you could simply forward to fast_memmove.
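The suggestion amounts to parameterizing one accessor by an index instead of branching per address space. A QEMU-independent sketch of that pattern (the ldub_* functions are hypothetical stand-ins; the real softmmu helper takes the MMU index as an argument instead):

```c
#include <stdint.h>

/* Hypothetical per-address-space loads, returning fixed markers. */
static uint8_t ldub_primary(uint64_t addr)   { (void)addr; return 0x11; }
static uint8_t ldub_secondary(uint64_t addr) { (void)addr; return 0x22; }
static uint8_t ldub_home(uint64_t addr)      { (void)addr; return 0x33; }

/* One table lookup replaces the switch; slot 1 (access-register mode)
 * is left NULL, as it is unhandled in the patch's switch as well. */
static uint8_t (* const ldub_as[4])(uint64_t) = {
    [0x0] = ldub_primary,
    [0x2] = ldub_secondary,
    [0x3] = ldub_home,
};

/* Usage: uint8_t x = ldub_as[src_as](src); */
```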

> +    if (!(env->psw.mask & PSW_MASK_DAT)) {
> +        program_interrupt(env, PGM_SPECIAL_OP, 6);
> +    }

You should use restore_program_state before program_interrupt (or add a new entry-point to do both). Then you can drop ...

> +    potential_page_fault(s);
> +    gen_helper_mvcos(cc_op, cpu_env, o->addr1, o->in2, regs[r3]);

... the potential_page_fault.
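A sketch of the suggested combined entry point, with stubs standing in for the real QEMU functions so the ordering is visible (everything except the two names from the review is hypothetical):

```c
#include <stdint.h>

static int step, restored_at, raised_at;  /* record the call order */

/* Stubs for the QEMU functions named above. */
static void restore_program_state(uintptr_t ra) { (void)ra; restored_at = ++step; }
static void program_interrupt(int code, int ilen) { (void)code; (void)ilen; raised_at = ++step; }

/* Restore the guest state from the host return address first, then
 * raise the exception; the translator then no longer needs an explicit
 * potential_page_fault() before the helper call. */
static void program_interrupt_ra(int code, int ilen, uintptr_t ra)
{
    restore_program_state(ra);
    program_interrupt(code, ilen);
}
```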


r~


