Re: [PATCH v4 3/4] Introduce the NVMM impl
From: Maxime Villard
Subject: Re: [PATCH v4 3/4] Introduce the NVMM impl
Date: Mon, 2 Mar 2020 20:28:04 +0100
On 02/03/2020 19:13, Paolo Bonzini wrote:
> On 06/02/20 22:32, Kamil Rytarowski wrote:
>> +get_qemu_vcpu(CPUState *cpu)
>> +{
>> +    return (struct qemu_vcpu *)cpu->hax_vcpu;
>> +}
>
> Please make hax_vcpu a void * and rename it to "accel_data".
NVMM reproduces the existing logic of the other accelerators. I agree
that it should be a void * named "accel_data", but that rename should be
done across all accelerators in a separate commit, unrelated to NVMM.
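For reference, that separate change would be roughly the following, plus
the matching updates in the HAX/WHPX code (a sketch, not a tested patch):

    /* include/hw/core/cpu.h */
    -    struct hax_vcpu_state *hax_vcpu;
    +    /* Opaque per-accelerator data (HAX, WHPX, NVMM, ...). */
    +    void *accel_data;

after which the accessor quoted above becomes:

    static struct qemu_vcpu *
    get_qemu_vcpu(CPUState *cpu)
    {
        return (struct qemu_vcpu *)cpu->accel_data;
    }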
>> +    nseg->attrib.g = __SHIFTOUT(attrib, DESC_G_MASK);
>
>> +    __SHIFTIN((uint32_t)nseg->attrib.g, DESC_G_MASK);
>
> What are __SHIFTOUT and __SHIFTIN?
They are NetBSD macros for bitfield manipulation: __SHIFTOUT extracts the
bitfield selected by a mask, and __SHIFTIN shifts a value into the position
selected by that mask.
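For reference, they are roughly equivalent to the following (a sketch; the
real definitions live in NetBSD's sys/cdefs.h):

    /* Isolate the lowest set bit of the mask, i.e. the field's scale. */
    #define __LOWEST_SET_BIT(mask)  ((((mask) - 1) & (mask)) ^ (mask))

    /* Extract the bitfield selected by mask out of x. */
    #define __SHIFTOUT(x, mask)     (((x) & (mask)) / __LOWEST_SET_BIT(mask))

    /* Shift the value x into the bitfield selected by mask. */
    #define __SHIFTIN(x, mask)      ((x) * __LOWEST_SET_BIT(mask))

So the first quoted line reads the G bit out of the attribute word, and
the second writes it back.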
>> +    if (qcpu->int_window_exit) {
>
> Should it assert the condition in the "if" below?
No, because if int_window_exit is set, then state->intr.int_window_exiting
is necessarily set too, so there is no point in doing a get+set just to
check it (see the sketch after the quoted lines below).
>> +        return false;
>> +    }
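An assertion there would itself need a state fetch before it could look
at the flag; hypothetically (a sketch only, reusing the variables from
the quoted code):

    if (qcpu->int_window_exit) {
        /* Fetch the state just to verify the invariant; this is
         * precisely the getstate being avoided. */
        nvmm_vcpu_getstate(mach, vcpu, NVMM_X64_STATE_INTR);
        assert(vcpu->state->intr.int_window_exiting == 1);
        return false;
    }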
>> +
>> +    if (qcpu->int_shadow || !(env->eflags & IF_MASK)) {
>> +        struct nvmm_x64_state *state = vcpu->state;
>> +
>> +        /* Exit on interrupt window. */
>> +        nvmm_vcpu_getstate(mach, vcpu, NVMM_X64_STATE_INTR);
>> +        state->intr.int_window_exiting = 1;
>> +        nvmm_vcpu_setstate(mach, vcpu, NVMM_X64_STATE_INTR);
>
> ... and should this set qcpu->int_window_exit?
Hmm, maybe. It's not a big problem though, because at worst we set
int_window_exiting to one while it was already one. I'll think about it
for a future commit; a sketch of what that could look like follows the
quoted lines below. (I'm not immediately able to test.)
>> +
>> +        return false;
>> +    }
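If we do set it, the change would presumably just keep the software copy
in sync with the hardware flag; a sketch of what a future commit could
look like (untested):

    if (qcpu->int_shadow || !(env->eflags & IF_MASK)) {
        struct nvmm_x64_state *state = vcpu->state;

        /* Exit on interrupt window. */
        nvmm_vcpu_getstate(mach, vcpu, NVMM_X64_STATE_INTR);
        state->intr.int_window_exiting = 1;
        nvmm_vcpu_setstate(mach, vcpu, NVMM_X64_STATE_INTR);

        /* Keep the software flag in sync with the hardware one. */
        qcpu->int_window_exit = true;

        return false;
    }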
>
> Have you tried running kvm-unit-tests?
I didn't know about kvm-unit-tests until now. I developed my own tests
and ran them under qemu-nvmm. Good to know, though; I'll try these tests.
>> +    /* Needed, otherwise infinite loop. */
>> +    current_cpu->vcpu_dirty = false;
>
> Can you explain this?
If vcpu_dirty remains true, we land here in the next iteration of the
loop:
    if (cpu->vcpu_dirty) {
        nvmm_set_registers(cpu);
        cpu->vcpu_dirty = false;
    }
And the register values that were just updated are overwritten with the
stale copy held in env, so the update is lost: the guest stays on the
same instruction forever.
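For context, the flag is cleared after an exit handler has written the
advanced RIP directly into the NVMM state rather than into env. Roughly
(a sketch; the exact field names, e.g. exit->u.rdmsr.npc, are assumptions
from the NVMM API):

    /* The exit handler advances RIP in the NVMM state, not in env. */
    nvmm_vcpu_getstate(mach, vcpu, NVMM_X64_STATE_GPRS);
    state->gprs[NVMM_X64_GPR_RIP] = exit->u.rdmsr.npc;
    nvmm_vcpu_setstate(mach, vcpu, NVMM_X64_STATE_GPRS);

    /* Without this, the next iteration would call nvmm_set_registers()
     * and overwrite the new RIP with the stale copy held in env. */
    current_cpu->vcpu_dirty = false;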
>> +        break;
>> +    default: /* More MSRs to add? */
>> +        val = 0;
>
> I would add at least MSR_IA32_TSC.
MSR_IA32_TSC is handled by the NVMM kernel driver; that's why it isn't
there. The values do get synced on state transfer:
    state->msrs[NVMM_X64_MSR_TSC] = env->tsc;
    ...
    env->tsc = state->msrs[NVMM_X64_MSR_TSC];
>> +    if (qcpu->stop) {
>> +        cpu->exception_index = EXCP_INTERRUPT;
>> +        qcpu->stop = false;
>> +        ret = 1;
>> +        break;
>> +    }
>> +
>> +    nvmm_vcpu_pre_run(cpu);
>> +
>> +    if (atomic_read(&cpu->exit_request)) {
>> +        qemu_cpu_kick_self();
>> +    }
>> +
>
> This is racy without something like KVM's immediate_exit mechanism.
> This should be fixed in NVMM.
I don't immediately see how this is racy. It reproduces the existing
logic found in whpx-all.c, and if there is a real problem it can be
fixed in a future commit along with WHPX.
Maxime