From: Vladimir Sementsov-Ogievskiy
Subject: Re: [PATCH v3] qapi: introduce 'query-cpu-model-cpuid' action
Date: Mon, 29 Mar 2021 15:41:34 +0300
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101 Thunderbird/78.9.0
29.03.2021 14:48, Daniel P. Berrangé wrote:
> On Mon, Mar 29, 2021 at 02:21:53PM +0300, Valeriy Vdovin wrote:
>> On Mon, Mar 29, 2021 at 10:20:54AM +0100, Daniel P. Berrangé wrote:
>>> On Fri, Mar 26, 2021 at 08:30:00PM +0300, Valeriy Vdovin wrote:
>>>> Other than debug, the method is useful in cases where we would like to
>>>> utilize QEMU's virtual CPU initialization routines and put the
>>>> retrieved values into the kernel's CPUID-overriding mechanics, for
>>>> more precise control over how various processes perceive their
>>>> underlying hardware, with container processes as a good example.
>>>
>>> When I read this, my impression is that QEMU's CPU handling doesn't do
>>> what you need, and you're trying to work around it outside of QEMU.
>>> Can you give more detailed information about which situations QEMU's
>>> CPUID handling doesn't work in, and why we can't simply enhance QEMU
>>> to do what you need?
>>
>> We want to override CPUID for container processes to support live
>> migration. For that we want to base it on a reliable CPU model, which
>> is present in libvirt and QEMU. We will communicate CPU model
>> information between physical nodes to decide the baseline CPU model,
>> and then we could use the new method to get all the CPUID leaf values
>> that we would return to containers during CPUID override. In our case
>> the QAPI way of getting the values is a clean solution, because we can
>> just query it from the outside (not as the guest system).
>
> IIUC, you seem to be saying that you're not actually going to run a
> real QEMU VM at all? You're just using QEMU / QMP as a convenient way
> to expand a named CPU model into CPUID leaves, so you can then use this
> data in a completely separate container-based mgmt application.
> Essentially treating QMP as a general purpose API for handling CPU
> models.
>
>>>> virsh qemu-monitor-command VM --pretty '{ "execute": "query-cpu-model-cpuid" }'
>>>> {
>>>>   "return": {
>>>>     "cpuid": {
>>>>       "leafs": [
>>>>         { "leaf": 0, "subleafs": [
>>>>             { "eax": 13, "edx": 1231384169, "ecx": 1818588270,
>>>>               "ebx": 1970169159, "subleaf": 0 } ] },
>>>>         { "leaf": 1, "subleafs": [
>>>>             { "eax": 329443, "edx": 529267711, "ecx": 4160369187,
>>>>               "ebx": 133120, "subleaf": 0 } ] },
>>>>         { "leaf": 2, "subleafs": [
>>>>             { "eax": 1, "edx": 2895997, "ecx": 0, "ebx": 0,
>>>>               "subleaf": 0 } ] },
>>>>       ]
>>>>     },
>>>>     "vendor": "GenuineIntel",
>>>>     "class-name": "Skylake-Client-IBRS-x86_64-cpu",
>>>>     "model-id": "Intel Core Processor (Skylake, IBRS)"
>>>>   },
>>>>   "id": "libvirt-40"
>>>> }
>>>
>>> It feels like there's a lot of conceptual overlap with the
>>> query-cpu-model-expansion command. That reports in an arch-independent
>>> format, but IIUC the property data it returns can be mapped into CPUID
>>> leaf values. Is it not possible for you to use this existing command
>>> and maintain a mapping of property names -> CPUID leaves?
>>
>> As already stated in the use-case description above, having this method
>> around helps us in that we can just take the values and return them to
>> containers. QEMU code already does a great job generating CPUID
>> responses; we don't want to do the same in our own code.
>
> This is asking QEMU to maintain a new QAPI command which does not
> appear to have a use case / benefit for QEMU mgmt. It isn't clear to me
> that this should be considered in scope for QMP.
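[Editor's note: to make the quoted example output concrete, here is a minimal sketch of how a management application could consume the reply shown above, turning it into a (leaf, subleaf) lookup table and decoding the vendor string from leaf 0. The helper names are hypothetical; the register values are copied from the example in this thread.]

```python
# Sketch: turn a query-cpu-model-cpuid reply (shape as quoted above) into
# a (leaf, subleaf) -> (eax, ebx, ecx, edx) table, the form a container
# manager doing CPUID override might want. Helper names are hypothetical.
import struct

def cpuid_table(reply):
    """Map (leaf, subleaf) to the four register values."""
    table = {}
    for leaf in reply["cpuid"]["leafs"]:
        for sub in leaf["subleafs"]:
            table[(leaf["leaf"], sub["subleaf"])] = (
                sub["eax"], sub["ebx"], sub["ecx"], sub["edx"])
    return table

def vendor_string(table):
    """CPUID leaf 0 packs the vendor as ASCII into EBX, EDX, ECX."""
    _, ebx, ecx, edx = table[(0, 0)]
    return struct.pack("<3I", ebx, edx, ecx).decode("ascii")

# Register values copied from the example reply quoted in this thread:
reply = {"cpuid": {"leafs": [
    {"leaf": 0, "subleafs": [
        {"eax": 13, "ebx": 1970169159, "ecx": 1818588270,
         "edx": 1231384169, "subleaf": 0}]},
    {"leaf": 1, "subleafs": [
        {"eax": 329443, "ebx": 133120, "ecx": 4160369187,
         "edx": 529267711, "subleaf": 0}]},
]}}

print(vendor_string(cpuid_table(reply)))  # GenuineIntel
```

This also illustrates why the raw-leaf form is convenient for the container use case: the values can be fed to a CPUID-override mechanism verbatim, with no property-name mapping step.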
Hmm. On the other hand:

1. The command just exports some information, like a lot of other QMP
   query- commands; it doesn't look like something alien in the QEMU
   interface.

2. We do have a use-case. Not a VM use-case, but a use-case for the CPU
   handling subsystem. Isn't that enough? We want to handle CPU
   configurations in a way compatible with QEMU. The simplest way to do
   that is just to generate the needed information with the help of QEMU.

Note that this is not the only usage of the QEMU binary without running a
VM. The QEMU binary may be used for various block jobs and for
manipulating bitmaps in disk images (yes, now we also have
qemu-storage-daemon, but still).

Do you have an idea how our task could be solved in a better way?

-- 
Best regards,
Vladimir
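[Editor's note: the "query it from the outside" workflow discussed in this thread amounts to a plain QMP client. A minimal sketch, assuming a QEMU started with something like `-qmp unix:/tmp/qmp.sock,server,nowait`; the socket path and CPU model name are assumptions, not taken from the thread. It issues the existing `query-cpu-model-expansion` command that Daniel points to.]

```python
# Sketch of querying QEMU "from the outside" over a QMP unix socket.
# Socket path and model name below are assumptions for illustration.
import json
import socket

def qmp_connect(path):
    """Open a QMP connection and consume the server greeting."""
    sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
    sock.connect(path)
    stream = sock.makefile("rw")
    json.loads(stream.readline())          # greeting banner
    return stream

def qmp_command(stream, execute, arguments=None):
    """Send one QMP command and return the first reply to it."""
    cmd = {"execute": execute}
    if arguments is not None:
        cmd["arguments"] = arguments
    stream.write(json.dumps(cmd) + "\n")
    stream.flush()
    while True:                            # skip interleaved async events
        reply = json.loads(stream.readline())
        if "return" in reply or "error" in reply:
            return reply

if __name__ == "__main__":
    stream = qmp_connect("/tmp/qmp.sock")  # assumed path
    qmp_command(stream, "qmp_capabilities")
    # The existing arch-independent expansion command mentioned above:
    expansion = qmp_command(stream, "query-cpu-model-expansion",
                            {"type": "full",
                             "model": {"name": "Skylake-Client-IBRS"}})
    print(json.dumps(expansion, indent=2))
```

The open question in the thread is only whether the property data this returns can reasonably be mapped back to CPUID leaves on the client side, or whether the raw-leaf command is worth carrying in QMP.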