From: Robin Voetter
Subject: Re: PCIe atomics in pcie-root-port
Date: Wed, 12 Apr 2023 18:58:50 +0200
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101 Thunderbird/102.8.0
On 4/6/23 20:40, Alex Williamson wrote:
> I think the typical approach for QEMU would be to expose options in the downstream ports that would then need to be enabled by the user or management tool, but that's where the complication begins. At some point we would want management tools to "do the right thing" themselves. Is support for PCIe atomics pervasive enough to default to enabling support?
Apparently this is supported from Haswell onward on the Intel side, and on Ryzen/Threadripper/Epyc on the AMD side. I don't have any official data for this, though.
It seems automatic detection adds a lot of complication for this feature. For the time being, I think it's best to allow enabling PCIe atomics through a device property on the pcie-root-port that is disabled by default. If general hardware support turns out to be good enough, it can later be enabled by default, with compatibility for older QEMU versions preserved via the hw_compat arrays in hw/core/machine.c.

> How do we handle hotplugged endpoints where the interconnects do not expose atomics support, or perhaps when they expose support that doesn't exist? At some point in the future when we have migration of devices with atomics, do we need to test all endpoint-to-endpoint paths for equivalent atomic support on the migration target? Are there capabilities on the endpoint that we can virtualize to disable use of atomics if the host and guest topologies are out of sync?
Kind regards, Robin