Re: [PATCH v4 2/2] nvme: allow cmb and pmr to be enabled on same device


From: Andrzej Jakowski
Subject: Re: [PATCH v4 2/2] nvme: allow cmb and pmr to be enabled on same device
Date: Tue, 21 Jul 2020 14:54:19 -0700
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101 Thunderbird/68.9.0

On 7/15/20 1:06 AM, Klaus Jensen wrote:
> Hi Andrzej,
> 
> I've not been ignoring this, but sorry for not following up earlier.
> 
> I'm hesitant to merge anything that very obviously breaks an OS that we
> know is widely used with this device. Also because the issue has not
> been analyzed well enough to actually know whether this is a QEMU or
> kernel issue.

Hi Klaus,

Thx for your response! I understand your hesitance about merging stuff
that obviously breaks the guest OS.

> 
> Now, as far as I can test, having the MSI-X vector table and PBA in BAR
> 0, PMR in BAR 2 and CMB in BAR 4 seems to make everyone happy
> (regardless of IOMMU on/off).
> 
> Later, when the issue is better understood, we can add options to set
> offsets, BIRs etc.
> 
> The patch below replaces your "[PATCH v4 2/2] nvme: allow cmb and pmr to
> be enabled" (but still requires "[PATCH v4 1/2] ...") and applies to
> git://git.infradead.org/qemu-nvme.git nvme-next branch.
> 
> Can you reproduce the issues with that patch? I can't on a stock Arch
> Linux 5.7.5-arch1-1 kernel.

While I'm happy that the approach with MSI-X and PBA in BAR0 works fine, I
feel that the investigation into why it works while mine doesn't is
missing. It looks to me like both patches follow basically the same
approach: create a memory subregion and overlay it on top of another
memory region. Why, then, does one work and the other not?

With that in mind, I have recently focused on understanding the problem.
I observed that when the guest assigns an address to BAR4, the addr field
in the nvme-bar4 memory region gets populated, but it does not get
automatically populated in the ctrl_mem (CMB) memory subregion. So later,
when nvme_addr_is_cmb() is called, the address check works incorrectly
and, as a consequence, the VMM does a DMA read instead of a memcpy (see
the sketch below).
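
For reference, this is roughly what the check and the read path look like
in hw/block/nvme.c (a from-memory sketch, not the verbatim code):

    /* Sketch: nvme_addr_is_cmb() trusts ctrl_mem.addr. For a subregion
     * that field is container-relative, so once BAR4 is mapped, comparing
     * it against a guest-absolute address always misses. */
    static bool nvme_addr_is_cmb(NvmeCtrl *n, hwaddr addr)
    {
        hwaddr low = n->ctrl_mem.addr;
        hwaddr hi  = low + int128_get64(n->ctrl_mem.size);

        return addr >= low && addr < hi;
    }

    static void nvme_addr_read(NvmeCtrl *n, hwaddr addr, void *buf, int size)
    {
        if (n->bar.cmbsz && nvme_addr_is_cmb(n, addr)) {
            /* address resolved inside the CMB: plain memcpy */
            memcpy(buf, (void *)&n->cmbuf[addr - n->ctrl_mem.addr], size);
        } else {
            /* otherwise an ordinary PCI DMA read */
            pci_dma_read(&n->parent_obj, addr, buf, size);
        }
    }
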
I created a patch that sets the correct address on the ctrl_mem subregion,
and with it the guest OS boots up correctly.
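
My patch pokes the absolute base into ctrl_mem.addr; an equivalent way to
express the idea, without touching the memory core, is to resolve the BAR
base at check time. A sketch only, not the literal diff
(pci_get_bar_addr() and PCI_BAR_UNMAPPED are the real PCI helpers;
NVME_CMB_BIR is a placeholder for the CMB's BAR index):

    /* Sketch: derive the CMB window from the live BAR base instead of
     * from the container-relative ctrl_mem.addr, so the comparison is
     * done in guest-absolute terms. */
    static bool nvme_addr_is_cmb(NvmeCtrl *n, hwaddr addr)
    {
        pcibus_t base = pci_get_bar_addr(&n->parent_obj, NVME_CMB_BIR);
        hwaddr size = int128_get64(n->ctrl_mem.size);

        if (base == PCI_BAR_UNMAPPED) {
            return false;
        }
        return addr >= base && addr < base + size;
    }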

When I looked into the PCI and memory region code, I noticed that the
address is indeed only assigned to the top-level memory region, not to the
contained subregions. I think your approach works fine because the CMB
grabs the whole BAR exclusively (see the sketch below), so the region that
PCI retargets is exactly the one the address check inspects.
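
To make the contrast concrete (the API calls are the real ones; field
names like n->bar4, bar4_size and cmb_offset are stand-ins):

    /* Exclusive layout (your patch): the region registered for the BAR
     * is ctrl_mem itself, so pci_update_mappings() keeps it current. */
    pci_register_bar(pci_dev, 4,
                     PCI_BASE_ADDRESS_SPACE_MEMORY |
                     PCI_BASE_ADDRESS_MEM_TYPE_64,
                     &n->ctrl_mem);

    /* Overlay layout (my patch): only the container is retargeted on
     * config writes; ctrl_mem.addr keeps its in-container offset. */
    memory_region_init(&n->bar4, OBJECT(n), "nvme-bar4", bar4_size);
    memory_region_add_subregion_overlap(&n->bar4, cmb_offset,
                                        &n->ctrl_mem, 1);
    pci_register_bar(pci_dev, 4,
                     PCI_BASE_ADDRESS_SPACE_MEMORY |
                     PCI_BASE_ADDRESS_MEM_TYPE_64,
                     &n->bar4);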

Here is my question (perhaps the PCI folks can help answer it :)): if we
consider overlapping memory regions a valid use case for PCI devices,
should the PCI code, on configuration cycles, walk through all contained
subregions and update their addr fields accordingly?
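
For context, pci_update_mappings() in hw/pci/pci.c today does roughly the
following for each BAR whose address changed (paraphrased from memory),
and never descends into subregions:

    /* unmap the old window and map the BAR region at its new base;
     * r->memory is whatever was passed to pci_register_bar() */
    memory_region_del_subregion(r->address_space, r->memory);
    r->addr = new_addr;
    memory_region_add_subregion_overlap(r->address_space,
                                        r->addr, r->memory, 1);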

Thx!


