
Re: [PATCH v1 2/2] nvme: allow cmb and pmr to be enabled on same device


From: Klaus Jensen
Subject: Re: [PATCH v1 2/2] nvme: allow cmb and pmr to be enabled on same device
Date: Tue, 9 Jun 2020 11:49:39 +0200

On Jun  8 12:44, Andrzej Jakowski wrote:
> On 6/8/20 1:08 AM, Klaus Jensen wrote:
> > On Jun  5 11:10, Andrzej Jakowski wrote:
> >> So far it was not possible to have CMB and PMR emulated on the same
> >> device, because BAR2 was used exclusively by either PMR or CMB. This
> >> patch places the CMB at an offset in BAR4 so it does not conflict with
> >> the MSI-X vectors.
> >>
> > 
> > Hi Andrzej,
> > 
> > Thanks for doing this, it's a nice addition!
> > 
> > Though, I would prefer that the table and pba were located in BAR0,
> > keeping BAR4 for exclusive CMB use. I'm no expert on this, but is it ok
> > to have the table and pba in prefetchable memory? Having it "together"
> > with the other controller-level configuration memory just feels more
> > natural to me, but I'm not gonna put my foot down.
> Hi Klaus,
> 
> Thanks for your feedback!
> I don't think it matters whether the MSI-X table is in prefetchable or
> non-prefetchable memory.
> My understanding is that the spec allows the MSI-X table and PBA to be
> in any BAR at any offset. I understand your preference and at the same
> time think that, since it is not in violation of the spec, why don't we
> leave it as-is?
> Does anybody know what's the typical approach for real devices?

On the SSD in my system, lspci -vv shows:

        Capabilities: [b0] MSI-X: Enable+ Count=33 Masked-
                Vector table: BAR=0 offset=00002000
                PBA: BAR=4 offset=00000000

So, the table is in BAR0 but the PBA is in BAR4. Oh well.
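
For what it's worth, QEMU's generic msix_init() (hw/pci/msix.c) can
express that kind of split placement, since it takes separate memory
regions, BAR numbers and offsets for the table and the PBA. A minimal
sketch, assuming a hypothetical n->bar4 region and reusing the offsets
from the lspci dump above (illustration only, not code from the patch):

    Error *err = NULL;

    /* MSI-X table in BAR0 at 0x2000, PBA in BAR4 at 0x0,
     * capability structure at config space offset 0xb0 */
    msix_init(pci_dev, n->num_queues,
              &n->iomem, 0, 0x2000,  /* table region, BAR nr, offset */
              &n->bar4,  4, 0x0,     /* PBA region, BAR nr, offset */
              0xb0, &err);
    if (err) {
        error_report_err(err);
    }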

> >> @@ -1342,6 +1346,71 @@ static const MemoryRegionOps nvme_cmb_ops = {
> >>      },
> >>  };
> >>  
> >> +#define NVME_MSIX_BIR (4)
> >> +static void nvme_bar4_init(PCIDevice *pci_dev)
> >> +{
> >> +    NvmeCtrl *n = NVME(pci_dev);
> >> +    int status;
> >> +    uint64_t bar_size = 4096;
> >> +    uint32_t nvme_pba_offset = bar_size / 2;
> >> +    uint32_t nvme_pba_size = QEMU_ALIGN_UP(n->num_queues, 64) / 8;
> >> +    uint32_t cmb_size_units;
> >> +
> >> +    if (n->num_queues * PCI_MSIX_ENTRY_SIZE > nvme_pba_offset) {
> >> +        nvme_pba_offset = n->num_queues * PCI_MSIX_ENTRY_SIZE;
> >> +    }
> >> +
> >> +    if (nvme_pba_offset + nvme_pba_size > 4096) {
> >> +        bar_size = nvme_pba_offset + nvme_pba_size;
> >> +    }
> >> +
> > 
> > This is migration compatibility stuff that is not needed because the
> > nvme device is unmigratable anyway.
> I don't understand that comment. Could you please explain more?
>  
> 

It looks like you cribbed this code from msix_init_exclusive_bar(). All
that fuss about the PBA starting in the upper half (nvme_pba_offset =
bar_size / 2) for nentries less than or equal to 128 is for migration
compatibility (migrating a VM from an old version of QEMU to a newer
one). It was added when the location of the MSI-X table and PBA became
dynamic (it used to be static). But since the nvme device is marked as
unmigratable (VMStateDescription.unmigratable = 1), I believe these
special cases are irrelevant.
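
For reference, the marker in question is the device's VMStateDescription
in hw/block/nvme.c, which (quoting from memory, so the exact fields may
differ slightly) looks like:

    static const VMStateDescription nvme_vmstate = {
        .name = "nvme",
        .unmigratable = 1,
    };

With that set, migration is refused outright for any VM with an nvme
device, so there is no old table/PBA layout to stay compatible with and
the bar_size / 2 special case can simply be dropped.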


