From: Maxim Levitsky
Subject: Re: [Qemu-devel] [PATCH v3 1/6] block/nvme: don't touch the completion entries
Date: Mon, 08 Jul 2019 16:06:44 +0300

On Mon, 2019-07-08 at 15:00 +0200, Max Reitz wrote:
> On 08.07.19 14:51, Maxim Levitsky wrote:
> > On Mon, 2019-07-08 at 14:23 +0200, Max Reitz wrote:
> > > On 07.07.19 10:43, Maxim Levitsky wrote:
> > > > On Fri, 2019-07-05 at 13:03 +0200, Max Reitz wrote:
> > > > > On 03.07.19 17:59, Maxim Levitsky wrote:
> > > > > > Completion entries are meant to be only read by the host and
> > > > > > only written by the device.
> > > > > > The driver is supposed to scan the completions from the point
> > > > > > where it last left off, until it sees a completion whose phase
> > > > > > bit has not been flipped.
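For readers new to the scheme, a rough sketch of such a phase-bit scan
(the names NvmeCqe, cq_head, cq_phase and the helpers here are illustrative,
not the actual block/nvme.c code) could look like:

    /* Scan completions from where we left off until the phase bit no
     * longer matches the expected value; entries are only read here. */
    while (true) {
        volatile NvmeCqe *cqe = &cq[cq_head];
        if ((le16_to_cpu(cqe->status) & 0x1) != cq_phase) {
            break;                        /* entry still owned by the device */
        }
        process_completion(cqe);          /* read-only access to the entry */
        if (++cq_head == cq_size) {       /* wrap around and flip the phase */
            cq_head = 0;
            cq_phase = !cq_phase;
        }
    }
    write_cq_doorbell(cq_head);           /* tell the device how far we got */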
> > > > > 
> > > > > (Disclaimer: This is the first time I read the nvme driver, or really
> > > > > something in the nvme spec.)
> > > > > 
> > > > > Well, no, completion entries are also meant to be initialized by the
> > > > > host.  To me it looks like this is the place where that happens:
> > > > > Everything that has been processed by the device is immediately being
> > > > > re-initialized.
> > > > > 
> > > > > Maybe we shouldn’t do that here but in nvme_submit_command().  But
> > > > > currently we don’t, and I don’t see any other place where we currently
> > > > > initialize the CQ entries.
> > > > 
> > > > Hi!
> > > > I couldn't find any place in the spec that says that completion entries 
> > > > should be initialized.
> > > > It is probably wise to initialize that area to 0 on driver 
> > > > initialization, but nothing beyond that.
> > > 
> > > Ah, you’re right, I misread.  I didn’t pay as much attention to the
> > > “...prior to setting CC.EN to ‘1’” as I should have.  Yep, and that is
> > > done in nvme_init_queue().
> > > 
> > > OK, I cease my wrongful protest:
> > > 
> > > Reviewed-by: Max Reitz <address@hidden>
> > > 
> > > > 
> > 
> > Thank you very much!
> > BTW, the qemu driver does allocate zeroed memory (in nvme_init_queue:
> > "q->queue = qemu_try_blockalign0(bs, bytes);").
> 
> Yes, that’s what I was referring to above. :-)
> 
> > Thus I think this is all that is needed in that regard.
> > 
> > Note that this patch doesn't fix any real bug that I know of;
> > it just makes the code conform to the spec.
> > Also, racing with the hardware can in theory lead to various memory
> > ordering bugs, although in this case the writes go to entries that the
> > controller probably won't touch anyway, but still.
> > 
> > TL;DR: there is no need for code that does nothing and might cause issues.
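One common way to stay safe when consuming device-written entries is to
order the phase-bit check before reading the rest of the entry, e.g. with
QEMU's smp_rmb(); this is only an illustration of the general technique,
not a claim about what block/nvme.c currently does:

    /* Only read the payload after the phase bit says the entry is valid. */
    if ((le16_to_cpu(cqe->status) & 0x1) == expected_phase) {
        smp_rmb();                        /* no payload reads before the check */
        uint32_t result = le32_to_cpu(cqe->result);
        uint16_t cid    = le16_to_cpu(cqe->cid);
        complete_request(cid, result);    /* hypothetical completion handler */
    }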
> > 
> > Do you want me to resend the series, or shall I wait until we decide
> > what to do with the image creation support? I finished addressing all
> > the review comments long ago; I just didn't want to resend the series.
> > Or shall I drop that patch and resend?
> 
> I think I won’t apply the image creation patch now, so it’s probably
> better to just drop it for now.
> 
> > From the urgency standpoint the only patch that really should
> > be merged ASAP is the one that adds support for block sizes,
> > because without it, the whole thing crashes and burns on 4K
> > nvme drives.
> 
> By now we’re in softfreeze anyway, so unless write-zeroes/discard
> support is important now, it’s difficult to justify taking them for 4.1.
>  So for me it would be best if you put patches 1 through 3 into a
> for-4.1 series and move the rest to 4.2.  (I’d probably also split the
> creation patch off, because I don’t think I’m going to apply it before
> having experimented a bit with blockdev-create for qemu-img.)
> 
> If you think write-zeroes/discard support is important for 4.1, feel
> free to include them in the for-4.1 series along with an explanation as
> to why it’s important.

I don't think these are important either, so I have split them as you suggested.

Best regards,
        Maxim Levitsky
