qemu-devel

From: Jason Wang
Subject: Re: [Bug 1886362] [NEW] Heap use-after-free in lduw_he_p through e1000e_write_to_rx_buffers
Date: Wed, 15 Jul 2020 16:35:09 +0800
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101 Thunderbird/68.10.0


On 2020/7/14 6:48 PM, Li Qiang wrote:
Jason Wang <jasowang@redhat.com> wrote on Tue, Jul 14, 2020 at 4:56 PM:

On 2020/7/10 6:37 PM, Li Qiang wrote:
Paolo Bonzini <pbonzini@redhat.com> wrote on Fri, Jul 10, 2020 at 1:36 AM:
On 09/07/20 17:51, Li Qiang wrote:
Maybe we should check whether the address is a RAM address in 'dma_memory_rw'?
But it is a hot path, so I'm not sure that is the right approach. I hope for more discussion.
Half of the purpose of dma-helpers.c (as opposed to address_space_*
functions in exec.c) is exactly to support writes to MMIO.  This is
Hi Paolo,

Could you please explain more about this (supporting writes to MMIO)?
I only see the DMA helpers dealing with scatter-gather DMA, nothing related to MMIO.

Please refer to doc/devel/memory.rst.

The motivation of the memory API is to support modeling different
kinds of memory regions. DMA to MMIO is allowed in hardware, so QEMU
should emulate this behaviour.

I just read the code again.
So dma_blk_io is used for devices that need DMA to MMIO (perhaps
depending on the device spec), but most devices (networking cards,
for example) have no need for DMA to MMIO,
so they just use dma_memory_rw. Is this understanding right?

Then another question.
Though the DMA helpers use a bounce buffer, they finally write to the
device address space in 'address_space_unmap'.
Is there any possibility that we can again write to the MMIO, as in this issue?


I think the point is to make DMA to MMIO work as it does on real hardware. For e1000e and other networking devices we need to make sure such DMA doesn't break anything.

Thanks





especially true of dma_blk_io, which takes care of doing the DMA via a
bounce buffer, possibly in multiple steps and even blocking due to
cpu_register_map_client.

For dma_memory_rw this is not needed, so it only needs to handle
QEMUSGList, but I think the design should be the same.

However, this is indeed a nightmare for re-entrancy.  The easiest
solution is to delay processing of descriptors to a bottom half whenever
MMIO is doing something complicated.  This is also better for latency
because it will free the vCPU thread more quickly and leave the work to
the I/O thread.
Do you mean we define a per-e1000e bottom half,
and trigger this bh from the MMIO write or packet
send path?

Probably a TX bh.

I will try to write this TX bh to strengthen my understanding of this part.
Maybe I'll reference the virtio-net implementation, I think.



Thanks,
Li Qiang

So even if we trigger the MMIO write again, the
second bh will not be executed?

Bhs are serialized, so there is no re-entrancy issue.

Thanks



Thanks,
Li Qiang

Paolo




