
Re: [Qemu-block] [PATCH] Added iopmem device emulation


From: Stefan Hajnoczi
Subject: Re: [Qemu-block] [PATCH] Added iopmem device emulation
Date: Mon, 7 Nov 2016 10:28:16 +0000
User-agent: Mutt/1.7.1 (2016-10-04)

On Fri, Nov 04, 2016 at 09:47:33AM -0600, Logan Gunthorpe wrote:
> On 04/11/16 04:49 AM, Stefan Hajnoczi wrote:
> > QEMU already has NVDIMM support (https://pmem.io/).  It can be used both
> > for passthrough and fake non-volatile memory:
> > 
> >   qemu-system-x86_64 \
> >     -M pc,nvdimm=on \
> >     -m 1024,maxmem=$((4096 * 1024 * 1024)),slots=2 \
> >     -object memory-backend-file,id=mem0,mem-path=/tmp/foo,size=$((64 * 1024 * 1024)) \
> >     -device nvdimm,memdev=mem0
> > 
> > Please explain where iopmem comes from, where the hardware spec is, etc?
> 
> Yes, we are aware of nvdimm and, yes, there are quite a few
> commonalities. The difference between nvdimm and iopmem is that the
> memory that backs iopmem is on a PCI device and not connected through
> system memory. Currently, we are working with prototype hardware so
> there is no open spec that I'm aware of but the concept is really
> simple: a single BAR directly maps volatile or non-volatile memory.
> 
> One of the primary motivations behind iopmem is to provide memory to do
> peer to peer transactions between PCI devices such that, for example, an
> RDMA NIC could transfer data directly to storage and bypass the system
> memory bus altogether.

It may be too early to merge this code into qemu.git if there is no
hardware spec and this is a prototype device that is subject to change.

I'm wondering if there is a way to test or use this device if you are
not releasing specs and code that drives the device.

Have you submitted patches to enable this device in Linux, DPDK, or any
other project?

> > Perhaps you could use nvdimm instead of adding a new device?
> 
> I'm afraid not. The main purpose of this patch is to enable us to test
> kernel drivers for this type of hardware. If we use nvdimm, there is no
> PCI device for our driver to enumerate and the existing, different,
> NVDIMM drivers would be used instead.

Fair enough, it makes sense to implement a PCI device for this purpose.

Stefan


