From: David Hildenbrand
Subject: Re: [Qemu-ppc] [Qemu-devel] [PATCH v3 0/3] pc-dimm: factor out MemoryDevice
Date: Wed, 25 Apr 2018 14:46:20 +0200
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:52.0) Gecko/20100101 Thunderbird/52.7.0

>>>> For the first phase we are using 'virtio-pmem' as a cold-added device. AFAIU,
>>>> 'VirtioDeviceClass' is the parent class and the 'hotplug/unplug' methods are
>>>> implemented for the virtio-pmem device. So, the PCI bus hotplug/unplug should
>>>> call the corresponding functions?
>>> The problem is with trying to use a PCI-bus-based device with the bus-less
>>> infrastructure used by (pc|nv)dimms.
>>
>> I can understand your reasoning, but for me these are some QEMU internal 
>> details
>> that should not stop the virtio-(p)mem train from rolling.
> If these are quickly hacked up prototypes to play with, then that's fine,
> as long as they are not being merged into QEMU.
> If one plans to merge it, then the code should be adapted to
> whatever QEMU's internal requirements are.

At some point we will have to decide whether we want to develop good software
(which tolerates layer violations if there is a good excuse) or build
the perfect internal architecture. And we all know the latter is not the
case right now and never will be.

So yes, I will be looking into ways to make this work "nicer"
internally, but quite frankly, it has very little priority.

> 
>> In my world, device hotplug is composed of the following steps:
>>
>> 1. Resource allocation
>> 2. Attaching the device to a bus (making it accessible by the guest)
>> 3. Notifying the guest
>>
>> I would e.g. also call ACPI sort of a bus structure. Now, the machine hotplug
>> handler currently does parts of 1. and then hands off to ACPI to do 2. and 3.
> It's not a bus, it's a concrete device implementing the GPE logic;
> on x86 it does the job of notification (step 3) in case of hotplug.
> 
>> virtio-mem and virtio-pmem do 1. partially in the realize function and then
>> let 2. and 3. be handled by the proxy-device-specific hotplug handlers.
>>
>> Mean people might say that the machine should not call the ACPI code but that
>> there should be an ACPI hotplug handler. So we would end up with the same
>> result.
> It should be fine for a parent to manage its children, but not the other way around.

A virtio-bus (e.g. CCW) also "belongs" to the machine. But we won't
start passing every device from the machine downwards to the
concrete implementation.

(but I get your point)

> 
> 
>> But anyhow, the resource allocation (getting an address and getting plugged)
>> will be done as a first step from the virtio-(p)mem realize function:
>>
>> static void virtio_mem_device_realize(DeviceState *dev, Error **errp)
>> {
>>     ...
>>     /* try to get a mapping in guest address space */
>>     vm->phys_addr = memory_device_get_free_addr(MACHINE(qdev_get_machine()))...
> this should be a property, and if it's not set then realize should error out

It is a property, but if it is 0 we do auto-detection right now (like we do for DIMMs).
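(Just as a sketch, with hypothetical names, i.e. assuming a "VirtIOMEM" state
struct holding the phys_addr field from the snippet below, the property would
be something like

    DEFINE_PROP_UINT64("phys-addr", VirtIOMEM, phys_addr, 0),

with 0 meaning "auto-assign a free address".)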

> 
>>     if (local_err) {
>>         goto out;
>>     }
>>     ...
>>
>>     /* register the memory region */
>>     memory_device_plug_region(MACHINE(qdev_get_machine()), vm->mr,
>>                               vm->phys_addr);
>>     ...
>> }
>>
>> So this happens before any hotplug handler is called. Everything works
>> just fine. What you don't like about this is the qdev_get_machine(). I
>> also don't like it but in the short term I don't see any problem with
>> it. It is resource allocation and not a "device plug" in the typical form.
> 
> It's not qdev_get_machine() that's the issue, it's the layer violation,
> where a child device is allocating and mapping resources of one of its parents.

Quite simple: introduce functions at the machine through which the child can
"request" an address and "request" to plug/unplug a region.

Or what would be wrong with that?
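Very roughly (only a sketch built from the helpers used in the snippet above;
everything beyond the MachineState argument is illustrative and up for
discussion):

    /* sketch: machine-side helpers a memory device "requests" from, so the
     * machine remains the owner of its device memory address space */
    uint64_t memory_device_get_free_addr(MachineState *ms, const uint64_t *hint,
                                         uint64_t size, Error **errp);
    void memory_device_plug_region(MachineState *ms, MemoryRegion *mr,
                                   uint64_t addr);
    void memory_device_unplug_region(MachineState *ms, MemoryRegion *mr);

The device only asks; deciding on the address and mapping the region stays
with the machine.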

> 
> that's been an issue and show stopper for patches in the past,
> and that's probably not going to change in this case either.
> 

I can see that, but again, for me these are internal details.

>  
>>> The important point which we should not break here, while trying to glue the
>>> PCI hotplug handler to the machine hotplug handler, is:
>>
>> I could imagine something like a 2-step approach later on.
>>
>> 1. resource allocation handler of the machine for MemoryDevices
>> - assigns an address, registers the memory region
>> 2. hotplug handler (ACPI, PCI, CCW ...)
>> - assigns bus-specific stuff, attaches the device, notifies the guest
>>
>> Importantly, the device is not visible to the guest until 2.
> So far this is about how QEMU models and manages the wiring process;
> that's why pre_plug/plug handlers were introduced: to allow the
> resource owner to attach the devices that are plugged into it.
> 
> i.e. PCI devices are managed by the PCI subsystem, and DIMM
> devices are managed by the board, where they are mapped into
> a reserved address space by the board code that owns it.
> 
> Allowing a random device to manage board resources directly
> isn't really acceptable (even as a temporary solution).

I agree regarding "random" devices. This should not be the design principle.
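To make the 2-step idea from above a bit more concrete (only a sketch, not
patch code; TYPE_MEMORY_DEVICE stands for whatever interface we end up
factoring out): the machine hotplug handler would only ever act on devices
implementing that interface, so no "random" device gets to touch board
resources:

    /* sketch: step 1, resource allocation done by the machine (container owner) */
    static void machine_memory_device_pre_plug(HotplugHandler *hotplug_dev,
                                               DeviceState *dev, Error **errp)
    {
        MachineState *ms = MACHINE(hotplug_dev);

        if (!object_dynamic_cast(OBJECT(dev), TYPE_MEMORY_DEVICE)) {
            return; /* nothing for the machine to do */
        }

        /* assign an address in device_memory and map the memory region;
         * the guest cannot see the device yet */
        ...
    }

Step 2 (the ACPI/PCI/CCW hotplug handler) would then only attach the device
and notify the guest.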

> 
> In the case of virtio-pmem it might be much cleaner to use the
> mapping mechanism provided by the PCI subsystem than to try
> to bridge bus and bus-less device wiring, as from a device
> modeling point of view (aside from providing RAM to the guest)
> these are 2 quite different devices.

And again: Please don't forget virtio-ccw. We _don't_ want to glue
virtio device specifics to the underlying proxy here.
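The MemoryDevice interface this series factors out of pc-dimm is exactly what
a virtio-(p)mem device itself (not the virtio-pci/virtio-ccw proxy) would
implement. Roughly (a sketch from memory; see the patches for the real
definition):

    /* sketch of the factored-out interface */
    typedef struct MemoryDeviceClass {
        InterfaceClass parent_class;

        uint64_t (*get_addr)(const MemoryDeviceState *md);
        uint64_t (*get_plugged_size)(const MemoryDeviceState *md);
        uint64_t (*get_region_size)(const MemoryDeviceState *md);
        void (*fill_device_info)(const MemoryDeviceState *md,
                                 MemoryDeviceInfo *info);
    } MemoryDeviceClass;

Which proxy (PCI, CCW, MMIO) sits in front of the device should not matter for
address assignment or for query-memory-devices.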

> 
> i.e. if you think the new device is RAM, which is governed by the
> -m option, then model it as a bus-less device like a DIMM and
> plug it directly into the board; if it's plugged into a bus,
> it's that bus owner's responsibility to allocate/manage the
> address space or to bridge it to the parent device.
> 
> (btw: virtio-pmem looks sort of like ivshmem, maybe they
> can share some code on the QEMU side)
> 
>> Of course, we could also take care of pre-plug things as you mentioned.
>>
>>>
>>> The container MachineState::device_memory is owned by the machine and
>>> it's up to the machine plug handler (the container's owner) to map the
>>> device's mr into its address space.
>>> (i.e. neither the device's realize nor the PCI bus hotplug handler should do it)
>>
>> I agree, but I think these are internal details.
> These are internal details that we choose not to violate in QEMU,
> and we have been working in that direction, getting rid of places
> that do it wrongly.

Yes, and I'll try my best to avoid it.

> 
>>> Not sure about virtio-mem, but if it were to use the device_memory container,
>>> it should use the machine's plug handler.
>>>
>>> I don't have any off-hand ideas on how to glue it cleanly; maybe
>>> MachineState::device_memory is just not the right thing to use
>>> for such devices.
>>
>> I strongly disagree. From the user's point of view it should not matter what
>> was added/plugged. There is just one guest physical memory and maxmem is
>> defined for one QEMU instance. Exposing such details to the user should
>> definitely be avoided.
> The QEMU user is already exposed to such details, as they already add
>  -device virtio-pmem,....
> to the CLI. maxmem accounting is a separate matter and probably
> shouldn't be mixed with the device model and how it's mapped into
> the guest's address space.

I can't follow. Please step back and have a look at how it works on the
qemu command line:

1. You specify a maxmem option
2. You plug DIMM/NVDIMM/virtio-mem/virtio-pmem

Some machines (e.g. s390x) use maxmem to set up the maximum possible
guest address space in KVM.

Just because DIMM/NVDIMM was the first user does not mean that it is the
only valid user. That is also the reason why it is named
"query-memory-devices" and not "query-dimm-devices". The abstraction is
there for a reason.
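E.g. something along these lines (pc-dimm shown because it is already merged;
a virtio-(p)mem device would simply be another user of the same maxmem
envelope):

    # the maxmem/slots envelope is specified once on the command line ...
    qemu-system-x86_64 -m 4G,slots=4,maxmem=16G \
        -object memory-backend-ram,id=mem0,size=2G \
        -device pc-dimm,id=dimm0,memdev=mem0 ...

    # ... and "info memory-devices" / query-memory-devices then reports
    # whatever memory device is plugged, not just DIMMs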

-- 

Thanks,

David / dhildenb


