qemu-devel

Re: [PATCH] hw/riscv/virt: Add hotplugging and virtio-md-pci support


From: Björn Töpel
Subject: Re: [PATCH] hw/riscv/virt: Add hotplugging and virtio-md-pci support
Date: Mon, 20 May 2024 20:51:06 +0200

Daniel/David,

Daniel Henrique Barboza <dbarboza@ventanamicro.com> writes:

> On 5/18/24 16:50, David Hildenbrand wrote:
>> 
>> Hi,
>> 
>> 
>>>> diff --git a/hw/riscv/virt.c b/hw/riscv/virt.c
>>>> index 4fdb66052587..16c2bdbfe6b6 100644
>>>> --- a/hw/riscv/virt.c
>>>> +++ b/hw/riscv/virt.c
>>>> @@ -53,6 +53,8 @@
>>>>    #include "hw/pci-host/gpex.h"
>>>>    #include "hw/display/ramfb.h"
>>>>    #include "hw/acpi/aml-build.h"
>>>> +#include "hw/mem/memory-device.h"
>>>> +#include "hw/virtio/virtio-mem-pci.h"
>>>>    #include "qapi/qapi-visit-common.h"
>>>>    #include "hw/virtio/virtio-iommu.h"
>>>> @@ -1407,6 +1409,7 @@ static void virt_machine_init(MachineState *machine)
>>>>        DeviceState *mmio_irqchip, *virtio_irqchip, *pcie_irqchip;
>>>>        int i, base_hartid, hart_count;
>>>>        int socket_count = riscv_socket_count(machine);
>>>> +    hwaddr device_memory_base, device_memory_size;
>>>>        /* Check socket count limit */
>>>>        if (VIRT_SOCKETS_MAX < socket_count) {
>>>> @@ -1553,6 +1556,25 @@ static void virt_machine_init(MachineState *machine)
>>>>        memory_region_add_subregion(system_memory, memmap[VIRT_MROM].base,
>>>>                                    mask_rom);
>>>> +    device_memory_base = ROUND_UP(s->memmap[VIRT_DRAM].base + machine->ram_size,
>>>> +                                  GiB);
>>>> +    device_memory_size = machine->maxram_size - machine->ram_size;
>>>> +
>>>> +    if (riscv_is_32bit(&s->soc[0])) {
>>>> +        hwaddr memtop = device_memory_base + ROUND_UP(device_memory_size, GiB);
>>>> +
>>>> +        if (memtop > UINT32_MAX) {
>>>> +            error_report("Memory exceeds 32-bit limit by %lu bytes",
>>>> +                         memtop - UINT32_MAX);
>>>> +            exit(EXIT_FAILURE);
>>>> +        }
>>>> +    }
>>>> +
>>>> +    if (device_memory_size > 0) {
>>>> +        machine_memory_devices_init(machine, device_memory_base,
>>>> +                                    device_memory_size);
>>>> +    }
>>>> +
>>>
>>> I think we need a design discussion before proceeding here. You're
>>> allocating all available memory as a memory device area, but in theory
>>> we might also support pc-dimm hotplug (which would be the equivalent of
>>> adding physical RAM dimms to the board) in the future too. If you're not
>>> familiar with this feature you can check out the docs in [1].
>> 
>> Note that DIMMs are memory devices as well. You can plug both ACPI-based
>> memory devices (DIMM, NVDIMM) and virtio-based memory devices
>> (virtio-mem, virtio-pmem) into the memory device area.
>> 
>>>
>>> As an example, the 'virt' ARM board (hw/arm/virt.c) reserves a space for 
>>> this
>>> type of hotplug by checking how much 'ram_slots' we're allocating for it:
>>>
>>> device_memory_size = ms->maxram_size - ms->ram_size + ms->ram_slots * GiB;
>>>
>> 
>> Note that we increased the region size to be able to fit most requests
>> even if alignment of memory devices is weird. See below.
>> 
>> In sane setups, this is usually not required (adding a single additional
>> GiB for some flexibility might be good enough).
>> 
>>> Other boards do the same with ms->ram_slots. We should consider doing it
>>> as well, now, even if we're not up to the point of supporting pc-dimm
>>> hotplug, to avoid having to change the memory layout down the road and
>>> breaking existing setups.
>>>
>>> If we want to copy the ARM board, ram_slots is capped to
>>> ACPI_MAX_RAM_SLOTS (256). Each RAM slot is considered to be a 1GiB dimm,
>>> i.e. we would reserve 256GiB for them.
>> 
>> This only reserves some *additional* space to fix up weird alignment of
>> memory devices, *not* the actual space for these devices.
>> 
>> We don't consider each DIMM to be 1 GiB in size, but add an additional
>> 1 GiB in case we have to align DIMMs in physical address space.
>> 
>> I *think* this dates back to old x86 handling where we aligned the
>> address of each DIMM to be at a 1 GiB boundary. So if you would have
>> plugged two 128 MiB DIMMs, you'd have required more than 256 MiB of space
>> in the area after aligning inside the memory device area.
>> 
>
> Thanks for the explanation. I missed the part where the ram_slots were
> being used just to solve potential alignment issues, and that pc-dimms
> could occupy the same space being allocated via
> machine_memory_devices_init().
>
> This patch isn't far off then. If we take care to avoid plugging
> unaligned memory we might not even need this spare area.

I'm a bit lost here, so please bear with me. We don't require the 1 GiB
alignment on RV, AFAIU. I'm having a hard time figuring out what's
missing from my patch.

[...]

>>> I see that David Hildenbrand is also CCed in the patch so he'll let us know 
>>> if
>>> I'm out of line with what I'm asking.
>> 
>> Supporting PC-DIMMs might be required at some point when dealing with OSes 
>> that don't support virtio-mem and friends.

...and also for testing the PC-DIMM ACPI patching path. ;-)


Cheers,
Björn
