Re: [Qemu-devel] [PATCH] extend limit of physical sections number


From: Peter Maydell
Subject: Re: [Qemu-devel] [PATCH] extend limit of physical sections number
Date: Tue, 5 Nov 2013 00:36:16 +0000

On 27 September 2013 17:49, Amos Kong <address@hidden> wrote:
>  # qemu -drive file=/disk0,if=none,id=v0,format=qcow2 \
>  -device virtio-blk-pci,drive=v0,id=v00,multifunction=on,addr=0x04.0
>  ....
>
> When launching a guest with more than 32 virtio-blk disks, QEMU
> will crash because there are too many BARs.
>
> This patch raises the limit for the non-TCG case by a factor
> of 8 (32767 / 4096), i.e. 32*8 = 256 disks.
>
> Signed-off-by: Paolo Bonzini <address@hidden>
> Signed-off-by: Amos Kong <address@hidden>
> ---
>  exec.c | 17 ++++++++++++-----
>  1 file changed, 12 insertions(+), 5 deletions(-)
>
> diff --git a/exec.c b/exec.c
> index 5aef833..f639c01 100644
> --- a/exec.c
> +++ b/exec.c
> @@ -763,11 +763,18 @@ void phys_mem_set_alloc(void *(*alloc)(ram_addr_t))
>
>  static uint16_t phys_section_add(MemoryRegionSection *section)
>  {
> -    /* The physical section number is ORed with a page-aligned
> -     * pointer to produce the iotlb entries.  Thus it should
> -     * never overflow into the page-aligned value.
> -     */
> -    assert(next_map.sections_nb < TARGET_PAGE_SIZE);
> +    if (tcg_enabled()) {
> +        /* The physical section number is ORed with a page-aligned
> +         * pointer to produce the iotlb entries.  Thus it should
> +         * never overflow into the page-aligned value.
> +         */
> +        assert(next_map.sections_nb < TARGET_PAGE_SIZE);
> +    } else {
> +        /* For KVM or Xen we can use the full range of the ptr field
> +         * in PhysPageEntry.
> +         */
> +        assert(next_map.sections_nb < SHRT_MAX);
> +    }

This looks really weird. Why should the memory subsystem
care whether we're using TCG or KVM or Xen?

-- PMM
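
As background on the assertion under discussion: the quoted comment says the
physical section number is ORed with a page-aligned pointer to produce iotlb
entries, so under TCG the index must stay below TARGET_PAGE_SIZE or it would
spill into the address bits. A minimal standalone sketch of that packing
(illustrative C only, not QEMU code; make_iotlb_entry is a made-up helper):

#include <assert.h>
#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

#define TARGET_PAGE_BITS 12
#define TARGET_PAGE_SIZE ((uint64_t)1 << TARGET_PAGE_BITS)   /* 4096 */
#define TARGET_PAGE_MASK (~(TARGET_PAGE_SIZE - 1))

/* Hypothetical helper: pack a page-aligned address and a section index into
 * one word by ORing them, the way the quoted comment describes the iotlb
 * encoding. The index may only occupy the low TARGET_PAGE_BITS bits. */
static uint64_t make_iotlb_entry(uint64_t page_addr, uint16_t section_idx)
{
    assert((page_addr & ~TARGET_PAGE_MASK) == 0);  /* address is page aligned */
    assert(section_idx < TARGET_PAGE_SIZE);        /* index fits in low bits  */
    return page_addr | section_idx;
}

int main(void)
{
    uint64_t entry = make_iotlb_entry((uint64_t)0x7f0000 << TARGET_PAGE_BITS, 42);

    /* Both halves can be recovered because they never overlap. */
    printf("addr=0x%" PRIx64 " section=%u\n",
           entry & TARGET_PAGE_MASK,
           (unsigned)(entry & (TARGET_PAGE_SIZE - 1)));
    return 0;
}

The SHRT_MAX bound in the non-TCG branch presumably reflects the width of the
ptr index in PhysPageEntry (32767 = 2^15 - 1), where no such OR-packing applies.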


