From: Igor Mammedov
Subject: Re: [qemu-s390x] [PATCH v4 01/14] memory-device: drop assert related to align and start of address space
Date: Tue, 29 May 2018 15:27:14 +0200

On Thu, 17 May 2018 10:15:14 +0200
David Hildenbrand <address@hidden> wrote:

> The start of the address space does not have to be aligned for the
> search. Handle this case explicitly when starting the search for a new
> address.
That's true, but the commit message doesn't explain why
address_space_start should be allowed to be non-aligned.

At least with this assert we would notice early that a board is
allocating a misaligned address space.
I'd keep the assert unless there is a good reason to drop it.


> 
> Signed-off-by: David Hildenbrand <address@hidden>
> ---
>  hw/mem/memory-device.c | 3 +--
>  1 file changed, 1 insertion(+), 2 deletions(-)
> 
> diff --git a/hw/mem/memory-device.c b/hw/mem/memory-device.c
> index 3e04f3954e..361d38bfc5 100644
> --- a/hw/mem/memory-device.c
> +++ b/hw/mem/memory-device.c
> @@ -116,7 +116,6 @@ uint64_t memory_device_get_free_addr(MachineState *ms, const uint64_t *hint,
>      address_space_start = ms->device_memory->base;
>      address_space_end = address_space_start +
>                          memory_region_size(&ms->device_memory->mr);
> -    g_assert(QEMU_ALIGN_UP(address_space_start, align) == address_space_start);
>      g_assert(address_space_end >= address_space_start);
>  
>      memory_device_check_addable(ms, size, errp);
> @@ -149,7 +148,7 @@ uint64_t memory_device_get_free_addr(MachineState *ms, const uint64_t *hint,
>              return 0;
>          }
>      } else {
> -        new_addr = address_space_start;
> +        new_addr = QEMU_ALIGN_UP(address_space_start, align);
>      }
>  
>      /* find address range that will fit new memory device */
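
For illustration, a minimal standalone sketch of the behavioral
difference (ALIGN_DOWN/ALIGN_UP here are local stand-ins mirroring
QEMU_ALIGN_DOWN()/QEMU_ALIGN_UP() from include/qemu/osdep.h; the base
and alignment values are made up):

    #include <inttypes.h>
    #include <stdint.h>
    #include <stdio.h>

    /* same rounding as QEMU_ALIGN_DOWN()/QEMU_ALIGN_UP() */
    #define ALIGN_DOWN(n, m) ((n) / (m) * (m))
    #define ALIGN_UP(n, m)   ALIGN_DOWN((n) + (m) - 1, (m))

    int main(void)
    {
        uint64_t address_space_start = 0x100000800; /* hypothetical, misaligned */
        uint64_t align = 0x1000;

        /* Before the patch, a misaligned start would trip
         * g_assert(ALIGN_UP(address_space_start, align) == address_space_start).
         * After the patch, the search simply begins at the next aligned
         * address instead: */
        uint64_t new_addr = ALIGN_UP(address_space_start, align);

        printf("search starts at 0x%" PRIx64 "\n", new_addr); /* 0x100001000 */
        return 0;
    }

(With the patch applied, the board remains responsible for a sane
layout; the search merely no longer aborts on a non-aligned
address_space_start.)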



