From: Alexey Kardashevskiy
Subject: [Qemu-ppc] [PATCH v2 4/6] spapr: Split memory nodes to power-of-two blocks
Date: Wed, 25 Jun 2014 02:43:09 +1000

The Linux kernel expects memory nodes to have a power-of-two size and
does WARN_ON if this is not the case:
[    0.041456] WARNING: at drivers/base/memory.c:115
which is:
===
        /* Validate blk_sz is a power of 2 and not less than section size */
        if ((block_sz & (block_sz - 1)) || (block_sz < MIN_MEMORY_BLOCK_SIZE)) {
                WARN_ON(1);
                block_sz = MIN_MEMORY_BLOCK_SIZE;
        }
===

This splits memory nodes into a set of smaller blocks, each with
a size that is a power of two. This ensures the start address of
every node is aligned to the node size.

Signed-off-by: Alexey Kardashevskiy <address@hidden>
---
Changes:
v2:
* tiny code cleanup in "sizetmp = MIN(sizetmp, 1 << (ffs(mem_start) - 1))"
* updated commit log with a piece of kernel code doing WARN_ON
---
 hw/ppc/spapr.c | 14 ++++++++++++--
 1 file changed, 12 insertions(+), 2 deletions(-)

diff --git a/hw/ppc/spapr.c b/hw/ppc/spapr.c
index 64f48de..0ec1dfc 100644
--- a/hw/ppc/spapr.c
+++ b/hw/ppc/spapr.c
@@ -719,8 +719,18 @@ static int spapr_populate_memory(sPAPREnvironment *spapr, void *fdt)
             mem_start += spapr->rma_size;
             node_size -= spapr->rma_size;
         }
-        spapr_populate_memory_node(fdt, i, mem_start, node_size);
-        mem_start += node_size;
+        for ( ; node_size; ) {
+            hwaddr sizetmp = pow2floor(node_size);
+
+            /* mem_start != 0 here */
+            if (ffs(mem_start) < ffs(sizetmp)) {
+                sizetmp = 1 << (ffs(mem_start) - 1);
+            }
+
+            spapr_populate_memory_node(fdt, i, mem_start, sizetmp);
+            node_size -= sizetmp;
+            mem_start += sizetmp;
+        }
     }
 
     return 0;
-- 
2.0.0



