From: Paolo Bonzini
Subject: [Qemu-devel] [PULL 19/47] x86: Use g_new() & friends where that makes obvious sense
Date: Mon, 15 Dec 2014 17:38:03 +0100

From: Markus Armbruster <address@hidden>

g_new(T, n) is neater than g_malloc(sizeof(T) * n).  It's also safer,
for two reasons.  One, it catches multiplication overflowing size_t.
Two, it returns T * rather than void *, which lets the compiler catch
more type errors.

This commit only touches allocations with size arguments of the form
sizeof(T).

Signed-off-by: Markus Armbruster <address@hidden>
Reviewed-by: Eric Blake <address@hidden>
Signed-off-by: Paolo Bonzini <address@hidden>
---
 hw/i386/pc.c      | 3 +--
 target-i386/kvm.c | 2 +-
 2 files changed, 2 insertions(+), 3 deletions(-)

diff --git a/hw/i386/pc.c b/hw/i386/pc.c
index 8be50a4..60c1d54 100644
--- a/hw/i386/pc.c
+++ b/hw/i386/pc.c
@@ -601,8 +601,7 @@ int e820_add_entry(uint64_t address, uint64_t length, uint32_t type)
     }
 
     /* new "etc/e820" file -- include ram too */
-    e820_table = g_realloc(e820_table,
-                           sizeof(struct e820_entry) * (e820_entries+1));
+    e820_table = g_renew(struct e820_entry, e820_table, e820_entries + 1);
     e820_table[e820_entries].address = cpu_to_le64(address);
     e820_table[e820_entries].length = cpu_to_le64(length);
     e820_table[e820_entries].type = cpu_to_le32(type);
diff --git a/target-i386/kvm.c b/target-i386/kvm.c
index 60c4475..8832a02 100644
--- a/target-i386/kvm.c
+++ b/target-i386/kvm.c
@@ -278,7 +278,7 @@ static void kvm_hwpoison_page_add(ram_addr_t ram_addr)
             return;
         }
     }
-    page = g_malloc(sizeof(HWPoisonPage));
+    page = g_new(HWPoisonPage, 1);
     page->ram_addr = ram_addr;
     QLIST_INSERT_HEAD(&hwpoison_page_list, page, list);
 }
-- 
1.8.3.1




