Subject: [PULL 18/33] util/mmap-alloc: Factor out calculation of the pagesize for the guard page
From: Paolo Bonzini
Date: Tue, 15 Jun 2021 15:38:40 +0200
From: David Hildenbrand <david@redhat.com>
Let's factor out calculating the size of the guard page and rename the
variable to make it clearer that this pagesize only applies to the
guard page.
Reviewed-by: Peter Xu <peterx@redhat.com>
Acked-by: Murilo Opsfelder Araujo <muriloo@linux.ibm.com>
Acked-by: Eduardo Habkost <ehabkost@redhat.com> for memory backend and machine core
Cc: Igor Kotrasinski <i.kotrasinsk@partner.samsung.com>
Signed-off-by: David Hildenbrand <david@redhat.com>
Message-Id: <20210510114328.21835-2-david@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
---
util/mmap-alloc.c | 31 ++++++++++++++++---------------
1 file changed, 16 insertions(+), 15 deletions(-)
diff --git a/util/mmap-alloc.c b/util/mmap-alloc.c
index e6fa8b598b..24854064b4 100644
--- a/util/mmap-alloc.c
+++ b/util/mmap-alloc.c
@@ -82,6 +82,16 @@ size_t qemu_mempath_getpagesize(const char *mem_path)
return qemu_real_host_page_size;
}
+static inline size_t mmap_guard_pagesize(int fd)
+{
+#if defined(__powerpc64__) && defined(__linux__)
+ /* Mappings in the same segment must share the same page size */
+ return qemu_fd_getpagesize(fd);
+#else
+ return qemu_real_host_page_size;
+#endif
+}
+
void *qemu_ram_mmap(int fd,
size_t size,
size_t align,
@@ -90,12 +100,12 @@ void *qemu_ram_mmap(int fd,
bool is_pmem,
off_t map_offset)
{
+ const size_t guard_pagesize = mmap_guard_pagesize(fd);
int prot;
int flags;
int map_sync_flags = 0;
int guardfd;
size_t offset;
- size_t pagesize;
size_t total;
void *guardptr;
void *ptr;
@@ -116,8 +126,7 @@ void *qemu_ram_mmap(int fd,
* anonymous memory is OK.
*/
flags = MAP_PRIVATE;
- pagesize = qemu_fd_getpagesize(fd);
- if (fd == -1 || pagesize == qemu_real_host_page_size) {
+ if (fd == -1 || guard_pagesize == qemu_real_host_page_size) {
guardfd = -1;
flags |= MAP_ANONYMOUS;
} else {
@@ -126,7 +135,6 @@ void *qemu_ram_mmap(int fd,
}
#else
guardfd = -1;
- pagesize = qemu_real_host_page_size;
flags = MAP_PRIVATE | MAP_ANONYMOUS;
#endif
@@ -138,7 +146,7 @@ void *qemu_ram_mmap(int fd,
assert(is_power_of_2(align));
/* Always align to host page size */
- assert(align >= pagesize);
+ assert(align >= guard_pagesize);
flags = MAP_FIXED;
flags |= fd == -1 ? MAP_ANONYMOUS : 0;
@@ -193,8 +201,8 @@ void *qemu_ram_mmap(int fd,
* a guard page guarding against potential buffer overflows.
*/
total -= offset;
- if (total > size + pagesize) {
- munmap(ptr + size + pagesize, total - size - pagesize);
+ if (total > size + guard_pagesize) {
+ munmap(ptr + size + guard_pagesize, total - size - guard_pagesize);
}
return ptr;
@@ -202,15 +210,8 @@ void *qemu_ram_mmap(int fd,
void qemu_ram_munmap(int fd, void *ptr, size_t size)
{
- size_t pagesize;
-
if (ptr) {
/* Unmap both the RAM block and the guard page */
-#if defined(__powerpc64__) && defined(__linux__)
- pagesize = qemu_fd_getpagesize(fd);
-#else
- pagesize = qemu_real_host_page_size;
-#endif
- munmap(ptr, size + pagesize);
+ munmap(ptr, size + mmap_guard_pagesize(fd));
}
}
--
2.31.1