Re: [PATCH v2 26/39] tests/tcg: make test-mmap a little less aggressive


From: Thomas Huth
Subject: Re: [PATCH v2 26/39] tests/tcg: make test-mmap a little less aggressive
Date: Fri, 9 Jul 2021 09:15:36 +0200
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101 Thunderbird/78.11.0

On 08/07/2021 21.09, Alex Bennée wrote:
> The check_aligned_anonymous_unfixed_mmaps and
> check_aligned_anonymous_unfixed_colliding_mmaps do a lot of mmaps and
> copying of data. This is especially unfriendly to targets like hexagon,
> which have quite large pages and need to do sanity checks on each
> memory access.
> 
> Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
> ---
>  tests/tcg/multiarch/test-mmap.c | 6 +++---
>  1 file changed, 3 insertions(+), 3 deletions(-)
> 
> diff --git a/tests/tcg/multiarch/test-mmap.c b/tests/tcg/multiarch/test-mmap.c
> index 11d0e777b1..b77deee37e 100644
> --- a/tests/tcg/multiarch/test-mmap.c
> +++ b/tests/tcg/multiarch/test-mmap.c
> @@ -58,12 +58,12 @@ void check_aligned_anonymous_unfixed_mmaps(void)
>         int i;
> 
>         fprintf(stdout, "%s", __func__);
> -       for (i = 0; i < 0x1fff; i++)
> +       for (i = 0; i < 0x1ff; i++)
>         {

While you're at it, you could also fix the coding style here and put the curly bracket on the right-hand side of the for statement.

>                 size_t len;
> 
>                 len = pagesize + (pagesize * i & 7);
> -               p1 = mmap(NULL, len, PROT_READ,
> +               p1 = mmap(NULL, len, PROT_READ,
>                           MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
>                 p2 = mmap(NULL, len, PROT_READ,
>                           MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
> @@ -142,7 +142,7 @@ void check_aligned_anonymous_unfixed_colliding_mmaps(void)
>         int i;
> 
>         fprintf(stdout, "%s", __func__);
> -       for (i = 0; i < 0x2fff; i++)
> +       for (i = 0; i < 0x2ff; i++)

Ditto.

>         {
>                 int nlen;
> 
>                 p1 = mmap(NULL, pagesize, PROT_READ,

 Thomas



