qemu-devel

Re: [PATCH v3 19/28] tcg: Tidy split_cross_256mb


From: Richard Henderson
Subject: Re: [PATCH v3 19/28] tcg: Tidy split_cross_256mb
Date: Thu, 10 Jun 2021 08:20:47 -0700
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101 Thunderbird/78.8.1

On 6/9/21 7:59 AM, Luis Fernando Fujita Pires wrote:
> From: Richard Henderson <richard.henderson@linaro.org>
>> Return output buffer and size via output pointer arguments, rather than
>> returning size via tcg_ctx->code_gen_buffer_size.
>>
>> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
>> ---
>>   tcg/region.c | 15 +++++++--------
>>   1 file changed, 7 insertions(+), 8 deletions(-)
>>
>> diff --git a/tcg/region.c b/tcg/region.c
>> index b44246e1aa..652f328d2c 100644
>> --- a/tcg/region.c
>> +++ b/tcg/region.c
>> @@ -467,7 +467,8 @@ static inline bool cross_256mb(void *addr, size_t size)
>>   /* We weren't able to allocate a buffer without crossing that boundary,
>>      so make do with the larger portion of the buffer that doesn't cross.
>>      Returns the new base of the buffer, and adjusts code_gen_buffer_size.  */
>> -static inline void *split_cross_256mb(void *buf1, size_t size1)
>> +static inline void split_cross_256mb(void **obuf, size_t *osize,
>> +                                     void *buf1, size_t size1)
>
> Need to adjust the comment, now that we're no longer adjusting
> code_gen_buffer_size in here.

Done, thanks.

>> @@ -583,8 +583,7 @@ static bool alloc_code_gen_buffer_anon(size_t size, int prot,
>>               /* fallthru */
>>           default:
>>               /* Split the original buffer.  Free the smaller half.  */
>> -            buf2 = split_cross_256mb(buf, size);
>> -            size2 = tcg_ctx->code_gen_buffer_size;
>> +            split_cross_256mb(&buf2, &size2, buf, size);
>
> This will be fixed by patch 21 (tcg: Allocate code_gen_buffer into struct
> tcg_region_state), but shouldn't we update tcg_ctx->code_gen_buffer_size here?

Good catch.  I moved the store to _size from above to below.


r~


