[PULL 19/34] tcg: Tidy tcg_n_regions
From: Richard Henderson
Subject: [PULL 19/34] tcg: Tidy tcg_n_regions
Date: Fri, 11 Jun 2021 16:41:29 -0700
Compute the value using straight division and bounds,
rather than a loop. Pass in tb_size rather than reading
from tcg_init_ctx.code_gen_buffer_size.
Reviewed-by: Luis Pires <luis.pires@eldorado.org.br>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
---
tcg/region.c | 29 ++++++++++++-----------------
1 file changed, 12 insertions(+), 17 deletions(-)
diff --git a/tcg/region.c b/tcg/region.c
index b143eaf69c..037a01e4ed 100644
--- a/tcg/region.c
+++ b/tcg/region.c
@@ -364,38 +364,33 @@ void tcg_region_reset_all(void)
     tcg_region_tree_reset_all();
 }
 
-static size_t tcg_n_regions(unsigned max_cpus)
+static size_t tcg_n_regions(size_t tb_size, unsigned max_cpus)
 {
 #ifdef CONFIG_USER_ONLY
     return 1;
 #else
+    size_t n_regions;
+
     /*
      * It is likely that some vCPUs will translate more code than others,
      * so we first try to set more regions than max_cpus, with those regions
      * being of reasonable size. If that's not possible we make do by evenly
      * dividing the code_gen_buffer among the vCPUs.
      */
-    size_t i;
-
     /* Use a single region if all we have is one vCPU thread */
     if (max_cpus == 1 || !qemu_tcg_mttcg_enabled()) {
         return 1;
     }
-    /* Try to have more regions than max_cpus, with each region being >= 2 MB */
-    for (i = 8; i > 0; i--) {
-        size_t regions_per_thread = i;
-        size_t region_size;
-
-        region_size = tcg_init_ctx.code_gen_buffer_size;
-        region_size /= max_cpus * regions_per_thread;
-
-        if (region_size >= 2 * 1024u * 1024) {
-            return max_cpus * regions_per_thread;
-        }
+    /*
+     * Try to have more regions than max_cpus, with each region being >= 2 MB.
+     * If we can't, then just allocate one region per vCPU thread.
+     */
+    n_regions = tb_size / (2 * MiB);
+    if (n_regions <= max_cpus) {
+        return max_cpus;
     }
-    /* If we can't, then just allocate one region per vCPU thread */
-    return max_cpus;
+    return MIN(n_regions, max_cpus * 8);
 #endif
 }
@@ -833,7 +828,7 @@ void tcg_region_init(size_t tb_size, int splitwx, unsigned max_cpus)
     buf = tcg_init_ctx.code_gen_buffer;
     total_size = tcg_init_ctx.code_gen_buffer_size;
     page_size = qemu_real_host_page_size;
-    n_regions = tcg_n_regions(max_cpus);
+    n_regions = tcg_n_regions(total_size, max_cpus);
 
     /* The first region will be 'aligned - buf' bytes larger than the others */
     aligned = QEMU_ALIGN_PTR_UP(buf, page_size);
--
2.25.1