Re: [Qemu-devel] [PATCH v3 23/29] bitmap: add atomic set functions
From: Fam Zheng
Subject: Re: [Qemu-devel] [PATCH v3 23/29] bitmap: add atomic set functions
Date: Thu, 28 May 2015 10:15:14 +0800
User-agent: Mutt/1.5.23 (2014-03-12)
On Tue, 05/26 18:54, Paolo Bonzini wrote:
> From: Stefan Hajnoczi <address@hidden>
>
> Use atomic_or() for atomic bitmaps where several threads may set bits at
> the same time. This avoids the race condition between threads loading
> an element, bitwise ORing, and then storing the element.
>
> When setting all bits in a word we can avoid atomic ops and instead just
> use an smp_mb() at the end.
>
> Most bitmap users don't need atomicity so introduce new functions.
>
> Signed-off-by: Stefan Hajnoczi <address@hidden>
> Message-Id: <address@hidden>
> [Avoid barrier in the single word case, use full barrier instead of write.
> - Paolo]
> Signed-off-by: Paolo Bonzini <address@hidden>
> ---
> include/qemu/bitmap.h | 2 ++
> include/qemu/bitops.h | 14 ++++++++++++++
> util/bitmap.c | 38 ++++++++++++++++++++++++++++++++++++++
> 3 files changed, 54 insertions(+)
>
> diff --git a/include/qemu/bitmap.h b/include/qemu/bitmap.h
> index f0273c9..3e0a4f3 100644
> --- a/include/qemu/bitmap.h
> +++ b/include/qemu/bitmap.h
> @@ -39,6 +39,7 @@
> * bitmap_empty(src, nbits) Are all bits zero in *src?
> * bitmap_full(src, nbits) Are all bits set in *src?
> * bitmap_set(dst, pos, nbits) Set specified bit area
> + * bitmap_set_atomic(dst, pos, nbits) Set specified bit area with atomic ops
> * bitmap_clear(dst, pos, nbits) Clear specified bit area
> * bitmap_find_next_zero_area(buf, len, pos, n, mask) Find bit free area
> */
> @@ -226,6 +227,7 @@ static inline int bitmap_intersects(const unsigned long *src1,
> }
>
> void bitmap_set(unsigned long *map, long i, long len);
> +void bitmap_set_atomic(unsigned long *map, long i, long len);
> void bitmap_clear(unsigned long *map, long start, long nr);
> unsigned long bitmap_find_next_zero_area(unsigned long *map,
> unsigned long size,
> diff --git a/include/qemu/bitops.h b/include/qemu/bitops.h
> index 8abdcf9..8164225 100644
> --- a/include/qemu/bitops.h
> +++ b/include/qemu/bitops.h
> @@ -16,6 +16,7 @@
> #include <assert.h>
>
> #include "host-utils.h"
> +#include "atomic.h"
>
> #define BITS_PER_BYTE CHAR_BIT
> #define BITS_PER_LONG (sizeof (unsigned long) * BITS_PER_BYTE)
> @@ -39,6 +40,19 @@ static inline void set_bit(long nr, unsigned long *addr)
> }
>
> /**
> + * set_bit_atomic - Set a bit in memory atomically
> + * @nr: the bit to set
> + * @addr: the address to start counting from
> + */
> +static inline void set_bit_atomic(long nr, unsigned long *addr)
> +{
> + unsigned long mask = BIT_MASK(nr);
> + unsigned long *p = addr + BIT_WORD(nr);
> +
> + atomic_or(p, mask);
> +}
> +
> +/**
> * clear_bit - Clears a bit in memory
> * @nr: Bit to clear
> * @addr: Address to start counting from
> diff --git a/util/bitmap.c b/util/bitmap.c
> index 9c6bb52..39994af 100644
> --- a/util/bitmap.c
> +++ b/util/bitmap.c
> @@ -11,6 +11,7 @@
>
> #include "qemu/bitops.h"
> #include "qemu/bitmap.h"
> +#include "qemu/atomic.h"
>
> /*
> * bitmaps provide an array of bits, implemented using an
> @@ -177,6 +178,43 @@ void bitmap_set(unsigned long *map, long start, long nr)
> }
> }
>
> +void bitmap_set_atomic(unsigned long *map, long start, long nr)
> +{
> + unsigned long *p = map + BIT_WORD(start);
> + const long size = start + nr;
> + int bits_to_set = BITS_PER_LONG - (start % BITS_PER_LONG);
> + unsigned long mask_to_set = BITMAP_FIRST_WORD_MASK(start);
> +
> + /* First word */
> + if (nr - bits_to_set > 0) {
> + atomic_or(p, mask_to_set);
> + nr -= bits_to_set;
> + bits_to_set = BITS_PER_LONG;
> + mask_to_set = ~0UL;
> + p++;
> + }
> +
> + /* Full words */
> + if (bits_to_set == BITS_PER_LONG) {
> + while (nr >= BITS_PER_LONG) {
> + *p = ~0UL;
> + nr -= BITS_PER_LONG;
> + p++;
Out of curiosity: why not use a memset here?
Reviewed-by: Fam Zheng <address@hidden>
> + }
> + }
> +
> + /* Last word */
> + if (nr) {
> + mask_to_set &= BITMAP_LAST_WORD_MASK(size);
> + atomic_or(p, mask_to_set);
> + } else {
> + /* If we avoided the full barrier in atomic_or(), issue a
> + * barrier to account for the assignments in the while loop.
> + */
> + smp_mb();
> + }
> +}
> +
> void bitmap_clear(unsigned long *map, long start, long nr)
> {
> unsigned long *p = map + BIT_WORD(start);
> --
> 1.8.3.1