From: Chunyan Liu
Subject: Re: [Qemu-devel] [PATCH v20 13/26] qed.c: replace QEMUOptionParameter with QemuOpts
Date: Tue, 18 Feb 2014 15:42:17 +0800
On 02/11/2014 11:33 PM, Chunyan Liu wrote:
> qed.c: replace QEMUOptionParameter with QemuOpts
>
> Signed-off-by: Dong Xu Wang <address@hidden>
> Signed-off-by: Chunyan Liu <address@hidden>
> ---
> block/qed.c | 89 +++++++++++++++++++++++++++++------------------------------
> block/qed.h | 3 +-
> 2 files changed, 45 insertions(+), 47 deletions(-)
>
> + cluster_size = qemu_opt_get_size_del(opts,
> + BLOCK_OPT_CLUSTER_SIZE,
> + QED_DEFAULT_CLUSTER_SIZE);
> + table_size = qemu_opt_get_size_del(opts, BLOCK_OPT_TABLE_SIZE,
> + QED_DEFAULT_TABLE_SIZE);
>
> + {
> +     .name = BLOCK_OPT_CLUSTER_SIZE,
> +     .type = QEMU_OPT_SIZE,
> +     .help = "Cluster size (in bytes)",
> +     .def_value_str = stringify(QED_DEFAULT_CLUSTER_SIZE)
> + },
> + {
> +     .name = BLOCK_OPT_TABLE_SIZE,
> +     .type = QEMU_OPT_SIZE,
> +     .help = "L1/L2 table size (in clusters)"
> + },

Why does cluster size list a default, but table size does not?
> +++ b/block/qed.h
> @@ -43,7 +43,7 @@
>   *
>   * All fields are little-endian on disk.
>   */
> -
> +#define QED_DEFAULT_CLUSTER_SIZE 65536
>  enum {
>      QED_MAGIC = 'Q' | 'E' << 8 | 'D' << 16 | '\0' << 24,
>
> @@ -69,7 +69,6 @@ enum {
>   */
>      QED_MIN_CLUSTER_SIZE = 4 * 1024, /* in bytes */
>      QED_MAX_CLUSTER_SIZE = 64 * 1024 * 1024,
> -    QED_DEFAULT_CLUSTER_SIZE = 64 * 1024,

Why this change? I actually prefer enums over #defines, because they
behave nicer in gdb.
--
Eric Blake eblake redhat com +1-919-301-3266
Libvirt virtualization library http://libvirt.org