
Re: [Qemu-devel] [PATCH v19 3/5] block/gluster: remove rdma transport


From: Raghavendra Talur
Subject: Re: [Qemu-devel] [PATCH v19 3/5] block/gluster: remove rdma transport
Date: Mon, 18 Jul 2016 19:05:05 +0530

On Mon, Jul 18, 2016 at 5:48 PM, Markus Armbruster <address@hidden> wrote:

> Prasanna Kalever <address@hidden> writes:
>
> > On Mon, Jul 18, 2016 at 2:23 PM, Markus Armbruster <address@hidden> wrote:
> >> Prasanna Kumar Kalever <address@hidden> writes:
> >>
> >>> Gluster volfile server fetch happens through unix and/or tcp; it
> >>> doesn't support volfile fetch over rdma, hence remove the dead code.
> >>>
> >>> Signed-off-by: Prasanna Kumar Kalever <address@hidden>
> >>> ---
> >>>  block/gluster.c | 35 +----------------------------------
> >>>  1 file changed, 1 insertion(+), 34 deletions(-)
> >>>
> >>> diff --git a/block/gluster.c b/block/gluster.c
> >>> index 40ee852..59f77bb 100644
> >>> --- a/block/gluster.c
> >>> +++ b/block/gluster.c
> >>> @@ -134,8 +134,7 @@ static int parse_volume_options(GlusterConf *gconf, char *path)
> >>>   *
> >>>   * 'transport' specifies the transport type used to connect to gluster
> >>>   * management daemon (glusterd). Valid transport types are
> >>> - * tcp, unix and rdma. If a transport type isn't specified, then tcp
> >>> - * type is assumed.
> >>> + * tcp, unix. If a transport type isn't specified, then tcp type is assumed.
> >>>   *
> >>>   * 'host' specifies the host where the volume file specification for
> >>>   * the given volume resides. This can be either hostname, ipv4 address
> >>> @@ -162,7 +161,6 @@ static int parse_volume_options(GlusterConf *gconf, char *path)
> >>>   * file=gluster+tcp://[1:2:3:4:5:6:7:8]:24007/testvol/dir/a.img
> >>>   * file=gluster+tcp://host.domain.com:24007/testvol/dir/a.img
> >>>   * file=gluster+unix:///testvol/dir/a.img?socket=/tmp/glusterd.socket
> >>> - * file=gluster+rdma://1.2.3.4:24007/testvol/a.img
> >>>   */
> >>>  static int qemu_gluster_parseuri(GlusterConf *gconf, const char *filename)
> >>>  {
> >>> @@ -184,8 +182,6 @@ static int qemu_gluster_parseuri(GlusterConf *gconf, const char *filename)
> >>>      } else if (!strcmp(uri->scheme, "gluster+unix")) {
> >>>          gconf->transport = g_strdup("unix");
> >>
> >> Outside this patch's scope: string literals would be just fine for
> >> gconf->transport.
> >
> > If we remove rdma support, the comments here may again drag people into
> > confusion.  Do you recommend making this a separate patch?
>
> I'm afraid we're talking about totally different things.  But it doesn't
> actually matter, because I now see that I got confused.  Simply ignore
> this comment.
>
> >>
> >>>          is_unix = true;
> >>> -    } else if (!strcmp(uri->scheme, "gluster+rdma")) {
> >>> -        gconf->transport = g_strdup("rdma");
> >>>      } else {
> >>>          ret = -EINVAL;
> >>>          goto out;
> >>> @@ -1048,37 +1044,8 @@ static BlockDriver bdrv_gluster_unix = {
> >>>      .create_opts                  = &qemu_gluster_create_opts,
> >>>  };
> >>>
> >>> -static BlockDriver bdrv_gluster_rdma = {
> >>> -    .format_name                  = "gluster",
> >>> -    .protocol_name                = "gluster+rdma",
> >>> -    .instance_size                = sizeof(BDRVGlusterState),
> >>> -    .bdrv_needs_filename          = true,
> >>> -    .bdrv_file_open               = qemu_gluster_open,
> >>> -    .bdrv_reopen_prepare          = qemu_gluster_reopen_prepare,
> >>> -    .bdrv_reopen_commit           = qemu_gluster_reopen_commit,
> >>> -    .bdrv_reopen_abort            = qemu_gluster_reopen_abort,
> >>> -    .bdrv_close                   = qemu_gluster_close,
> >>> -    .bdrv_create                  = qemu_gluster_create,
> >>> -    .bdrv_getlength               = qemu_gluster_getlength,
> >>> -    .bdrv_get_allocated_file_size = qemu_gluster_allocated_file_size,
> >>> -    .bdrv_truncate                = qemu_gluster_truncate,
> >>> -    .bdrv_co_readv                = qemu_gluster_co_readv,
> >>> -    .bdrv_co_writev               = qemu_gluster_co_writev,
> >>> -    .bdrv_co_flush_to_disk        = qemu_gluster_co_flush_to_disk,
> >>> -    .bdrv_has_zero_init           = qemu_gluster_has_zero_init,
> >>> -#ifdef CONFIG_GLUSTERFS_DISCARD
> >>> -    .bdrv_co_discard              = qemu_gluster_co_discard,
> >>> -#endif
> >>> -#ifdef CONFIG_GLUSTERFS_ZEROFILL
> >>> -    .bdrv_co_pwrite_zeroes        = qemu_gluster_co_pwrite_zeroes,
> >>> -#endif
> >>> -    .bdrv_co_get_block_status     = qemu_gluster_co_get_block_status,
> >>> -    .create_opts                  = &qemu_gluster_create_opts,
> >>> -};
> >>> -
> >>>  static void bdrv_gluster_init(void)
> >>>  {
> >>> -    bdrv_register(&bdrv_gluster_rdma);
> >>>      bdrv_register(&bdrv_gluster_unix);
> >>>      bdrv_register(&bdrv_gluster_tcp);
> >>>      bdrv_register(&bdrv_gluster);
> >>
> >> This is fine if gluster+rdma never actually worked.  I tried to find out
> >> at https://github.com/gluster/glusterfs/blob/master/api/src/glfs.h.
> >> Transport rdma is mentioned there.  Does it work?
> >
> > This is the transport used for fetching the volume file from the nodes.
> > Previously it was tcp only; now the unix transport is supported as well.
> >
> > More responses from the gluster community at
> > http://www.gluster.org/pipermail/gluster-devel/2016-July/050112.html
>
> Quote Raghavendra Talur's reply:
>
>     > My understanding is that the @transport argument in
>     > glfs_set_volfile_server() is meant for specifying the transport
>     > used for fetching the volfile,
>     >
>
>     Yes, the @transport arg here is the transport used to fetch the volfile.
>
>
>     > IIRC, it currently supports tcp and unix only...
>     >
>     Yes, this is correct too.
>
> Here, Raghavendra seems to confirm that only tcp and unix are supported.
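
(For reference, the call under discussion is libgfapi's
glfs_set_volfile_server().  A rough sketch of how a client selects the
volfile-fetch transport; the volume name, host, port, and socket path below
are purely illustrative:

    #include <glusterfs/api/glfs.h>

    glfs_t *fs = glfs_new("testvol");
    /* fetch the volfile over tcp from glusterd on the given host/port */
    glfs_set_volfile_server(fs, "tcp", "host.domain.com", 24007);
    /*
     * or instead over a local unix socket; the "host" argument is then
     * the socket path and the port is ignored:
     *     glfs_set_volfile_server(fs, "unix", "/tmp/glusterd.socket", 0);
     */
    glfs_init(fs);

tcp and unix are thus the only @transport values that actually take effect;
rdma never did.)
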
>
>     >
>     > The doc at
>     > https://github.com/gluster/glusterfs/blob/master/api/src/glfs.h
>     > (line 166) lists rdma as well, which is something I cannot digest.
>     >
>     This doc was written with the assumption that rdma would work too.
>
>
>     >
>     >
>     > Can someone correct me ?
>     >
>     > Have we ever supported volfile fetch over rdma ?
>     >
>
>     I think no. To test, you would have to set rdma as the only transport
>     option in glusterd.vol and see what happens in volfile fetch.
>
> But here, it sounds like it might work anyway!
>

Prasanna, Rafi, and I tested this. When the rdma option is specified, gluster
silently falls back to tcp. I do not like this behavior.
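
Concretely, the test was roughly: make rdma the only management transport in
glusterd.vol (the surrounding options shown here are stock defaults and may
vary per installation):

    volume management
        type mgmt/glusterd
        option working-directory /var/lib/glusterd
        option transport-type rdma
    end-volume

and then watch the volfile fetch: it still succeeds, because glusterd quietly
serves it over tcp anyway.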


>     IMO, fetching the volfile over rdma is overkill and would not be
>     required.  RDMA should be kept only for IO operations.
>
>     We should just remove it from the docs.
>
> Don't misunderstand me, I'm very much in favor of removing the rdma
> transport here.  All I'm trying to do is figure out what backward
> compatibility ramifications that might have.
>
> If protocol gluster+rdma has never actually worked, we can safely remove
> it.
>
> But if it happens to work even though it isn't really supported, the
> situation is complicated.  Dropping it might break working setups.
> Could perhaps be justified by "your setup works, but it's unsupported,
> and I'm forcing you to switch to a supported setup now, before you come
> to grief."
>
> If it used to work but no more, or if it will stop working, it's
> differently complicated.  Dropping it would still break working setups,
> but they'd be bound to break anyway.
>
> Thus, my questions: does protocol gluster+rdma work before your patch?
> If no, did it ever work?  "I don't know" is an acceptable answer to the
> latter question, but not so much to the former, because that one is
> easily testable.
>
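
(The former is indeed easy to check; for example, reusing the example URI
that the patch removes from the comment block:

    $ qemu-img info gluster+rdma://1.2.3.4:24007/testvol/a.img

exercises the gluster+rdma scheme directly, before and after the patch.)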

Yes, it appeared to the user that the option worked, and removing it would
break such setups. I agree with Markus that removing the option right away
would hurt users; we should follow a deprecation strategy for backward
compatibility.
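
A minimal sketch of one possible deprecation step in qemu_gluster_parseuri();
the warning text and the fall-back-to-tcp choice are illustrative, not a
settled design:

    } else if (!strcmp(uri->scheme, "gluster+rdma")) {
        /* rdma volfile fetch never really worked: gluster itself silently
         * falls back to tcp, so do the same here, but warn instead of
         * staying silent (error_report() is from qemu/error-report.h) */
        gconf->transport = g_strdup("tcp");
        error_report("Warning: transport 'rdma' is not supported, "
                     "falling back to 'tcp'");

This would keep existing gluster+rdma URIs working through a deprecation
period while nudging users toward the supported transports.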


>
> Once we have answers to these questions, we can decide what needs to be
> done for compatibility, if anything.
>

