Subject: Re: [Qemu-devel] [PATCH v13 3/3] block/gluster: add support for multiple gluster servers
From: Prasanna Kumar Kalever
Date: Thu, 12 Nov 2015 04:46:42 -0500 (EST)
On Tuesday, November 10, 2015 10:54:25 PM, Jeff Cody wrote:
>
> On Tue, Nov 10, 2015 at 02:39:16PM +0530, Prasanna Kumar Kalever wrote:
> > This patch adds a way to specify multiple volfile servers to the gluster
> > block backend of QEMU with tcp|rdma transport types and their port numbers.
> >
> > Problem:
> >
> > Currently, a VM image on a gluster volume is specified like this:
> >
> > file=gluster[+tcp]://host[:port]/testvol/a.img
> >
> > Assume we have three hosts in a trusted pool with a replica 3 volume
> > in action, and the host mentioned in the command above goes down for
> > some reason. Since the volume is replica 3, we still have two other
> > active hosts from which we can boot the VM.
> >
> > But currently there is no mechanism to pass the other two gluster host
> > addresses to qemu.
> >
> > Solution:
> >
> > The new way of specifying a VM image on a gluster volume with volfile
> > servers (the old syntax is still supported for backward compatibility):
> >
> > Basic command line syntax looks like:
> >
> > Pattern I:
> > -drive driver=gluster,
> > volume=testvol,path=/path/a.raw,
> > servers.0.host=1.2.3.4,
> > [servers.0.port=24007,]
> > [servers.0.transport=tcp,]
> > servers.1.host=5.6.7.8,
> > [servers.1.port=24008,]
> > [servers.1.transport=rdma,] ...
> >
> > Pattern II:
> > 'json:{"driver":"qcow2","file":{"driver":"gluster",
> > "volume":"testvol","path":"/path/a.qcow2",
> > "servers":[{tuple0},{tuple1}, ...{tupleN}]}}'
> >
> > driver => 'gluster' (protocol name)
> > volume => name of gluster volume where our VM image resides
> > path => absolute path of image in gluster volume
> >
> > {tuple} => {"host":"1.2.3.4"[,"port":"24007","transport":"tcp"]}
> >
> > host => host address (hostname/ipv4/ipv6 addresses)
> > port => port number on which glusterd is listening (default 24007)
> > transport => transport type used to connect to the gluster management
> > daemon; it can be tcp|rdma (default 'tcp')
> >
> > Examples:
> > 1.
> > -drive driver=qcow2,file.driver=gluster,
> > file.volume=testvol,file.path=/path/a.qcow2,
> > file.servers.0.host=1.2.3.4,
> > file.servers.0.port=24007,
> > file.servers.0.transport=tcp,
> > file.servers.1.host=5.6.7.8,
> > file.servers.1.port=24008,
> > file.servers.1.transport=rdma
> > 2.
> > 'json:{"driver":"qcow2","file":{"driver":"gluster","volume":"testvol",
> > "path":"/path/a.qcow2","servers":
> > [{"host":"1.2.3.4","port":"24007","transport":"tcp"},
> > {"host":"4.5.6.7","port":"24008","transport":"rdma"}] } }'
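
For scripting, the Pattern II string can be generated rather than hand-written. A small illustrative sketch (the helper name `gluster_json_filename` and the server list are mine, not part of QEMU):

```python
import json

def gluster_json_filename(volume, path, servers, top_driver="qcow2"):
    """Build a QEMU 'json:' pseudo-filename for the gluster driver."""
    spec = {
        "driver": top_driver,
        "file": {
            "driver": "gluster",
            "volume": volume,
            "path": path,
            "servers": servers,  # list of {host[, port, transport]} tuples
        },
    }
    return "json:" + json.dumps(spec)

name = gluster_json_filename(
    "testvol", "/path/a.qcow2",
    [{"host": "1.2.3.4", "port": "24007", "transport": "tcp"},
     {"host": "5.6.7.8", "port": "24008", "transport": "rdma"}])
```

The resulting string can be passed directly as the drive's file name.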
> >
> > This patch provides a mechanism to specify all the server addresses in
> > the replica set, so that if host1 is down the VM can still boot from
> > any of the active hosts.
> >
> > This is equivalent to the backup-volfile-servers option supported by
> > mount.glusterfs (the FUSE way of mounting a gluster volume).
> >
> > Credits: Sincere thanks to Kevin Wolf <address@hidden> and
> > "Deepak C Shetty" <address@hidden> for inputs and all their support
> >
> > Signed-off-by: Prasanna Kumar Kalever <address@hidden>
>
>
> Previous versions of this commit mentioned that the new functionality
> is dependent on a recent fix in libgfapi. This commit message is
> missing that line; does its absence mean that the new functionality is
> not dependent on any particular libgfapi version?
>
> What happens if the new functionality is tried on the last stable
> libgfapi release?
Sorry for not removing that line long ago. The libgfapi fix is actually about
default values: when glfs_set_volfile_server() is invoked multiple times, the
gfapi code replaces port 0 with 24007 and a NULL transport with "tcp" only on
the first invocation. To remove this dependency, I have put up code that takes
care of the defaults itself.
Hence, replacing the parameters at the entry function is the right way.

Thanks,
-prasanna
>
> Thanks!
> Jeff
>
>