
Re: [Gluster-devel] client config to access (mirrored) storage cluster

From: Anand Avati
Subject: Re: [Gluster-devel] client config to access (mirrored) storage cluster
Date: Wed, 2 Apr 2008 16:23:44 +0530

> And, of course, the spec file itself has :
> option remote-host roundrobin.gluster.local       # DNS Round Robin
> In this case, "roundrobin.gluster.local" points to three servers
> (192.168.252.[123]), which is fine assuming each of those servers are
> functional.  What would happen, however, if one of those servers
> suffered a critical failure ?

> Does Gluster cache all of the responses from a DNS query, or does it
> just pick the first one ?  If the first usable response fails, will
> Gluster attempt to resolve again ?  What if the refresh time for that
> DNS entry is set high enough that the DNS server provides the same
> response again and again ?

glusterfs will cache all the entries and use each of them 'once' (per
reconnect) before doing a fresh dns lookup.

> Or, put another way, if ClientA (by chance) resolves
> roundrobin.gluster.local to 192.168.252.1, but .1 is currently down -
> what happens ?

it will attempt on .2, and if that fails (or disconnects after a while), it
will attempt on .3, and once all the entries are used 'once', it will do a
fresh dns query.  it does not honor dns refresh timeouts (yet).
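
For illustration, the round-robin setup described above is just multiple A
records for the same name. A hypothetical zone fragment (name and addresses
taken from the example earlier in this thread) would look like:

```
; hypothetical BIND zone fragment for roundrobin.gluster.local
roundrobin.gluster.local.  IN  A  192.168.252.1
roundrobin.gluster.local.  IN  A  192.168.252.2
roundrobin.gluster.local.  IN  A  192.168.252.3
```

On lookup, glusterfs receives all three addresses and, per the behavior
described above, tries each of them once before issuing a fresh query.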

> Is there a way to define multiple hosts in the volume declaration in
> the client config ?  For example :
> volume santa
>   type protocol/client
>   option transport-type tcp/client
>   #option remote-host 192.168.252.1,192.168.252.2,192.168.252.3
>   option remote-host 192.168.252.1
>   option remote-host 192.168.252.2
>   option remote-host 192.168.252.3
>   option remote-subvolume mailspool
> end-volume

this is not supported.

> Ultimately, the big question I'm attempting to resolve is as follows :
> What is the best practise method to define client connectivity to an HA
> Gluster cluster ?

dns round robin at the moment, HA translator when it is available. Have you
investigated whether loading AFR on the client side works well for you? AFR
handles failover as well.
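
A client-side AFR setup along those lines might look like the sketch below
(untested; the volume names are made up, the addresses follow the earlier
example, and the exact option names should be checked against the glusterfs
version in use):

```
# one protocol/client volume per server
volume santa1
  type protocol/client
  option transport-type tcp/client
  option remote-host 192.168.252.1
  option remote-subvolume mailspool
end-volume

volume santa2
  type protocol/client
  option transport-type tcp/client
  option remote-host 192.168.252.2
  option remote-subvolume mailspool
end-volume

volume santa3
  type protocol/client
  option transport-type tcp/client
  option remote-host 192.168.252.3
  option remote-subvolume mailspool
end-volume

# AFR mirrors writes across all three subvolumes; if one
# server fails, I/O continues on the remaining ones
volume santa
  type cluster/afr
  subvolumes santa1 santa2 santa3
end-volume
```

With this layout the client itself handles replication and failover, so no
DNS round robin is needed for availability.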

