Re: [Gluster-devel] about afr

From: nicolas prochazka
Subject: Re: [Gluster-devel] about afr
Date: Mon, 12 Jan 2009 20:39:39 +0100

For your attention: it seems this problem occurs only when files are open and in use on the gluster mount point.
I use big computation files (~10 GB) that are mostly read; in that case the problem occurs.
If I use only small files that are created only occasionally, no problem occurs and the gluster mount can use the other afr server.

Nicolas Prochazka

2009/1/12 nicolas prochazka <address@hidden>
I'm trying to set
option transport-timeout 5
in protocol/client,

so a maximum of 10 seconds before gluster returns to its normal situation?
No success; I am still in the same situation: an 'ls /mnt/gluster' does not respond after > 10 minutes.
I cannot reuse the gluster mount except by killing the glusterfs process.
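In case it is useful, the option sits in each protocol/client volume of the client config quoted further down; a minimal sketch of what I mean (addresses taken from that config):

volume brick_10.98.98.1
type protocol/client
option transport-type tcp/client
option remote-host 10.98.98.1
option remote-subvolume brick
option transport-timeout 5   # seconds the client waits for this server before giving up on it
end-volume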

Nicolas Prochazka

2009/1/12 Raghavendra G <address@hidden>

Hi Nicolas,

How much time did you wait before concluding that the mount point is not working? afr waits for a maximum of (2 * transport-timeout) seconds before sending the reply back to the application. Can you wait a little longer and check whether this is the issue you are facing?
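As a rough worked example (assuming the timeout is the only thing delaying the reply): with the default transport-timeout, which if I remember right is 42 seconds in this release, the application could wait up to 2 * 42 = 84 seconds; with

option transport-timeout 5

in each protocol/client volume that worst case should drop to about 2 * 5 = 10 seconds.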


On Mon, Jan 12, 2009 at 7:49 PM, nicolas prochazka <address@hidden> wrote:
I've set up the following model to test Gluster:

+ 2 servers (A, B)
   - with the glusterfsd server (glusterfs--mainline--3.0--patch-842)
   - with the glusterfs client
     (server conf file below)

+ 1 server C, in client mode only.

My issue:
If C opens a big file with this client configuration and I then stop server A (or B),
the gluster mount point on server C seems to be blocked; I cannot do an 'ls -l', for example.
Is this normal? Since C opened its file on A or B, does it block when that server goes down?
I was thinking that with client-side AFR the client could reopen the file/blocks on the other server; am I wrong?
Should I use the HA translator?

Nicolas Prochazka.

server config

volume brickless
type storage/posix
option directory /mnt/disks/export
end-volume

volume brick
type features/posix-locks
option mandatory on          # enables mandatory locking on all files
subvolumes brickless
end-volume

volume server
type protocol/server
subvolumes brick
option transport-type tcp
option auth.addr.brick.allow 10.98.98.*
end-volume

client config
volume brick_10.98.98.1
type protocol/client
option transport-type tcp/client
option remote-host 10.98.98.1
option remote-subvolume brick
end-volume

volume brick_10.98.98.2
type protocol/client
option transport-type tcp/client
option remote-host 10.98.98.2
option remote-subvolume brick
end-volume

volume last
type cluster/replicate
subvolumes brick_10.98.98.1 brick_10.98.98.2
end-volume

volume iothreads
type performance/io-threads
option thread-count 2
option cache-size 32MB
subvolumes last
end-volume

volume io-cache
type performance/io-cache
option cache-size 1024MB           # default is 32MB
option page-size  1MB              # 128KB is the default
option force-revalidate-timeout 2  # default is 1
subvolumes iothreads
end-volume

volume writebehind
type performance/write-behind
option aggregate-size 256KB # default is 0bytes
option window-size 3MB
option flush-behind on      # default is 'off'
subvolumes io-cache
end-volume
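On the HA translator question above: I have not tried it in this setup, but my understanding is that it would be stacked on top of the two protocol/client volumes much like replicate is. A rough, unverified sketch:

volume ha
type cluster/ha
subvolumes brick_10.98.98.1 brick_10.98.98.2
end-volume

As far as I know, cluster/ha only retries a failed operation on the other subvolume; unlike cluster/replicate it does not keep two copies of the data in sync, so it would not replace afr here.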


Raghavendra G
