
Re: [Gluster-devel] nodes don't use swap


From: Jordi Moles
Subject: Re: [Gluster-devel] nodes don't use swap
Date: Tue, 11 Mar 2008 16:40:30 +0100
User-agent: Thunderbird 2.0.0.12 (X11/20080213)

Hi,

Thanks.

I'm running a mail system in which Dovecot and Postfix servers share GlusterFS. There are also 6 nodes that run the shared storage system. The FUSE version is fuse-2.7.2glfs8 and the GlusterFS version is mainline--2.5, patch 690.

The config file for the nodes is:

**********
**********

volume esp
   type storage/posix
   option directory /mnt/compartit
end-volume

volume espa
   type features/posix-locks
   subvolumes esp
end-volume

volume espai
  type performance/io-threads
  option thread-count 15
  option cache-size 512MB
  subvolumes espa
end-volume

volume nm
   type storage/posix
   option directory /mnt/namespace
end-volume

volume ultim
   type protocol/server
   subvolumes espai nm
   option transport-type tcp/server
   option auth.ip.espai.allow *
   option auth.ip.nm.allow *
end-volume

**********
**********
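
(For reference, a purely illustrative Python sketch, not GlusterFS code: the idea behind the io-threads translator above is a fixed pool of worker threads, "option thread-count 15" here, that incoming file operations are handed to, so several slow disk requests can be serviced at once instead of one by one. The file contents and offsets below are made up for the example.)

********

import tempfile
from concurrent.futures import ThreadPoolExecutor

def read_block(path, offset, size):
    # One "file operation" serviced by whichever worker thread is free.
    with open(path, "rb") as f:
        f.seek(offset)
        return f.read(size)

if __name__ == "__main__":
    # Create a scratch file so the example is self-contained.
    with tempfile.NamedTemporaryFile(delete=False) as tmp:
        tmp.write(b"some mailbox data " * 100)
        path = tmp.name

    # max_workers=15 mirrors "option thread-count 15" above.
    with ThreadPoolExecutor(max_workers=15) as pool:
        futures = [pool.submit(read_block, path, i * 18, 18) for i in range(4)]
        for fut in futures:
            print(fut.result())

********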

The Dovecot machines have:

*********
*********

volume espai1
   type protocol/client
   option transport-type tcp/client
   option remote-host 192.168.1.204
   option remote-subvolume espai
end-volume

volume espai2
   type protocol/client
   option transport-type tcp/client
   option remote-host 192.168.1.205
   option remote-subvolume espai
end-volume

volume espai3
   type protocol/client
   option transport-type tcp/client
   option remote-host 192.168.1.206
   option remote-subvolume espai
end-volume

volume espai4
   type protocol/client
   option transport-type tcp/client
   option remote-host 192.168.1.207
   option remote-subvolume espai
end-volume

volume espai5
   type protocol/client
   option transport-type tcp/client
   option remote-host 192.168.1.213
   option remote-subvolume espai
end-volume

volume espai6
   type protocol/client
   option transport-type tcp/client
   option remote-host 192.168.1.214
   option remote-subvolume espai
end-volume

volume namespace1
   type protocol/client
   option transport-type tcp/client
   option remote-host 192.168.1.204
   option remote-subvolume nm
end-volume

volume namespace2
   type protocol/client
   option transport-type tcp/client
   option remote-host 192.168.1.205
   option remote-subvolume nm
end-volume

volume gru1
   type cluster/afr
   subvolumes espai1 espai2
end-volume

volume grup1
 type performance/io-cache
 option cache-size 64MB
 option page-size 1MB
 option priority *.txt:2,*:1
 option force-revalidate-timeout 2
 subvolumes gru1
end-volume

volume gru2
   type cluster/afr
   subvolumes espai3 espai4
end-volume

volume grup2
 type performance/io-cache
 option cache-size 64MB
 option page-size 1MB
 option priority *.txt:2,*:1
 option force-revalidate-timeout 2
 subvolumes gru2
end-volume

volume gru3
   type cluster/afr
   subvolumes espai5 espai6
end-volume

volume grup3
 type performance/io-cache
 option cache-size 64MB
 option page-size 1MB
 option priority *.txt:2,*:1
 option force-revalidate-timeout 2
 subvolumes gru3
end-volume

volume nm
   type cluster/afr
   subvolumes namespace1 namespace2
end-volume

volume ultim
   type cluster/unify
   subvolumes grup1 grup2 grup3
   option scheduler rr
   option namespace nm
end-volume


*********
*********
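
(For reference, a purely illustrative Python sketch, not GlusterFS code: the cluster/unify volume with "option scheduler rr" places each new file on the next AFR pair in turn, while each AFR pair keeps a copy on both of its bricks. The UnifyRR class and the file names are made up for the example.)

********

import itertools

# The three replicated pairs from the spec above.
AFR_PAIRS = {
    "grup1": ["espai1", "espai2"],
    "grup2": ["espai3", "espai4"],
    "grup3": ["espai5", "espai6"],
}

class UnifyRR(object):
    """Toy model of cluster/unify with the round-robin (rr) scheduler."""

    def __init__(self, pairs):
        self.pairs = pairs
        self._next_pair = itertools.cycle(sorted(pairs))  # rr scheduler
        self.placement = {}  # rough analogue of the namespace volume

    def create(self, path):
        pair = next(self._next_pair)          # pick the next pair in turn
        self.placement[path] = pair
        # AFR: the file ends up on *both* bricks of the chosen pair.
        return pair, self.pairs[pair]

if __name__ == "__main__":
    vol = UnifyRR(AFR_PAIRS)
    for mail in ["inbox/1", "inbox/2", "inbox/3", "inbox/4"]:
        pair, bricks = vol.create(mail)
        print("%s -> %s (replicated on %s)" % (mail, pair, " and ".join(bricks)))

********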

and finally, the Postfix machines have:

********
********

volume espai1
   type protocol/client
   option transport-type tcp/client
   option remote-host 192.168.1.204
   option remote-subvolume espai
end-volume

volume espai2
   type protocol/client
   option transport-type tcp/client
   option remote-host 192.168.1.205
   option remote-subvolume espai
end-volume

volume espai3
   type protocol/client
   option transport-type tcp/client
   option remote-host 192.168.1.206
   option remote-subvolume espai
end-volume

volume espai4
   type protocol/client
   option transport-type tcp/client
   option remote-host 192.168.1.207
   option remote-subvolume espai
end-volume

volume espai5
   type protocol/client
   option transport-type tcp/client
   option remote-host 192.168.1.213
   option remote-subvolume espai
end-volume

volume espai6
   type protocol/client
   option transport-type tcp/client
   option remote-host 192.168.1.214
   option remote-subvolume espai
end-volume

volume namespace1
   type protocol/client
   option transport-type tcp/client
   option remote-host 192.168.1.204
   option remote-subvolume nm
end-volume

volume namespace2
   type protocol/client
   option transport-type tcp/client
   option remote-host 192.168.1.205
   option remote-subvolume nm
end-volume

volume gru1
   type cluster/afr
   subvolumes espai1 espai2
end-volume

volume grup1
  type performance/write-behind
  option aggregate-size 1MB
  option flush-behind on
  subvolumes gru1
end-volume

volume gru2
   type cluster/afr
   subvolumes espai3 espai4
end-volume

volume grup2
  type performance/write-behind
  option aggregate-size 1MB
  option flush-behind on
  subvolumes gru2
end-volume

volume gru3
   type cluster/afr
   subvolumes espai5 espai6
end-volume

volume grup3
  type performance/write-behind
  option aggregate-size 1MB
  option flush-behind on
  subvolumes gru3
end-volume

volume nm
   type cluster/afr
   subvolumes namespace1 namespace2
end-volume

volume ultim
   type cluster/unify
   subvolumes grup1 grup2 grup3
   option scheduler rr
   option namespace nm
end-volume

********
********
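
(Again purely as an illustration, a hypothetical Python sketch rather than GlusterFS code: write-behind with "option aggregate-size 1MB" buffers small writes in memory and hands them to the lower volume in roughly 1MB chunks, and "flush-behind on" lets flush/close return before the buffered data has actually reached the server. The sketch below only models the buffering part.)

********

AGGREGATE_SIZE = 1 * 1024 * 1024  # "option aggregate-size 1MB" above

class WriteBehind(object):
    """Toy model of the buffering done by performance/write-behind."""

    def __init__(self, backend_write):
        self.backend_write = backend_write  # e.g. the network write to the server
        self.buf = bytearray()

    def write(self, data):
        self.buf.extend(data)
        if len(self.buf) >= AGGREGATE_SIZE:  # aggregate-size reached: push it down
            self._flush()

    def _flush(self):
        if self.buf:
            self.backend_write(bytes(self.buf))
            self.buf = bytearray()

    def close(self):
        # With "flush-behind on" this final flush would happen in the
        # background instead of blocking the application's close().
        self._flush()

if __name__ == "__main__":
    chunk_sizes = []
    wb = WriteBehind(lambda data: chunk_sizes.append(len(data)))
    for _ in range(3000):
        wb.write(b"x" * 512)  # lots of small, mail-sized writes
    wb.close()
    print("backend received chunks of:", chunk_sizes)

********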

All machines are virtual, running under Xen 3.2.0.
The thing is that I'm running some tests to see what the bottlenecks are.
I've tried, for example, sending emails to the system every second and also checking some mailboxes every minute. That works just fine. I think I could improve performance, but none of the GlusterFS components gets to the point where it runs out of memory or CPU.

But when I run a disk benchmark such as ddt, postmark or bonnie, I get this problem where some nodes simply run out of everything: CPU, memory, etc. Each of them has 2GB of RAM and 4GB of swap, but swap is never used. It also looks like they only use one of the two CPUs they have, and I don't think it is a Xen problem, because I've used this type of setup with Xen before and some software actually uses more than one CPU.
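
(In case it helps when reproducing this: a minimal sketch, assuming Linux /proc is available inside the Xen guests; the script itself is hypothetical and not part of GlusterFS. It sums the resident memory of the glusterfs processes on a node so you can see which process is actually eating the 2GB during the benchmarks.)

********

#!/usr/bin/env python
# Hypothetical helper script (not part of GlusterFS): sums the resident
# memory of every glusterfs/glusterfsd process by reading Linux /proc.
import os

def gluster_pids():
    for pid in os.listdir("/proc"):
        if not pid.isdigit():
            continue
        try:
            with open("/proc/%s/cmdline" % pid) as f:
                cmdline = f.read()
        except (IOError, OSError):
            continue
        if "glusterfs" in cmdline:
            yield int(pid)

def rss_kb(pid):
    # VmRSS is the process's resident (non-swapped) memory, in kB.
    with open("/proc/%d/status" % pid) as f:
        for line in f:
            if line.startswith("VmRSS:"):
                return int(line.split()[1])
    return 0

if __name__ == "__main__":
    total_kb = 0
    for pid in gluster_pids():
        try:
            kb = rss_kb(pid)
        except (IOError, OSError):
            continue
        total_kb += kb
        print("pid %d  RSS %d kB" % (pid, kb))
    print("total glusterfs RSS: %.1f MB" % (total_kb / 1024.0))

********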

If you need any further information... just let me know.

Thanks.

Basavanagowda Kanur wrote:
Jordi,
  We would like to know more details about your setup to understand what is
causing the bottleneck.

Please post the spec files that you are using.

--
Gowda

On Tue, Mar 11, 2008 at 5:22 PM, Jordi Moles <address@hidden> wrote:

Hi everyone,

I'm stress-testing a GlusterFS system I've set up. I've given 2GB of RAM
to every node and 4GB for swap. Now I've got the system totally
stressed :) but the nodes don't seem to be able to use swap memory. Is that
normal? Can I change anything to make GlusterFS use swap?

I've tried ddt, postmark and bonnie to create thousands of files and see
how the system reacts, and the bottleneck so far is the RAM of
the nodes. They eat the 2GB they have and don't seem to be able to use
swap.

The nodes also have two processors, and I would also like to know whether
they can take advantage of that or GlusterFS is limited to one CPU.

Thanks.


_______________________________________________
Gluster-devel mailing list
address@hidden
http://lists.nongnu.org/mailman/listinfo/gluster-devel