
Re: [Gluster-devel] glusterfs with vservers


From: Martin Fick
Subject: Re: [Gluster-devel] glusterfs with vservers
Date: Fri, 21 Nov 2008 09:18:58 -0800 (PST)

--- On Fri, 11/21/08, Daniel van Ham Colchete <address@hidden> wrote:

> it doesn't matter if you use virtual servers or not.
> That's the point with virtual servers. 

Well, that's a bold theoretical statement.  Every 
virtualization technology has its side effects.  
That is why I asked:

> > Has anyone had success with a similar scenario?


> If the network linking them is working GlusterFS will work
> as it would with real servers. 

Yes, I realize that is the theory, thus the reason 
I am attempting to do it, but do you have any real 
world experience with the scenario I mention to back 
that statement up?

I suspect the problem is networking, which is why I 
asked about setting the source IP from the client 
side.  If that is possible, it seems like it would be 
a useful feature whenever a client has multiple IPs 
to choose from.  In the case of a host running linux 
vservers, it does.
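To make concrete what I mean by "setting the source IP from the
client side": a minimal Python sketch (not GlusterFS code; the
addresses below are just the ones from my setup, used as
placeholders) of how a client with multiple IPs can pin its
outgoing connection to one of them by binding before connect():

```python
import socket

def connect_from(source_ip, dest_ip, dest_port):
    """Open a TCP connection whose source address is pinned to source_ip.

    Binding the socket to (source_ip, 0) before connect() tells the
    kernel which local address to use, instead of letting the routing
    table pick one -- the choice that seems to go wrong when a vserver
    guest shares the host's network stack.
    """
    sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    sock.bind((source_ip, 0))      # port 0: let the kernel pick the port
    sock.connect((dest_ip, dest_port))
    return sock

# e.g. force the guest client's address instead of the host's:
# sock = connect_from("192.168.1.150", "10.10.20.11", 6996)
```

If the GlusterFS client translator grew an option like this, the
server would see the guest's 192.168.1.150 instead of its own
address, and IP-based auth would work.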

> If you think the problem is at GlusterFS please 
> provide more details. What are your config files? 
> What are you doing? What commands are you using? 
> What are you seeing? Otherwise there is no way 
> one can guess what you are doing wrong...

What makes you assume that I am doing something 
wrong? :)  I hope that I am, so that it can be fixed,
but, no offense, I have been working at this for a 
while.  As I mentioned, I am able to mount it on the 
host.  Since I am using a config file residing on the 
guest server, the only real difference in the 
equation is vserver.  

But since you asked, below are my configs and output
logs:

server.vol
----------

volume default
  type storage/posix
  option directory /export/default
end-volume

volume server
  type protocol/server
  option transport-type tcp/server
  option bind-address 10.10.20.11
  option client-volume-filename /etc/glusterfs/client.vol

  subvolumes default
  option auth.ip.default.allow *
end-volume


client.vol
----------

volume client
  type protocol/client
  option transport-type tcp/client
  option remote-host 10.10.20.11

  option remote-subvolume default
end-volume



IPs
----
The guest server IP is 10.10.20.11
The guest client IP is 192.168.1.150
The host         IP is 192.168.1.12


On the host, this works:
 A)

  glusterfs -s 10.10.20.11 /mnt

But, this does not work: 
 B)

  vnamespace -e 221 glusterfs -s 10.10.20.11 mnt

221 is the vserver context of the client guest.

Here are the server logs in both cases:

A)

2008-11-20 17:38:47 D [tcp-server.c:145:tcp_server_notify] server: Registering socket (6) for new transport object of 10.10.20.11
2008-11-20 17:38:47 E [server-protocol.c:5212:mop_getspec] server: Unable to open /etc/glusterfs/client.vol.10.10.20.11 (No such file or directory)
2008-11-20 17:38:47 E [protocol.c:271:gf_block_unserialize_transport] server: EOF from peer (10.10.20.11:1023)
2008-11-20 17:38:47 D [tcp.c:87:tcp_disconnect] server: connection disconnected
2008-11-20 17:38:47 D [server-protocol.c:6269:server_protocol_cleanup] server: cleaned up transport state for client 10.10.20.11:1023
2008-11-20 17:38:47 D [tcp-server.c:257:gf_transport_fini] server: destroying transport object for 10.10.20.11:1023 (fd=6)
2008-11-20 17:38:48 D [tcp-server.c:145:tcp_server_notify] server: Registering socket (6) for new transport object of 10.10.20.11
2008-11-20 17:38:48 D [ip.c:120:gf_auth] default: allowed = "*", received ip addr = "10.10.20.11"
2008-11-20 17:38:48 D [server-protocol.c:5674:mop_setvolume] server: accepted client from 10.10.20.11:1022
2008-11-20 17:38:48 D [server-protocol.c:5717:mop_setvolume] server: creating inode table with lru_limit=1024, xlator=default
2008-11-20 17:38:48 D [inode.c:1163:inode_table_new] default: creating new inode table with lru_limit=1024, sizeof(inode_t)=96
2008-11-20 17:38:48 D [inode.c:577:__create_inode] default/inode: create inode(1)
2008-11-20 17:38:48 D [inode.c:367:__active_inode] default/inode: activating inode(1), lru=0/1024


B)

2008-11-20 17:40:32 D [tcp-server.c:145:tcp_server_notify] server: Registering socket (6) for new transport object of 10.10.20.11
2008-11-20 17:40:32 E [server-protocol.c:5212:mop_getspec] server: Unable to open /etc/glusterfs/client.vol.10.10.20.11 (No such file or directory)
2008-11-20 17:40:32 E [protocol.c:271:gf_block_unserialize_transport] server: EOF from peer (10.10.20.11:1023)
2008-11-20 17:40:32 D [tcp.c:87:tcp_disconnect] server: connection disconnected
2008-11-20 17:40:32 D [server-protocol.c:6269:server_protocol_cleanup] server: cleaned up transport state for client 10.10.20.11:1023
2008-11-20 17:40:32 D [tcp-server.c:257:gf_transport_fini] server: destroying transport object for 10.10.20.11:1023 (fd=6)



Note how in both cases the server thinks the client is 
itself.  This means that even if I got it to work, I 
would not be able to properly perform IP-based 
authorization.


Here are the client logs in both cases:

A)

2008-11-20 17:38:47 W [client-protocol.c:280:client_protocol_xfer] trans: 
attempting to pipeline request type(2) op(4) with handshake


B)

2008-11-20 17:40:32 W [client-protocol.c:280:client_protocol_xfer] trans: 
attempting to pipeline request type(2) op(4) with handshake
2008-11-20 17:40:32 E [fuse-bridge.c:2702:init] glusterfs-fuse: fuse_mount 
failed (Transport endpoint is not connected)

2008-11-20 17:40:32 E [glusterfs.c:547:main] glusterfs: Initializing FUSE failed


And, lastly, my kernel is Debian's 2.6.25-2-vserver-686.


If I try using vexec, secure-mount, or fstab/fstab.remote, 
the client never even seems to contact the server.  Only 
vnamespace gets me this far.


Thanks,
 
-Martin
