
Re: [Gluster-devel] Choice of Translator question

From: Kevan Benson
Subject: Re: [Gluster-devel] Choice of Translator question
Date: Thu, 27 Dec 2007 16:05:09 -0800
User-agent: Thunderbird (X11/20071031)

Gareth Bult wrote:
> This could be the problem.
>
> When I do this on a 1G file, I have 1 file in each stripe partition of size ~
>
> I don't get (n) files where n=1G/chunk size ... (!)
>
> If I did, I could see how it would work .. but I don't ..
>
> Are you saying I "definitely should" see files broken down into multiple sub
> files, or were you assuming this is how it worked?

That sounds like the striping isn't working; one full copy of the file per subvolume is what you'd see if you were just doing AFR.

Everything I've read on the stripe translator, both in the docs and on the list from devs and people using it, leads me to believe that, when defined, the stripe translator should take files that match the naming convention and exceed the size specified by the "option block-size" translator option, break them into chunks, and store those chunks across the disparate shares the stripe translator defines.

Here's an example config that I would think should do this, all implemented on one system, with names to simulate their location if there were two different systems. You should be able to use it as a client as well as a server config.

volume server1disk1
        type storage/posix
        option directory /tmp/server1disk1
end-volume

volume server1disk2
        type storage/posix
        option directory /tmp/server1disk2
end-volume

volume server2disk1
        type storage/posix
        option directory /tmp/server2disk1
end-volume

volume server2disk2
        type storage/posix
        option directory /tmp/server2disk2
end-volume

volume afr1
        type cluster/afr
        subvolumes server1disk1 server2disk1
end-volume

volume afr2
        type cluster/afr
        subvolumes server1disk2 server2disk2
end-volume

volume stripe
        type cluster/stripe
        option block-size *:1MB # Stripe files over 1MB in 1MB chunks
        subvolumes afr1 afr2
end-volume

I would think with this config you would see 1000 files written to each "server" for a 1GB file write: 500 in server1disk1 and 500 in server2disk1 for afr1, and 500 in server1disk2 and 500 in server2disk2 for afr2. Neither afr1 nor afr2 has a complete copy of the file; each has half of it, since the stripe translator striped the file in 1MB chunks across them.
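The expected distribution above can be sketched as simple arithmetic. This is only an illustration of the counts the config should produce (assuming a 1000MB write, the 1MB block-size from the config, two stripe subvolumes, and two-way AFR mirroring); it is not GlusterFS code:

```python
# Expected chunk distribution for the example config (illustrative only).
FILE_SIZE_MB = 1000   # hypothetical 1GB file, using round numbers as above
BLOCK_SIZE_MB = 1     # matches "option block-size *:1MB"
STRIPE_WIDTH = 2      # two stripe subvolumes: afr1 and afr2

# The stripe translator splits the file into block-size chunks...
total_chunks = FILE_SIZE_MB // BLOCK_SIZE_MB

# ...and deals them round-robin across its subvolumes, so each AFR
# volume holds half the chunks.
chunks_per_afr = total_chunks // STRIPE_WIDTH

# AFR mirrors its subvolume's chunks to every backend disk beneath it,
# so each posix directory holds a full copy of its AFR volume's share.
chunks_per_disk = chunks_per_afr

# Each "server" hosts one disk from afr1 and one from afr2.
chunks_per_server = 2 * chunks_per_disk

print(total_chunks, chunks_per_afr, chunks_per_server)  # 1000 500 1000
```

This matches the counts in the paragraph above: 500 chunk files per backend directory, 1000 per server.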


-Kevan Benson
-A-1 Networks
