From: Bill Lance
Subject: Re: [Vrs-development] Re: Overview
Date: Wed, 13 Feb 2002 13:58:03 -0800 (PST)

--- Chris Smith <address@hidden> wrote:
> On Wednesday 13 February 2002 19:05, Bill Lance wrote:

>
> You're still going to have private LDS configuration files that don't
> fit into this scheme.  Though they don't have to contain anything
> sensitive.  Things like what ports to bind to, where the Discovery
> server is, blah blah.
>

That's what I meant by the initialization configuration file.
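
For concreteness, a rough sketch of what such an init file might hold.
This is a guess at the shape, not the actual VRS format; the field
names are invented, and Python is just for illustration:

    import configparser
    import textwrap

    # Hypothetical lds.conf: non-sensitive, per-machine settings only.
    # Field names are invented for this sketch.
    INIT_CONF = textwrap.dedent("""\
        [lds]
        listen_port = 4880
        discovery_host = disc.example.org
        discovery_port = 4881
        cluster_name = demo
        """)

    conf = configparser.ConfigParser()
    conf.read_string(INIT_CONF)
    print(conf["lds"]["discovery_host"])   # disc.example.org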


> > There is one time where it becomes a serious question.  That's the
> > reverse of the bootstrap process at the end of a Cluster's life
> > cycle.  What happens if a Cluster shrinks to only two LDS's, and
> > one of them drops?  The Cluster could die right then and there, or
> > it could try to hold everything alive on the one surviving machine.
> > Technically, either should be possible.
>
> Which was what I could see happening.  If a cluster can be reduced
> down to a single LDS through failures/loss of interest by other
> LDS's, then does the 'uni-machine cluster' stop serving requests
> because there is only one LDS left?
>

I could see it going either way, depending on the interests and
purposes of the Cluster organizers.  In other words, it's an
administrative decision.

If the Cluster was formed for a temporary, one-time project, it should
dissolve once it is down to one node.

If data backup overrides the privacy risks, it should devolve to one
machine that then attempts a physical backup of the entire Repository,
or at least continues to run alone for a while, hoping for relief and
reinforcements from new nodes.

All of these options should be configurable by the
Administrator.
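
Roughly, I picture something like this (the policy names and the
decide() helper are invented here, just to make the options concrete):

    # Sketch only: policy names and this API are invented, not VRS code.
    from enum import Enum

    class EndOfLifePolicy(Enum):
        DISSOLVE = "dissolve"          # temporary Cluster: shut down
        RUN_ALONE = "run_alone"        # keep serving, hope for new nodes
        BACKUP_THEN_RUN = "backup"     # dump the Repository, then serve

    def decide(live_nodes, policy):
        """What the last surviving LDS should do, per Admin policy."""
        if live_nodes > 1:
            return "serve"
        if policy is EndOfLifePolicy.DISSOLVE:
            return "shutdown"
        if policy is EndOfLifePolicy.BACKUP_THEN_RUN:
            return "backup_repository_then_serve"
        return "serve_alone"

    print(decide(1, EndOfLifePolicy.DISSOLVE))   # shutdown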


>
> It's an interesting issue.  However, if you mandate that a single
> machine in a cluster may only store n-1 chunks of data, where n is
> the number of machines in the cluster,

Lost me here.  Where does this rule come from?

> then when you're down to 1 machine, you cannot satisfy any requests.
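
(If I read the rule right, the point is that no one machine may ever
hold a complete dataset that's split into n or more chunks, and at
n = 1 the cap is zero.  A toy illustration of that reading, not a
confirmed VRS mechanism:

    # Illustrates my reading of Chris's "n-1 chunks" rule only.
    def max_chunks_per_machine(n_machines):
        # cap each machine at n-1 chunks, never below zero
        return max(n_machines - 1, 0)

    for n in (4, 2, 1):
        print(n, "machines -> at most",
              max_chunks_per_machine(n), "chunks per machine")
    # 4 machines -> at most 3 chunks per machine
    # 2 machines -> at most 1 chunks per machine
    # 1 machines -> at most 0 chunks per machine

But I still don't see where the cap itself comes from.)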
> 
> Ah. Problem.
> Cluster starts with 2 machines.  Resources are registered with the
> cluster and shared across the machines.  2 more machines join the
> cluster.
> [Q: is the data re-partitioned across the 4 machines?]

Yes.  Or more specifically, the mirror blocks provided by the new
nodes are added to the total pool.  Additional mirrors of existing
Cluster blocks are written to the new machines.  Most likely, nothing
will be removed from running nodes (not as a rule, I don't think;
there may be some long-term balancing in larger, older Clusters, or if
the data in the Repository churns a lot for some reason.  Interesting
to see how this may work out.)
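
In code, the join step might look roughly like this (the data
structures and names are mine, sketched for illustration only):

    # Sketch only; not the actual VRS data model.
    # block -> list of nodes holding a mirror of it
    cluster = {
        "nodes": ["lds1", "lds2"],
        "mirrors": {"blk0": ["lds1"], "blk1": ["lds2"]},
    }

    def add_node(cluster, new_node, capacity):
        """New node offers `capacity` mirror slots; existing blocks
        gain extra copies on it, and nothing is removed elsewhere."""
        cluster["nodes"].append(new_node)
        # mirror the least-replicated blocks first
        by_need = sorted(cluster["mirrors"],
                         key=lambda b: len(cluster["mirrors"][b]))
        for block in by_need[:capacity]:
            cluster["mirrors"][block].append(new_node)

    add_node(cluster, "lds3", capacity=2)
    print(cluster["mirrors"])
    # {'blk0': ['lds1', 'lds3'], 'blk1': ['lds2', 'lds3']}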



> More resources are registered with the cluster.
> 2 machines now leave (or rather DIE HORRIBLY, taking their data
> share with them).
> Is the data set complete for all resources registered?
>


I can easily see our approach working well with large Clusters and a
high mirror-to-Cluster-block ratio.  But how it behaves under stress,
when too much data and too few machines start to press it, remains
unknown.  We need a working model to start testing these questions.
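
Even a tiny model would let us ask your completeness question
directly, something like (again, invented names, illustration only):

    # Toy model; block and node names are invented for this sketch.
    def dataset_complete(mirrors, dead_nodes):
        """True if every block is still held by some surviving node."""
        dead = set(dead_nodes)
        return all(any(node not in dead for node in holders)
                   for holders in mirrors.values())

    mirrors = {
        "blk0": ["lds1", "lds3"],
        "blk1": ["lds2", "lds4"],
        "blk2": ["lds3", "lds4"],
    }
    print(dataset_complete(mirrors, ["lds3", "lds4"]))  # False: blk2 lost
    print(dataset_complete(mirrors, ["lds1", "lds2"]))  # True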


 
> Yeah.  I'd not have the entire dataset available on a single LDS
> UNLESS it is a Private_LDS (I think we need a proper term for this
> class of LDS to help with discussions - how about Private Data
> Server, PDS, or LDS/P?)

What do you mean by a private LDS?



