From: Bill Lance
Subject: Re: [Vrs-development] Cluster Management Message (was CVS)
Date: Wed, 20 Feb 2002 08:31:06 -0800 (PST)

--- Chris Smith <address@hidden> wrote:
> On Tuesday 19 February 2002 16:32, Bill Lance wrote:
> 

> 
> Okay.  VRS clusters cannot function like this if you use
> GWDomains.  The Cluster Management Messages travel between
> the LDSs that are clustered, thus holding the cluster
> together.  Now, each LDS is actually a GWDomain, and the
> messages that pass between them are GWService calls.  The
> fact that the service call has had to leave one GWDomain,
> travel across the network and invoke a GWService in another
> GWDomain is hidden from the application layer (the LDS
> application).  The 'remote' GWDomain could be running on the
> same server for all the application cares.  As long as its
> GWService call gets satisfied, it doesn't care what happened
> along the way.  View GWService requests as traditional
> function calls.  When you call a 'function' it might actually
> be a function that exists in a different application on a
> different server on a different network.  Doesn't matter.
> That's what Goldwater does - helps you build distributed
> scalable applications.
> 
> Make sense?
> 

Yup, it does.  
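
Just to check my reading of it, here's roughly how I picture a
GWService call from the application's side (a toy sketch with
invented names, not the actual Goldwater API):

# Hypothetical sketch of a location-transparent service call.
# The application asks for a service by name; whether the handler
# lives in the local GWDomain or a remote one is resolved underneath.

def call_service(name, payload, registry):
    """Look up a service handler and invoke it; the caller never
    sees whether the handler is local or remote."""
    handler = registry[name]      # could resolve to a local function
    return handler(payload)       # ...or to a proxy that crosses the network

# A local handler and a stand-in for a remote proxy look identical
# to the caller:
registry = {
    "store_block": lambda data: ("OK", len(data)),          # local
    "fetch_block": lambda key: ("OK", f"block-for-{key}"),  # imagine a network hop here
}

print(call_service("store_block", b"some bytes", registry))
print(call_service("fetch_block", "abc123", registry))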

> So, the Cluster Management Messages are just GWService
> calls between LDSs.  You would never see this kind of
> message coming in through the LDS client access port
> (be it port 80 or whatever).

If they traverse a network, doesn't that by definition
mean that they travel from socket to socket?  And that
means some port number.

> So I would say that "a request to any of these ports
> results in the same response" is false.  Each port supports
> different traffic.
> 

Is that a GW characteristic, that services are
preassigned a specific port?  That would mess us up a
bit in the Cluster idea.  My thought was that all
Cluster Management Messages travel over the same port
and get routed to the right LDS module by the Port
Manager, perhaps in the form of Phoenix.  That manager
would listen on port 80 for service traffic and on
port:xx for CMM stuff.
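
Roughly what I have in mind for that Port Manager, sketched below
(the port numbers, handler names, and the use of Python's
socketserver are all placeholders for illustration, nothing decided):

import socketserver
import threading

# Hypothetical sketch: one listener per port, each handing its
# traffic to a different LDS module.

class ServiceHandler(socketserver.StreamRequestHandler):
    """Client-facing traffic (the port-80 side)."""
    def handle(self):
        request = self.rfile.readline()
        self.wfile.write(b"service response to: " + request)

class CMMHandler(socketserver.StreamRequestHandler):
    """Cluster Management Messages between LDSs (the port:xx side)."""
    def handle(self):
        message = self.rfile.readline()
        self.wfile.write(b"cluster image updated with: " + message)

def listen(port, handler):
    server = socketserver.ThreadingTCPServer(("", port), handler)
    threading.Thread(target=server.serve_forever, daemon=True).start()
    return server

# The Port Manager just wires ports to modules:
listen(8080, ServiceHandler)   # stand-in for port 80 service traffic
listen(9090, CMMHandler)       # stand-in for the CMM port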

> (That isn't to say that we can't piggy-back GWService
> request messages over HTTP and thus through the client
> access node...... but let's play with those toys later!!)
> 
> > I'm suggesting a little different approach with the
> > Cluster Image Object.  Instead of depending on a
> > persistent process for its integrity, a VRS cluster
> > depends on a persistent and common set of data.
> 
> Right - so a VRS consists of a distributed, segmented
> and mirrored data space, with equally distributed
> redundant access servers.  The availability of one or
> more access servers means that you can access the data space.
> 

Well, let me clarify a point.  I have talked about two
different sets of data.  The data placed in the
Repository is "segmented, distributed and mirrored".
However, the Cluster Image Data is NOT segmented.  It
IS distributed and mirrored, but not to disk, only to
in-memory tables in all LDSs.
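
In other words, something like this toy sketch (the field names
are invented, just to show the shape of it):

import copy

# Hypothetical sketch: the Cluster Image is a plain in-memory table,
# held whole (not segmented) and replicated to every LDS.

cluster_image = {
    "members":  ["lds-1", "lds-2", "lds-3"],   # LDSs currently in the cluster
    "services": {"echo": "available"},         # services posted to the Cluster Registry
    "version":  7,                             # bumped on every change, for syncing
}

# Each LDS keeps its own complete copy; updates are broadcast,
# not paged from disk.
lds_replicas = {name: copy.deepcopy(cluster_image)
                for name in cluster_image["members"]}

def broadcast_update(key, value):
    """Apply a change to every in-memory replica."""
    for replica in lds_replicas.values():
        replica[key] = value
        replica["version"] += 1

broadcast_update("services", {"echo": "available", "store": "available"})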


> >  When a process accesses data, it is confident
> > that the data represents the current state of the
> > cluster.
> 
> Nice ideal.  But in practice one has to accept that there
> will be some synchronisation lag.  How you manage updates
> to a resource within the data set is tricky.  If you can
> guarantee integrity, it'll be fine.
> 
> For example, an LDS might be recovering a resource, and
> one of the segments of data it requires is on another
> LDS.  However, this LDS has received an update for this
> resource (ahead of the LDS requesting it) and so the
> segment of data it has DOES NOT belong to the data chain
> the requesting LDS is trying to construct.  So the 'more
> up to date' LDS must keep hold of the previous cluster
> until all LDSs have been updated, then the cluster can
> be expired.  Phew.  Nasty.  And there needs to be locking
> too (one part of the LDS is serving a webService request
> whilst another part of the LDS is updating that very data
> chain required in satisfying the webService request).
> This is getting bonkers!  I like it!  :o)
> 

You're right on here; the classic issue would be
synching the data adequately.  Of course, we are all
familiar with locks, and the feared 'deadly-embrace'
they imply.
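
To make your scenario concrete, here's a toy sketch of the 'hold
the old version until every LDS has caught up' idea (all names
invented, and it ignores the locking problem entirely):

# Hypothetical sketch: an LDS keeps old versions of a segment alive
# until every member has acknowledged the newer one, so a recovering
# LDS can still assemble the chain it started from.

class SegmentStore:
    def __init__(self, members):
        self.members = set(members)
        self.versions = {}     # version number -> segment data
        self.acked = {}        # version number -> set of LDSs that have it

    def put(self, version, data):
        self.versions[version] = data
        self.acked[version] = set()

    def ack(self, version, lds):
        """Record that an LDS has received this version, then expire
        any older version once everyone has moved past it."""
        self.acked[version].add(lds)
        if self.acked[version] == self.members:
            for old in [v for v in self.versions if v < version]:
                del self.versions[old]
                del self.acked[old]

store = SegmentStore(["lds-1", "lds-2", "lds-3"])
store.put(1, "old segment")
store.put(2, "new segment")
for lds in ["lds-1", "lds-2", "lds-3"]:
    store.ack(2, lds)
print(store.versions)   # only version 2 remains once all have acked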

I had mentioned some time ago that I often look to
biology for inspiration.  There are two fundamentally
different ways that a biological organism coordinates
itself.  The one most familiar to folks, and closest to
computer science, is the nervous system.  The other is
chemical, and is far more fundamental to a cell's and
organism's life.  There are organisms without nervous
systems of any kind.  This form uses the movement of
molecules as messengers.

What distinguishes this form from the nervous system
is that it is both 'loosely coupled' and highly
specific.  Using an undirected messenger, a chemical
released into some loose transport medium (i.e. the
blood), makes it 'loosely coupled'.  But the message
carried is very specific once the messenger arrives.

This is the opposite of the nervous system, where the
coupling is very tight, a one-to-one wire, but the
message is very simple, like "Zap.  You're it".

How might the 'loosely coupled, specific message' idea
translate to software design?  Damn good question.

An example might be the Pointer Table in the
Repository that's posted already.  At first glance, it
seems to have a very rigidly defined structure.  But
it's awfully damned casual in its work.  The list of
Mirror Block addresses can, in fact, be very fuzzy.
The stack of addresses can be damaged, and it will
still work.  Little homeostatic feedback loops, like
the CRC check on a data block at the Cluster Block
level, can catch bad blocks or misassigned blocks and
fix them.

The connections between the Cluster Block and the
Mirror Blocks are loosely coupled.  But the CRC check
makes the data of the message extremely specific.
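
Something along these lines, say (a toy sketch; the block layout
and names are invented, and zlib.crc32 just stands in for whatever
checksum the Repository actually uses):

import zlib

# Hypothetical sketch: a Cluster Block holds a list of Mirror Block
# addresses plus the expected CRC.  The list can be stale or damaged;
# the CRC check is what makes a returned block trustworthy.

mirror_blocks = {
    "host-a:17": b"the real data",
    "host-b:09": b"corrupted copy",   # bad mirror
    "host-c:42": b"the real data",
}

cluster_block = {
    "mirrors": ["host-b:09", "host-a:17", "host-c:42"],  # order and accuracy are 'fuzzy'
    "crc": zlib.crc32(b"the real data"),
}

def fetch(cluster_block):
    """Try each mirror address; accept the first block whose CRC matches,
    and prune addresses that fail the check (the little feedback loop)."""
    for addr in list(cluster_block["mirrors"]):
        data = mirror_blocks.get(addr)
        if data is not None and zlib.crc32(data) == cluster_block["crc"]:
            return data
        cluster_block["mirrors"].remove(addr)   # drop the bad or missing address
    raise RuntimeError("no valid mirror found")

print(fetch(cluster_block))
print(cluster_block["mirrors"])   # the bad mirror has been pruned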

Now, theoretically, this should make a far more robust
system.  I guess the question is just when does fuzzy
become mush.  I suppose we will just have to build it
and test it to answer those questions.



> 
> Ah.  But the VRS is just a distributed data set with
> several access servers.  Any one of these access servers
> is able to satisfy requests for data within the dataset.
> So clients need to know about the existence of these
> LDSs.  I see it that a lookup for a single webservice
> would resolve to one of these LDSs.

A client making a Net service request to any level I or
II LDS should see exactly the same list of available
services, namely those posted to the Cluster Registry.
Now, a particular LDS may end up specializing in a
popular service, but only because it has already loaded
the necessary dataset.  It would have nothing to do with
who owns the LDS host computer.
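
As a toy sketch of that (invented names; the point is only that the
service list comes from the shared Registry, not from any one LDS):

# Hypothetical sketch: every LDS answers a service listing from the
# same shared Cluster Registry, so clients get an identical list no
# matter which LDS they happen to reach.

cluster_registry = {"echo", "store", "fetch"}   # services posted by the cluster

class LDS:
    def __init__(self, name, registry):
        self.name = name
        self.registry = registry
        self.loaded = set()          # datasets this LDS has cached locally

    def list_services(self):
        """Same answer from every LDS: the registry, not local state."""
        return sorted(self.registry)

    def handle(self, service):
        self.loaded.add(service)     # 'specialization' is just caching
        return f"{self.name} handled {service}"

lds_nodes = [LDS(f"lds-{i}", cluster_registry) for i in range(3)]
assert all(node.list_services() == lds_nodes[0].list_services()
           for node in lds_nodes)
print(lds_nodes[1].handle("echo"))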



> 
> As LDSs join the cluster, the cluster management sorts
> out the cluster, but the new LDS needs to be added to
> the UDDI thingy.
> 

There shouldn't be any relationship between which LDSs
are online and which Net services the Cluster offers.


