[Vrs-development] The Design Beginning: The VRS Cluster

From: Chris Smith
Subject: [Vrs-development] The Design Beginning: The VRS Cluster
Date: Tue, 2 Apr 2002 17:48:01 +0100

Okay - I'm going to jam down my thoughts based on the
design proposal I posted last week or so.

Please comment on the following and together we'll flesh
this out to something of a complete design decision:

> 2) Clustering of LDS's into a VRS
>    This topic to specifically cover:
>    a. Organisation.
>       a.1. VRS's as an unbounded set of LDS's or a
>            finite set of LDS's.  The management problems
>            associated with these.
>       a.2. Can VRS's interlink?
>       a.3. Advantages/Disadvantages of clusters of
>            'VRSlets'.
>    b. The scope of data/services within a VRS.  Is data
>        shared across VRS boundaries (may be covered by
>        a.2/a.3)
>    c. Publicly accessible LDS's.  Allowing arbitrary
>       clients to access services within the VRS.
>       ( spans topic 3 ).
>       c.1. VRS's having multiple public LDS's.

This will do to be getting on with:

The VRS is a networked cluster of trusted LDS's.
Internally each LDS has a table containing details of the
other LDS's in the cluster.  This table is FINITE in size,
and is set at configuration time for a given LDS (see Qa).
There is no reason why each LDS cannot have a different
table size configuration based on resource availability
(i.e. each LDS may be individually tuned).
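A minimal sketch of that per-LDS table (Python, all names hypothetical): the capacity is fixed at configuration time and may differ from one LDS to the next.

```python
class DomainTable:
    """Fixed-capacity table of peer LDS entries, sized at configuration time."""

    def __init__(self, capacity):
        self.capacity = capacity   # set once per-LDS; may differ per host
        self.entries = {}          # peer name -> (host, port)

    def add(self, name, host, port):
        """Register a peer LDS; refuse once the configured capacity is reached."""
        if name not in self.entries and len(self.entries) >= self.capacity:
            raise RuntimeError("domain table full (capacity=%d)" % self.capacity)
        self.entries[name] = (host, port)

    def lookup(self, name):
        return self.entries.get(name)

# A resource-rich LDS may be tuned with a larger table than a small one.
big = DomainTable(capacity=100)
small = DomainTable(capacity=8)
```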

An 'Every-LDS is connected to every-other LDS' topology
would allow very fast asynchronous propagation of messages.
However, this pushes up the resource requirements
(domain table size, connected sockets etc).
What about other topologies, like loosely linked clusters,
where only a third (a random figure!) of the LDS's are
directly connected to other LDS's?  So long as there is
a route from one LDS to another via two or more other
LDS's, then resilience is maintained.
This will reduce the resource overhead and reduce the
impact of having a finite domain table size.
I'm not sure if we end up with a cyclic graph here tho'.
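The resilience condition above ('a route from one LDS to another via two or more other LDS's') is plain graph reachability, so it can be checked cheaply.  A sketch, with a hypothetical adjacency list:

```python
from collections import deque

def reachable(adjacency, start):
    """Return the set of LDS's reachable from `start` via direct links (BFS)."""
    seen = {start}
    queue = deque([start])
    while queue:
        node = queue.popleft()
        for peer in adjacency.get(node, ()):
            if peer not in seen:
                seen.add(peer)
                queue.append(peer)
    return seen

# Loosely linked cluster: no LDS is connected to every other,
# yet every LDS can still reach every other via intermediates.
cluster = {
    "lds1": ["lds2"],
    "lds2": ["lds1", "lds3"],
    "lds3": ["lds2", "lds4"],
    "lds4": ["lds3"],
}
assert reachable(cluster, "lds1") == {"lds1", "lds2", "lds3", "lds4"}
```

(This particular chain happens to be acyclic; adding an lds4 to lds1 link would make the graph cyclic without hurting reachability, which speaks to the cyclic-graph worry.)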

I don't think distinct VRS's should interlink.  That is,
given a world of multiple VRS's, a service offered by an
LDS within a particular VRS cluster cannot span a VRS/VRS
boundary.  This is not to say that an LDS cannot be part
of more than one VRS (see Qb).

(Aside: services within a VRS may call other services.
 In this guise, the service doing the call is acting as
 a client and thus will go through the whole Service
 Discovery procedure.  This may result in a call to
 a service that is in a totally different VRS.  The
 current thinking of VRS's accepts this behaviour).
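A sketch of that aside, with a hypothetical `discover()` helper standing in for a full Service Discovery request: the calling service reuses the client-side discovery path, so the target may well resolve to an LDS in a different VRS.

```python
# Hypothetical registry mapping service name -> endpoint; in the real
# system this lookup would be a full Service Discovery request.
REGISTRY = {
    "quote": ("lds7.vrs-a.example", 9001),   # same VRS as the caller
    "audit": ("lds2.vrs-b.example", 9001),   # a different VRS entirely
}

def discover(service):
    """Resolve a service name exactly as an external client would."""
    return REGISTRY[service]

def handle_request(payload):
    """A service that, mid-request, acts as a client of another service."""
    host, port = discover("audit")           # may land in a different VRS
    return "forwarded %r to %s:%d" % (payload, host, port)
```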

Services available within the VRS may be tagged as public.
These services are accessible to members outside the local
VRS through a suitable network server.

An LDS may offer itself as a 'public' network server by
allowing requests from external hosts.  If no services
within the VRS are tagged as public, then no services
will be visible through the 'public' network servers.
Multiple LDS's may declare themselves as public (see Qc).
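A sketch of that visibility rule (names hypothetical): a public LDS exposes only the services tagged public, and exposes nothing at all if no service carries the tag.

```python
def visible_services(services, lds_is_public):
    """Services an external client can see through a given LDS.

    `services` maps service name -> {"public": bool}.  A non-public LDS
    refuses external requests outright; a public LDS shows only the
    services tagged public.
    """
    if not lds_is_public:
        return []
    return sorted(name for name, meta in services.items() if meta.get("public"))

catalogue = {
    "quote":  {"public": True},
    "settle": {"public": False},   # internal to the VRS
    "audit":  {"public": True},
}
assert visible_services(catalogue, lds_is_public=True) == ["audit", "quote"]
assert visible_services(catalogue, lds_is_public=False) == []
```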

Qa) Is having a FINITE size table a bad thing?  This is a
   'feature' inherited from Goldwater, so I'd like to keep
   it because it works (as Goldwater will be viewing each
   LDS as a Domain and needs to keep track of who is
   available and who is not).  If a finite table size is
   bad (i.e. cannot be worked around) then we'll have to
   throw away Goldwater Domains for this project  :o(
   ... This is why I've held off completing the GW Domain
   rewrite ...
   BTW, the memory requirements for GW Domain tables are
   low - a couple of hundred bytes per domain.  So this
   finite number could be big enough so as not to be
   restrictive in any way.... say 20-30K for 100 or so
   Domains known about by a single LDS.
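   That estimate is easy to sanity-check (the per-domain figure is
   the rough one given above, taken as ~250 bytes):

```python
bytes_per_domain = 250            # "a couple of hundred bytes per domain"
domains = 100
total_kb = bytes_per_domain * domains / 1024.0
assert 20 <= total_kb <= 30       # matches the 20-30K figure above
```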

Qb) Can an LDS join more than one VRS?  I think we should
   allow this, but how do we partition the multiple
   personalities an LDS may have?
   This is probably not answerable now - food for thought.

Qc) Given that multiple LDS's may be 'public', any service
   discovery scheme needs to offer the IP address and
   access type (SOAP/XML-RPC/Jabber message type) of all
   public LDS's (just like DNS).
   However, I'd like to see some kind of feedback from the
   VRS to the SDS (Service Discovery Server(s)) so that
   an LDS that repeatedly fails, or is taken out of the
   VRS, is no longer offered in response to a Service
   Discovery Request.  It also means that LDS's added
   to the VRS that are made public, or start off as
   public will be automatically visible in any SDRs.
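   That feedback loop could look like this sketch (all names
   hypothetical): the SDS withdraws an LDS after repeated failures and
   picks up newly public LDS's as they announce themselves.

```python
class SDS:
    """Toy Service Discovery Server fed by VRS feedback messages."""

    MAX_FAILURES = 3   # assumed threshold; not specified in the design

    def __init__(self):
        self.public = {}      # lds name -> (ip, access_type)
        self.failures = {}    # lds name -> consecutive failure count

    def announce(self, name, ip, access_type):
        """A newly public LDS becomes visible in SDR responses."""
        self.public[name] = (ip, access_type)
        self.failures[name] = 0

    def report_failure(self, name):
        """Repeatedly failing LDS's are withdrawn from SDR responses."""
        self.failures[name] = self.failures.get(name, 0) + 1
        if self.failures[name] >= self.MAX_FAILURES:
            self.public.pop(name, None)

    def respond_to_sdr(self):
        """Answer a Service Discovery Request with all live public LDS's."""
        return dict(self.public)

sds = SDS()
sds.announce("lds1", "192.0.2.1", "SOAP")
sds.announce("lds2", "192.0.2.2", "XML-RPC")
for _ in range(3):
    sds.report_failure("lds1")
assert "lds1" not in sds.respond_to_sdr()
assert "lds2" in sds.respond_to_sdr()
```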

   If a standard SDS protocol is used, then this could be
   achieved by a stand-alone daemon which receives
   special VRS broadcast messages and updates the SDS
   data on-the-fly.

   Service Discovery should also do some sort of load
   balancing - which could be as basic as round robin,
   or fed from VRS feedback.
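   The basic option is a one-liner; a feedback-weighted version would
   replace the rotation with a scoring step.  A sketch:

```python
import itertools

def round_robin(lds_list):
    """Cycle through the public LDS's, one per Service Discovery answer."""
    return itertools.cycle(lds_list)

picker = round_robin(["lds1", "lds2", "lds3"])
picks = [next(picker) for _ in range(5)]
assert picks == ["lds1", "lds2", "lds3", "lds1", "lds2"]
```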


Chris Smith
  Technical Architect - netFluid Technology Limited.
  "Internet Technologies, Distributed Systems and Tuxedo Consultancy"
  E: address@hidden  W: http://www.nfluid.co.uk
