From: Hermanni Hyytiälä
Subject: [Gzz-commits] storm/doc/pegboard/attacking_gisp--hemppah peg.rst
Date: Wed, 11 Jun 2003 04:10:26 -0400

CVSROOT:        /cvsroot/storm
Module name:    storm
Branch:         
Changes by:     Hermanni Hyytiälä <address@hidden>      03/06/11 04:10:26

Modified files:
        doc/pegboard/attacking_gisp--hemppah: peg.rst 

Log message:
        updates

CVSWeb URLs:
http://savannah.gnu.org/cgi-bin/viewcvs/storm/storm/doc/pegboard/attacking_gisp--hemppah/peg.rst.diff?tr1=1.17&tr2=1.18&r1=text&r2=text

Patches:
Index: storm/doc/pegboard/attacking_gisp--hemppah/peg.rst
diff -u storm/doc/pegboard/attacking_gisp--hemppah/peg.rst:1.17 storm/doc/pegboard/attacking_gisp--hemppah/peg.rst:1.18
--- storm/doc/pegboard/attacking_gisp--hemppah/peg.rst:1.17     Tue Jun 10 09:10:33 2003
+++ storm/doc/pegboard/attacking_gisp--hemppah/peg.rst  Wed Jun 11 04:10:25 2003
@@ -4,8 +4,8 @@
 
 :Authors:  Hermanni Hyytiälä
 :Date-Created: 2003-06-05
-:Last-Modified: $Date: 2003/06/10 13:10:33 $
-:Revision: $Revision: 1.17 $
+:Last-Modified: $Date: 2003/06/11 08:10:25 $
+:Revision: $Revision: 1.18 $
 :Status:   Incomplete
 
 .. :Stakeholders:
@@ -77,39 +77,70 @@
 system while the test is running. 
 
 We expect that GISP's fault tolerance is at least at the same level as Chord_'s 
-fault tolerace since GISP's routing table is based on Chord's routing table:
+fault tolerance since GISP's routing table is based on Chord's routing table.
 
-.. place here some results from the Chord paper
+Chord's general properties:
+- O(log^2 n) messages are required for join/leave operations 
+- O(log n) lookup efficiency
+- Routing table maintains information about O(log n) peers
+- Requires active stabilization protocol to aggressively maintain
+  the routing tables of all peers
+- Has no specific mechanism to heal partitioned peer groups
+
+The assumption for the results above is that the system is "in the steady state",
+according to the authors.
+
+Note: The length of the path to resolve the query is O(log n) *with high 
+probability*.  
+
+Example of Chord's path lengths in a static network (from the figure):
+
+~10 peers: 2 (average)
+~20 peers: 2 (average)
+~100 peers: 3 (average)
+~1000 peers: 5 (average)
+~10000 peers: 6 (average)
+~16000 peers: 7 (average)
+
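As a quick sanity check, the averages above track the roughly 0.5 * log2(n)
average lookup path length reported in the Chord paper. A minimal sketch
comparing the two (the 0.5 constant comes from the Chord paper's analysis,
not from measurements in this test)::

    // Compare the figure's average path lengths against the ~0.5 * log2(n)
    // estimate from the Chord paper.
    public class ChordPathLengthCheck {
        public static void main(String[] args) {
            int[] peers = {10, 20, 100, 1000, 10000, 16000};
            int[] figureAvg = {2, 2, 3, 5, 6, 7};   // averages listed above
            for (int i = 0; i < peers.length; i++) {
                double estimate = 0.5 * (Math.log(peers[i]) / Math.log(2));
                System.out.printf("%6d peers: figure %d, estimate %.1f%n",
                                  peers[i], figureAvg[i], estimate);
            }
        }
    }
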
+Chord's fault tolerance properties include (from the figures):
+
+Simultaneous peer failures: 
+- Randomly selected fraction of nodes fail
+- No peers join or leave the system
+- Result: 20% of lookups fail when 20% of the peers have failed
+
+Lookups while peers join and leave the system: 
+- The fraction of lookups that fail is measured as a function of the rate
+  (over time) at which peers join and leave the system
+- Only failures caused by Chord state inconsistency are included, not
+  failures due to lost keys (wording taken directly from the figure text)
+- The authors
+- Queries are not retried
+- 500 peers
+- Result: 6.5% of lookups fail when the peer join/leave rate is 0.1 per
+  second (corresponding to a peer joining and leaving every 10 seconds
+  on average)
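
The simultaneous-failure figure matches the intuition that, without
replication, a lookup fails exactly when the peer responsible for the key
has failed, so the failed-lookup fraction should track the failed-peer
fraction. A minimal sketch of that reasoning (a toy model with uniform
key-to-peer assignment, not GISP or Chord code)::

    import java.util.Random;

    // Toy model: assign keys uniformly to peers, fail a fraction p of the
    // peers, and measure how many keys lost their responsible peer.
    public class FailedLookupFraction {
        public static void main(String[] args) {
            int peers = 1000, keys = 5000;
            double p = 0.20;                  // fraction of peers that fail
            Random rnd = new Random(42);

            boolean[] failed = new boolean[peers];
            int toFail = (int) (p * peers);
            while (toFail > 0) {
                int victim = rnd.nextInt(peers);
                if (!failed[victim]) { failed[victim] = true; toFail--; }
            }

            int lost = 0;
            for (int i = 0; i < keys; i++) {
                int owner = rnd.nextInt(peers);  // uniform key assignment
                if (failed[owner]) lost++;
            }
            System.out.printf("%.1f%% of lookups fail with %.0f%% of peers failed%n",
                              100.0 * lost / keys, 100 * p);
        }
    }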
 
-Thus, we expect that GISP is able to route some queries to their destination 
-peers eventually, altough the performance of a lookup is expected to decrease. Also, 
-we except that some of the queries are lost. With this test case, we wish 
-to get more information how big the lost rate is.
 
 Eventually, we will compare (and validate) our test results with Chord's 
 fault tolerance properties.
 
 Simulation Process: 
-- Create 90 normal peers in the network (ID is in the format "peer1 -peer90")
-- Create 10 "dumb" peers in the network (ID is in the format "dumb1-dumb10")
-- If necessary, the number of total peers and "dumb" peers can be increased/decreased
-  
-    .. list different settings...
-  
-- Create 5000 key/value items in the network (the format is "key1-5000", "value1-5000")
-- Perform 2500 queries randomly with *normal* peers (random peer selection (peer1-5000) 
-  and random query selection (key1-5000))
+- 9*10^k normal peers, 1*10^k "dumb" peers, where k = 1..3
+- n*10^k normal peers, d*10^k "dumb" peers, where k = 1..3, n = 1..9
+  and d = 1..9
+- For a normal peer the ID is in the format "peer1, peer2..." and for a "dumb"
+  peer the ID is in the format "dumb1, dumb2..."
+- Create 5000 key/value items in the network (the format is "key1, key2...", 
+  "value1, value2...")
+- Perform 2500 queries randomly with *normal* peers (random peer selection  
+  and random query selection)
 - Try to use the same code as in GISP's implementation/simulation base
 - For "dumb" peers we have to create our own class (extending the 
   GISPXML class) which has "dumb" methods for query forwarding and 
   processing (see the class sketch after this list)
 - The test case is performed in a loop (e.g., while(true)); a simulation
   skeleton is sketched after this list
 - Update all peers' routing information on every loop pass
-
-.. Then you need to run until you've reached a steady state, and have an 
estimate
-   of how long that will take beforehand, and also about what the steady
-   state will be like.
-   
 - "Distributed" peermap is based on Java's HashMap data structure 
(synchronized)
 - Use org.apache.log4j package for logging information
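
A minimal sketch of the "dumb" peer class mentioned above; GISP's actual
GISPXML method names and signatures are not spelled out in this PEG, so the
superclass and the two methods below are stand-ins::

    // Stand-in for GISP's GISPXML peer class; in the real test case DumbPeer
    // would extend GISPXML and override its query handling methods instead.
    abstract class PeerBase {
        abstract void processQuery(String key);
        abstract void forwardQuery(String key);
    }

    // A "dumb" peer that neither processes nor forwards the queries it receives.
    class DumbPeer extends PeerBase {
        void processQuery(String key) { /* silently drop the query */ }
        void forwardQuery(String key) { /* never pass the query on */ }
    }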
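
A skeleton of the loop, "distributed" peermap and logging described above
(a sketch only: the peer objects and the query call are placeholders for
GISP's own implementation/simulation base)::

    import java.util.ArrayList;
    import java.util.Collections;
    import java.util.HashMap;
    import java.util.List;
    import java.util.Map;
    import java.util.Random;
    import org.apache.log4j.Logger;

    public class GispSimulationSkeleton {
        private static final Logger log =
                Logger.getLogger(GispSimulationSkeleton.class);

        // "Distributed" peermap backed by a synchronized Java HashMap.
        private static final Map<String, String> peermap =
                Collections.synchronizedMap(new HashMap<String, String>());

        public static void main(String[] args) {
            int k = 1;                               // scale factor, k = 1..3
            int normal = 9 * (int) Math.pow(10, k);  // e.g. 90 normal peers
            int dumb = (int) Math.pow(10, k);        // e.g. 10 "dumb" peers

            List<String> normalPeers = new ArrayList<String>();
            for (int i = 1; i <= normal; i++) normalPeers.add("peer" + i);
            List<String> dumbPeers = new ArrayList<String>();
            for (int i = 1; i <= dumb; i++) dumbPeers.add("dumb" + i);
            log.info(normalPeers.size() + " normal and "
                     + dumbPeers.size() + " dumb peers created");

            // Insert the 5000 key/value items.
            for (int i = 1; i <= 5000; i++) peermap.put("key" + i, "value" + i);

            Random rnd = new Random();
            int queriesLeft = 2500;
            while (queriesLeft > 0) {                // the test-case loop
                // Placeholder: update every peer's routing information on
                // each loop pass (done through GISP in the real test).
                String peer = normalPeers.get(rnd.nextInt(normalPeers.size()));
                String key = "key" + (1 + rnd.nextInt(5000));
                String value = peermap.get(key);     // stands in for a GISP query
                log.info(peer + " queried " + key + " -> " + value);
                queriesLeft--;
            }
        }
    }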
 



