gzz-commits
From: Hermanni Hyytiälä
Subject: [Gzz-commits] gzz/Documentation/misc/hemppah-progradu mastert...
Date: Tue, 10 Dec 2002 08:05:09 -0500

CVSROOT:        /cvsroot/gzz
Module name:    gzz
Changes by:     Hermanni Hyytiälä <address@hidden>      02/12/10 08:05:09

Modified files:
        Documentation/misc/hemppah-progradu: masterthesis.tex 

Log message:
        more DHT updates

CVSWeb URLs:
http://savannah.gnu.org/cgi-bin/viewcvs/gzz/gzz/Documentation/misc/hemppah-progradu/masterthesis.tex.diff?tr1=1.22&tr2=1.23&r1=text&r2=text

Patches:
Index: gzz/Documentation/misc/hemppah-progradu/masterthesis.tex
diff -u gzz/Documentation/misc/hemppah-progradu/masterthesis.tex:1.22 gzz/Documentation/misc/hemppah-progradu/masterthesis.tex:1.23
--- gzz/Documentation/misc/hemppah-progradu/masterthesis.tex:1.22       Tue Dec 10 07:52:20 2002
+++ gzz/Documentation/misc/hemppah-progradu/masterthesis.tex    Tue Dec 10 08:05:08 2002
@@ -182,6 +182,15 @@
 
 \subsection{Distributed hash table}
 
+In the Distributed Hash Table (DHT) approach, each value is associated with a unique key (e.g., a SHA-1 hash \cite{fips-sha-1}) in an $m$-bit virtual address space. The virtual address space is partitioned into sections, which form adjoining regions of this address space. In general, either a single computer or multiple computers are assigned to each section of the virtual address space. Each computer is assigned one or more sections and maintains copies of those key-value bindings whose keys lie within its assigned sections. This means, in general, that the computer hosting a given key-value pair is not owned by the user who decided to provide the resource to the network. Moreover, the allocation of the address space and the assignment of computers to sections are dynamic: every time a node joins or leaves the network, the address space is reallocated.
+
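A minimal sketch of this partitioning idea (illustrative Python, not from the thesis; the `ToyDHT` class and the choice of node ids are assumptions): keys are hashed into the $m$-bit space, and each node stores the bindings whose keys fall in its section.

```python
import hashlib
from bisect import bisect_right

M = 160  # m-bit address space; 160 matches SHA-1's output size

def key_of(value: bytes) -> int:
    """Associate a value with a unique key in the 2^M address space via SHA-1."""
    return int(hashlib.sha1(value).hexdigest(), 16)

class ToyDHT:
    """Each node owns the section of the address space starting at its own id."""
    def __init__(self, node_ids):
        self.nodes = sorted(node_ids)            # section boundaries
        self.store = {n: {} for n in self.nodes}

    def responsible_node(self, key: int) -> int:
        # Find the section (half-open interval) containing the key; keys
        # below the first boundary wrap around to the last node's section.
        i = bisect_right(self.nodes, key) - 1
        return self.nodes[i] if i >= 0 else self.nodes[-1]

    def put(self, value: bytes):
        k = key_of(value)
        self.store[self.responsible_node(k)][k] = value

dht = ToyDHT([2**M // 4, 2**M // 2, 3 * 2**M // 4])
dht.put(b"some resource")
```

A node join or leave would change `self.nodes`, which is why the key-to-node mapping must be recomputed dynamically, as noted above.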
 \subsection{Hybrid architecture}
 
 \subsection{Tree based architecture}
@@ -204,23 +213,13 @@
 
 
 
-\chapter{Summary of existing Peer-to-Peer file sharing systems}
-
-In this section, we review briefly existing algorithms used in existing Peer-to-Peer systems.
-
-\section{Distributed Hash Tables}
-
-In DHT approach, each value is associated with a unique key (e.g. SHA-1 \cite{fips-sha-1})in an m-bit virtual address space. The virtual address space is partitioned into sections, which form adjoining regions of this address space. In general, either a single computer or multiple computers is assigned to each section of the virtual address space. Each computer is assigned one or more sections, and they maintains copies of those key-value bindings whose key values lie within its assigned cell. This means, in general, that computer that hosts corresponding key-value pair, is not owned by the user that decided to provide the resource to the netowork. Moreover, the allocation of the address space and the assigment of computers to sections is dynamic. Therefore, everytime when a node joins or leaves the network, the address space is reallocated.
+\chapter{Summary of existing Peer-to-Peer systems}
 
+This section briefly reviews algorithms used in existing Peer-to-Peer systems. It is not meant to be an exhaustive survey of Peer-to-Peer systems; instead, it introduces a few systems from each architectural perspective.
 
-\subsection{Plaxton Algorithm}
+\section{Plaxton}
 Plaxton \cite{plaxton97accessingnearby} developed the first routing algorithm that can be used with DHTs. The algorithm is not designed for use in dynamic distributed systems, because the Plaxton algorithm assumes a static node population. However, the algorithm provides very efficient routing for search
@@ -231,25 +230,25 @@
 Plaxton's algorithm routes in $O(\log n)$ hops and requires a routing table size of $O(\log n)$.
   
 
-\subsection{Tapestry}
+\section{Tapestry}
 Tapestry \cite{zhao01tapestry} is an adaptation of Plaxton's algorithm \cite{plaxton97accessingnearby}. Tapestry routes queries with path lengths of $O(\log n)$, and each node, in a system with $n$ nodes, maintains a routing table of size $O(\log n)$. When a node leaves or joins the network, $O(\log^2 n)$ messages are required.
 
-\subsection{Pastry}
+\section{Pastry}
 In Pastry \cite{rowston01pastry}, the key space is treated as a virtual circle. Each node is responsible for the keys that are numerically closest to it. The neighbors consist of a leaf set, which is the set of the $|L|$ closest nodes. In addition, Pastry maintains another set of neighbors spread out randomly in the key space for more efficient routing. As in the Plaxton approach, Pastry forwards a query to the neighbor that shares the longest prefix with the key. Pastry routes within a path length of $O(\log n)$, each node has $O(\log n)$ neighbors, and the departure or joining of a node requires $O(\log^2 n)$ messages.
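The prefix-based forwarding step can be sketched as follows (a toy illustration, not Pastry's actual implementation; the function names and hex-digit ids are assumptions): at each hop, the query moves to the known node whose id shares the longest prefix with the key, falling back to numeric closeness on ties.

```python
def shared_prefix_len(a: str, b: str) -> int:
    """Length of the common leading digit string of two ids."""
    n = 0
    for x, y in zip(a, b):
        if x != y:
            break
        n += 1
    return n

def next_hop(key: str, neighbors: list[str]) -> str:
    # Prefer a longer shared prefix; break ties by numeric closeness,
    # mirroring the leaf-set fallback described above.
    return max(neighbors,
               key=lambda nid: (shared_prefix_len(key, nid),
                                -abs(int(nid, 16) - int(key, 16))))

hop = next_hop("37a0", ["9000", "3000", "3790"])
# "3790" shares the prefix "37" with the key, so it is chosen
```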
 
-\subsection{CAN}
+\section{CAN}
 In the CAN model \cite{ratnasamy01can}, nodes are mapped into a virtual $d$-dimensional coordinate key space. Each node is associated with a hypercubal block of this key space, and every block keeps information on its immediate hypercubal neighbors. In CAN, nodes have $O(d)$ neighbors and expected path lengths are $O(d n^{1/d})$. Node insertion or deletion affects $O(d)$ existing nodes. Setting $d = \log_2(n)/2$, CAN provides scalability similar to the Plaxton approach.
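To make the $O(d\,n^{1/d})$ path length concrete, here is a toy sketch (not CAN's real implementation; the even grid of zones and the function names are assumptions): a key is hashed to a point in the $d$-dimensional unit cube, and greedy routing corrects one coordinate at a time across a torus of zones.

```python
import hashlib

def key_point(value: bytes, d: int) -> tuple:
    """Hash a value to a point in the d-dimensional unit cube."""
    h = hashlib.sha1(value).digest()
    return tuple(h[i] / 256 for i in range(d))

def grid_hops(src, dst, side: int) -> int:
    """Hops of greedy axis-by-axis routing on a side^d torus of zones."""
    hops = 0
    for a, b in zip(src, dst):
        za, zb = int(a * side), int(b * side)
        delta = abs(za - zb)
        hops += min(delta, side - delta)  # the space wraps in each dimension
    return hops
```

With $n$ zones arranged as a grid, each dimension has $n^{1/d}$ zones per side, so the per-dimension hop count above is $O(n^{1/d})$ and the total is $O(d\,n^{1/d})$.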
 
-\subsection{Chord}
+\section{Chord}
 Chord \cite{stoica01chord} uses a virtual circle as the key space. Like Pastry, Chord treats a node's neighbors as leaf sets. However, in Chord there are two sets of neighbors: each node has a successor list of $k$ nodes that immediately follow it in the key space. For better efficiency, each node also has a finger list of $O(\log n)$ nodes placed around the key space.
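A minimal sketch of the finger list (illustrative Python, not from the thesis; the toy $2^8$ circle and helper names are assumptions): finger $i$ of node $n$ is the successor of $n + 2^i$, which spaces the $O(\log n)$ neighbors around the circle.

```python
from bisect import bisect_left

M = 8  # toy 2^8 key circle

def successor(nodes, key):
    """First node clockwise from the key on the circle (nodes sorted)."""
    i = bisect_left(nodes, key % 2**M)
    return nodes[i % len(nodes)]  # wraps past the largest id to the smallest

def finger_list(nodes, n):
    """Finger i points to the successor of n + 2^i, for i = 0 .. M-1."""
    return [successor(nodes, n + 2**i) for i in range(M)]

nodes = sorted([5, 60, 120, 200])
fingers = finger_list(nodes, 5)
```

Because finger targets double in distance, a lookup can always jump at least halfway to the key, which is what gives the $O(\log n)$ hop bound.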
@@ -257,7 +256,7 @@
 hops. Additionally, in Chord a node join or leave requires $O(\log^2 n)$ messages.
  
 
-\subsection{Kademlia}
+\section{Kademlia}
 
 Kademlia \cite{maymounkov02kademlia} is based on an XOR metric topology. In this approach, every query (message) exchanged conveys useful contact information. Furthermore, Kademlia uses this information to send parallel query messages. XOR metrics are used to calculate
@@ -266,7 +265,7 @@
 efficiently than other DHT approaches. For a system with $n$ nodes, Kademlia's algorithm routes in $O(\log n)$ hops and requires a routing table size of $O(\log n)$.
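The XOR metric itself is simple to sketch (toy integer ids rather than Kademlia's 160-bit ones; the helper names are assumptions): the distance between two ids is their bitwise XOR, so every matching high-order bit shrinks the remaining distance.

```python
def xor_distance(a: int, b: int) -> int:
    """Kademlia-style distance: the bitwise XOR of two node/key ids."""
    return a ^ b

def closest(nodes, target):
    """Pick the known node nearest to the target under the XOR metric."""
    return min(nodes, key=lambda n: xor_distance(n, target))
```

The metric is symmetric (`d(a, b) == d(b, a)`) and satisfies the triangle inequality, which lets nodes learn useful contacts from any query they receive, as described above.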
 
-\subsection{Coral}
+\section{Coral}
 
 Coral [NOTYETPUBLISHED] is based on a new abstraction called a distributed sloppy hash table (DSHT) and is layered on existing lookup systems such as Chord, CAN, Kademlia, Pastry, and Tapestry. In contrast to the original DHTs, Coral provides a lookup which


