[Gzz-commits] gzz/Documentation/misc/hemppah-progradu prograd...


From: Hermanni Hyytiälä
Subject: [Gzz-commits] gzz/Documentation/misc/hemppah-progradu prograd...
Date: Mon, 16 Dec 2002 08:53:20 -0500

CVSROOT:        /cvsroot/gzz
Module name:    gzz
Changes by:     Hermanni Hyytiälä <address@hidden>      02/12/16 08:53:20

Modified files:
        Documentation/misc/hemppah-progradu: progradu.bib 
                                             research_problems 

Log message:
        Some (basic) open questions for d), e), f), g), and h). Little fixes.

CVSWeb URLs:
http://savannah.gnu.org/cgi-bin/viewcvs/gzz/gzz/Documentation/misc/hemppah-progradu/progradu.bib.diff?tr1=1.30&tr2=1.31&r1=text&r2=text
http://savannah.gnu.org/cgi-bin/viewcvs/gzz/gzz/Documentation/misc/hemppah-progradu/research_problems.diff?tr1=1.1&tr2=1.2&r1=text&r2=text

Patches:
Index: gzz/Documentation/misc/hemppah-progradu/progradu.bib
diff -u gzz/Documentation/misc/hemppah-progradu/progradu.bib:1.30 gzz/Documentation/misc/hemppah-progradu/progradu.bib:1.31
--- gzz/Documentation/misc/hemppah-progradu/progradu.bib:1.30   Wed Dec 11 08:40:52 2002
+++ gzz/Documentation/misc/hemppah-progradu/progradu.bib        Mon Dec 16 08:53:19 2002
@@ -116,7 +116,7 @@
        isbn = {1-58113-529-7},
        pages = {41--52},
        location = {Winnipeg, Manitoba, Canada},
-       url = {http://oceanstore.cs.berkeley.edu/publications/ papers/pdf/SPAA02.pdf},
+       url = {http://oceanstore.cs.berkeley.edu/publications/papers/pdf/SPAA02.pdf},
        publisher = {ACM Press},
 }
 
Index: gzz/Documentation/misc/hemppah-progradu/research_problems
diff -u gzz/Documentation/misc/hemppah-progradu/research_problems:1.1 gzz/Documentation/misc/hemppah-progradu/research_problems:1.2
--- gzz/Documentation/misc/hemppah-progradu/research_problems:1.1       Thu Dec 12 09:32:30 2002
+++ gzz/Documentation/misc/hemppah-progradu/research_problems   Mon Dec 16 08:53:19 2002
@@ -1,8 +1,10 @@
-1. Approaches
 
-Please notice: in this section + = pro, - = con
 
-There are five approaches when performing searches in p2p networks.
+1. Approaches
+
+-there are five approaches to performing searches in p2p networks.
+-please notice that in this section not *all* pros and cons are mentioned
+-not every characterization applies to every example given
 
 1.1. Distributed Hash Tables (DHT)
 
@@ -67,7 +69,7 @@
 Kademlia:      N/A             O(log n)        O(log n)
 Viceroy:       N/A             7               O(log n)
 Small Worlds:  N/A             O(1)            O(log^2 n)
-Flooding:      N/A             N/A             No limit!
+Flooding:      N/A             O(1)            "high"
 Hybrid:                N/A             N/A             N/A
 Social:                N/A             N/A             N/A
 
@@ -77,7 +79,7 @@
 Number of messages when a node joins or leaves the network.
 
 Space: 
-How many neighbour nodes each node maintains in routing table.
+Total amount of space required in a system for routing tables
 
 Search: 
 Number of messages when an object lookup is performed
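
To give the Search figures above a concrete scale, a small Python sketch (illustrative only: it plugs an assumed network size of n = 10^6 into the asymptotic bounds from the table and ignores constant factors):

    import math

    # Assumed network size, for illustration only.
    n = 1_000_000

    # Search cost per the table above: O(log n) for the DHTs shown
    # (e.g. Kademlia, Viceroy), O(log^2 n) for Small Worlds.
    print("O(log n)  :", round(math.log2(n)))        # ~20 messages
    print("O(log^2 n):", round(math.log2(n) ** 2))   # ~400 messages
    # Flooding has no comparable per-lookup bound, hence "high" in the table.
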
@@ -141,15 +143,17 @@
 
 2.2.1. Facts     
 -urn-5 is random "unique keyword", e.g. "Front page of New York Times newspaper"
--urn-5 can be updated
+-urn-5 can't be updated, but we can associate a new block with the urn-5 name
 -urn-5 name is saved in the header of Storm block (|block urn-5: "Front page of New York Times newspaper"|)
+-urn-5 names can be created before Storm blocks
 -key signings are saved in separate Storm blocks
--finding key blocks for data block can be performed locally (in logarithmic time)
+-finding key blocks for a data block can be performed locally ("fast enough")
 -node has an internal index for key blocks and associated data blocks
 -for every urn-5 name, there is zero, one or more Storm blocks associated with it
+-in every block's header, there is a timestamp for date&time
 
 2.2.2. Objectives
--Find the specific (and the most recent) Storm block as quicly as possible (and return it to the user) based on the given urn-5
+-Find the specific (and the most recent) Storm block as quickly as possible (and return it to the user) based on the given urn-5 with a block's ID
 -simple pseudo thinking: "find all Storm blocks from the network, which uses specific urn-5 name. Compare blocks and 
 return the most recent block, if the signing key is "valid"."
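
A minimal Python sketch of the "simple pseudo thinking" above, assuming a purely local, in-memory view of the node's internal index; the names (blocks_for_urn5, is_key_valid) and the block fields are illustrative assumptions, not Storm's actual API:

    # Hypothetical local index: urn-5 name -> known Storm block headers.
    blocks_for_urn5 = {
        "urn-5:nyt-frontpage": [
            {"id": "block-A", "timestamp": 1039600000, "key": "key-1"},
            {"id": "block-B", "timestamp": 1040030000, "key": "key-1"},
            {"id": "block-C", "timestamp": 1040030001, "key": "key-X"},
        ],
    }

    def is_key_valid(key):
        # Placeholder for the real signature / CA check.
        return key == "key-1"

    def most_recent_block(urn5_name):
        """Newest block for urn5_name whose signing key is "valid"."""
        candidates = [b for b in blocks_for_urn5.get(urn5_name, [])
                      if is_key_valid(b["key"])]
        return max(candidates, key=lambda b: b["timestamp"], default=None)

    print(most_recent_block("urn-5:nyt-frontpage"))   # -> the block-B header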
 
@@ -164,8 +168,10 @@
        -one for urn-5 names, which are associated with block IDs
        -is this approach too difficult to maintain ?
        
+       
 -is there possibility, in the specific urn-5 name, to maintain information about most recent block's ID for better search performance (or moreover, 
 tree based structure for all blocks for specific urn-5 name) ?
+-How should CAs' data be saved ?
 
 
 2.3. "Searching for Storm blocks associated with specific urn-5 name, where 
@@ -173,10 +179,10 @@
      has been signed with a given key"
      
 2.3.1. Facts
--in addition to section 2.2.1, in every block's header, there is a timestamp for date&time
+-same as for 2.2.1.
 
 2.3.2. Objectives
--Find the specific (most recent) Storm block as quicly as possible (and return it to the user) based on the given urn-5
+-Find the specific (most recent) Storm block as quickly as possible (and return it to the user) based on the given urn-5 with a block's ID
 *and* a given date (range ?)
 -simple pseudo thinking: "find all Storm blocks from the network, which uses specific urn-5 name. Compare blocks and 
 return the most recent block, if the signing key and block's timestamp are "valid"."
@@ -194,3 +200,97 @@
        
 -is there possibility, in the specific urn-5 name, to maintain information about most recent block's ID for better search performance (or moreover, 
 tree based structure for all blocks for specific urn-5 name) ?
+-How should CAs' data be saved ?
+
+      
+
+2.4. "How does search engine should work?"
+
+2.4.1 Facts
+-There might be vicious blocks (urn-5 names, Storm blocks) in the network
+
+2.4.2 Objectives
+-The search engine should only return blocks which are genuine (is this possible at this level ??)
+-Should be fast
+-realtime (?): show search results as data is found in the network
+-relevance of data (or not ?)
+-keyword/fuzzy search support
+-should be able to determine (somehow) whether the returned data is correct or not (CAs !?)
+
+2.4.3. Existing approaches and this specific research problem
+
+a) DHTs and SWNs
+-currently, there is no support for keyword/fuzzy searches
+-however, in Overnet (Kademlia), there is support for keyword searches. Here's one example of Overnet's protocol (parsed in the sketch below):
+       ed2k://|file|file_name_is_here|file_size_is_here_in_bytes|file_hash_is_here|
+       ed2k://|file|Threads_-_The_closest_youll_ever_want_to_come_to_nuclear_war.jpg|271725|94bee4ec22f5e5493081a3df0bdd1595|
+-Metadata ?
+       
+b) FBNs
+-there is support for keyword/fuzzy searches
+
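
As a side note on the ed2k links quoted above, a small Python sketch that splits such a link into its fields; the meaning of the fields (name, size in bytes, hash) is read off the template link above, not taken from an Overnet specification:

    def parse_ed2k_file_link(link):
        """Split an ed2k://|file|name|size|hash| link into its parts."""
        if not link.startswith("ed2k://|file|"):
            raise ValueError("not an ed2k file link")
        parts = link.split("|")
        # parts: ['ed2k://', 'file', name, size, hash, '']
        return {"name": parts[2], "size": int(parts[3]), "hash": parts[4]}

    link = ("ed2k://|file|Threads_-_The_closest_youll_ever_want_to_come_to_"
            "nuclear_war.jpg|271725|94bee4ec22f5e5493081a3df0bdd1595|")
    print(parse_ed2k_file_link(link))
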
+2.4.4. Open questions:
+-can we benefit from the block's ID somehow? Could we use it for authenticating urn-5 names: check the urn-5 name's most recent block's ID,
+and make decisions from that !?
+-should the search engine return all data, or only the part of the data which is genuine ?
+-How do we know if data is genuine or not ?
+-How should CAs' data be saved ?
+
+
+2.5. "Searching for all transclusions, which all refer to a specific point of a specific document, from 
+      several blocks."
+      
+2.5.1. Objectives
+-find and return all the transclusions from the network which refer to a specific point of a specific document, if any exist
+-simple pseudo thinking: "Find all blocks in the network and check if a block refers to a specific part of a specific 
+document. If ok, return it."
+
+       
+2.5.2. Facts
+-a transclusion is a part of a document (with block-id X) that says: "here, use 20 bytes of block Y offset 52" (sketched below)
+
+2.5.3. Open questions
+-how are transclusions saved: is there some kind of urn/uri/hash in the block's header, which refers to another block ?
+-how is the part information (range, offset) of another block defined, and where is this information saved ?
+
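
A minimal Python sketch of the "simple pseudo thinking" in 2.5.1, assuming transclusions are visible as simple records (source block, target block, offset, length); this record layout is an assumption, since the open questions above are exactly about how this information is actually stored:

    from dataclasses import dataclass

    @dataclass
    class Transclusion:
        # "document with block-id X says: use `length` bytes of block
        # `target_block` starting at `offset`" -- field names are assumed.
        source_block: str
        target_block: str
        offset: int
        length: int

    def refers_to(t, block_id, point):
        """Does transclusion t cover byte `point` of block `block_id`?"""
        return (t.target_block == block_id
                and t.offset <= point < t.offset + t.length)

    # Stand-in for "all blocks in the network" during a search.
    known = [
        Transclusion("block-X", "block-Y", 52, 20),
        Transclusion("block-Q", "block-Y", 500, 10),
    ]

    # All transclusions referring to byte 60 of block-Y -> only the first.
    print([t for t in known if refers_to(t, "block-Y", 60)])
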
+
+2.6. "Searching for all transclusions, which all refer to a specific point of a specific document, from 
+      several blocks, where only *some* of the transclusions are relevant."
+     
+2.6.1. Objectives
+
+2.6.2. Facts
+-a transclusion is a part of a document (with block-id X) that says: "here, use 20 bytes of block Y offset 52"
+-simple pseudo thinking: "Find all blocks in the network and check if a block refers to a specific part of a specific 
+document, and check that certain requirements are met. If ok, return it."
+
+
+2.6.3. Open questions
+-how are transclusions saved: is there some kind of urn/uri/hash in the block's header, which refers to another block ?
+-how is the part information (range, offset) of another block defined, and where is this information saved ?
+-how are the conditions given ?
+-in practice, what are the conditions, and how should these conditions be applied ?
+      
+2.7.  "Searching for all Xanadu links, which all refer to a specific point of a specific document, from 
+      several blocks."
+     
+
+2.7.1. Objectives
+-
+
+2.7.2. Facts     
+-A Xanadu link is a document saying (with block-id X ?): "The 20 bytes of block Y offset 52 are connected to 40 bytes of block Z, offset 40, because
+they talk about the same thing"
+
+-how are Xanadu links saved: is there some kind of urn/uri/hash in the block's header, which refers to another block ?
+      
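
In the same illustrative spirit as the transclusion sketch above, a Xanadu link can be sketched as a record connecting two spans; again the layout is an assumption, not how Storm actually stores links:

    from dataclasses import dataclass

    @dataclass
    class Span:
        block: str
        offset: int
        length: int

    @dataclass
    class XanaduLink:
        # "20 bytes of block Y offset 52 are connected to 40 bytes of
        # block Z offset 40"; the link is assumed to be a block of its own.
        link_block: str
        end_a: Span
        end_b: Span

    def touches(link, block_id, point):
        """Does either end of the link cover byte `point` of `block_id`?"""
        return any(s.block == block_id and s.offset <= point < s.offset + s.length
                   for s in (link.end_a, link.end_b))

    link = XanaduLink("block-X", Span("block-Y", 52, 20), Span("block-Z", 40, 40))
    print(touches(link, "block-Y", 60))   # True
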
+2.7.3. Open questions
+      
+2.8.  "Searching for all Xanadu links, which all refer to a specific point of a specific document, from 
+      several blocks, where only *some* of the Xanadu links are relevant."
+      
+-A Xanadu link is a document saying (with block-id X ?): "The 20 bytes of block Y offset 52 are connected to 40 bytes of block Z, offset 40, because
+they talk about the same thing"
+
+2.8.3. Open questions 
+-how are Xanadu links saved: is there some kind of urn/uri/hash in the block's header, which refers to another block ?


