gnunet-svn

[GNUnet-SVN] r13350 - gnunet


From: gnunet
Subject: [GNUnet-SVN] r13350 - gnunet
Date: Thu, 21 Oct 2010 15:25:32 +0200

Author: grothoff
Date: 2010-10-21 15:25:32 +0200 (Thu, 21 Oct 2010)
New Revision: 13350

Modified:
   gnunet/TODO
Log:
update

Modified: gnunet/TODO
===================================================================
--- gnunet/TODO 2010-10-21 13:22:46 UTC (rev 13349)
+++ gnunet/TODO 2010-10-21 13:25:32 UTC (rev 13350)
@@ -11,6 +11,9 @@
        likely good enough until we get ATS going; still should be tested...
     => "peers connected (transport)" now instantly goes to ZERO (core statistic),
        but "established sessions" stays up...
+  - service:
+    + 2-peer perf test goes WAY over bandwidth limit (i.e. 300 kbps/set, 2 MB/s transfer rate); clearly core does
+      not properly enforce the limit [MW]
 
 0.9.0pre3:
 * Determine RC bugs and fix those (release should have no known real bugs)
@@ -33,16 +36,11 @@
   - also do UPnP-based (external) IP detection
    (Note: build library always, build UPnP service when dependencies like libxml2 are available)
 * FS: [CG]
-  - service:
-    + 2-peer perf test does NOT terminate for large (500 MB) files because
-      somehow blocks are not found (suspect: load-based no DB lookup + forward first, no clean up of routing table?)
-    + 2-peer perf test goes WAY over bandwidth limit (i.e. 300 kbps/set, 2 MB/s transfer rate); clearly core does
-      not properly enforce the limit
   - library:
     + reconstruct IBLOCKS from DBLOCKS if possible (during download; see FIXME in fs_download)
     + add support for pushing "already seen" search results to FS service for bloomfilter
     + use different 'priority' for probe downloads vs. normal downloads
-  - implement FS performance tests
+  - implement multi-peer FS performance tests
     + insert
     + download
     + search
