[GNUnet-SVN] r26746 - gnunet-java
From: gnunet
Subject: [GNUnet-SVN] r26746 - gnunet-java
Date: Thu, 4 Apr 2013 02:30:42 +0200
Author: dold
Date: 2013-04-04 02:30:42 +0200 (Thu, 04 Apr 2013)
New Revision: 26746
Modified:
gnunet-java/ISSUES
Log:
issues
Modified: gnunet-java/ISSUES
===================================================================
--- gnunet-java/ISSUES 2013-04-03 21:17:41 UTC (rev 26745)
+++ gnunet-java/ISSUES 2013-04-04 00:30:42 UTC (rev 26746)
@@ -1,70 +1,48 @@
-* discuss topology in exp round
- * analytical solution?
- * degenerate cases (e.h. n=3)
+general problem with consensus:
+current approach with exchange/inventory/completion does not really work,
+because malicious peers can lie; it is hard to have a threshold for the completion set.
-* related to your bug-report:
-==7580== Invalid read of size 4
-==7580== at 0x65221E0: send_connect (mesh_api.c:803)
-==7580== by 0x6527409: reconnect_cbk (mesh_api.c:876)
-==7580== by 0x4E76FB8: GNUNET_SCHEDULER_run (scheduler.c:597)
-==7580== by 0x4E81BB5: GNUNET_SERVICE_run (service.c:1816)
-==7580== by 0x4017A5: main (gnunet-service-consensus.c:2769)
-==7580== Address 0x7fefff830 is just below the stack ptr. To suppress, use: --workaround-gcc296-bugs=yes
-==7580==
+some very basic description of the first two rounds here:
+https://bitbucket.org/dold/consensus-doc/src/
+(probably should be on sam / using git)
- * occurs sometimes, don't know if my fault or stream/mesh
+* consensus implementation is quite complicated / a lot is being rewritten right now
+=> not working right now at all
- * problem with gnunet-consensus and testbed on shutdown, what am I doing wrong?
- * doing operation_done on the handles returned by service_connected fixed some of the assertion errors
+review consensus api
+ * something like a GNUNET_CONSENSUS_listen?
+ * element api (akin to block api)
+ * who determines the right/allowed element plugin(s) for consensus?
+ * add plugin(s) to consensus_create
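A rough sketch of what a listen-style consensus API could look like, as raised in the review notes above. All names here (ConsensusListener, ConsensusSession, listen, insert) are illustrative assumptions, not the actual gnunet-java API; the in-memory session only stands in for a real service connection.

```java
import java.util.ArrayList;
import java.util.List;

public class ConsensusApiSketch {
  /** Callback invoked for each element a peer proposes to the session. */
  public interface ConsensusListener {
    /** Return true to accept the element, false to reject it. */
    boolean onElement(byte[] element);
  }

  /** Minimal in-memory stand-in for a consensus session. */
  public static class ConsensusSession {
    private final List<ConsensusListener> listeners = new ArrayList<>();
    private final List<byte[]> accepted = new ArrayList<>();

    /** Analogue of the proposed GNUNET_CONSENSUS_listen: register a validator. */
    public void listen(ConsensusListener l) {
      listeners.add(l);
    }

    /** Simulate an element arriving from a peer. */
    public boolean insert(byte[] element) {
      for (ConsensusListener l : listeners)
        if (!l.onElement(element))
          return false;  // any listener may veto, e.g. a failed cert check
      accepted.add(element);
      return true;
    }

    public int acceptedCount() { return accepted.size(); }
  }
}
```

In this shape, the element-plugin question above would map to which ConsensusListener implementations get registered for a session.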
- * there are two stages of done: *we* have all elements, vs. enough other peers have all elements
+discuss element validation api w.r.t. voting
+ * checks voter certificate
+ * what to do if cert does not check out?
+ * use BF for users, should have access to elements
+ * stores double-voters, only accepts first two votes to prove
+ double voting
+ * better: only store the two lexically smallest votes
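The "store only the two lexically smallest votes" idea above can be sketched as follows: two distinct votes from one voter already prove double voting, and keeping the lexically smallest pair makes the stored proof canonical. Class and method names are illustrative assumptions.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.TreeSet;

public class DoubleVoteTracker {
  // voter id -> up to two lexically smallest distinct votes seen so far
  private final Map<String, TreeSet<String>> votes = new HashMap<>();

  /** Record a vote; returns true if this voter is now a proven double-voter. */
  public boolean record(String voterId, String vote) {
    TreeSet<String> set = votes.computeIfAbsent(voterId, k -> new TreeSet<>());
    set.add(vote);
    // discard everything but the two lexically smallest votes
    while (set.size() > 2)
      set.pollLast();
    return set.size() == 2;
  }

  /** The canonical proof: the two smallest distinct votes, or null if none. */
  public String[] proof(String voterId) {
    TreeSet<String> set = votes.get(voterId);
    if (set == null || set.size() < 2)
      return null;
    return set.toArray(new String[0]);
  }
}
```

Because the kept pair depends only on the set of votes seen, not their order, every peer that saw the same double-voter ends up with the same proof.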
- * tbd: testing response to non-decodable IBF
- * how to test this systematically?
- * e.g. GNUNET_CONSENSUS_flout_scramble_ibfs (double probability)
- * tbd: testing if the inventory phase actually works
- * some elements must be missing for this
- * use GNUNET_CONSENSUS_flout_drop_elements (double probability)
+* review way of storing inventory
+ * bitmap per element
- * different peers can be in different rounds (one in inventory, other in stock)
- * IBFs, SEs, SYNCs and FINs have to be tagged with their round
+review message queue
+ * currently doesn't do much
+ * used for both client-service and peer-peer
- * implementing the stock round, just use exp scheme or more explicit exchange (we know which peer wants/has which element)
+problem in inventory round:
+ * no redundancy in the sense of exp-rounds, we need
+ to check for IBF_Key collisions
+ * retry with another mapping on failure
- * some delicate bugs, e.g. related to double insertion bucket poisoning
+maximum number of different ibf-to-hashcode mappings:
+sizeof (ElementHash) / sizeof(IBF_Key)
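A worked example of the sizeof(ElementHash) / sizeof(IBF_Key) bound, assuming a 64-byte element hash (a 512-bit GNUNET HashCode) and 8-byte IBF keys: the hash splits into 8 disjoint keys, so 8 different ibf-to-hashcode mappings are available before keys would have to overlap. Sizes and names are assumptions for illustration.

```java
import java.nio.ByteBuffer;

public class IbfKeyMappings {
  static final int HASH_SIZE = 64;  // assumption: 512-bit element hash
  static final int KEY_SIZE = 8;    // assumption: 64-bit IBF_Key

  /** Number of disjoint IBF keys that fit into one element hash. */
  public static int mappingCount() {
    return HASH_SIZE / KEY_SIZE;    // 64 / 8 = 8
  }

  /** Extract the IBF key for the given mapping index from a hash. */
  public static long ibfKey(byte[] elementHash, int mapping) {
    // each mapping reads a different disjoint 8-byte slice of the hash
    return ByteBuffer.wrap(elementHash).getLong(mapping * KEY_SIZE);
  }
}
```

On an IBF_Key collision in the inventory round, retrying "with another mapping" then just means moving to the next slice index, up to mappingCount() attempts.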
- * gave up on state-machine for sending messages to other peers
- * there's one queue per peer now
+* current method of storage:
+ * 8 maps IBF_Key -> 2^(HashCode-ptr)
+ * 1 map HashCode -> ElementInfo
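The two-level layout just described (8 maps from IBF key to a set of element-hash references, plus one map from hash to element info) might be sketched like this in Java; ElementInfo and all names are illustrative assumptions, with hex strings standing in for HashCode pointers.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.HashSet;
import java.util.List;
import java.util.Map;
import java.util.Set;

public class InventoryStore {
  public static class ElementInfo {
    public final byte[] element;
    public ElementInfo(byte[] element) { this.element = element; }
  }

  static final int NUM_MAPPINGS = 8;

  // one map per ibf-to-hashcode mapping: IBF_Key -> hashes sharing that key
  private final List<Map<Long, Set<String>>> keyMaps = new ArrayList<>();
  // single map: element hash (hex) -> element info
  private final Map<String, ElementInfo> elements = new HashMap<>();

  public InventoryStore() {
    for (int i = 0; i < NUM_MAPPINGS; i++)
      keyMaps.add(new HashMap<>());
  }

  /** Insert an element under its hash and all 8 derived IBF keys. */
  public void insert(String hashHex, long[] ibfKeys, byte[] element) {
    elements.put(hashHex, new ElementInfo(element));
    for (int i = 0; i < NUM_MAPPINGS; i++)
      keyMaps.get(i).computeIfAbsent(ibfKeys[i], k -> new HashSet<>())
                    .add(hashHex);
  }

  /** Hashes whose key under the given mapping equals the given IBF key. */
  public Set<String> lookup(int mapping, long ibfKey) {
    return keyMaps.get(mapping).getOrDefault(ibfKey, Set.of());
  }
}
```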
- * replay of premature SE's
- * if we don't expect a strata message, store it, and check
- on start of next round if we could use it right now
+* actually a lot of overhead, as IBF_Keys are used as HashCodes in the hashmap.
- * in what way is it expensive to keep a tunnel/stream connection open?
- * related question: when should we start connecting to other peers?
- * currently: after conclude call
+any improvements / more efficient ways / fancy data structures to do this?
- * implementing peer shuffle
- * currently shuffle is a no-op
- * should use bits from global session id, right?
-
- * I'm using queues for messages quite often, am I doing something wrong?
- * client messages are queued
- * messages to other peers are queued
-
- * dealing with hash collisions in the inventory phase
- * previous phase is fine
-
- * doing inventory in nlogn:
- * can't IBFs be used as commutative cryptographic hash?
- * messages then would be:
- * id-list (peer ibf sign plus minus)*
- * but: when we do direct exchange, we know that the other peer knows that we don't have an element, so we don't have to
- request it explicitly
-
- * consensus service implementation is one very big file, any idea how to structure it nicely?
-
- * problem with the log-scheme
- * if there's one slow peer, the whole consensus will be slow
- * any way to mitigate this?
-