
[Sks-devel] SKS apocalypse mitigation


From: Andrew Gallagher
Subject: [Sks-devel] SKS apocalypse mitigation
Date: Fri, 23 Mar 2018 11:10:49 +0000
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:52.0) Gecko/20100101 Thunderbird/52.5.2

Hi, all.

I fear I am reheating an old argument here, but news this week caught my
attention:

https://www.theguardian.com/technology/2018/mar/20/child-abuse-imagery-bitcoin-blockchain-illegal-content

tl;dr: somebody has embedded child abuse imagery in the Bitcoin
blockchain. That opens the possibility that *anyone* holding a copy of
the blockchain could be prosecuted for possession. Whether this will
actually happen is unclear, but similar abuse of SKS is an apocalyptic
possibility that has been discussed on this list before.

I've read Minsky's paper. The reconciliation process is simply a way of
comparing two sets without having to transmit the full contents of each
set. The process is optimised to be highly efficient when the difference
between the sets is small, and gets less efficient as the sets diverge.
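
To make that concrete: the end result of a reconciliation round is just
the symmetric difference of the two hash sets. A minimal Python sketch
of that end result (the real algorithm computes it without ever
exchanging the full sets):

    # What each side learns from a recon round: the hashes it is
    # missing, and the hashes its peer is missing. Minsky's algorithm
    # derives this with traffic proportional to the size of the
    # difference, not to the size of the sets.
    def reconcile(local, remote):
        need = remote - local    # objects to fetch from the peer
        offer = local - remote   # objects the peer will fetch from us
        return need, offer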

Updating the sets on each side is outside the scope of the recon
algorithm, and in SKS it proceeds by a sequence of client pull requests
to the remote server. This is important, because it opens a way to
implement object blacklists in a minimally disruptive manner.

An SKS server can unilaterally decline to request any given object from
its peers. In combination with a local database cleaner that deletes
existing copies and a submission filter that prevents them from being
re-uploaded, it is entirely possible, technically, to blacklist objects
from a given system.
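
A minimal Python sketch of those three enforcement points; the function
names and the db interface are hypothetical, not SKS code:

    BLACKLIST = set()  # hashes of banned objects, loaded at startup

    def filter_recon_request(hashes):
        # Enforcement point 1: never request a blacklisted object
        # from a peer during the post-recon fetch.
        return [h for h in hashes if h not in BLACKLIST]

    def accept_submission(obj_hash, obj):
        # Enforcement point 2: reject re-uploads of banned objects.
        if obj_hash in BLACKLIST:
            raise ValueError("object is blacklisted")
        return obj

    def clean_database(db):
        # Enforcement point 3: a periodic sweep deleting any banned
        # objects already stored locally.
        for h in BLACKLIST:
            db.delete(h)   # 'db' is a stand-in for the local key store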

The problems start when differences between peers' blacklists cause
their sets to diverge artificially. The normal reconciliation process
will never resolve these differences, and a small amount of extra work
will be expended during every round. This is not fatal in itself: SKS
imposes a difference limit beyond which peers simply stop reconciling,
so the increase in load should be contained.
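
Hypothetically, that containment amounts to a cap on the size of the
computed difference before any fetching starts (the threshold below is
illustrative, not SKS's actual limit):

    MAX_RECON_DIFF = 10000  # illustrative threshold, not SKS's value

    def should_reconcile(need, offer):
        # Peers whose sets have diverged too far stop reconciling
        # outright rather than burning unbounded work every round.
        return len(need) + len(offer) <= MAX_RECON_DIFF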

The trick is to ensure that all the servers in the pool agree (to a
reasonable level) on the blacklist. This could be as simple as a file
hosted at a well-known URL that each pool server downloads on a
schedule. The problem then becomes a procedural one: who hosts it, who
decides what goes in it, and what are the criteria?
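
As a sketch, assuming the list is a plain-text file of one object hash
per line at a (hypothetical) well-known URL:

    import urllib.request

    BLACKLIST_URL = "https://example.org/sks-blacklist.txt"  # hypothetical

    def fetch_blacklist():
        # Download the shared list; run from cron so every pool member
        # converges on (roughly) the same set within one interval.
        with urllib.request.urlopen(BLACKLIST_URL) as resp:
            lines = resp.read().decode("utf-8").splitlines()
        return {ln.strip() for ln in lines
                if ln.strip() and not ln.startswith("#")}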

It has been argued that SKS operators' current technical inability to
blacklist objects could serve as a legal defence. I'm not convinced
this is tenable even now, and legal trends suggest it will only become
less so.

Another effective method, one that requires no ongoing management
process, would be to blacklist all image IDs. This would have many
other benefits besides (I say this as someone who once foolishly added
an enormous image to his key). But it would cause a cliff edge in the
number of objects and, unless carefully choreographed, could result in
a mass failure of recon.
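
The filter itself would be mechanical, since OpenPGP carries images
inside User Attribute packets (tag 17 per RFC 4880). A sketch, assuming
a hypothetical parsed-packet interface rather than anything in SKS:

    USER_ATTRIBUTE_TAG = 17  # RFC 4880, section 5.12

    def strip_image_packets(packets):
        # Drop User Attribute packets (which carry photo IDs) before an
        # incoming key is hashed and stored, so the corresponding
        # object hashes never enter the database. 'packets' is a
        # hypothetical parsed-packet list, not an SKS structure.
        return [p for p in packets if p.tag != USER_ATTRIBUTE_TAG]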

One way to prevent this would be to ship the image blacklist in the
code itself during a version bump, but only enable the filter at some
timestamp well in the future; then, a few days before the deadline,
raise the version criterion for the pool. That way all pool members
would move in lockstep, and any recon interruptions should be temporary
and limited to clock skew.
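
A sketch of that switchover, assuming the activation time ships as a
constant in the release:

    import time

    # Hypothetical constant baked into the release; the filter stays
    # dormant until every upgraded server crosses the same instant.
    FILTER_ACTIVE_AFTER = 1530403200  # e.g. 2018-07-01T00:00:00Z

    def image_filter_enabled():
        return time.time() >= FILTER_ACTIVE_AFTER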

These two methods are complementary and can be implemented either
together or separately. I think we need to start planning now, before
events take over.

-- 
Andrew Gallagher


