
Re: Livelihood statistics of the SKS keyserver network

From: Andrew Gallagher
Subject: Re: Livelihood statistics of the SKS keyserver network
Date: Thu, 13 May 2021 12:34:20 +0100
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101 Thunderbird/78.9.0

Hi, Gunnar.

On 13/05/2021 05:15, Gunnar Wolf wrote:

> What I wanted to bring to you all is a piece of work I started over
> two years ago and then completely forgot about... It now looks like
> an interesting point from which to start understanding what the SKS
> network looks like, and possibly even to help address its future.

> If you are interested, please visit:

Nicely done! :-)

> The first thing to understand about what I'm plotting is the many
> graphs that are _not_ present in the summary page: scroll down to the
> table with many dates. Clicking on any such date will show you a walk
> of gossiping servers, starting from an essentially random server in
> the network (I pick up whatever answers to -
> that's why you will see many entries with zero or very few nodes: it
> means I was unlucky with the resolution for a given day). The (quite
> ugly and ad hoc) source for those graphs is at:

It might be worth exploring whether starting with more than one initial node would help your luck - usually, na.pool, eu.pool resolve to different servers and at least one of those should be in good shape at any given time.
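Starting from several pools could be as simple as unioning whatever each one resolves to. A minimal sketch in Ruby (the pool hostnames here are illustrative, not a definitive list):

```ruby
require 'resolv'

# Candidate pool names to seed the walk from (illustrative; substitute
# whichever pools are actually in use).
SEED_POOLS = %w[na.pool.sks-keyservers.net eu.pool.sks-keyservers.net].freeze

# Union of every address the seed pools resolve to. Resolv.getaddresses
# returns [] on failure, so an unlucky pool just contributes no seeds.
def seed_addresses(pools = SEED_POOLS)
  pools.flat_map { |name| Resolv.getaddresses(name) }.uniq
end
```

Any single pool having a bad day then only shrinks the seed set rather than emptying it.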

> Do note that at the beginning I sampled the network much more often
> (hourly). I decided this was too much, so since June 2019 I have been
> walking the network only every four hours. I do hope nobody sees this
> as excessive!

Given that fastly and happy-eyeballs spider the network several times a minute, I can't imagine anyone would have a problem with your hourly extravagance. :-D

> I am also plotting a single aggregate of all those data points: the
> three graphs you will see at the top of the page, as well as the page
> itself, are generated by:

> This shows the very large drop the SKS network had in mid-2019, as
> well as its behavior since then. I am happy, even hopeful, to note
> that the network seems to have hit its reliability minimum between
> October 2020 and February 2021, and there is a slight trend toward
> improvement, at least back to late-2019 levels.

I think we can attribute some of that to Casey's work on Hockeypuck which has downgraded the poison key issue from a killer to an annoyance.

> Please do tell me if this data sounds interesting, and if you can
> think of anything to improve on what I'm doing. Of course, I cannot
> apply any changes to already-collected data, but there are surely
> many other things that can be considered.

This is very interesting work, thank you. I find the connectivity graphs hard to make sense of, though, as the nodes tend to be drawn on top of each other. A quick google hasn't thrown up any answers, but I'm sure there's a way to address this by imposing a minimum separation between nodes. The graph browser we include in our software at $WORK can do this, but I need to check the details before recommending it, as I'm not sure whether it has a noninteractive mode.
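For what it's worth, Graphviz's neato layout can enforce exactly this via the graph-level `overlap` and `sep` attributes. A sketch that emits DOT with those set (the edge list and attribute values are just examples):

```ruby
# Emit a DOT graph with neato-friendly separation attributes:
# overlap=false removes node overlaps, sep adds margin around each node.
def to_dot(edges)
  lines = ['graph sks {', '  overlap=false;', '  sep="+10";']
  edges.each { |a, b| lines << "  \"#{a}\" -- \"#{b}\";" }
  (lines << '}').join("\n")
end
```

Rendering is then `neato -Tsvg graph.dot`, which runs fine noninteractively.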

Some simple changes that I would otherwise suggest:

1. produce a connectivity graph with only working nodes
2. ignore localhost and private IPs, they will never work :-)
3. should it not default to port 11371 and fall back to 80?
4. don't plot * in the graph
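Point 3 could be a simple probe with the check injected, so the fallback order is testable offline (the timeout value is arbitrary):

```ruby
require 'net/http'

HKP_PORT = 11371

# Return the first port on which the host answers, preferring HKP.
# A probe block can be supplied in place of the real TCP check.
def pick_port(host, ports = [HKP_PORT, 80], &probe)
  probe ||= lambda do |h, p|
    Net::HTTP.start(h, p, open_timeout: 3) { true } rescue false
  end
  ports.find { |p| probe.call(host, p) }
end
```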

Some more complex issues:

It seems that your Hockeypuck status parsing is broken, as a significant number of Hockeypuck nodes are listed consistently as Nil. It's probably due to assumptions you're making about the DOM in Hpricot. Thankfully, Hockeypuck emits a JSON stats page at 'pks/lookup?op=stats&options=mr' . If you get Nil from Hpricot, maybe you could fall back to the JSON? Beware that if you request the JSON page from SKS it will give you an HTML stats page regardless.
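A fallback along those lines might look like the following sketch, with the parse separated from the fetch so the HTML-vs-JSON distinction can be checked without a live server (error handling is deliberately coarse):

```ruby
require 'net/http'
require 'json'

# Parse a stats body. SKS answers op=stats&options=mr with HTML
# regardless, which simply fails to parse and yields nil.
def parse_stats(body)
  JSON.parse(body)
rescue JSON::ParserError
  nil
end

# Fetch the machine-readable stats from a host; nil if unavailable.
def json_stats(host, port = 11371)
  parse_stats(Net::HTTP.get(host, '/pks/lookup?op=stats&options=mr', port))
rescue StandardError
  nil
end
```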

Also, the peers of some Hockeypuck nodes are throwing InvalidURIError - I suspect this is because of an inconsistency between SKS and Hockeypuck in how peers are reported - SKS says "host port", while Hockeypuck by default uses "host:port", but beware that many Hockeypuck operators have changed this back to SKS format so that the pool spider can parse the peer tree.

Unfortunately, it's not as simple as splitting on /(\s+|:)/, as this will bork on IPv6 addresses. I'd suggest something like the following (note that split yields nil, not '', for a missing second field):

  host, port = server.split(/\s+/)
  if port.nil?
    host, port = host.split(/:/)
  end

I think the only nodes with raw ipv6 peers are pgpkeys.[co.]uk, which are SKS and so use whitespace format - so while this is a kludge it should work. And since those ipv6 peers are all internal HA workers you could probably safely ignore them. In fact you could probably safely ignore any raw ipv4 or ipv6 peers, as public-facing nodes should be using a DNS name.
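Putting the two formats and the raw-address rule together, a sketch (the function name is mine; it relies on IPAddr.new raising for DNS names to tell the two cases apart):

```ruby
require 'ipaddr'

# Split an SKS-style "host port" or Hockeypuck-style "host:port" peer
# entry; return nil for raw IPv4/IPv6 peers, which we ignore.
def parse_peer(entry)
  host, port = entry.split(/\s+/)
  host, port = host.split(/:/) if port.nil?
  IPAddr.new(host)  # raises for a DNS name
  nil               # it parsed, so it's a raw address: skip it
rescue IPAddr::InvalidAddressError
  [host, port]
end
```

As above, a raw IPv6 address in colon format would still confuse it, but since those peers should all be whitespace-format (or ignorable) in practice, the kludge holds.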


Andrew Gallagher

