Re: [Sks-devel] Adding DB_INIT_LOCK to sks-keyserver (revisited)


From: Jeff Johnson
Subject: Re: [Sks-devel] Adding DB_INIT_LOCK to sks-keyserver (revisited)
Date: Sat, 27 Feb 2010 08:49:37 -0500

On Feb 27, 2010, at 6:26 AM, Kim Minh Kaplan wrote:

> Jeff Johnson writes:
> 
>> I am talking about catastrophic recovery, particularly
>> in the sense of hardening, as in not having to reload
>> an entire database, for certain types of failures.
> 
> The procedure for catastrophic recovery is described in depth in the
> Berkeley DB manual.  In particular, if you want to be able to do this
> kind of recovery, you should not delete the log files unless you are
> really sure they are no longer needed.  That said, catastrophic
> recovery is mostly a matter of system administration and
> organisational procedures.  See [1] for more on this.
> 
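As an aside, the set of log files such a backup must copy can be
enumerated with DB_ENV->log_archive.  A minimal C sketch, with "KDB"
as a placeholder for the SKS environment path:

    #include <stdio.h>
    #include <stdlib.h>
    #include <db.h>

    int
    main(void)
    {
        DB_ENV *dbenv;
        char **list, **p;
        int ret;

        if ((ret = db_env_create(&dbenv, 0)) != 0)
            return 1;
        /* "KDB" is a placeholder for the SKS database environment. */
        if ((ret = dbenv->open(dbenv, "KDB", DB_INIT_LOCK | DB_INIT_LOG |
                DB_INIT_MPOOL | DB_INIT_TXN, 0)) != 0)
            goto err;
        /* DB_ARCH_LOG lists every log file, including those still in
         * use; a catastrophic backup needs all of them.  DB_ARCH_ABS
         * returns absolute paths. */
        if ((ret = dbenv->log_archive(dbenv, &list,
                DB_ARCH_LOG | DB_ARCH_ABS)) != 0)
            goto err;
        if (list != NULL) {
            for (p = list; *p != NULL; p++)
                printf("%s\n", *p);
            free(list);
        }
    err:
        (void)dbenv->close(dbenv, 0);
        return (ret == 0) ? 0 : 1;
    }
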
>> Examine the size of your logs, and the size of the tables, in KDB/*.
>> 
>> The logs should be approximately the same size as the tables (key
>> material is rather incompressible) in order to guarantee that
>> *everything* can be recreated.
> 
> No.  Whatever the size of the database, the logs start out really
> small.  They grow as operations are committed, until the next
> checkpoint.  Once the checkpoint is over, the log files can be deleted
> (but should not be if you plan to do catastrophic recovery or some
> form of advanced redundancy) and the cycle starts again.  So it is
> perfectly normal to have a small number of log files if you remove
> unused ones.
> 
>> My logs (particularly after running
>>      db_checkpoint -1
>>      db_archive -dv)
>> are not sufficient to recreate the database in its entirety, judging
>> just by the size of the files involved.
> 
> This is normal and the expected outcome of db_checkpoint.  After
> db_checkpoint you do not need any log files to recreate the database
> in its entirety: the snapshot is sufficient.
> 
>> The definition of catastrophic recovery depends on the size of the
>> logs that are kept.
> 
> I use the definition of catastrophic recovery from the chapter
> "Database and log file archival" of Berkeley DB's manual.  With that
> definition I cannot see any need to plan for catastrophic recovery of
> the prefix tree, as it can be reconstructed from scratch and *must* be
> kept synchronized with the keys database: using a prefix tree that is
> not exactly the one corresponding to the keys database sounds like a
> recipe for trouble.  That basically means that you can *not* use the
> catastrophic recovery procedure.
> 


If catastrophic recovery for SKS is done as the BDB manual recommends,
that's entirely workable.

But conceptually, there are 4 pools of information in an SKS database:
        1) the dump store, indexed by file offset (with fastbuild)
        2) the DB_BTREE table which indexes the primary key-hash -> file-offset lookup
        3) the DB_BTREE tables which index the secondary info -> key-hash lookups
        4) the logs, which serialize operations not yet checkpointed.

If build rather than fastbuild is used, pools 1) and 2) become the same.
When a checkpoint is taken, pool 4) is pushed to pool 3).
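
For concreteness, that checkpoint-and-prune cycle, i.e. the
programmatic equivalent of running db_checkpoint -1 followed by
db_archive -d, looks roughly like this in the BDB C API:

    #include <db.h>

    /*
     * Force a checkpoint, then remove the log files that are no longer
     * needed for normal recovery.  Note this discards exactly the logs
     * a catastrophic recovery would want kept.
     */
    int
    checkpoint_and_prune(DB_ENV *dbenv)
    {
        int ret;

        /* DB_FORCE: checkpoint even if nothing was logged since the
         * last one (the "-1" in "db_checkpoint -1"). */
        if ((ret = dbenv->txn_checkpoint(dbenv, 0, 0, DB_FORCE)) != 0)
            return ret;
        /* DB_ARCH_REMOVE unlinks the unneeded logs outright; no file
         * list is returned. */
        return dbenv->log_archive(dbenv, NULL, DB_ARCH_REMOVE);
    }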

But much of the information in pools 3+4 is derived from the key material
in pools 1+2, and (at least conceptually) need not be included in
catastrophic recovery, since it can be regenerated when necessary.

That makes catastrophic recovery procedures easier, because there is less
data to back up and archive.

> The keys database could use some form of backup procedure.  The
> command "sks dump" is a good one, but currently it requires stopping
> the recon and db processes.  One SKS server operator mentioned that
> removing Dbenv.RECOVER from keydb.ml works fine and would permit
> dumping the database without interrupting the server.
> 
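At the C level the analogue of dropping Dbenv.RECOVER would be opening
a second environment handle without DB_RECOVER, so a dump process can
read the keys table while the server runs.  A sketch, with "KDB" again
a placeholder path:

    #include <db.h>

    /* Join a running environment without requesting recovery; running
     * recovery would instead require exclusive use of the environment. */
    int
    open_live_env(DB_ENV **dbenvp)
    {
        DB_ENV *dbenv;
        int ret;

        if ((ret = db_env_create(&dbenv, 0)) != 0)
            return ret;
        /* Same DB_INIT_* subsystems the server uses, but no DB_RECOVER. */
        if ((ret = dbenv->open(dbenv, "KDB", DB_INIT_LOCK | DB_INIT_LOG |
                DB_INIT_MPOOL | DB_INIT_TXN, 0)) != 0) {
            (void)dbenv->close(dbenv, 0);
            return ret;
        }
        *dbenvp = dbenv;
        return 0;
    }
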
>> What is the schema in use for the KDB tables? I'm looking
>> for the {key,data} definitions for put operations performed
>> on the tables in KDB in particular.
> 
> If memory serves me well, key is {key-hash, key-material}, keyid is
> {key-id, key-hash}, word is {word, key-hash}.  The other databases I do
> not know.
> 

The schemas you describe are secondary -> primary relations that can be
used with DB->associate:
        {key-id, key-hash}      ->      {key-hash, key-material}
        {word, key-hash}        ->      {key-hash, key-material}
The value of the secondary is used to key the primary retrieval.
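
To make that concrete, here is a C sketch of wiring up the keyid table
as a true secondary with DB->associate.  The extract_keyid() logic is a
placeholder, not SKS's actual record layout:

    #include <errno.h>
    #include <stdlib.h>
    #include <string.h>
    #include <db.h>

    /* Placeholder extractor: the real key-id lives inside the parsed
     * key material; here we just take the last 8 bytes so the sketch
     * is self-contained. */
    static int
    extract_keyid(const void *material, size_t len, unsigned char keyid[8])
    {
        if (len < 8)
            return -1;
        memcpy(keyid, (const unsigned char *)material + len - 8, 8);
        return 0;
    }

    /* Secondary-key callback: given a primary {key-hash, key-material}
     * pair, produce the secondary key (the key-id). */
    static int
    keyid_callback(DB *secondary, const DBT *pkey, const DBT *pdata,
                   DBT *skey)
    {
        unsigned char *keyid;

        (void)secondary;
        (void)pkey;
        if ((keyid = malloc(8)) == NULL)
            return ENOMEM;
        if (extract_keyid(pdata->data, pdata->size, keyid) != 0) {
            free(keyid);
            return DB_DONOTINDEX;   /* nothing to index for this record */
        }
        memset(skey, 0, sizeof(*skey));
        skey->data = keyid;
        skey->size = 8;
        skey->flags = DB_DBT_APPMALLOC; /* BDB frees it for us */
        return 0;
    }

    /* Associate the secondary with the primary; flags of 0 assume the
     * secondary is already populated and consistent. */
    int
    associate_keyid(DB *key_db, DB *keyid_db)
    {
        return key_db->associate(key_db, NULL, keyid_db,
            keyid_callback, 0);
    }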

Opening the secondary with DB_TRUNCATE (or removing the file and opening
with DB_CREATE) before calling DB->associate can then be used to
regenerate secondary indices when needed, replacing the redundancy in
catastrophic recovery with the cost of the regeneration operation.
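
One way that regeneration could look, assuming the handles are open but
not yet associated (e.g. at startup), and using the keyid_callback
extractor sketched above:

    #include <db.h>

    /* Rebuild a secondary index from the primary: empty it, then let
     * DB->associate with DB_CREATE repopulate it by walking the
     * primary. */
    int
    rebuild_secondary(DB *primary, DB *secondary,
        int (*callback)(DB *, const DBT *, const DBT *, DBT *))
    {
        u_int32_t count;
        int ret;

        /* Discard every existing entry in the secondary. */
        if ((ret = secondary->truncate(secondary, NULL, &count, 0)) != 0)
            return ret;
        /* The secondary is now empty, so DB_CREATE makes associate
         * regenerate it in full from the primary. */
        return primary->associate(primary, NULL, secondary,
            callback, DB_CREATE);
    }

Calling rebuild_secondary(key_db, keyid_db, keyid_callback) would then
rebuild the keyid table entirely from the keys table.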

But there's nothing whatsoever wrong with catastrophic recovery
as described in the BDB manual, if you don't want to use DB->associate.

73 de Jeff



