
Re: [Bug-gne]API

From: Rob Scott
Subject: Re: [Bug-gne]API
Date: Thu, 28 Jun 2001 18:51:26 +0100

> Then produce a log file from every transaction which *changes* the database
> content. The log file will be a set of SQL statements which can be applied
> to bring a slave database up to the same state as the master. Not
> replication really, but I've used it where replication isn't practical.
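The statement-log scheme described above can be sketched roughly like this. This is only a toy illustration in Python, with sqlite3 standing in for whatever database the project ends up using, an in-memory list standing in for the log file, and all names invented; a real version would need proper SQL quoting and a durable log:

```python
import sqlite3

def execute_and_log(conn, log, sql, params=()):
    """Run a mutating statement on the master and record it in the log."""
    conn.execute(sql, params)
    conn.commit()
    # Append the statement plus its parameters; a real implementation
    # would write fully quoted SQL to a log file instead.
    log.append((sql, params))

def replay_log(conn, log):
    """Apply the logged statements, in order, to bring a slave up to date."""
    for sql, params in log:
        conn.execute(sql, params)
    conn.commit()

master = sqlite3.connect(":memory:")
slave = sqlite3.connect(":memory:")
log = []

execute_and_log(master, log,
                "CREATE TABLE articles (id INTEGER PRIMARY KEY, title TEXT)")
execute_and_log(master, log,
                "INSERT INTO articles (title) VALUES (?)", ("GNE",))

replay_log(slave, log)
print(slave.execute("SELECT title FROM articles").fetchall())  # [('GNE',)]
```

Replaying the same ordered log on any number of slaves gives each one the same state as the master, which is what makes it a poor man's replication.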

Yes, this is the sort of thing I was thinking about a few days ago, but then again there's
not much point in rewriting someone else's code.

Although this would allow us to mould the process to our own needs, it would require a bit of work on
something that's pretty much already done.

I'm still in favour of the replication, to be honest.
After all, it's quite a flexible system.

It may be worth me playing around with replication for a while on some old machines.

Of course, we won't have to think seriously about mirroring for quite a while.

For redundancy and speed, do we still think one db will be enough, or should we start thinking about a chain (or better, a tree) of replicating dbs, with the Perl doing a round robin of them when reading data, and writing data only to the master?
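A minimal sketch of that read/write split (Python here purely for illustration; the project code would be Perl, and all the connection names are invented):

```python
import itertools

class Cluster:
    """Spread reads round-robin across slaves; send all writes to the master."""

    def __init__(self, master, slaves):
        self.master = master
        self._next_slave = itertools.cycle(slaves)

    def for_read(self):
        # Each read-only query goes to the next slave in rotation.
        return next(self._next_slave)

    def for_write(self):
        # Every change goes to the master and replicates outward from there.
        return self.master

cluster = Cluster("db-master", ["db-slave-1", "db-slave-2", "db-slave-3"])
print([cluster.for_read() for _ in range(4)])
# ['db-slave-1', 'db-slave-2', 'db-slave-3', 'db-slave-1']
print(cluster.for_write())  # db-master
```

The same idea extends to a tree of slaves: each slave replicates from its parent, and the round robin only needs to know about the leaves it is allowed to read from.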

I suppose this still depends on:
1) what machine(s) we can get for the purpose
2) how well we manage to optimise the db usage.
