Re: [Gnumed-bugs] Problems attempting to update server 20.3 (client 1.5.3) to server 20.9
Thu, 4 Feb 2016 11:24:39 +0100
On Thu, Feb 04, 2016 at 03:28:58AM +0000, Jim Busser wrote:
> In attempting more than once to update from server 20.3 to
> server 20.9 (from gnumed-server.20.9.tgz), the first attempt
> seemed to halt after 805 lines of logging, with the log
> gm.bootstrapper (./bootstrap_gm_db_system.py::reindex_all() #1062):
> REINDEXing cloned target database so upgrade does not fail because of a
> broken index
> and, on the second attempt 20 minutes later, the process seemed to stall
> after only 521 lines of logging with
> gm.bootstrapper (./bootstrap_gm_db_system.py::connect() #258): trying
> DB connection to gnumed_v20 on localhost as postgres
This is fairly normal and can, in some cases, take quite a
while (I have now added a note to the log to that effect) if
there is a rather substantial amount of data in the database.
Both your logs, in fact, show entirely normal output, so far.
The rationale is that an index may sometimes get corrupted
for whatever reason. The fix for that is a reindex. The
integrity of an upgrade can become compromised by a broken
index (because the index can make PG return faulty query
results on which part of the upgrade may depend). Hence I
decided to apply the fix for potentially broken indices
before an upgrade is attempted, regardless of whether signs
of a broken index exist (there are no signs the bootstrapper
can check for programmatically, short of creating a second
index with an identical definition and comparing the two ...).
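As a rough illustration of that idea, one could manually compare what a suspect index returns against the table heap itself, by toggling the planner settings. This is only a sketch, and the table, column, and range names below are hypothetical placeholders, not actual GNUmed schema objects:

```sql
-- HYPOTHETICAL sketch: cross-check a suspect index against the heap.
-- "some_table", "some_col", and the range bounds are placeholders.

-- 1) run the query via the index:
SET enable_seqscan = off;
SELECT count(*) FROM some_table WHERE some_col BETWEEN 1 AND 1000;
RESET enable_seqscan;

-- 2) run the same query via a sequential scan of the heap:
SET enable_indexscan = off;
SET enable_bitmapscan = off;
SELECT count(*) FROM some_table WHERE some_col BETWEEN 1 AND 1000;
RESET enable_indexscan;
RESET enable_bitmapscan;

-- differing counts would hint at index corruption
```

Verify with EXPLAIN that each count actually took the intended access path, since the planner settings are only hints, not hard guarantees.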
> The above had no adverse effect on my still running my
> current client 1.5.3 on (what I imagine) is a
> not-yet-updated-from-20.3 version of the server.
That is correct, you are fine.
> Happy to attempt whatever examination is suggested, and/or to retry, and or
> to (instead) try 20.10.
I might ask you to re-attempt and give it some 2 hours or so
before I would become suspicious.
While it is running you might like to connect to the database
from elsewhere (preferably as user "postgres") and monitor
select * from pg_stat_activity;
select * from pg_locks;
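For instance, a more focused variant of the above (assuming the target database is gnumed_v20, and noting that the boolean "waiting" column exists in pg_stat_activity up to PostgreSQL 9.5) might be:

```sql
-- long-running or lock-waiting sessions in the target database
SELECT pid, usename, state, waiting,
       now() - query_start AS runtime,
       left(query, 60) AS query
  FROM pg_stat_activity
 WHERE datname = 'gnumed_v20'
 ORDER BY query_start;
```

A session that sits with waiting = true for a long time, or a runtime that keeps growing on the same query, would be the clue to look at pg_locks next.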
for potential clues. I have also added the VERBOSE option to
our REINDEX call so that users may track progress in the
PostgreSQL log (there may be things worth looking for in
there, too).
You can also issue from psql a
	REINDEX (VERBOSE) DATABASE gnumed_v20;
to get a feel for what's happening. That won't speed up
reindexing during the fixup but will give you an idea of the
order of magnitude of reindexing time involved.
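To gauge that order of magnitude beforehand, one could also sum up the on-disk size of the indexes REINDEX has to rebuild. This is a rough heuristic for the amount of work involved, not an exact timing predictor:

```sql
-- total on-disk size of all indexes in the current database
SELECT pg_size_pretty(sum(pg_relation_size(indexrelid))::bigint)
  FROM pg_index;
```

A few hundred megabytes should rebuild quickly; many gigabytes would make a multi-hour run entirely plausible.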
GPG key ID E4071346 @ eu.pool.sks-keyservers.net
E167 67FD A291 2BEA 73BD 4537 78B9 A9F9 E407 1346