[Koha-devel] Building zebradb

From: Tümer Garip
Subject: [Koha-devel] Building zebradb
Date: Fri, 10 Mar 2006 19:47:02 +0200

We have now put Zebra into production-level systems, so here is some
experience to share.

Building the Zebra database from single records is a very long process
(100K records, 150K items).

Best method we found:

1- Change the zebra.cfg file to include:


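The config lines themselves did not survive in the archive. As a hedged sketch only, assuming Zebra's file-group convention (group-prefixed keys, with the group name matching the `-g` flag used in step 2, and `grs.marc.record` assuming a record.abs profile as in Koha's stock setup — none of this is quoted from the original mail):

```
# Hypothetical zebra.cfg addition -- illustrative, not the original mail's.
# Settings for the "iso2709" file group used with `zebraidx -g iso2709`:
iso2709.recordType: grs.marc.record
```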
2- Write a script (or hack export.pl) to export all the MARC records as
one big chunk to the correct directory with an .iso2709 extension, and
then system-call "zebraidx -g iso2709 -d <dbnamehere> update records -n".
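The system call above can be sketched as a small helper that assembles the same invocation (Python here purely for illustration — the mail's own script is Perl — and the database name and records directory are placeholders):

```python
import shlex

def build_zebraidx_cmd(dbname, group="iso2709", directory="records"):
    """Assemble the batch-index invocation quoted in the mail.

    -g selects the file group (matching the .iso2709 extension),
    -d names the Zebra database, and "update <dir>" indexes that
    directory; -n is passed through exactly as the mail gives it.
    dbname and directory are placeholders, not values from the mail.
    """
    return ["zebraidx", "-g", group, "-d", dbname, "update", directory, "-n"]

# Print the command a wrapper script would hand to the shell.
print(shlex.join(build_zebraidx_cmd("koha")))
```

From Perl this would be run with a system() call, as the mail describes.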

This ensures that Zebra knows it is reading MARC records rather than
XML, and it builds 100K+ records at zooming speed.
Your ZOOM module still always uses the grs.xml filter, while you can
update or reindex any big chunk of the database at any time, as long as
you have the MARC records.
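Batch indexing can work on one big file because ISO 2709 records are self-delimiting: each begins with a 5-digit ASCII length in its leader and ends with the 0x1D record terminator. A minimal illustrative splitter (not from the mail) shows why the concatenated chunk can still be parsed record by record:

```python
RECORD_TERMINATOR = b"\x1d"

def split_iso2709(blob):
    """Split a concatenated ISO 2709 export into individual records.

    Leader bytes 0-4 of each record carry its total length as ASCII
    digits, and each record ends with 0x1D, so the file needs no
    separators between records.
    """
    records = []
    pos = 0
    while pos < len(blob):
        length = int(blob[pos:pos + 5])       # leader bytes 0-4: record length
        record = blob[pos:pos + length]
        if not record.endswith(RECORD_TERMINATOR):
            raise ValueError("record at offset %d lacks the 0x1D terminator" % pos)
        records.append(record)
        pos += length
    return records
```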

3- We are still using the old API, so we read the XML and use
MARC::Record->new_from_xml( $xmldata )
A note here: we did not have to upgrade MARC::Record or MARC::Charset
at all. Any MARC record created within KOHA is UTF-8, and any MARC record
imported into KOHA (old marc_subfield_tables) was correctly decoded to
UTF-8 with char_decode of biblio.
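The point that the records are already UTF-8 can be spot-checked per record from the MARC 21 leader: byte 9 is b"a" for UCS/Unicode and blank for MARC-8. A tiny illustrative check (Python, not from the mail):

```python
def leader_says_utf8(record_bytes):
    """Return True when MARC 21 leader byte 9 flags UCS/Unicode.

    Per MARC 21, leader position 9 ("character coding scheme") is
    b"a" for UCS/Unicode (UTF-8) and a blank for MARC-8.
    """
    return record_bytes[9:10] == b"a"
```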

4- We modified circ2.pm and the items table to have an item onloan field
and mapped it to the MARC holdings data. Now our OPAC search does not
call MySQL except for the branch name.
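The idea in step 4 — copy items.onloan into the MARC holdings data so the OPAC can answer availability from Zebra alone — can be sketched as below. The 952 tag and $q subfield are placeholders for illustration (a common Koha item-field convention), not necessarily the mapping the mail used:

```python
def embed_onloan(item_field, onloan_date):
    """Copy the items.onloan value into an item's holdings field dict.

    item_field is a simple {"tag": ..., "subfields": {...}} stand-in
    for a MARC holdings field; the function returns a new dict so the
    caller's original is left untouched.
    """
    updated = dict(item_field)
    subfields = dict(updated.get("subfields", {}))
    subfields["q"] = onloan_date or ""        # "q" is a placeholder subfield
    updated["subfields"] = subfields
    return updated
```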

5- Average updates per day are about 2000 (circulation + cataloguing).
I can say that the speed of the ZOOM search, which slows down during a
commit operation, is acceptable considering the speed gain we have on the
6- Zebra behaves very well with searches but is very temperamental with
updates. A queue of updates sometimes crashes the Zebra server. When the
database crashes, we cannot save anything even though we are using shadow
files. I'll report on this issue once we can isolate the problems.

