Subject: [monit] Problem monitoring mongrel Rails servers
Date: Thu, 3 Apr 2008 16:08:44 +0100
I am having difficulty monitoring a cluster of mongrel servers on Solaris.
My .monitrc contains an entry like the following for each mongrel server in the cluster (only the port number changes):
check process mongrel_prod_30000 with pidfile /web/docs/forms.eurostar.inet/current/tmp/pids/mongrel.30000.pid
  if failed host elbrus port 30000 protocol http
    and request "/" then alert
  if totalmem > 100 Mb then restart
  if cpu > 60% for 2 cycles then alert
  if cpu > 80% for 5 cycles then restart
  if loadavg(5min) greater than 10 for 8 cycles then restart
  if 3 restarts within 5 cycles then timeout
  start program = "/usr/local/bin/mongrel_rails cluster::start -C /web/docs/forms.eurostar.inet/current/config/clusters/production.yml --only 30000 --clean"
  stop program = "/usr/local/bin/mongrel_rails cluster::stop -C /web/docs/forms.eurostar.inet/current/config/clusters/production.yml --only 30000 --force"
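As a diagnostic sketch (my own suggestion, not part of the original post): monit decides whether the process "exists" purely by reading the pid file named in the `check process` line, so if mongrel writes its pid elsewhere, or leaves a stale file behind after a kill, monit's view and reality diverge and a restart attempt can misfire. A quick way to compare the pid file against the live process table, using the path from the monitrc entry above:

```shell
# check_pid PIDFILE: succeed (exit 0) only if the pid file exists
# and the pid it contains belongs to a currently running process.
check_pid() {
    [ -f "$1" ] && kill -0 "$(cat "$1")" 2>/dev/null
}

# Path taken verbatim from the monitrc entry above.
if check_pid /web/docs/forms.eurostar.inet/current/tmp/pids/mongrel.30000.pid; then
    echo "pid file matches a running process"
else
    echo "pid file missing or stale"
fi
```

Running this right after killing a mongrel would show whether the pid file was cleaned up, and therefore what state monit sees when it tries to restart.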
When I run monit, everything seems fine: I can see all the mongrel servers running. If I kill one of them, for example the one referred to above, I get an email alert:
Does not exist Service mongrel_prod_30000
and the monit web page then shows that mongrel as "not monitored". I then get another email alert:
Execution failed Service mongrel_prod_30000
Date: Thu, 03 Apr 2008 16:02:51 +0100
Description: 'mongrel_prod_30000' failed to start
Your faithful employee,
I have checked for permissions problems: the start and stop commands work fine from the command line when run as root, and monit is also running as root.
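One difference between a root login shell and monit that is easy to miss (a general suggestion from me, not something established in the post): monit spawns start/stop programs with a stripped-down environment, so a command that relies on PATH, GEM_HOME, or similar settings inherited from a login shell can fail under monit even though it works at the prompt. `env -i` reproduces that kind of empty environment for a quick test:

```shell
# env -i runs a command with an empty environment, roughly like the
# minimal environment a daemon manager hands to its child processes.
# Variables from the calling shell do not survive the env -i boundary:
FOO=bar env -i /bin/sh -c 'echo "FOO is [$FOO]"'

# To test the actual start command the same way, one could run it
# (hypothetically) wrapped as:
#   env -i /bin/sh -c '/usr/local/bin/mongrel_rails cluster::start ...'
# and see whether it still starts, or complains about a missing command
# or gem that a login shell would normally provide.
```

If the start command fails under `env -i` but succeeds at the prompt, wrapping it in `/bin/sh -c '...'` with the needed variables set explicitly in the monitrc `start program` line would be the usual workaround.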
I also tried monitoring apache with a similar script, and its start/stop worked fine, so I suspect something specific to the mongrels, though I cannot see what it could be, since they work fine otherwise.
I don't want to have to monitor my monitor!
Any advice warmly received!