
Re: cvs pserver performance

From: Russ Tremain
Subject: Re: cvs pserver performance
Date: Wed, 24 Jan 2001 16:06:26 -0800

Just to close out this topic, our performance problems
were not related to network performance.  The excessive
TCP-level retransmits that I had seen earlier were
a transient problem over our WAN.

The problem was all disk i/o, primarily in the tmp device
that was being used by the pserver.

I found this out initially by going into the cold room
and listening to the machine.  The disk array was
hammering away like an old coffee grinder, with only
one pserver process running.

Part of the problem was the way the disk array was
set up to mirror all writes.  It was not tuned
for performance.

So we created a new device that was tuned for performance,
and mounted it on the /tmp used by the pserver.  This
improved performance greatly, especially for simultaneous
accesses.  We can now support several simultaneous updates
without the significant performance degradation I described
earlier.

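On Solaris, one quick way to get a fast scratch device is a
swap-backed tmpfs.  This is only a sketch of that option (the
mount point and size here are illustrative, not our actual
array tuning):

```shell
# Mount a memory/swap-backed tmpfs for the pserver's scratch area
# (Solaris syntax).  /var/cvs/tmp and the 512m cap are examples only.
mount -F tmpfs -o size=512m swap /var/cvs/tmp

# To make it survive reboots, the equivalent /etc/vfstab entry:
#   swap  -  /var/cvs/tmp  tmpfs  -  yes  size=512m
```

A tmpfs trades memory pressure for disk i/o, so size it against
the ~6500-directory scratch tree described below.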
When someone does a full update against our large repository,
each pserver process creates some 6500 directories in /tmp
and writes some 7500 files.  This is a copy of the directory
structure of the repository and the CVS/ meta files from the
client.  Then it compares this against the repository in order
to calculate the diffs that need to be sent to the client.

This is why the /tmp device must be as fast as possible.

If you have a large repository, with a large development
organization, it is hard to see how a pserver can ramp up
without some strategy whereby you give each pserver
a fast /tmp device.


At 2:38 PM -0800 1/10/01, Russ Tremain wrote:
>At 1:35 PM -0800 1/10/01, Larry Jones wrote:
>>Russ Tremain writes:
>>> We are running cvs 1.10.8 as a pserver on a rather beefy
>>> solaris system.  Our repository is fairly large and contains
>>> about 45,000 files.  This machine is idle most of the time,
>>> and its only job is to run the CVS server.
>>There were a bunch of memory leaks in 1.10.8 that were fixed in 1.11
>>which could well cause the server process to grow very large and start
>>thrashing when checking out or updating that many files.  Upgrading may
>>help your performance and I'd recommend it in any case.
>good suggestion... I will try this.
>>> When I snoop on the ethernet interface, I find that all clients
>>> are sending to the server port 2401.
>>> I was surprised by this, since most servers only listen on
>>> a well-known socket for connections, negotiate a private socket
>>> to handle a particular client, and then use this new socket for
>>> further communications.
>>You're confusing ports with sockets.  A socket is identified by the
>>local host address, the local port number, the remote host address, and
>>the remote port number.  Most servers work the same way CVS does: they
>>listen on a well-known *port* for connections; when they accept a
>>connection, they get a private *socket* that is used for further
>>communication.  That socket still has the well-known local port, but
>>each such socket has a different remote address/port.
>ahh... yes, so I was.  so inetd just opens a socket and cvs inherits
>this open file descriptor, and then inetd's job is done.  The kernel
>is now able to route all packets that come in on this socket to
>the process with the open file descriptor, which is now cvs.
>>> My understanding is that if a bunch of processes are reading
>>> the same socket, then the packet is consumed by the first
>>> reader.
>>That is correct, but each CVS server has a unique socket (they just all
>>have the same local port number).
>>> Therefore, the retransmits would make sense as a source of the
>>> poor performance we are experiencing when we have multiple
>>> updates running.
>>If you're dropping network packets, you either have a network problem or
>>the server is badly overloaded.  You need to identify where the packets
>>are getting dropped and, if it's the server, you need to do some
>>performance analysis on it to discover the problem.  The O'Reilly book
>>on System Performance Tuning is a good place to start.
>I don't think it is a network problem per se, since I believe I can
>duplicate it on a system without using the network.  But I need
>to do some work here to set up a test.  (we had another system
>demonstrating similar characteristics when all the updates were
>local processes).
>Another wrinkle that I didn't mention is that we use a perl script
>to do a chroot and handle the overflow "--allow-root" args, so
>this could be causing problems as well.
>At any rate, I'm relieved to hear that my original hypothesis
>was *completely* wrong... :)
>Thanks for your help... I will report back what I find out
>to the list.
>>-Larry Jones
>>Buddy, if you think I'm even going to BE here, you're crazy! -- Calvin
