[Pan-users] Re: Better processing of very large groups?


From: walt
Subject: [Pan-users] Re: Better processing of very large groups?
Date: Fri, 3 Jul 2009 01:28:33 +0000 (UTC)

On Thu, 02 Jul 2009 23:53:53 +0000, Duncan wrote:

> Ron Johnson <address@hidden> posted
> address@hidden, excerpted below, on  Thu,
> 02 Jul 2009 13:14:20 -0500:
> 
>> Because Giganews has such a long retention period, some groups can have
>> a very *large number* of messages.  If you subscribe to two or more of
>> them, you could run out of memory.
>> 
>> As it is, pan seems to sequentially scan through all messages when
>> marking a group of them as Read.
>> 
>> There needs to be a better and less memory-intensive method of handling
>> huge groups.  B-trees, hash tables, SQLite, I don't know, but
>> *something* better than the status quo.

The basic problem overwhelming Usenet is that people are using it for file
sharing, a purpose for which it was not intended and is not well suited.

But you knew that already :o)  It would be an interesting novelty for news
servers to offer a well-compressed zipfile of article headers, say, for each
calendar month or week.  
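
On the SQLite idea quoted above: it needn't be anything fancy.  Here's a
rough, untested sketch of what an on-disk header index might look like --
the table and column names are made up for illustration, not anything Pan
actually uses.  The (grp, number) primary key gives you a B-tree index for
free, so marking a whole group read becomes a single UPDATE against the
database instead of a walk over every article held in memory:

  // build with: g++ sketch.cpp -lsqlite3
  #include <sqlite3.h>
  #include <cstdio>

  int main()
  {
      sqlite3 *db = nullptr;
      if (sqlite3_open("headers.db", &db) != SQLITE_OK) {
          std::fprintf(stderr, "open failed: %s\n", sqlite3_errmsg(db));
          return 1;
      }

      // One row per overview line; INTEGER is 64-bit in SQLite,
      // so huge article numbers are not a problem here either.
      const char *ddl =
          "CREATE TABLE IF NOT EXISTS headers ("
          "  grp     TEXT    NOT NULL,"
          "  number  INTEGER NOT NULL,"
          "  subject TEXT,"
          "  is_read INTEGER NOT NULL DEFAULT 0,"
          "  PRIMARY KEY (grp, number));";
      char *err = nullptr;
      if (sqlite3_exec(db, ddl, nullptr, nullptr, &err) != SQLITE_OK) {
          std::fprintf(stderr, "create failed: %s\n", err);
          sqlite3_free(err);
      }

      // "Mark group read" touches the index on disk, not process memory.
      sqlite3_exec(db,
          "UPDATE headers SET is_read = 1 WHERE grp = 'alt.binaries.example';",
          nullptr, nullptr, nullptr);

      sqlite3_close(db);
      return 0;
  }

Whether that would actually beat Pan's current in-memory structures for
normal-sized groups is another question, but it would keep the memory
footprint bounded for the monster groups.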

> ...at least one group on Giganews has "rolled
> over" the 32-bit article sequence integer pan uses.  It needs to be a
> 64-bit number, or at least 33-bit.  More groups will follow over time.
> AFAIK there's a patch floating around to allow pan to deal with this...

Our recent pan angel, K. Haley, has already included that change in his/her
git archive (git://github.com/lostcoder/pan2.git), for which I once again
send my thanks to him/her.
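
For anyone wondering what the fix amounts to, it's essentially widening the
article-number type from 32 to 64 bits.  A toy illustration (the type names
here are invented for the example, not the identifiers the patch actually
touches):

  #include <cstdint>
  #include <cstdio>

  typedef std::uint32_t old_article_number;  // wraps at 4,294,967,295
  typedef std::uint64_t new_article_number;  // effectively never wraps

  int main()
  {
      old_article_number a = 4294967295u;
      ++a;                                   // rolls over to 0
      new_article_number b = 4294967295u;
      ++b;                                   // 4294967296, no rollover
      std::printf("%u %llu\n", (unsigned) a, (unsigned long long) b);
      return 0;
  }

If you want to try it, the tree clones the usual way:

  git clone git://github.com/lostcoder/pan2.git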




