Re: [Pan-users] pan for Windows crashes when reading large newsgroup


From: Duncan
Subject: Re: [Pan-users] pan for Windows crashes when reading large newsgroup
Date: Thu, 25 Oct 2012 04:12:41 +0000 (UTC)
User-agent: Pan/0.140 (Chocolate Salty Balls; GIT f91bd24 /usr/src/portage/src/egit-src/pan2)

Zan Lynx posted on Wed, 24 Oct 2012 20:25:49 -0600 as excerpted:

> For MinGW it is --large-address-aware given to the "ld" linker. From the
> search results I read, MinGW support libraries will work fine since
> their code is almost entirely from Unix where addresses > 2GB have been
> common.
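
(Mechanically, with pan's usual autotools build under MinGW, I'd expect 
that flag gets handed to ld through gcc, i.e. via LDFLAGS, something 
like the lines below.  Untested on my end, so treat it as a sketch, and 
the configure options are whatever you'd normally use:)

    LDFLAGS="-Wl,--large-address-aware" ./configure ...
    make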

I believe I said it elsewhere, but it may be worth repeating.  The 
largest groups, on servers with enough retention, are known to push this 
pan issue well past the 4-gig barrier and into typical 64-bit address 
space (8 gigs plus; IIRC one projection was ~17 gigs for the largest 
group that poster was aware of).

So while busting the 2-gig barrier may help temporarily for someone who 
has just started having the problem, as I believe is the case here, it's 
hardly a general-case solution.  For that, even 4 gigs isn't close.

I see five alternatives (the first two establishing the boundaries):

1) Simplest: throw 64-bit hardware (and learn how to do a 64-bit MS 
build yourself if necessary) and 8 or 16 gigs of RAM at it.  But while 
theoretically the cleanest solution barring #2, this could be 
impractical due to a lack of hardware budget or of the time/patience to 
learn how to get pan built.

2) Likely the most impractical: just forget about news until the 
database backend patches get written and merged.  Most impractical 
because, unless you're a coder, the timeline here is entirely out of 
your control.

3) Find some alternative other than pan.  I had /thought/ that MS had a 
number of choices for binary harvesters, but I've been out of the MS 
loop for over a decade now, and out of binaries for maybe half a decade, 
so I really can't say what the current status is.  Much as it pains this 
FLOSS (common abbreviation for free/libre and open source software) guy 
to say it, were I still on MS (but if I were, I'd obviously not be so 
much a FLOSS guy), I'd definitely be researching this.

4) This large-address thing, if it works, may put off the problem for 
long enough to make #2 workable.  Obviously this is the current 
discussion.

5) Find some way to continue working within current parameters.  Thus my 
suggestions on ASAP delete, score-based action-delete, tightening up 
expires as much as possible, etc. (a rough scorefile sketch follows this 
list).  This could buy some time, perhaps significant time (more than a 
doubling of the effective headroom) if the actions cut in soon enough, 
which I'm not sure of, PROVIDED less than half of all articles are 
considered interesting, so that more than half can be action-deleted 
before the full memory cost hits.
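
To make #5 a bit more concrete, here's roughly the sort of score rule I 
mean.  Pan reads slrn-style score files; the group name and subject 
pattern below are just made-up placeholders.  The idea is to score the 
uninteresting stuff low enough that the score-based delete action (the 
one Heinrich added) can throw it away:

    % placeholder example; adjust the group and pattern to taste
    [alt.binaries.example]
    Score: -9999
        Subject: (password|virus|\.exe)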

I'd be spending some time on this last one, too, at least enough to test 
and see if the memory cost occurs before or after the score-based action-
delete has a chance to kick in.

Alternatively, Heinrich, you implemented the actions.  Does the delete 
action kick in before the memory cost has been paid, or after?

But even if the memory cost has already been paid by then, the stricter 
automated control over deleting uninteresting articles could help, 
particularly if 
downloaded posts are deleted ASAP as well.  I really am interested in 
this angle and would love to see some followup on how well it actually 
works, both because I like its chances and because I find the score-based 
auto-delete personally fascinating as a really powerful toy that I wish I 
had back when I did binaries.

(Some day I might get back into binaries, too.  If I weren't effectively 
unemployed and more worried about just keeping the power and net on and a 
roof over my head ATM...  Seriously.  If I disappear for a while, people 
here now know what happened...)

-- 
Duncan - List replies preferred.   No HTML msgs.
"Every nonfree program has a lord, a master --
and if you use the program, he is your master."  Richard Stallman



