
Re: [Pan-users] 64 bits fails as solution to large binary groups


From: Heinrich Müller
Subject: Re: [Pan-users] 64 bits fails as solution to large binary groups
Date: Mon, 10 Oct 2011 07:59:13 +0000 (UTC)
User-agent: Pan/0.135 (Tomorrow I'll Wake Up and Scald Myself with Tea; GIT d8bfcda master)

On Mon, 10 Oct 2011 01:46:20 -0500, Ron Johnson wrote:

> On 10/10/2011 01:05 AM, Heinrich Mueller wrote:
>> On 10.10.2011 05:15, Ron Johnson wrote:
>>> On 10/09/2011 09:58 PM, Lacrocivious Acrophosist wrote:
>>>> Ron Johnson<address@hidden> writes:
>>>>
>>>>
>>>>> Having to upgrade to Ubuntu Maverick because Natty sucks, I decided
>>>>> to also migrate to 64 bits now that Adobe has released a 64 bit
>>>>> Flash.
>>>>>
>>>>> One of the first things that I did was try out Pan on a binary
>>>>> group.
>>>>>
>>>>> Many hours later, it had fetched 6 weeks of headers and consumed
>>>>> 6.8GB of RAM. The 2+ years of data in Giganews would require 123GB
>>>>> of RAM.
>>>>>
>>>>> :(
>>>>>
>>>>>
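(For scale: Giganews' 2+ years of retention is roughly 18 of those
six-week spans, and 18 x 6.8 GB is about 122 GB, so the 123 GB figure is
straightforward linear extrapolation, assuming the per-week header
volume stays roughly constant.)
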
>>>> At risk of exposing myself as a Known Idiot... is this 64-bit
>>>> performance different from 32-bit performance, and can you 'prove'
>>>> it? ;-)
>>>>
>>>>
>>> What do you mean by "different performance"?
>>>
>>> It's a fact that 32-bit Pan runs out of *process* address space at
>>> around 2GB. 64-bit Pan doesn't technically have that limit, but for
>>> all practical intents it effectively does, since physical RAM runs
>>> out first.
>>>
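(To make the address-space point above concrete with a trivial
stand-alone snippet, which is not Pan code, just an illustration: a
32-bit process can address at most 2^32 bytes = 4 GB, and the kernel
usually leaves only about 2-3 GB of that to user space, whereas a
64-bit process is limited in practice by installed RAM and swap.)

    #include <cstdio>

    int main()
    {
        // On a 32-bit build sizeof(void*) == 4, so the process sees at
        // most 2^32 bytes (4 GB) of virtual address space, of which the
        // kernel typically leaves only ~2-3 GB to the application; that
        // is the wall 32-bit Pan hits. On a 64-bit build the cap lies
        // far beyond any installed RAM, so physical memory becomes the
        // practical limit.
        std::printf("pointer width: %zu bits\n", sizeof(void*) * 8);
        return 0;
    }
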
>>>> As for the multi-bazillion-header binary groups... is there *any*
>>>> 'old style' newsreader capable of downloading all their headers? By
>>>> 'old style' I mean newsreaders intended to include conversation.
>>>> Giganews, for one, would seem to me to make this nearly impossible
>>>> due to their vast retention span.
>>>>
>>>>
>>> Any "straight to file" news reader could do it, given the time to d/l
>>> all the headers.
>>>
>>> Pan's fatal binary group flaw is that it stores all the headers in
>>> memory before writing them out to disk.
>>>
>> Now I see again. That's really tragic. Perhaps I'll find the time for a
>> solution.
>>
>>
> That would be *great*.  I hope the code is modular and isolated enough
> that only low-level code needs to be changed.

Not really, but I'll think of something...
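For instance, something along these lines (just a rough sketch with
made-up names, not actual Pan code): parse each XOVER batch as it
arrives, append it to the group's header file on disk, and drop the
batch from memory, so RAM use stays proportional to one batch instead
of to the whole group.

    #include <fstream>
    #include <string>
    #include <vector>

    // Hypothetical types and names, for illustration only.
    struct Header {
        unsigned long article_number;
        std::string   message_id;
        std::string   subject;
    };

    // Append one downloaded batch to the group's header file and return;
    // the caller then clears the batch, so memory scales with the batch
    // size rather than with two-plus years of retention.
    void flush_batch(const std::string& group_file,
                     const std::vector<Header>& batch)
    {
        std::ofstream out(group_file.c_str(), std::ios::app);
        for (std::vector<Header>::const_iterator it = batch.begin();
             it != batch.end(); ++it) {
            out << it->article_number << '\t'
                << it->message_id << '\t'
                << it->subject << '\n';
        }
    }

The download loop would call flush_batch() every few thousand headers
and then clear the in-memory vector; building the threaded view could
later stream or index that file instead of holding everything at once.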




