

Re: Working on the ddf

From: Gianluca Guida
Subject: Re: Working on the ddf
Date: Sun, 18 Apr 2004 11:17:48 +0900
User-agent: Wanderlust/2.10.1 (Watching The Wheels) SEMI/1.14.6 (Maruoka) FLIM/1.14.6 (Marutamachi) APEL/10.6 Emacs/21.3 (i386-pc-linux-gnu) MULE/5.0 (SAKAKI)

At Sat, 17 Apr 2004 12:02:09 -0400,
Neal H. Walfield wrote:
> > Managing block caches as normal process memory won't work too well. 
> > The mempolicy server should treat them as a special memory. 
> Can you make an argument why you think this to be the case?

This is what my mail was about. Just keep reading. :)

> Extra
> frames are treated specially.  

Using extra frames for caches doesn't help: unless you want to force
write-through behaviour, a cache entry may be dirty, so we cannot
guarantee that every cache page is immediately freeable.

Anyway, this is not the issue. The issue is controlling the size of the
store caches, as I'll explain in the next paragraphs.

> If you think we need something else,
> what exactly do we need and why?

As I tried to show in my previous mail, the cache (wherever it's located)
needs a specific memory allocation policy. In fact, using common sense it's
easy to see that a process needs more space when it's faulting too
much, but a cache needs more space when it gets too many cache misses.
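To illustrate the distinction, here is a toy model of such a policy: it grows whichever client shows the most "pressure", measured as page faults for processes and misses for caches. All names (`pressure`, `grant_frame`, the dict fields) are hypothetical illustrations, not an actual mempolicy interface:

```python
# Toy sketch: a memory policy that gives the next free frame to the
# client under the most pressure -- page faults for processes,
# cache misses for store caches. Purely illustrative.

def pressure(client):
    """Processes signal need via page faults; caches via misses."""
    if client["kind"] == "process":
        return client["faults"] / max(client["accesses"], 1)
    else:  # store cache
        return client["misses"] / max(client["accesses"], 1)

def grant_frame(clients):
    """Give one free frame to the client under the most pressure."""
    return max(clients, key=pressure)

clients = [
    {"name": "task",  "kind": "process", "faults": 5,  "accesses": 100},
    {"name": "cache", "kind": "cache",   "misses": 40, "accesses": 100},
]
winner = grant_frame(clients)
# Here the cache (40% miss rate) wins over the process (5% fault rate),
# even though both are "using memory" from the server's point of view.
```

The point is only that the two signals are different: a single policy tuned to fault rates has no way to notice a cache that is thrashing.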

As you say, it need not be explained how store caches can improve the
speed of the system. Thus, to get good system performance, we need to give
the store caches a good physical memory allocation policy.

Processes and store caches need different policies for physical memory
allocation, so the mempolicy server should have an explicit policy for
store caches and should explicitly give memory to them.

This cannot be done by making whatever uses the device worry about
caching: if the mempolicy server wants to enlarge or shrink the store
cache, all it can do is give a page to the process implementing the
caching, but -- the processes obviously being self-paging -- the
mempolicy server can't be sure that the page will actually go to the
store cache.

> > What i want to say with this is that I/O block data caches can't be 
> > treated as normal memory implemented in an untrusted server.
> I was arguing for no block level cache and make whatever is using the
> device worry about caching (thorugh the use of extra frames, i.e. not
> through the use of normal memory in an untrusted server) due to the
> inherent nature of the way in which block devices are used
> (i.e. almost always by a single client).
> >  They [I/O block caches] are a 
> > separate and independent concept that should be taken into account when 
> > designing physical memory management of an Operating System.
> I fail to see how the proposed method is inadequate.  You have noted
> that block caches should not be done in normal memory.  I agree with
> that, hence the extra frame concept.  As for the use of block caches,
> you are going to have to make an argument for why we need both block
> and file system caching (or just block caching).

About extra frames, I think my previous paragraphs clear things up here.
The mempolicy server should treat store cache pages differently from
process memory pages.

About block level caches versus filesystem-driven caches, I think you're
right that filesystem level caches will be smarter and hence perform
better, but the proposed method is inadequate because there won't be a
smart policy for the size of the caches.

I don't have a solution yet -- I am just sharing doubts -- but I think
something near to a solution would be to let the cache stores be
recognized by the physmem server. The mempolicy server would shrink and
enlarge them, while the filesystem server (or whatever uses the device)
would decide which entries to fetch and which to flush. A G+E (guaranteed
plus extra) frame mechanism could be used even for the store caches
(i.e. the cache would have guaranteed pages, while the other entries
would be kept freeable because they are considered extra).
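A rough sketch of what such a G+E store cache could look like, under my earlier constraint that extra pages must stay immediately freeable (hence clean). The class and method names here are invented for illustration only, not a proposed physmem interface:

```python
# Toy sketch of a G+E (guaranteed + extra) store cache: pages up to
# `guaranteed` may be dirty; anything beyond that is "extra" and must
# stay immediately freeable, i.e. clean. Hypothetical, not real code.

class StoreCache:
    def __init__(self, guaranteed):
        self.guaranteed = guaranteed
        self.pages = []  # list of (block, dirty), oldest first

    def insert(self, block, dirty=False):
        if dirty and len(self.pages) >= self.guaranteed:
            # Extra pages must be freeable: force write-through so the
            # entry is clean before it lives beyond the guarantee.
            dirty = False  # pretend the block was written back
        self.pages.append((block, dirty))

    def reclaim_extra(self):
        """physmem asks for the extra pages back; all are clean."""
        extra = self.pages[self.guaranteed:]
        self.pages = self.pages[:self.guaranteed]
        return extra

cache = StoreCache(guaranteed=2)
for b in range(4):
    cache.insert(b, dirty=True)
freed = cache.reclaim_extra()  # the two extra pages, both clean
```

This keeps the size decision (how many guaranteed pages, when to take the extras back) in the mempolicy/physmem side, while the client still decides which blocks to cache.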

This direction, as you can see, makes the store cache an architectural
component, moving it to a 'lower' level than where it is now. Oh, and by
the way, I have never been so mad as to think about a two-level store
cache in physical memory, don't worry. ;)

