Re: [PATCH] ext2fs and large stores (> 1.5G)


From: M. Gerards
Subject: Re: [PATCH] ext2fs and large stores (> 1.5G)
Date: Tue, 29 Apr 2003 23:59:21 +0200
User-agent: Internet Messaging Program (IMP) 3.1

...

> The rest of disk_image is used as a block cache for indirect blocks.  Code 
> that uses indirect blocks must use disk_image_request and 
> disk_image_release.  disk_image_request first checks whether the block is 
> already in memory, and if it's not, allocates it from the block 
> cache.
> 
> When disk_image_release is used on modified blocks, they are marked as 
> "dirty".  The actual write of the block happens in the pokel or 
> sync_global_ptr, which call disk_image_clear.  disk_image_clear clears 
> the dirty flag of the block, thus allowing it to be reused.

Wouldn't it be easier to use what Neal proposed instead?  You use an LRU block
cache; Neal proposed letting gnumach handle this by mapping everything.  You could
do this by making disk_image_request create a mapping and put it in a hash table,
which IMHO is better than your current solution of one gigantic lookup table.
(Or did I misunderstand your code?  I only had a quick look.)

> * Block sizes other than 4096 are not supported.  Don't try it! 
> Discussion of this issue is welcome, as I haven't thought about it yet.

With what I proposed above, you would request ext2 blocks from
disk_image_request (or is this how it works now?), which would make a mapping if
this wasn't already done for another block in the same page.  You can also record
whether a block is dirty, just as you do now.  Only record the dirty state; never
remove the mapping from the hash table.  In pager_write_page (or better: the
function called by it) you should check whether each block is dirty, write it if
it is, and ignore it if it is not.

Keep up the good work!  If you want to implement what I proposed and need help,
feel free to ask! :)  Please tell me if I misunderstood part of your code...

Thanks,
Marco Gerards



