grub-devel

Re: [PATCH v2 3/3] disk: read into cache directly


From: Vladimir 'phcoder' Serbinenko
Subject: Re: [PATCH v2 3/3] disk: read into cache directly
Date: Wed, 02 Mar 2016 09:55:30 +0000



On Wed, Mar 2, 2016 at 4:46 AM, Andrei Borzenkov <address@hidden> wrote:
On 02.03.2016 03:22, Vladimir 'phcoder' Serbinenko wrote:
> Is there any way this patch reclaims unused memory in case of partial cache
> eviction?

Not yet; I wanted to make sure the base looks sane first. Are there any
cases of partial eviction besides blocklist writes?

I mean that if you read 1M directly and then invalidate 255 x 4K of it, you really have only 4K of cache left but you still use 1M of RAM. This creates additional memory pressure. The good thing is that grub_malloc will invalidate the whole cache and reclaim the memory when it is unable to find any other spot, but it still increases memory fragmentation, as grub_malloc prefers squeezing allocations in before reclaiming memory.
> If not it could potentially waste lots of memory. One thing you
> need to consider is that the initrd for Solaris can easily be 60% of total
> RAM (300 MiB on a 512 MiB machine). What if the requested read is bigger
> than the disk cache size?
>
>

Not sure I parse it; but yes, it puts more stress on the cache by requiring
unfragmented memory. If we follow this route, we need to gracefully fall
back to the cache element size.

The problem is that in this case the caller already has 300M of RAM in the buffer it passed, and you can't allocate another 300M because the machine simply doesn't have that much RAM.
What's the cost in core size of your approach? What's the speed benefit of your approach over just fixing read_small as Leif proposes?
Do we have any reasonable chance to separate aligned and non-aligned
memory pools?

How would you allocate 60% of RAM in a single chunk in this arrangement?
_______________________________________________
Grub-devel mailing list
address@hidden
https://lists.gnu.org/mailman/listinfo/grub-devel
