l4-hurd

Re: Resolved: Unexpected page fault from 0xdc003 at address 0x??


From: Marcus Brinkmann
Subject: Re: Resolved: Unexpected page fault from 0xdc003 at address 0x??
Date: Wed, 27 Oct 2004 12:39:35 +0200
User-agent: Wanderlust/2.10.1 (Watching The Wheels) SEMI/1.14.6 (Maruoka) FLIM/1.14.6 (Marutamachi) APEL/10.6 Emacs/21.3 (i386-pc-linux-gnu) MULE/5.0 (SAKAKI)

At Wed, 27 Oct 2004 10:46:46 +0200,
Bas Wijnen <address@hidden> wrote:
> Isn't it better to just skip parts of physmem's address space, to make 
> sure the alignment of the served page is equal in both address spaces? 
> If 0x8000 bytes are requested at address 0x1000, it will have to be cut 
> into 1, 2, 4, 1 pages.  So if physmem's first free page is at 0xb000,
> it should serve 0xb000-0xc000, 0xc000-0xe000, 0x10000-0x14000, and 
> 0xe000-0xf000 (or 0x14000-0x15000, whatever makes more sense for the 
> implementation.)  The code in physmem/zalloc.c looked like it would do 
> exactly that.

Well, first: on ia32, only 4KB pages are supported, besides superpages.
I actually expect that physmem and the user-space pager will only deal
with individual pages, not with arbitrary fpages.  That I use arbitrary
fpages in the startup code at all is more or less a result of my reusing
the code I used to map the whole address space to physmem.
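For concreteness, the 1, 2, 4, 1 split in your example falls out
mechanically once you require every chunk to be a power of two in size
and size-aligned at its base.  Here is a minimal, self-contained sketch
of that calculation (purely illustrative: it is not the actual startup
or physmem code, and real code would turn each (address, size) pair
into a proper fpage rather than print it):

#include <stdio.h>

/* Largest power of two not larger than X (X != 0).  */
static unsigned long
largest_pow2_below (unsigned long x)
{
  while (x & (x - 1))
    x &= x - 1;
  return x;
}

int
main (void)
{
  /* Bas's example: 0x8000 bytes requested at address 0x1000.  */
  unsigned long addr = 0x1000;
  unsigned long size = 0x8000;

  while (size > 0)
    {
      /* A chunk may be no larger than the alignment of ADDR (its
         lowest set bit) and no larger than the biggest power of two
         that still fits into the remaining SIZE.  */
      unsigned long chunk = addr ? (addr & -addr) : size;
      unsigned long fit = largest_pow2_below (size);
      if (chunk > fit)
        chunk = fit;

      printf ("chunk at 0x%lx, size 0x%lx (%lu pages)\n",
              addr, chunk, chunk / 0x1000);

      addr += chunk;
      size -= chunk;
    }
  return 0;
}

Run as is, it prints chunks of 0x1000, 0x2000, 0x4000 and 0x1000 bytes,
i.e. 1, 2, 4 and 1 pages, matching your example.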

To answer your question: physmem could rearrange the memory to make it
better aligned, but currently it doesn't do that, and I doubt it is
practical to do that all the time, especially on hardware that only
supports 4KB pages, and assuming a pager that pages memory as
individual 4KB frames.  Also, consider shared pages: in that case the
load address will be different across tasks.
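To make the alignment issue concrete: a region can be mapped as a
single fpage of size 2^k only if its base is 2^k-aligned in both
address spaces, physmem's and the client's.  A rough sketch of that
constraint (illustrative only, the names are made up):

#include <stdbool.h>

/* Can the memory at PHYSMEM_ADDR be mapped to CLIENT_ADDR as one
   fpage of SIZE bytes (SIZE a power of two)?  The fpage base must be
   SIZE-aligned on both sides.  */
static bool
single_fpage_ok (unsigned long physmem_addr, unsigned long client_addr,
                 unsigned long size)
{
  return ((physmem_addr | client_addr) & (size - 1)) == 0;
}

For a page shared by several tasks, physmem_addr is fixed while
client_addr differs per task, so no rearranging on physmem's side can
satisfy this for all of them at once.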

In any case, it's not something to worry about.  Now, I have fixed this
bug here (not in CVS yet), but after some debugging I just noticed that
the ELF loader is not correct yet.  I found the problem, and will add a
workaround until we have COW (copy on write).

Thanks,
Marcus