Re: %gs:0 thread pseudoregister in oskit-mach
Fri, 26 Apr 2002 00:33:53 -0400 (EDT)
I don't know a lot about L4, and I have just been skimming through the X.2
spec a bit. But I worked on the Fluke microkernel at Utah, and Fluke was
influenced by the original L4, so I have a general sense about this stuff
and you can correct me on any details.
As to %fs, I don't know. You'd have to ask the people who came up with the
TLS spec. Maybe they were also trying to leave %fs available for
implementing the Win32 ABI, or maybe it is just that %gs is existing
practice in linuxthreads, with no particular reason behind that choice.
> Yes. As I said, %gs:0 can indeed be changed by user-level code.
The only time %gs:0 would be changed is by the pthreads implementation
during a user-level thread switch, when multiplexing n pthreads onto m
kernel threads. But if an LIPC context switch happens
the way I'm guessing (essentially a user-level switch performed by code in
the KIP) then I imagine that pthreads would want to just use 1-to-1 and
integrate with the LIPC optimizations.
> First solution that comes to mind is to use %gs:4 (or something) for
> addressing the UTCB. I'm not too happy with this solution, though
> (what stops the next guy from coming and claiming %gs:4 for some other use).
> Another problem with this solution is that %gs:0 must be treated as a
> regular register and saved/restored on context switches.
That is exactly what I've made Mach do. It's really the only reasonable
solution in Mach, where there is no user memory region intrinsically
associated with a kernel thread. But in the L4/Fluke model, where each
kernel thread has a user-space memory component that threads can easily
locate, it's obviously the natural and desirable thing to unify this
with the TLS pointer storage.
> The optimal solution from L4's point of view would be to use the
> UserDefinedHandle virtual register. I do realize, however, that this
> is not possible if the compiler/linker can create code like "movl
> %gs:0, %eax". It also makes TLS lookup more expensive as it needs
> another indirection to get to the wanted memory contents.
The TLS stuff is pretty complex and has many wrinkles. I am only gleaning
what I know from reading some draft specs and trying to make sense of them.
As I understand it, all TLS accesses from shared objects will use a helper
function that is implemented in glibc. So it is no problem for that code
in libc to do some kernel-specific magic to locate the TLS data structures.
However, when a program itself does TLS accesses (which will include all
uses of errno) it may generate optimized code sequences that expect to load
a pointer from %gs:0 and apply some offset to it to find the TLS data.
The parts of the TLS draft about things being "changed for what the ABI
does" mean that the ELF spec covers just what the relocs mean and what the
linker and dynamic linker need to do with them. The further details of how the
thread register works are part of the OS ABI and not specified as part of
ELF per se.
But for GNU/Hurd we want to have new ABI details be compatible with what's
used on GNU/Linux, so that the compilers, binutils, and dynamic linker
behavior for the two platforms remains the same. The %gs:0 model is what
GNU/Linux is using now. It allows for a TCB of arbitrary size and content,
but only if it can be interspersed in memory with the user-mode TLS data
whose size is known only at dynamic linker startup time. This would fit
fine e.g. with Fluke threads, where the user provides some user memory for
the kernel's use (whose address constitutes the local thread ID similarly
to L4). But I gather from the X.2 spec that the UTCBs are tightly-packed
in an area preallocated by the system at task startup, so this would be a
problem. Perhaps it would be possible to preallocate a big TLS data region
immediately below the UTCB area, sized to hold as many appropriately sized
TLS data blocks as the UTCB area can hold threads, and have the TLS support
code return offsets from %gs:0 that get biased by the size of the TLS area.
But I think it may be the case that the "optimized local exec model" for
TLS data used by a program's own code will reduce at link time to constant
negative offsets from the %gs:0 pointer value, and there won't be any
run-time way to apply a bias to those.
This would be a problem for any new L4Linux that wants to support binaries
built with new TLS support and a new glibc/linuxthreads for Linux/x86 too.