
Re: [libunwind] Feature request: Get an unw_dyn_info_t* from an address

From: David Mosberger
Subject: Re: [libunwind] Feature request: Get an unw_dyn_info_t* from an address
Date: Tue, 16 Dec 2003 10:25:29 -0800

>>>>> On Tue, 16 Dec 2003 13:53:15 +0100, Johan Walles <address@hidden> said:

  Johan> We sometimes unload dynamically generated code from memory.
  Johan> When we do that we need to tell libunwind about it.  We do
  Johan> that by calling _U_dyn_cancel(unw_dyn_info_t *di).  For
  Johan> calling that function we need a pointer to the function's
  Johan> unw_dyn_info_t.

  Johan> We can keep track of that pointer ourselves.  However, since
  Johan> libunwind already does (otherwise, how could it unwind the
  Johan> stack?), we'd prefer being able to ask libunwind for the
  Johan> unw_dyn_info_t* corresponding to a certain address.

  Johan> Is that easily doable?  What kind of complexity would we be
  Johan> looking at for getting that information?  Would we be better
  Johan> off storing that information ourselves?

libunwind can do it, but not in constant time.  For register and
cancel to be constant-time, libunwind can't really set up any fancy
data-structures (I don't consider hash-tables to be constant
time---not if you can't bound the number of elements in the table).
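Since libunwind keeps register/cancel constant-time, the lookup is
cheapest on the code-generator's side.  A minimal sketch of such a
side table, using a stand-in struct in place of the real
unw_dyn_info_t from <libunwind.h> (the stand-in, the table size, and
the helper names are illustrative, not part of libunwind's API):

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Stand-in for libunwind's unw_dyn_info_t; a real program would use
   the full definition from <libunwind.h>. */
typedef struct unw_dyn_info {
    uint64_t start_ip;   /* first address covered by this region */
    uint64_t end_ip;     /* one past the last covered address    */
} unw_dyn_info_t;

/* The code generator's own side table: one slot per region, filled
   in alongside the call to _U_dyn_register(). */
#define MAX_REGIONS 64

static unw_dyn_info_t *regions[MAX_REGIONS];
static size_t nregions;

static void remember_region(unw_dyn_info_t *di)
{
    assert(nregions < MAX_REGIONS);
    regions[nregions++] = di;
}

/* Linear scan: O(N) in the number of live registrations, but N is
   under the generator's control, and a sorted table would make this
   O(log N) without touching libunwind's internals. */
static unw_dyn_info_t *region_for_addr(uint64_t addr)
{
    for (size_t i = 0; i < nregions; i++)
        if (regions[i]->start_ip <= addr && addr < regions[i]->end_ip)
            return regions[i];
    return NULL;
}
```

The pointer returned by region_for_addr() is then exactly what
_U_dyn_cancel() needs when the region is unloaded.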

Also, note that _U_dyn_cancel() does NOT flush the cache.  The idea
here is that a code-generator may often cancel the registrations of
several routines in a contiguous range; in that case, the
code-generator can get away with a single flush-cache call.
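That pattern looks roughly like the sketch below.  The stubs
dyn_cancel() and flush_cache() merely count calls so the sketch is
self-contained; real code calls _U_dyn_cancel() and
unw_flush_cache(unw_local_addr_space, lo, hi) from <libunwind.h>:

```c
#include <stdint.h>

/* Stand-in struct and stubs so the pattern runs without libunwind;
   the counters exist only to make the 1-flush-per-range point. */
typedef struct { uint64_t start_ip, end_ip; } unw_dyn_info_t;

static int cancel_calls, flush_calls;

static void dyn_cancel(unw_dyn_info_t *di)      /* _U_dyn_cancel() stand-in */
{ (void)di; cancel_calls++; }

static void flush_cache(uint64_t lo, uint64_t hi) /* unw_flush_cache() stand-in */
{ (void)lo; (void)hi; flush_calls++; }

/* Unload a contiguous run of generated routines: N cancels, ONE flush. */
static void unload_range(unw_dyn_info_t **dis, int n,
                         uint64_t lo, uint64_t hi)
{
    for (int i = 0; i < n; i++)
        dyn_cancel(dis[i]);   /* does not touch the cache */
    flush_cache(lo, hi);      /* single flush covering the whole range */
}
```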

Perhaps it would make sense to add a "cancel_and_flush_range" call,
which flushes a cache-range and cancels all the dynamic unwind-info
registrations that fall in the range.  I think this would still be
O(N) (N = number of dynamic unwind-info registrations), but since you
could cancel an arbitrary number X of dynamic registrations in one
call, the overhead would amortize to O(N/X) per registration.  Would
that help you?
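To make the proposal concrete: cancel_and_flush_range() below is the
HYPOTHETICAL call being suggested, modeled here over a toy array
standing in for libunwind's internal registration list (none of these
names exist in libunwind today).  One O(N) pass drops every
registration inside [lo, hi) and then flushes once:

```c
#include <stddef.h>
#include <stdint.h>

/* Toy model of libunwind's internal registration list. */
typedef struct region {
    uint64_t start_ip, end_ip;
    int live;                /* 1 while registered */
} region_t;

static int flushes;

static void flush_cache(uint64_t lo, uint64_t hi)
{ (void)lo; (void)hi; flushes++; }

/* Hypothetical "cancel_and_flush_range": a single O(N) pass over all
   N registrations cancels every one falling inside [lo, hi), then
   flushes the cache once.  Cancelling X regions this way costs O(N)
   total, i.e. O(N/X) amortized per cancelled region. */
static int cancel_and_flush_range(region_t *regs, size_t n,
                                  uint64_t lo, uint64_t hi)
{
    int cancelled = 0;
    for (size_t i = 0; i < n; i++)
        if (regs[i].live && lo <= regs[i].start_ip && regs[i].end_ip <= hi) {
            regs[i].live = 0;
            cancelled++;
        }
    flush_cache(lo, hi);
    return cancelled;
}
```

Registrations outside the range are left untouched, so a generator
could unload one contiguous code arena without disturbing others.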

