From: Matthieu Lemerre
Subject: Re: Task server implementation and integration among the other core servers
Date: Mon, 21 Mar 2005 19:33:57 +0000
User-agent: Gnus/5.11 (Gnus v5.11) Emacs/22.0.50 (gnu/linux)

Marcus Brinkmann <address@hidden> writes:

>> * The task server has three main RPCs: task_threads_create,
>>   task_threads_terminate, and task_terminate (the names are perhaps
>>   not well chosen; I tried to mimic the Mach ones).
>>   task_threads_create is responsible for both task and thread
>>   creation.
>
> Didn't I suggest at some point that creating an empty task with no
> threads is a good idea for passing to a filesystem for suid
> invocation?  The idea was to delay the creation of the actual L4
> address space (with the first thread) until the filesystem actually
> uses it.  This makes revocation a no-op in the common case.
>
> What became of that?

In that case, you just call the task_threads_create RPC and ask it to
create a new task with 0 threads in it.  That's what the
task_create_empty wrapper does (see the sketch below).

(To quote myself: I wrote some wrappers for common operations (empty
task container creation, thread allocation ...)) Marcus, you're not
paying attention to what I say :))
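
Just to make that concrete, such a wrapper amounts to roughly the
following (a sketch only: the stub signature and the capability
handle type are invented for illustration; only the RPC and wrapper
names come from the discussion above):

#define _GNU_SOURCE             /* For error_t in <errno.h>.  */
#include <errno.h>

typedef unsigned int cap_handle_t;   /* Placeholder capability handle.  */

/* Hypothetical client stub for the task_threads_create RPC.  */
extern error_t task_threads_create (cap_handle_t task_server,
                                    unsigned int nr_threads,
                                    cap_handle_t *new_task);

/* Create a task container with no threads in it.  The L4 address
   space only comes into existence with the first thread, so revoking
   the container before that (e.g. after a suid invocation) stays
   cheap.  */
error_t
task_create_empty (cap_handle_t task_server, cap_handle_t *new_task)
{
  return task_threads_create (task_server, 0 /* nr_threads */, new_task);
}
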
>
>> * Task now decides on the UTCB of each thread itself, so the utcb
>>   argument of task_thread_alloc is no longer necessary.  This is
>>   because we have to store the UTCB for a task, so we should take
>>   advantage of it :).  I modified wortel to provide the core servers'
>>   UTCB to task for their allocation.
>
> Why do we need to store it, for task_terminate?  That's a pain :)

We need to store it because we may want to delay the first thread
allocation in a task until after the empty task has been created.  It
may also be enough to just pass the UTCB fpage and the KIP upon the
first thread allocation.
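
For what it's worth, storing it only amounts to a couple of extra
fields in the per-task record, roughly like this (a sketch using
libl4-style types; the field names are invented for illustration):

#include <l4/types.h>

struct task
{
  l4_word_t task_id;       /* Task ID handed out by the task server.  */
  l4_fpage_t utcb_area;    /* UTCB area recorded at container creation.  */
  l4_word_t kip_address;   /* KIP address for the address space.  */
  l4_word_t nr_threads;    /* 0 until the first thread is allocated.  */
};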

>
> Still, this is wrong.  It defeats the ability to let users create
> threads which are intended for migration to other address spaces.
> We do not want to use that feature, but somebody else may.  I think
> it's also important for orthogonal persistence to be able to recreate
> a thread at the right UTCB address.

OK.  This would require a new RPC in task (since it requires a call
to thread_control), and the UTCB could be changed at that moment.  I
don't know whether this would be enough.

But I know next to nothing about thread migration or orthogonal
persistence, or how they can be useful, etc.

I already have utcb and kip arguments in task_threads_create, so I can
easily change their semantics (maybe depending on the flag argument).
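
For instance (an illustration only; the flag name and helper below
are invented, not the actual interface), the flag could simply select
who picks the UTCB location:

/* Sketch: a hypothetical flag in task_threads_create selecting who
   chooses the UTCB location.  All names are invented.  */
#define TASK_THREAD_FIXED_UTCB  0x1   /* Caller supplies the UTCB.  */

static unsigned long
choose_utcb (unsigned long flags, unsigned long caller_utcb,
             unsigned long next_free_utcb)
{
  if (flags & TASK_THREAD_FIXED_UTCB)
    /* Honour the caller's UTCB, e.g. for thread migration or for
       recreating a thread at the same address after a persistence
       checkpoint.  */
    return caller_utcb;

  /* Default: the task server picks the next free UTCB slot itself.  */
  return next_free_utcb;
}
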
>
> Well, we know by now that in the next version of L4 thread IDs will
> become mappable items, and tasks will be able to directly
> create/destroy threads they have mapped.  So if you want we can leave
> the code as is for now (we are going to rewrite it before anybody will
> care about orthogonal persistence or thread migration ;)
>
>> * Task groups are implemented using a circular singly-linked list of
>>   tasks.  Thus deletion could be quite long (we have to iterate over
>>   each task to find the parent), but in practice I think that most
>>   task groups will have 1 or 2 tasks, and that structure allows the
>>   simplest algorithms.
>> 
>>   Insertion of a task into a group is not a problem, but deletion
>>   requires two locks (one for the parent, one for the task to be
>>   deleted), so to avoid a deadlock, I decided that if we can't acquire
>>   the two locks immediately, we just return EAGAIN (the client has to
>>   do the task_terminate RPC again).  That should not happen very
>>   often, so I guess it's not a problem.
>> 
>>   Deletion of a whole task group requires locking every task in the
>>   group, so there is a deadlock problem here.  This problem is perhaps
>>   reduced if only the manager capability (proc) can do that operation
>>   (it just has to make sure that it does not do it twice on the same
>>   task group).
>
> I have not checked out the details, but here are a couple of ideas you
> may want to think about:
>
> 1) Have a single global lock for all task group manipulation.
>
I thought about this.  But every task creation or deletion is a task
group manipulation, so we would have many locking operations that may
not be required.  Maybe I could have a global lock just for the
task_group_terminate RPC (the main issue).
>
> 2) Have a lock for each task group, acquire the group lock for group
> manipulation.  Then lock tasks individually.  (Be careful about
> locking order).

One noticeable fact: you cannot deadlock on insertion of an element
into the list, only on deletion.  So, maybe a lock for the deletion
operation.  I remember something bothered me with that solution, but
I can't remember what it was :).  But it sounds like a good idea now.
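
A sketch of that deletion-lock idea (pthread-style locking assumed;
all names are invented, this is not the actual task server code):

#include <pthread.h>

struct task
{
  struct task *next;            /* Next task in the circular group list.  */
  pthread_mutex_t lock;
};

/* Deleters serialize on this lock before taking any per-task locks.
   Insertion only ever holds a single task lock at a time, so it
   cannot be part of a deadlock cycle.  */
static pthread_mutex_t deletion_lock = PTHREAD_MUTEX_INITIALIZER;

/* Unlink TASK from its group, PREV being its predecessor in the
   circular list.  */
static void
task_group_unlink (struct task *prev, struct task *task)
{
  pthread_mutex_lock (&deletion_lock);
  pthread_mutex_lock (&prev->lock);
  pthread_mutex_lock (&task->lock);
  prev->next = task->next;
  pthread_mutex_unlock (&task->lock);
  pthread_mutex_unlock (&prev->lock);
  pthread_mutex_unlock (&deletion_lock);
}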

>
> 3) Define a locking hierarchy, for example based on the task ID.
> Sort the locks you need by the hierarchy.

I tried to introduce a locking hierarchy by using a flat linked list
instead of a circular one, but this was problematic because when
destroying the first task, you would have to lock every task to
change the pointer to the first one.  So it was worse :).  I had not
thought about using the task ID as the locking hierarchy.
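
For the record, the task-ID ordering would look roughly like this
(again a sketch with invented names and pthread-style locks):

#include <pthread.h>

struct task
{
  unsigned long task_id;
  pthread_mutex_t lock;
};

/* Lock two tasks without risk of deadlock: always take the lock of
   the task with the smaller ID first, so every thread acquires any
   given pair of locks in the same order.  */
static void
task_lock_pair (struct task *a, struct task *b)
{
  if (a->task_id > b->task_id)
    {
      struct task *tmp = a;
      a = b;
      b = tmp;
    }
  pthread_mutex_lock (&a->lock);
  pthread_mutex_lock (&b->lock);
}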


I also thought of releasing both locks, then waiting on a condition
variable and trying again.  Not a very pretty solution.

So, since I estimated that failing to acquire the locks would be
quite unlikely, just asking the client to try again seemed
sufficient.  But I'm really not an expert on which operations are
expensive and which situations are likely...  So I can write a
task-group deletion lock.
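
For reference, the current trylock-and-retry behaviour amounts to
roughly this (a sketch only, assuming pthread-style locks; the names
are invented):

#include <errno.h>
#include <pthread.h>

struct task
{
  struct task *next;            /* Next task in the circular group list.  */
  pthread_mutex_t lock;
};

/* Unlink TASK from its group, PREV being its predecessor.  Returns
   EAGAIN if both locks could not be taken at once, in which case the
   client has to redo the task_terminate RPC.  */
static int
task_group_remove (struct task *prev, struct task *task)
{
  pthread_mutex_lock (&prev->lock);
  if (pthread_mutex_trylock (&task->lock) != 0)
    {
      /* Back out instead of risking a deadlock; the client retries.  */
      pthread_mutex_unlock (&prev->lock);
      return EAGAIN;
    }
  prev->next = task->next;
  pthread_mutex_unlock (&task->lock);
  pthread_mutex_unlock (&prev->lock);
  return 0;
}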

Thanks,
Matthieu



