Re: Persistent object handles
Thu, 9 Jan 2003 22:09:53 +0100
On Wed, Jan 08, 2003 at 04:25:06PM +0100, Espen Skoglund wrote:
> You don't really need to use the same thread ids (depending on how you
> design the system of course). All you need to do is for the
> persistent task to detect that a new instance of some server is
> started, and then reestablish the connection. Detection can be
> achieved using version identifiers. Reestablishing the connection can
> be as simple as recording the new thread identifiers, or use some
> special protocol to propagate client state to the new server. The
> reestablish code must be part of the client (i.e., error recovery
> after a failed IPC operation), but might be generated automatically by
> an IDL compiler. More complex reestablishment is needed whenever
> servers are stateful.
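The recovery scheme quoted above (detect a new server instance via a version identifier, then reestablish by recording the new thread id, all triggered as error recovery after a failed IPC) could be sketched roughly as follows. This is a toy simulation in Python; every name in it (Server, Connection, the fake thread ids) is invented for illustration and is not the actual Hurd/L4 API.

```python
# Toy simulation of the reconnection protocol: an IPC stub that, on
# failure, detects a new server instance via a version identifier and
# reestablishes the connection by recording the new thread id.

class Server:
    """Simulated server task that may be restarted as a new instance."""
    def __init__(self):
        self.thread_id = 0x400
        self.version = 1

    def restart(self):
        self.thread_id = 0x500   # a new instance gets a fresh thread id
        self.version += 1        # version identifier marks the new instance


class Connection:
    """Client-side record of a connection to a server."""
    def __init__(self, server):
        self._server = server
        self.thread_id = server.thread_id
        self.version = server.version

    def _raw_ipc(self):
        # Simulated IPC: fails if our cached thread id is stale.
        return self.thread_id == self._server.thread_id

    def call(self):
        """The kind of stub an IDL compiler might generate: on IPC
        failure, detect a new server instance and retry."""
        if self._raw_ipc():
            return "ok"
        if self.version != self._server.version:
            # New instance detected: reestablishing can be as simple as
            # recording the new thread id (the stateless-server case).
            self.thread_id = self._server.thread_id
            self.version = self._server.version
            if self._raw_ipc():
                return "ok (reconnected)"
        return "failed"


srv = Server()
conn = Connection(srv)
print(conn.call())   # -> ok
srv.restart()        # system shuts down; server comes back restarted
print(conn.call())   # -> ok (reconnected)
```

A stateful server would need more than this: the recovery branch would have to run a protocol propagating client state to the new instance instead of merely recording the new thread id.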
This is more or less what I was suggesting. The idea is that a persistent
task would refer to IPC communication channels using the PID of the server
task (delivered by the Hurd "proc" server) and a handle id (provided by the
server when an "open" call is made). There would be a proc server for the
persistent subenvironment which would simply act as a "proxy" for its
parent proc server. Therefore, it knows which PIDs refer to persistent
tasks ("internal references") and which PIDs do not.
Upon recovery, the client communication library (or maybe a sophisticated
client stub generated by the IDL compiler?) would ask the proc server for
the new thread ID corresponding to a given PID. The proc server, which
knows that this PID used to refer to a non-persistent task, would either
try to transparently reestablish the connection if possible or simply tell the
client that connections to this task should be invalidated.
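The proxy proc server's role in this recovery step could be sketched like this. Again a toy simulation with invented names; a real proxy would forward requests to its parent proc server rather than keep local tables.

```python
# Toy proxy proc server for a persistent subenvironment: it knows
# which PIDs are "internal references" (persistent tasks) and which
# are not, and resolves PIDs to current thread ids upon recovery.

class ProxyProcServer:
    def __init__(self):
        self._persistent = {}    # pid -> current server thread id
        self._transient = set()  # pids of non-persistent tasks

    def register_persistent(self, pid, thread_id):
        self._persistent[pid] = thread_id

    def register_transient(self, pid):
        self._transient.add(pid)

    def resolve(self, pid):
        """Called by the client library upon recovery: return the new
        thread id for a persistent PID, or None to tell the client
        that connections to this task should be invalidated (a real
        proxy might instead try a transparent reestablishment)."""
        if pid in self._transient:
            return None
        return self._persistent.get(pid)


proc = ProxyProcServer()
proc.register_persistent(42, 0x700)  # internal reference
proc.register_transient(99)          # refers outside the subenvironment
print(proc.resolve(42))   # the new thread id
print(proc.resolve(99))   # None: connection must be invalidated
```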
However, there still remains an issue with this design: how would tasks
refer to their proc server? If they refer to it by its thread id, we end
up with the same problem as before: upon recovery, the persistent tasks
are unable to talk to their proc server (since its thread id has changed),
so they are isolated. Actually, I don't know how this could be solved.
> If a transient stateful server is going to serve persistent clients,
> care must be taken to ensure that it is indeed possible to reestablish
> the client connection if the system shuts down. Obviously, designing
> transient servers so that it is possible to reestablish connections
> with persistent clients is not always trivial, and providing support
> for persistence this way is not very transparent to the system
> programmer. For this reason, the focus of the paper you referred to
> was more on providing orthogonal persistence on larger, more or less
> self-contained subsystems.
In the Hurd, it is possible to start a process with a different root
server (I mean "root filesystem server" here) than the "real" one
("settrans --chroot"). Since servers are accessed through the filesystem
(except for auth, proc, and the root server itself), such a root server
acts almost as a "chief" or as a "nester" for the tasks that use it as
their root server. Therefore, we could design a "proxy" root server which
would, for instance, log I/O operations performed by persistent tasks on
non-persistent I/O servers. Upon recovery, it could try to recreate the
server state corresponding to each object handle that was used for I/O
(e.g., reopen files, move file pointers to where they were before, and so
on). As
you pointed out, this kind of thing would not be very transparent.
However, it might be feasible to implement this mechanism for the I/O
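The log-and-replay idea above could be sketched as follows. This is a toy simulation with invented names (it is not the Hurd io interface): the proxy records open and seek operations per handle, and on recovery replays the log against a freshly restarted I/O server.

```python
# Toy proxy that logs I/O operations performed by persistent tasks on
# a non-persistent I/O server, so the server state behind each object
# handle can be recreated upon recovery.

class ReplayingProxy:
    def __init__(self):
        self._log = {}          # handle id -> [file name, offset]
        self._next_handle = 0

    def open(self, name):
        handle = self._next_handle
        self._next_handle += 1
        self._log[handle] = [name, 0]
        return handle

    def seek(self, handle, offset):
        self._log[handle][1] = offset

    def recover(self, reopen):
        """Replay the log against a fresh I/O server: `reopen(name)`
        yields a new backend object; we move each file pointer back
        to where it was before the shutdown."""
        restored = {}
        for handle, (name, offset) in self._log.items():
            backend = reopen(name)
            backend["offset"] = offset   # restore the file pointer
            restored[handle] = backend
        return restored


proxy = ReplayingProxy()
h = proxy.open("/tmp/data")
proxy.seek(h, 128)
# After the I/O server restarts, rebuild its state for every handle:
state = proxy.recover(lambda name: {"name": name, "offset": 0})
print(state[h])   # -> {'name': '/tmp/data', 'offset': 128}
```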