Re: [Qemu-devel] [PATCH 1/2] Add virtagent file system freeze/thaw


From: Michael Roth
Subject: Re: [Qemu-devel] [PATCH 1/2] Add virtagent file system freeze/thaw
Date: Fri, 04 Feb 2011 10:27:08 -0600
User-agent: Mozilla/5.0 (X11; U; Linux i686 (x86_64); en-US; rv:1.9.2.13) Gecko/20101207 Thunderbird/3.1.7

On 02/04/2011 12:13 AM, Stefan Hajnoczi wrote:
> On Thu, Feb 3, 2011 at 5:41 PM, Michael Roth <address@hidden> wrote:
>> For things like logging and i/o on a frozen system... I agree we'd need some
>> flag for these kinds of situations. Maybe a disable_logging() flag... I
>> really don't like this though... I'd imagine even syslogd() could block
>> virtagent in this type of situation, so that would need to be disabled as
>> well.

>> But doing so completely subverts our attempts at providing proper
>> accounting of what the agent is doing to the user. A user can freeze the
>> filesystem, knowing that logging would be disabled, then prod at whatever he
>> wants. So the handling should be something specific to fsfreeze, with
>> stricter requirements:

>> If a user calls fsfreeze(), we disable logging, but also disable the ability
>> to do anything other than fsthaw() or fsstatus(). This actually solves the
>> potential deadlocking problem for other RPCs as well... since they can't be
>> executed in the first place.

>> So I think that addresses the agent deadlocking itself, post-freeze.
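
(To make that concrete, the gating could be as simple as something like the
following; this is only a sketch, and the command names and dispatcher entry
point are made up rather than taken from the patch:)

#include <stdbool.h>
#include <string.h>
#include <errno.h>

static bool va_fs_frozen;            /* set by fsfreeze(), cleared by fsthaw() */

static int va_invoke_handler(const char *cmd, void *args);   /* made-up dispatch helper */

static bool va_cmd_allowed_while_frozen(const char *cmd)
{
    return strcmp(cmd, "fsthaw") == 0 || strcmp(cmd, "fsstatus") == 0;
}

static int va_dispatch_rpc(const char *cmd, void *args)
{
    if (va_fs_frozen && !va_cmd_allowed_while_frozen(cmd)) {
        return -EBUSY;               /* client has to thaw first; logging stays off */
    }
    return va_invoke_handler(cmd, args);
}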

>> However, fsfreeze() itself might lock up the agent as well... I'm not
>> confident we can really put any kind of bound on how long it'll take to
>> execute, and if we time out on the client side the agent can still block
>> here.

>> Plus there are any number of other situations where an RPC can still hang
>> things... in the future when we potentially allow things like script
>> execution, they might do something like attempt to connect to a socket
>> that's already in use and wait on the server for an arbitrary amount of
>> time, or open a file on an NFS share that is currently unresponsive.

>> So a solution for these situations is still needed, and I'm starting to
>> agree that threads are needed, but I don't think we should do RPCs
>> concurrently (not sure if that's what is being suggested or not). At least,
>> there's no pressing reason for it as things currently stand (there aren't
>> currently any RPCs where fast response times are all that important, so it's
>> okay to serialize them behind previous RPCs, and HMP/QMP are
>> command-at-a-time), and it's something that I'm fairly confident can be
>> added if the need arises in the future.

>> But for dealing with a situation where an RPC can hang the agent, I think
>> one thread should do it. Basically:

>> We associate each RPC with a time limit. Some RPCs, very special ones that
>> we'd trust with our kids, could potentially specify an unlimited timeout.
>> The client side should use this same timeout on its end. In the future we
>> might allow the user to explicitly disable the timeout for a certain RPC.
>> The logic would then be:

>> - read in a client RPC request
>> - start a thread to do the RPC
>> - if there's a timeout, register an alarm(<timeout>), with a handler that
>>   will call something like pthread_kill(current_worker_thread). On the thread
>>   side, this signal will induce a pthread_exit()
>> - wait for the thread to return (pthread_join(current_worker_thread))
>> - return its response back to the caller if it finished, return a timeout
>>   indication otherwise

> I'm not sure about a timeout inside virtagent.  A client needs to
> protect itself with its own timeout and shouldn't rely on the server
> to prevent it from locking up - especially since the server is a guest
> which we have no control over.  So the timeout does not help the
> guest.

We actually have timeouts for the client already (though they'll need to be reworked a bit to handle the proposed solutions). What I'm proposing is an additional timeout on the guest/server side for the actual RPCs, since a blocking RPC can still hang the guest agent.
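
Roughly what I have in mind for the guest side is the following (just a
sketch: va_do_rpc() is a stand-in for the real handler dispatch, the signal
choice is arbitrary, and error handling is omitted):

#include <pthread.h>
#include <signal.h>
#include <unistd.h>

static void *va_do_rpc(void *req);     /* stand-in for the real RPC handler */

static pthread_t worker;

static void worker_timeout_sig(int sig)
{
    pthread_exit(NULL);                /* forced exit of the RPC thread */
}

static void alarm_sig(int sig)
{
    pthread_kill(worker, SIGUSR1);     /* tell the worker to bail out */
}

static void *rpc_worker(void *req)
{
    signal(SIGUSR1, worker_timeout_sig);
    return va_do_rpc(req);
}

static void va_process_request(void *req, unsigned int timeout_secs)
{
    void *resp = NULL;

    signal(SIGALRM, alarm_sig);
    pthread_create(&worker, NULL, rpc_worker, req);
    if (timeout_secs) {
        alarm(timeout_secs);           /* 0 == trusted RPC, no timeout */
    }
    pthread_join(worker, &resp);
    alarm(0);                          /* cancel any pending alarm */
    if (resp) {
        /* send resp back to the client */
    } else {
        /* worker was forced out: send a timeout indication instead */
    }
}

Since the loop joins the worker before reading the next request, RPCs stay serialized as described above.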


> Aborting an RPC handler could leave the system in an inconsistent
> state unless we are careful.  For example, aborting freeze requires
> thawing those file systems that have been successfully frozen so far.
> For other handlers it might leave temporary files around, or if they
> are not carefully written may partially update files in-place and
> leave them corrupted.

> So instead of a blanket timeout, I think handlers that perform
> operations that may block for unknown periods of time could
> specifically use timeouts.  That gives the handler control to perform
> cleanup.

Good point. I'm not sure I want to push timeout handling into the actual RPCs, though... something as simple as open()/read() can block indefinitely in certain situations, it'll be difficult to account for every situation, and the resulting code will be tedious as well. I'd really like to make the actual RPCs as simple as possible, since they're something that may be extended heavily over time.

So what if we simply allow an RPC to register a timeout handler at the beginning of the RPC call? Then, when the thread doing the RPC exits, we:

- check to see if the thread exited as a result of a timeout
- check to see if a timeout handler was registered; if so, call it, reset the
  handler, then return a timeout indication (sketched below)
- if it didn't time out, return the response
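
Something along these lines (again just a sketch, with made-up names):

#include <stdbool.h>
#include <stddef.h>

/* per-RPC cleanup hook */
typedef void (*va_timeout_handler_fn)(void);

static va_timeout_handler_fn va_timeout_handler;

/* called by an RPC at the top of its handler, before it does any work */
void va_set_timeout_handler(va_timeout_handler_fn fn)
{
    va_timeout_handler = fn;
}

/* called from the dispatch loop after pthread_join() */
static void va_finish_rpc(bool timed_out)
{
    if (timed_out && va_timeout_handler) {
        va_timeout_handler();          /* undo whatever partial work was done */
    }
    va_timeout_handler = NULL;         /* always reset for the next RPC */
    /* then return either the RPC's response or a timeout indication */
}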

The only burden this puts on the RPC author is that any information needed to recover state has to be accessible outside the thread, which is easily done by encapsulating state in static/global structs. So the timeout handler for fsfreeze, as it is currently written, would be something like:

va_fsfreeze_timeout_handler():
    foreach mnt in fsfreeze.mount_list:
        unfreeze(mnt)
    fsfreeze.mount_list = NULL
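
(In C, assuming the freeze code keeps an open fd for each mount it froze via
the FIFREEZE ioctl, that boils down to roughly the following; the list
structure here is made up:)

#include <stdlib.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <linux/fs.h>                  /* FITHAW */

struct frozen_mount {                  /* hypothetical list node */
    int fd;                            /* fd the FIFREEZE ioctl was issued on */
    struct frozen_mount *next;
};

static struct {
    struct frozen_mount *mount_list;
} fsfreeze;

static void va_fsfreeze_timeout_handler(void)
{
    struct frozen_mount *mnt, *next;

    for (mnt = fsfreeze.mount_list; mnt; mnt = next) {
        next = mnt->next;
        ioctl(mnt->fd, FITHAW);        /* thaw whatever we managed to freeze */
        close(mnt->fd);
        free(mnt);
    }
    fsfreeze.mount_list = NULL;
}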

We'll need to be careful about lists/objects being in weird states due to the forced exit, but I think it's doable.


> Stefan



