From: Hanna Reitz
Subject: Re: [PATCH v2 5/7] block/block-copy: block_copy(): add timeout_ns parameter
Date: Fri, 1 Apr 2022 15:16:25 +0200
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:91.0) Gecko/20100101 Thunderbird/91.5.0
On 01.04.22 11:19, Vladimir Sementsov-Ogievskiy wrote:
> Add possibility to limit block_copy() call in time. To be used in the
> next commit.
>
> Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@openvz.org>
> ---
>  block/block-copy.c         | 26 +++++++++++++++++++-------
>  block/copy-before-write.c  |  2 +-
>  include/block/block-copy.h |  2 +-
>  3 files changed, 21 insertions(+), 9 deletions(-)
>
> diff --git a/block/block-copy.c b/block/block-copy.c
> index ec46775ea5..b47cb188dd 100644
> --- a/block/block-copy.c
> +++ b/block/block-copy.c
[...]
> @@ -894,12 +902,16 @@ int coroutine_fn block_copy(BlockCopyState *s, int64_t start, int64_t bytes,
>          .max_workers = BLOCK_COPY_MAX_WORKERS,
>      };
>
> -    return block_copy_common(&call_state);
> -}
> +    ret = qemu_co_timeout(block_copy_async_co_entry, call_state, timeout_ns,
> +                          g_free);
A direct path for timeout_ns == 0 might still be nice to have.
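Something like the following, perhaps (an untested sketch; I’m assuming call_state is heap-allocated at this point, as passing g_free to qemu_co_timeout() suggests):

    if (timeout_ns == 0) {
        /* No timeout requested: just run the copy in this coroutine */
        block_copy_common(call_state);
        ret = call_state->ret;
        g_free(call_state);
        return ret;
    }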
> +    if (ret < 0) {
> +        /* Timeout. call_state will be freed by running coroutine. */
Maybe assert(ret == -ETIMEDOUT);?
> +        return ret;
If I’m right in understanding how qemu_co_timeout() works, block_copy_common() will continue to run here. Shouldn’t we at least cancel it by setting call_state->cancelled to true?
(Besides this, I think that letting block_copy_common() run in the background should be OK. I’m not sure what the implications are if we do cancel the call here while on-cbw-error is break-guest-write, though. Should be fine, I guess, because block_copy_common() will still correctly keep track of what it has successfully copied and what it hasn’t?)
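To illustrate what I mean for the timeout branch (untested, and setting the cancelled flag directly here just as a sketch):

    if (ret < 0) {
        assert(ret == -ETIMEDOUT);
        /*
         * Stop the copy, but let the coroutine finish on its own; it will
         * free call_state (the g_free passed to qemu_co_timeout()).
         */
        call_state->cancelled = true;
        return ret;
    }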
> +    }
>
> -static void coroutine_fn block_copy_async_co_entry(void *opaque)
> -{
> -    block_copy_common(opaque);
> +    ret = call_state->ret;
> +
> +    return ret;
But here we still need to free call_state, right?
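I.e. something like:

    ret = call_state->ret;
    g_free(call_state);

    return ret;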
>  }
>
>  BlockCopyCallState *block_copy_async(BlockCopyState *s,