On 16.01.21 22:46, Vladimir Sementsov-Ogievskiy wrote:
I applied my series on top of your 129-fixing series and found that 129 fails for backup.
And setting a small max-chunk, and even max-workers to 1, doesn't help! (Setting
the speed, as in v3, still helps.)
And I found that the problem is really that the whole backup job runs during
the drain, because in the new architecture we just do job_yield() during the
whole background block-copy.
This leads to modifying the existing patch in the series, which does job_enter()
from job_user_pause(): we just need to call job_enter() from job_pause() to cover
not only user pauses but also drained_begin.
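A minimal sketch of that idea, against QEMU's job.h (not necessarily the exact
patch; the job->paused guard here is an assumption):

/* Sketch: wake the job coroutine on every pause request, not only on
 * user-initiated ones, so that drained_begin (which pauses jobs
 * internally) kicks the backup job out of its job_yield() and lets it
 * observe the pause request, instead of copying through the drain. */
void job_pause(Job *job)
{
    job->pause_count++;
    if (!job->paused) {
        /* Previously only job_user_pause() entered the job; doing it
         * here covers pauses triggered by drained_begin as well. */
        job_enter(job);
    }
}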
So now I don't need any additional fix for 129.
Changes in v4:
- add a lot of Max's r-b's, thanks!
03: fix over-80 line (in comment), add r-b
09: was "[PATCH v3 10/25] job: call job_enter from job_user_pause",
now changed to finally fix 129 iotest, drop r-b
10: squash-in additional wording on max-chunk, fix error message, keep r-b
17: drop extra include, assert job_is_cancelled() instead of check, add r-b
18: adjust commit message, add r-b
23: add comments and an assertion about the fact that the test doesn't
support paths with a colon inside
Hmmm, for me, 129 still sometimes fails, because the job completes too quickly...
(The error then is that 'return' does not exist in query-block-jobs's
result, because the job is already gone.)
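For context, the check that trips is of this shape (a paraphrase of the test,
not its exact code):

# Paraphrased from iotest 129: if the job has already completed by the
# time we query, the job list no longer contains it, and the path
# lookup in assert_qmp() fails with the error quoted above.
result = self.vm.qmp('query-block-jobs')
self.assert_qmp(result, 'return[0]/busy', True)
self.assert_qmp(result, 'return[0]/ready', False)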
When I insert a print(result) after the query-block-jobs, I can see that the
job has always progressed really far, even if it's still running. (Like,
generally the offset is just one MB shy of 1G.)
I suspect the problem is that block-copy just copies too much from the start
(by default); i.e., it starts 64 workers with, hm, well, 1 MB of chunk size?
That shouldn't fill the 128 MB immediately...
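(Back-of-the-envelope, assuming those defaults hold: 64 workers x 1 MB chunks
means 64 MB in flight at any moment, so the 128 MB of allocated data is
covered in about two waves, and the unallocated rest of the 1G image is
skipped almost instantly, which would explain the offset sitting near 1G.)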
Anyway, limiting the number of workers (to 1) and the chunk size (to 64k) with
x-perf does ensure that the backup job’s progress is limited to 1 MB or so,
which looks fine to me.
I suppose we should do that, then (in 129), before patch 17?
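Concretely, something along these lines in 129's backup invocation (a sketch;
the parameter names come from the x-perf option this series adds, and the
exact values are up for discussion):

# Hypothetical adjustment to iotest 129: throttle the copy loop via
# x-perf so the job cannot finish before the test issues "stop".
result = self.vm.qmp('drive-backup', device='drive0',
                     target=self.target_img, format=iotests.imgfmt,
                     sync='full',
                     x_perf={'max-workers': 1, 'max-chunk': 65536})
self.assert_qmp(result, 'return', {})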