Hmm, for me, 129 still sometimes fails, because it completes too quickly...
(The error then is that 'return[0]' does not exist in query-block-jobs’s
result, because the job is already gone.)
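For reference, a minimal sketch of the failure mode (the reply dicts below are hypothetical, not from a real VM): once the backup job has finished, query-block-jobs returns an empty list, so indexing return[0] raises.

```python
# Simulated query-block-jobs replies (illustrative data only):
reply_running = {"return": [{"device": "drive0", "type": "backup",
                             "offset": 1072693248, "len": 1073741824}]}
reply_done = {"return": []}  # job already gone

def job_offset(reply):
    """Return the first job's offset, or None if no job is left."""
    jobs = reply["return"]
    if not jobs:
        # A bare reply["return"][0] would raise IndexError here,
        # which is exactly the intermittent failure in 129.
        return None
    return jobs[0]["offset"]
```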
When I insert a print(result) after the query-block-jobs, I can see that the
job has always progressed really far, even when it's still running.  (Like,
generally the offset is just 1 MB shy of 1G.)
I suspect the problem is that block-copy just copies too much from the start
(by default); i.e., it starts 64 workers with, hm, a chunk size of 1 MB each?
That alone shouldn't fill the 128 MB immediately, though...
Anyway, limiting the number of workers (to 1) and the chunk size (to 64k) with
x-perf does ensure that the backup job’s progress is limited to 1 MB or so,
which looks fine to me.
I suppose we should do that, then (in 129), before patch 17?