From: Huaicheng Li
Subject: [Qemu-devel] qemu AIO worker threads change causes Guest OS hangup
Date: Tue, 1 Mar 2016 12:45:59 -0600

Hi all,

I’m trying to add some latency conditionally to I/O requests (qemu_paiocb, from 
**IDE** disk emulation, **raw** image file). 
My idea is to add this logic to the worker thread:

  * First, set a timer for each incoming qemu_paiocb structure (e.g. 2 ms).
  * When a worker thread handles this I/O, it first checks whether the timer has 
expired. If so, it goes on to the normal r/w handling against the image file on 
the host. Otherwise, it re-inserts the request into `request_list` via 
`qemu_paio_submit`. Here, I just want to skip the I/O until the timer condition 
is satisfied.

Logically, I think this approach should be correct; a rough sketch of the change is below.
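
To make this concrete, here is a small standalone sketch of the logic I put into 
the worker thread. The names below (`iocb_sketch`, `req_list`, `resubmit()`, 
`now_ns()`) are simplified stand-ins I made up for this mail; in the real patch 
the struct is `qemu_paiocb`, the queue is `request_list`, and re-queueing goes 
through `qemu_paio_submit()`:

```c
/* Standalone sketch of the "defer until deadline" check in the worker
 * thread.  Not the actual QEMU code: names are simplified stand-ins. */
#include <pthread.h>
#include <stdint.h>
#include <time.h>
#include <sys/queue.h>

struct iocb_sketch {
    int64_t deadline_ns;              /* set when the request is first submitted */
    TAILQ_ENTRY(iocb_sketch) node;
};

static TAILQ_HEAD(, iocb_sketch) req_list = TAILQ_HEAD_INITIALIZER(req_list);
static pthread_mutex_t req_lock = PTHREAD_MUTEX_INITIALIZER;

static int64_t now_ns(void)
{
    struct timespec ts;
    clock_gettime(CLOCK_MONOTONIC, &ts);
    return (int64_t)ts.tv_sec * 1000000000LL + ts.tv_nsec;
}

/* Stand-in for qemu_paio_submit(): push the request back onto the list.
 * (The real function also signals the worker condvar and may spawn a
 * new thread.) */
static void resubmit(struct iocb_sketch *cb)
{
    pthread_mutex_lock(&req_lock);
    TAILQ_INSERT_TAIL(&req_list, cb, node);
    pthread_mutex_unlock(&req_lock);
}

/* What the worker thread does for each request it dequeues. */
static void handle_request(struct iocb_sketch *cb)
{
    if (now_ns() < cb->deadline_ns) {
        /* Timer not expired yet: skip this I/O for now and put it
         * back onto the request list, as described above. */
        resubmit(cb);
        return;
    }
    /* Timer expired: do the normal read/write against the raw image
     * file here (the existing r/w path in the real code). */
}
```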

But after I run some I/O tests inside the guest OS, the guest hangs (freezes) 
with “INFO: task xxx blocked for more than 120 seconds”. 
From the guest OS’s perspective, the disk appears to be very busy, so the kernel 
keeps waiting for I/O and is unresponsive to other tasks. My guess is that the 
problem is still in the worker threads.


My questions are:

  * Is it safe to call `qemu_paio_submit` from a worker thread? Since all 
accesses to `request_list` are protected by a lock, I think this is OK.

  * What are the possible reasons the guest OS hangs? My understanding is that, 
although the worker threads will be busy re-queueing I/O requests many times, 
they will eventually finish the work (the guest freezes only after my r/w test 
program has run successfully; then the guest becomes unresponsive).

  * Any thoughts on debugging? Currently I’m doing some checks (e.g. the 
request_list length and the number of worker threads) via printf; a sketch of 
what I mean is below. It seems hard to use gdb here because the guest OS will 
trigger timeouts if I sit at a breakpoint for “too long”.

Any suggestions would be appreciated. 

Thanks.

Best,
Huaicheng