[Qemu-devel] [Bug 735454] Re: live kvm migration with non-shared storage corrupts file system


From: Sebastian J. Bronner
Subject: [Qemu-devel] [Bug 735454] Re: live kvm migration with non-shared storage corrupts file system
Date: Tue, 15 Mar 2011 12:56:46 -0000

** Attachment added: "A set of scripts to exercise the file system"
   
https://bugs.launchpad.net/bugs/735454/+attachment/1910025/+files/uglyfstest.tbz2

-- 
You received this bug notification because you are a member of qemu-devel-ml, which is subscribed to QEMU.
https://bugs.launchpad.net/bugs/735454

Title:
  live kvm migration with non-shared storage corrupts file system

Status in QEMU:
  New

Bug description:
  Description of problem:

  Migrating a KVM guest on non-shared, LVM-based storage using block
  migration (the -b flag) results in a corrupted file system if that
  guest is under considerable I/O load.


  Version-Release number of selected component (if applicable):

  qemu-kvm-0.12.3
  linux-kernel-2.6.32
  lvm2-2.02.54


  How reproducible:

  The error can be reproduced consistently.


  Steps to Reproduce:

  1. create a guest using lvm-based storage

  2. create an LV on the destination node for the guest to be migrated
  to

  3. place the attached scripts somewhere on the guest's system

  4. run 'runlots'

  5. migrate the guest using the -b flag (see the command sketch after
  this list)

  6. if the migration does not complete in a reasonable amount of time
  (45 minutes for our 100GB image), stop the test scripts: type
  'killall python'

  7. attempt to shut down the guest, forcing it off if necessary

  8. access the partitions of the LV on the node:
  'partprobe /dev/mapper/<volume-name>'

  9. run fsck: 'fsck -n -f /dev/mapper/<volume-name>p1'
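
  A rough sketch of the commands behind these steps, assuming the guest
  is run with qemu-kvm directly rather than through libvirt (host names,
  the volume group, the LV name, and the migration port are
  placeholders; adapt them to your setup):

    # destination node: create a matching LV (step 2) and start a
    # receiving qemu-kvm with the same hardware configuration as the
    # source, plus -incoming
    lvcreate -L 100G -n <volume-name> <volume-group>
    qemu-kvm -drive file=/dev/mapper/<volume-name>,if=virtio \
        -incoming tcp:0:4444

    # source node, QEMU monitor: start block migration (step 5) while
    # the test scripts are running inside the guest (steps 3-4)
    (qemu) migrate -d -b tcp:<destination-node>:4444
    (qemu) info migrate

    # destination node, after the guest has been shut down (steps 7-9)
    partprobe /dev/mapper/<volume-name>
    fsck -n -f /dev/mapper/<volume-name>p1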


  Actual results:

  You should see a big mess of errors, far beyond what can be accounted
  for by an unclean shutdown.


  Expected results:

  fsck should give the file system a clean bill of health.


  Additional information:

  I suspect some sort of race condition in the algorithm that
  synchronizes dirty blocks during live block migration.


  Workaround:

  The only safe way to migrate guests in this scenario is to suspend
  them just prior to the migration: the guest is suspended first, then
  everything is transferred, and it is finally resumed on the target
  node. When the I/O load is low, live migration works as well, but it
  is too risky to use on production systems because there is no way to
  tell when the I/O load is too high for a successful live migration.
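
  In monitor terms, the workaround amounts to something like this (a
  rough sketch rather than an exact transcript; the destination qemu-kvm
  is started with -incoming as in the sketch above):

    # source node, QEMU monitor: pause the guest so it issues no
    # further I/O
    (qemu) stop
    # transfer RAM and the block device; with the guest paused this is
    # effectively an offline migration
    (qemu) migrate -b tcp:<destination-node>:4444
    # destination node, QEMU monitor: resume the guest if it did not
    # resume automatically
    (qemu) cont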

  Using this workaround is very unsatisfying because for a guest with a
  100GB filesystem the migration takes 45 minutes on our systems,
  meaning that we have a downtime of 45 minutes. Having migrated other
  guests with zero downtime got us hooked.

  The attached scripts to simulate high I/O load are somewhat artificial
  in nature. However, the bug is motivated by a real-world scenario: we
  migrated a production mail server that subsequently misbehaved,
  finally crashed, and corrupted several of our customers' e-mails.
  Unfortunately, a bug of this nature cannot be tested on non-production
  systems because they do not reach the necessary load levels. The
  scripts reliably reproduce the failure experienced by our mail server.


