Subject: Re: [Qemu-discuss] data consistency on LVM snapshots using writeback caching
Date: Mon, 15 Jul 2013 10:37:49 -0700
This may or may not be related to your situation: if you create VMs on a machine other than the one you use to run them, you may run into block and partition alignment issues. The major culprit is Windows being involved in any way, either during creation or when running as a host (AFAIK there is no issue with Windows running inside the VM).
A short summary of the concepts:
Generally speaking, alignment issues can at least double the reads required to retrieve data, because parts of the data may span multiple blocks instead of fitting neatly into the "real" physical blocks used by the disk firmware and host OS.
Disk blocks - depending on the size of the disk, the manufacturer, and the technology, physical blocks can be 256, 512, or 4k bytes in size. Mismatches can result in misalignment.
Windows in particular, possibly elsewhere - Windows is notorious for implementing a small offset at the beginning of a partition to store its own data, but that offset does not typically exist on *NIX storage. Laying a virtual disk filesystem over a real filesystem can then result in misalignment.
Note that this problem only exists when VMs are moved from one machine to another; VMs created on the machine that runs them are always auto-adjusted.
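As a quick sanity check for the alignment issue above, you can look at a partition's start sector (in 512-byte units, as reported by fdisk or /sys/block/<dev>/<part>/start): if it is divisible by 8 it begins on a 4096-byte boundary. A minimal sketch (the helper name and the example sector values are mine, not from any tool):

```shell
# is_aligned: print "aligned" if the given start sector (512-byte units)
# falls on a 4096-byte boundary, i.e. the sector number is divisible by 8.
is_aligned() {
    if [ $(( $1 % 8 )) -eq 0 ]; then
        echo aligned
    else
        echo misaligned
    fi
}

is_aligned 2048   # modern 1 MiB default start (2048 * 512 B): prints "aligned"
is_aligned 63     # old DOS/Windows XP default start: prints "misaligned"
```

The second case is exactly the Windows-style offset mentioned above: a partition starting at sector 63 straddles every 4k physical block behind it.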
On 14/07/13, Daniel Neugebauer wrote:
> I have some grave generic disk latency issues on the servers I
> virtualized under Linux/Qemu/KVM + virtio_blk. Virtual block devices are
> set up to run raw on LVM logical volumes with cache=none. Whenever some
> writes happen (according to iotop it's only a few kB every few seconds)
> disk latency in VMs goes up to 1.5 seconds. I understand that
> cache=writeback may help but I am unable to find any details about
> whether it is safe to use it in the way I use LVs:
> Backups are currently being created by calling sync inside the guest OS
> and snapshotting the LVs immediately afterwards on the host. The host
> then mounts those snapshots (usually causing journal replay on them) and
> starts saving data from it. Snapshots are discarded afterwards. Another
> side-effect of high disk latency might be that all VM hosts have issues
> releasing those LVM snapshots afterwards (they tend to need a few
> seconds or minutes before they can be deactivated and removed).
> I noticed that a server with hardware RAID + BBU and 256MB write cache
> does not suffer from these issues. However, that's far from what I can
> afford on all other servers, apart from my personal distaste for
> hardware RAIDs.
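For reference, the backup flow you describe can be sketched as a short script. The volume group "vg0", guest name "vm1", and all paths below are made-up; the real commands need root and an actual LVM setup, so this sketch defaults to a dry run that only prints what it would do:

```shell
# Hypothetical sketch of the snapshot backup flow described above.
# vg0, vm1, and the paths are invented names; set DRYRUN= (empty) to
# actually execute, which requires root and a real volume group.
run() { if [ -n "$DRYRUN" ]; then echo "would run: $*"; else "$@"; fi; }
DRYRUN=1

run ssh vm1 sync                                  # flush guest caches first
run lvcreate -s -L 1G -n vm1-snap /dev/vg0/vm1    # COW snapshot on the host
run mount -o ro /dev/vg0/vm1-snap /mnt/vm1-snap   # journal replay happens here
run rsync -a /mnt/vm1-snap/ /srv/backups/vm1/     # save data from the snapshot
run umount /mnt/vm1-snap
run lvremove -f /dev/vg0/vm1-snap                 # discard (release) the snapshot
```

Note that with cache=writeback the guest-side sync is no longer sufficient on its own, since dirty data may still sit in the host page cache when the snapshot is taken.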
I wonder how you handle RAID for your LVs, because in my experience LVM
RAID is very slow compared to mdadm RAID. I see issues like the ones you
describe for guests running on LVM RAID (not using raw LVs, though).