From: Peter Lieven
Subject: Re: [Qemu-stable] Patch Round-up for stable 2.2.1, freeze on 2015-03-05
Date: Mon, 09 Mar 2015 08:30:44 +0100
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:31.0) Gecko/20100101 Thunderbird/31.4.0
On 09.03.2015 at 04:56, Michael Roth wrote:
> Quoting Peter Lieven (2015-03-08 13:02:04)
>> On 25.02.2015 at 14:25, Peter Lieven wrote:
>>> On 24.02.2015 at 22:47, Michael Roth wrote:
>>>> Hi everyone,
>>>>
>>>> The following new patches are queued for QEMU stable v2.2.1:
>>>>
>>>>   https://github.com/mdroth/qemu/commits/stable-2.2-staging
>>>>
>>>> The release is planned for 2015-03-10:
>>>>
>>>>   http://wiki.qemu.org/Planning/2.2
>>>>
>>>> Please respond here or CC address@hidden on any patches you think
>>>> should be included in the release.
>>>
>>> Please include from Kevin's block repo:
>>>
>>> commit fca313a4f02bf166864636b1f47995c9fbf2716f
>>> Author: Kevin Wolf <address@hidden>
>>> Date:   Wed Feb 11 17:19:57 2015 +0100
>>>
>>>     vpc: Fix size in fixed image creation
>>>
>>> commit 12d596c9b29608ede4df3844038c799014d0661f
>>> Author: Kevin Wolf <address@hidden>
>>> Date:   Tue Feb 10 11:17:53 2015 +0100
>>>
>>>     coroutine: Fix use after free with qemu_coroutine_yield()
>>>
>>> Peter
>>
>> Hi Michael,
>>
>> have you got these two? At least the last one is quite important.
>
> I've gone ahead and pulled them in from Stefan's block tree. It's a bit
> late in the test cycle, so if you happen to have a workload that
> exercises these paths, your testing would be appreciated. I've pushed
> the latest here:
>
>   https://github.com/mdroth/qemu/commits/stable-2.2-staging
The patch was sitting in Kevin's repo for quite some time, and I asked Kevin off-list whether there were any concerns about merging it. Stefan has added a test to test-coroutine that triggers the use-after-free:

  http://repo.or.cz/w/qemu/kevin.git/commit/a2439f8e7e23e03f64a9388e31450084072ba0f2

Kevin told me that the bug was reported in conjunction with NBD, but I think it can occur at several places in the block layer as well. Since it is a use-after-free, it might happen to work just fine, or it might trigger all kinds of misbehaviour.

I personally have had this patch in my QEMU 2.2 build for some weeks now, running on several hundred of our vServers, without any issues so far.

Peter