From: Igor Mammedov
Subject: Re: [Qemu-devel] [PATCH V2] vhost: fix a migration failed because of vhost region merge
Date: Wed, 26 Jul 2017 16:05:43 +0200

On Tue, 25 Jul 2017 22:47:18 +0300
"Michael S. Tsirkin" <address@hidden> wrote:

> On Tue, Jul 25, 2017 at 10:44:38AM +0200, Igor Mammedov wrote:
> > On Mon, 24 Jul 2017 23:50:00 +0300
> > "Michael S. Tsirkin" <address@hidden> wrote:
> >   
> > > On Mon, Jul 24, 2017 at 11:14:19AM +0200, Igor Mammedov wrote:  
> > > > On Sun, 23 Jul 2017 20:46:11 +0800
> > > > Peng Hao <address@hidden> wrote:
> > > >     
> > > > > When a guest with several hotplugged DIMMs is migrated, it fails
> > > > > to resume on the destination. Regions on the source are merged,
> > > > > but on the destination devices are realized in a different order,
> > > > > so while only part of the devices are realized some regions cannot
> > > > > be merged yet. That may exceed the vhost slot limit.
> > > > > 
> > > > > Signed-off-by: Peng Hao <address@hidden>
> > > > > Signed-off-by: Wang Yechao <address@hidden>
> > > > > ---
> > > > >  hw/mem/pc-dimm.c        | 2 +-
> > > > >  include/sysemu/sysemu.h | 1 +
> > > > >  vl.c                    | 5 +++++
> > > > >  3 files changed, 7 insertions(+), 1 deletion(-)
> > > > > 
> > > > > diff --git a/hw/mem/pc-dimm.c b/hw/mem/pc-dimm.c
> > > > > index ea67b46..13f3db5 100644
> > > > > --- a/hw/mem/pc-dimm.c
> > > > > +++ b/hw/mem/pc-dimm.c
> > > > > @@ -101,7 +101,7 @@ void pc_dimm_memory_plug(DeviceState *dev, MemoryHotplugState *hpms,
> > > > >          goto out;
> > > > >      }
> > > > >  
> > > > > -    if (!vhost_has_free_slot()) {
> > > > > +    if (!vhost_has_free_slot() && qemu_is_machine_init_done()) {
> > > > >          error_setg(&local_err, "a used vhost backend has no free"
> > > > >                                 " memory slots left");    
> > > > that doesn't fix the issue:
> > > >    1st: the number of used entries keeps changing after
> > > >         machine_init_done() is called, as regions continue to be
> > > >         mapped/unmapped at runtime    
> > > 
> > > But that's fine, we want hotplug to fail if we cannot guarantee vhost
> > > will work.  
> > don't we want to guarantee that vhost will work with dimm devices at startup
> > if it was requested on the CLI, or fail startup cleanly if it can't?  
> 
> Yes. And failure to start vhost will achieve this without needing to muck
> with DIMMs. The issue is only with DIMM hotplug when vhost is already running,
> specifically because notifiers have no way to report or handle errors.
> 
> > >   
> > > >    2nd: it introduces a regression and allows starting QEMU with more
> > > >         memory regions than the backend supports, which combined with
> > > >         the missing error handling in vhost will lead to QEMU crashes
> > > >         or obscure bugs in the guest, breaking vhost-enabled drivers.
> > > >         i.e. the patch undoes what was fixed by
> > > >         https://lists.gnu.org/archive/html/qemu-devel/2015-10/msg00789.html    
> > > 
> > > Why does it? The issue you fixed there is hotplug, and that means
> > > pc_dimm_memory_plug is called after machine init is done.  
> > I wasn't able to crash an fc24 guest with current qemu and a rhel7 kernel;
> > it falls back to virtio and switches off vhost.   
> 
> I think vhostforce should make vhost fail and not fall back,
> but that is another bug.
currently vhostforce is broken: with this patch qemu happily continues to run,
while without the patch it fails to start up, so I'd just NACK this patch
on this behavioral change and ask to fix both issues in the same series.


While looking at the vhost memmap, I've also noticed that the option rom
"pc.rom" maps/remaps itself multiple times, and that affects the
number of vhost mem map slots. I suspect that vhost is not interested
in accessing the memory occupied by pc.rom, and to hide it from vhost
we need to make it RO. Also, looking at ROM initialization,
I've noticed that ROMs are typically mapped as RO when PCI is enabled.
Taking into account that vhost is available only when PCI is enabled,
could the following patch be acceptable:

diff --git a/hw/i386/pc.c b/hw/i386/pc.c
index e3fcd51..de459e2 100644
--- a/hw/i386/pc.c
+++ b/hw/i386/pc.c
@@ -1449,6 +1449,9 @@ void pc_memory_init(PCMachineState *pcms,
     option_rom_mr = g_malloc(sizeof(*option_rom_mr));
     memory_region_init_ram(option_rom_mr, NULL, "pc.rom", PC_ROM_SIZE,
                            &error_fatal);
+    if (pcmc->pci_enabled) {
+        memory_region_set_readonly(option_rom_mr, true);
+    }
     vmstate_register_ram_global(option_rom_mr);
     memory_region_add_subregion_overlap(rom_memory,
                                         PC_ROM_MIN_VGA,


it will keep the ROM as RW on ISA machines but RO for the rest, which are PCI based.
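For what it's worth, the reasoning above can be modeled with a minimal sketch in plain C. This is a hypothetical simplification, not QEMU's actual code: the real filtering lives in vhost_section() in hw/virtio/vhost.c and its exact conditions may differ. The idea is just that a section only consumes a vhost memslot if it is RAM and writable, which is why flipping pc.rom to read-only would hide it from vhost:

```c
#include <stdbool.h>

/* Simplified stand-in for a MemoryRegion's properties; the field names
 * here are illustrative, not QEMU's. */
typedef struct {
    bool is_ram;    /* backed by guest RAM */
    bool readonly;  /* mapped read-only, e.g. after memory_region_set_readonly() */
} MemRegionModel;

/* Hypothetical model of vhost's section filter: only writable RAM
 * sections need to be pushed to the vhost backend as memslots. */
static bool vhost_wants_section(const MemRegionModel *mr)
{
    return mr->is_ram && !mr->readonly;
}

/* Count how many of the given regions would consume vhost memslots. */
static int vhost_memslots_used(const MemRegionModel *regions, int n)
{
    int used = 0;
    for (int i = 0; i < n; i++) {
        if (vhost_wants_section(&regions[i])) {
            used++;
        }
    }
    return used;
}
```

Under this model, making pc.rom read-only reduces the memslot count by one for each mapping of the ROM, without changing what the guest sees on PCI machines (where the ROM is expected to be RO anyway).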

> 
> >   
> > >   
> > > >     
> > > > >          goto out;
> > > > > diff --git a/include/sysemu/sysemu.h b/include/sysemu/sysemu.h
> > > > > index b213696..48228ad 100644
> > > > > --- a/include/sysemu/sysemu.h
> > > > > +++ b/include/sysemu/sysemu.h
> > > > > @@ -88,6 +88,7 @@ void qemu_system_guest_panicked(GuestPanicInformation *info);
> > > > >  void qemu_add_exit_notifier(Notifier *notify);
> > > > >  void qemu_remove_exit_notifier(Notifier *notify);
> > > > >  
> > > > > +bool qemu_is_machine_init_done(void);
> > > > >  void qemu_add_machine_init_done_notifier(Notifier *notify);
> > > > >  void qemu_remove_machine_init_done_notifier(Notifier *notify);
> > > > >  
> > > > > diff --git a/vl.c b/vl.c
> > > > > index fb6b2ef..43aee22 100644
> > > > > --- a/vl.c
> > > > > +++ b/vl.c
> > > > > @@ -2681,6 +2681,11 @@ static void qemu_run_exit_notifiers(void)
> > > > >  
> > > > >  static bool machine_init_done;
> > > > >  
> > > > > +bool qemu_is_machine_init_done(void)
> > > > > +{
> > > > > +    return machine_init_done;
> > > > > +}
> > > > > +
> > > > >  void qemu_add_machine_init_done_notifier(Notifier *notify)
> > > > >  {
> > > > >      notifier_list_add(&machine_init_done_notifiers, notify);    
