qemu-devel

Re: [PATCH 2/2] hyperv/synic: Allocate as ram_device


From: Dr. David Alan Gilbert
Subject: Re: [PATCH 2/2] hyperv/synic: Allocate as ram_device
Date: Thu, 9 Jan 2020 12:22:37 +0000
User-agent: Mutt/1.13.0 (2019-11-30)

* Michael S. Tsirkin (address@hidden) wrote:
> On Thu, Jan 09, 2020 at 12:08:20PM +0000, Dr. David Alan Gilbert wrote:
> > * Michael S. Tsirkin (address@hidden) wrote:
> > > On Wed, Jan 08, 2020 at 01:53:53PM +0000, Dr. David Alan Gilbert (git) wrote:
> > > > From: "Dr. David Alan Gilbert" <address@hidden>
> > > > 
> > > > Mark the synic pages as ram_device so that they won't be visible
> > > > to vhost.
> > > > 
> > > > Signed-off-by: Dr. David Alan Gilbert <address@hidden>
> > > 
> > > 
> > > I think I disagree with this one.
> > >  * A RAM device represents a mapping to a physical device, such as to a PCI
> > >  * MMIO BAR of an vfio-pci assigned device.  The memory region may be mapped
> > >  * into the VM address space and access to the region will modify memory
> > >  * directly.  However, the memory region should not be included in a memory
> > >  * dump (device may not be enabled/mapped at the time of the dump), and
> > >  * operations incompatible with manipulating MMIO should be avoided.  Replaces
> > >  * skip_dump flag.
> > > 
> > > Looks like an abuse of notation.
> > 
> > OK, it did feel a bit like that - any suggestions of another way to do
> > it?
> >   This clearly isn't normal RAM.
> > 
> > Dave
> 
> If it's just an optimization for vhost/postcopy/etc, then I think

Note it's not an optimisation; postcopy fails unless you can aggregate
the members of the hugepage.
And I think vhost-user will fail if you have too many sections - the
16 sections from synic will, I think, exceed the slots available.

> an API that says how this isn't normal ram would be ok.
> E.g. it's not DMA'd into? Then maybe _nodma?

Do we want a new memory_region_init for that or just to be able to add
a flag?

Dave

> > > 
> > > 
> > > > ---
> > > >  hw/hyperv/hyperv.c | 14 ++++++++------
> > > >  1 file changed, 8 insertions(+), 6 deletions(-)
> > > > 
> > > > diff --git a/hw/hyperv/hyperv.c b/hw/hyperv/hyperv.c
> > > > index da8ce82725..4de3ec411d 100644
> > > > --- a/hw/hyperv/hyperv.c
> > > > +++ b/hw/hyperv/hyperv.c
> > > > @@ -95,12 +95,14 @@ static void synic_realize(DeviceState *dev, Error **errp)
> > > >      msgp_name = g_strdup_printf("synic-%u-msg-page", vp_index);
> > > >      eventp_name = g_strdup_printf("synic-%u-event-page", vp_index);
> > > >  
> > > > -    memory_region_init_ram(&synic->msg_page_mr, obj, msgp_name,
> > > > -                           sizeof(*synic->msg_page), &error_abort);
> > > > -    memory_region_init_ram(&synic->event_page_mr, obj, eventp_name,
> > > > -                           sizeof(*synic->event_page), &error_abort);
> > > > -    synic->msg_page = memory_region_get_ram_ptr(&synic->msg_page_mr);
> > > > -    synic->event_page = memory_region_get_ram_ptr(&synic->event_page_mr);
> > > > +    synic->msg_page = qemu_memalign(qemu_real_host_page_size,
> > > > +                                    sizeof(*synic->msg_page));
> > > > +    synic->event_page = qemu_memalign(qemu_real_host_page_size,
> > > > +                                      sizeof(*synic->event_page));
> > > > +    memory_region_init_ram_device_ptr(&synic->msg_page_mr, obj, msgp_name,
> > > > +                           sizeof(*synic->msg_page), synic->msg_page);
> > > > +    memory_region_init_ram_device_ptr(&synic->event_page_mr, obj, eventp_name,
> > > > +                           sizeof(*synic->event_page), synic->event_page);
> > > >  
> > > >      g_free(msgp_name);
> > > >      g_free(eventp_name);
> > > > -- 
> > > > 2.24.1
> > > 
> > --
> > Dr. David Alan Gilbert / address@hidden / Manchester, UK
> 
--
Dr. David Alan Gilbert / address@hidden / Manchester, UK



