From: Alexander Graf
Subject: Re: [Qemu-devel] Slow kernel/initrd loading via fw_cfg; Was Re: Hack integrating SeaBios / LinuxBoot option rom with QEMU trace backends
Date: Tue, 11 Oct 2011 16:34:55 +0200

On 11.10.2011, at 16:33, Anthony Liguori wrote:

> On 10/11/2011 09:01 AM, Daniel P. Berrange wrote:
>> On Tue, Oct 11, 2011 at 08:19:14AM -0500, Anthony Liguori wrote:
>>> On 10/11/2011 08:14 AM, Alexander Graf wrote:
>>>>>>>> And I don't see the point why we would have to shoot yet another hole 
>>>>>>>> into the guest just because we're too unwilling to make an interface 
>>>>>>>> that's perfectly valid horribly slow.
>>>>>>> 
>>>>>>> rep/ins is exactly like dma+wait for this use case: provide an address, 
>>>>>>> get a memory image in return.  There's no need to add another 
>>>>>>> interface, we should just optimize the existing one.
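For concreteness, the guest side of that rep/ins path looks roughly like the
sketch below. This is only a minimal sketch, assuming the traditional x86
fw_cfg I/O interface (16-bit selector register at port 0x510, 8-bit data
register at 0x511) and the FW_CFG_KERNEL_DATA item; the helper names are
illustrative:

#include <stdint.h>

#define FW_CFG_PORT_SEL    0x510   /* 16-bit item selector */
#define FW_CFG_PORT_DATA   0x511   /* 8-bit data register  */
#define FW_CFG_KERNEL_DATA 0x11    /* selector for the -kernel image */

static inline void outw(uint16_t port, uint16_t val)
{
    asm volatile("outw %0, %1" : : "a"(val), "Nd"(port));
}

/* Select an item and read it with a single "rep insb": one guest
 * instruction, but at best a page per kvm/user exit, and QEMU still
 * invokes its fw_cfg read callback once per byte. */
static void fw_cfg_read(uint16_t selector, void *buf, uint32_t len)
{
    outw(FW_CFG_PORT_SEL, selector);
    asm volatile("rep insb"
                 : "+D"(buf), "+c"(len)
                 : "d"((uint16_t)FW_CFG_PORT_DATA)
                 : "memory");
}

/* e.g.: fw_cfg_read(FW_CFG_KERNEL_DATA, load_addr, kernel_size); */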
>>>>>> 
>>>>>> Whatever we do, the interface will never be as fast as DMA. We will
>>>>>> always have to do sanity/permission checks for every I/O operation, can
>>>>>> only batch up so many I/O requests, and in QEMU we again have to call
>>>>>> our callbacks in a loop.
>>>>> 
>>>>> rep/ins is effectively equivalent to DMA except in how it's handled 
>>>>> within QEMU.
>>>> 
>>>> No, DMA has much bigger granularity in kvm/user interaction. We can
>>>> easily DMA a 50MB region with a single kvm/user exit. For PIO we can at
>>>> most do page granularity.
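(For scale: at 4KiB page granularity, a 50MB image means on the order of
12,800 kvm/user round trips, versus a single exit if the device could DMA
the whole region.)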
>>> 
>>> So make a proper PCI device for kernel loading.  It's a much more
>>> natural approach and lets us alias -kernel/-initrd/-append to
>>> -device kernel-pci,kernel=PATH,initrd=PATH
>> 
>> Adding a PCI device doesn't sound very appealing, unless you
>> can guarantee it is never visible to the guest once LinuxBoot
>> has finished its dirty work.
> 
> It'll definitely be guest visible just like fwcfg is guest visible.

Yup, just that this time it eats up one of our precious PCI slots ;)

So far it's the best proposal I've heard though.
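Purely as a sketch of what the option rom side of such a kernel-pci device
could look like (the vendor/device IDs, the BAR layout and the
pci_find_device()/pci_bar_mem() helpers below are made up, since no such
device exists yet): the kernel image sits behind a memory BAR that QEMU can
back with RAM, so loading it becomes an ordinary memcpy instead of per-byte
I/O.

#include <stdint.h>
#include <string.h>

#define KERNEL_PCI_VENDOR_ID 0x1af4   /* illustrative only */
#define KERNEL_PCI_DEVICE_ID 0x0fff   /* illustrative only */

/* Hypothetical firmware helpers, assumed to exist in the option rom. */
extern int       pci_find_device(uint16_t vendor, uint16_t device); /* bdf or -1 */
extern uintptr_t pci_bar_mem(int bdf, int bar);                     /* mapped BAR base */

/* Assumed BAR0 layout: u32 image size, followed by the kernel image. */
static int load_kernel_from_pci(void *load_addr)
{
    int bdf = pci_find_device(KERNEL_PCI_VENDOR_ID, KERNEL_PCI_DEVICE_ID);
    if (bdf < 0)
        return -1;                              /* device not present */

    const uint32_t *bar0 = (const uint32_t *)pci_bar_mem(bdf, 0);
    uint32_t size = bar0[0];

    /* RAM-backed mapping: a plain copy, no per-byte kvm/user exits. */
    memcpy(load_addr, bar0 + 1, size);
    return 0;
}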


Alex



