From: Avi Kivity
Subject: Re: [Qemu-devel] Slow kernel/initrd loading via fw_cfg; Was Re: Hack integrating SeaBios / LinuxBoot option rom with QEMU trace backends
Date: Tue, 11 Oct 2011 11:15:05 +0200
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:7.0) Gecko/20110927 Thunderbird/7.0

On 10/11/2011 10:23 AM, Daniel P. Berrange wrote:
>   - Application sandbox, directly boots the regular host's kernel and
>     a custom initrd image. The initrd does not contain any files except
>     for the 9p kernel modules and a custom init binary, which mounts
>     the guest root FS from a 9p filesystem export.
>
>     The kernel is < 5 MB, while the initrd is approx 700 KB compressed,
>     or 1.4 MB uncompressed. Performance for the sandbox is even more
>     critical than for libguestfs. Even tens of milliseconds make a
>     difference here. The commands being run in the sandbox can be
>     very short-lived processes, executed reasonably frequently. The
>     goal is to have an end-to-end runtime overhead of < 2 seconds. This
>     includes libvirt guest startup, qemu startup/shutdown, BIOS time,
>     option ROM time, and kernel boot & shutdown time.
>
>     The reason for using a kernel/initrd instead of a bootable ISO
>     is that building an ISO takes time itself, and we need to be
>     able to easily pass kernel boot arguments via -append.
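
(For concreteness, the setup described above amounts to roughly the
following invocation; the paths, memory size, 9p export and mount tag
are illustrative, not the actual tool's:)

    qemu-kvm -m 512 -nographic \
        -kernel /boot/vmlinuz-$(uname -r) \
        -initrd sandbox-initrd.img \
        -append 'console=ttyS0 quiet' \
        -virtfs local,path=/srv/sandbox-root,mount_tag=sandboxroot,security_model=none

    # inside the guest, the custom init then does something like:
    mount -t 9p -o trans=virtio sandboxroot /sysroot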


> I'm focusing on the last use case, and if the phase of the moon
> is correct, I can currently execute a sandbox command with a total
> overhead of 3.5 seconds (if using a compressed initrd), of which
> the QEMU execution time is 2.5 seconds.
>
> Of this, 1.4 seconds is the time required by LinuxBoot to copy the
> kernel+initrd. If I use an uncompressed initrd, which I really want
> to in order to avoid decompression overhead, this increases to ~1.7
> seconds. So the LinuxBoot ROM is ~60% of total QEMU execution time,
> or 40% of total sandbox execution overhead.

One thing we can do is boot a guest and immediately snapshot it, before
it runs any application-specific code. Subsequent invocations will
MAP_PRIVATE the memory image and COW their way. This avoids the kernel
initialization time as well.
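
A minimal userspace sketch of that idea (not QEMU code; "ram.img" is a
hypothetical file holding the snapshotted guest RAM):

    #include <fcntl.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <sys/mman.h>
    #include <sys/stat.h>
    #include <unistd.h>

    int main(void)
    {
        int fd = open("ram.img", O_RDONLY);
        struct stat st;

        if (fd < 0 || fstat(fd, &st) < 0) {
            perror("ram.img");
            return EXIT_FAILURE;
        }

        /* MAP_PRIVATE: writes COW into anonymous pages, so the
         * snapshot file is never modified and the clean pages in
         * the page cache are shared by every concurrent instance. */
        void *ram = mmap(NULL, st.st_size, PROT_READ | PROT_WRITE,
                         MAP_PRIVATE, fd, 0);
        if (ram == MAP_FAILED) {
            perror("mmap");
            return EXIT_FAILURE;
        }

        /* ... hand "ram" to the VM as its RAM and resume from the
         * snapshot's saved CPU/device state ... */

        munmap(ram, st.st_size);
        close(fd);
        return EXIT_SUCCESS;
    }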


> For comparison, I also did a test building a bootable ISO using
> ISOLinux. This required 700 ms for the boot time, which is
> approximately half the time required for a direct kernel/initrd boot.
> But you then have to add on the time required to build the ISO on
> every boot, to add custom kernel command line args. So while an ISO
> is currently faster than LinuxBoot, there is still non-negligible
> overhead here that I want to avoid.
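
(The per-boot rebuild is essentially the following; file names are
illustrative, and only the APPEND line changes between runs:)

    printf 'DEFAULT linux\nLABEL linux\n KERNEL vmlinuz\n APPEND initrd=initrd.img %s\n' \
        "$GUEST_ARGS" > iso-root/isolinux/isolinux.cfg
    genisoimage -quiet -o boot.iso \
        -b isolinux/isolinux.bin -c isolinux/boot.cat \
        -no-emul-boot -boot-load-size 4 -boot-info-table iso-root/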

You can accept parameters from virtio-serial or some other channel. Is
there any reason you need them specifically as *kernel* command line
parameters?
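
For example, something along these lines (the socket path and port
name are illustrative):

    qemu-kvm ... \
        -device virtio-serial \
        -chardev socket,id=args0,path=/tmp/sandbox-args.sock,server,nowait \
        -device virtserialport,chardev=args0,name=org.example.sandbox.args

The guest's init could then read its parameters from
/dev/virtio-ports/org.example.sandbox.args instead of /proc/cmdline.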

> For further comparison, I tested with Rich Jones' patches which add a
> DMA-like interface to fw_cfg. With this, the time spent in the
> LinuxBoot option ROM was as close to zero as matters.
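
(To make the bottleneck concrete: the traditional x86 fw_cfg interface
transfers one byte per I/O port read, so an option ROM copying the
kernel and initrd ends up in a loop like this sketch; userspace use
would additionally need ioperm():)

    #include <stddef.h>
    #include <stdint.h>
    #include <sys/io.h>   /* outw()/inb(); a ROM uses the same insns */

    #define FW_CFG_PORT_SEL  0x510   /* fw_cfg selector register */
    #define FW_CFG_PORT_DATA 0x511   /* fw_cfg data register */

    static void fw_cfg_read(uint16_t key, void *buf, size_t len)
    {
        uint8_t *p = buf;

        outw(key, FW_CFG_PORT_SEL);        /* select the entry */
        while (len--) {
            *p++ = inb(FW_CFG_PORT_DATA);  /* one port read -- and
                                              one VM exit -- per byte */
        }
    }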

> So obviously, my preference is for -kernel/-initrd to be made very
> fast using the DMA-like patches, or any other patches which could
> achieve similarly high performance for -kernel/-initrd.



--
error compiling committee.c: too many arguments to function



