Re: [Qemu-devel] When it's okay to treat OOM as fatal?


From: Stefan Hajnoczi
Subject: Re: [Qemu-devel] When it's okay to treat OOM as fatal?
Date: Wed, 17 Oct 2018 11:05:01 +0100
User-agent: Mutt/1.10.1 (2018-07-13)

On Tue, Oct 16, 2018 at 03:01:29PM +0200, Markus Armbruster wrote:
> Anything that pages commonly becomes unusable long before
> allocations fail.  Anything that overcommits will send you a (commonly
> lethal) signal instead.  Anything that tries handling OOM gracefully,
> and manages to dodge both these bullets somehow, will commonly get it
> wrong and crash.

In the block layer, blk_try_blockalign() (previously
qemu_try_blockalign()) is used because significant amounts of memory can
be allocated by the untrusted guest or by untrusted disk image files.  I
think the error handling is reasonable in those cases:
1. QEMU startup or disk hotplug fail with a nice error message
OR
2. An I/O request fails (ultimately just EIO error reporting, but
   that's better than killing the QEMU process!)

I'm pretty sure ENOMEM errors are possible even when memory overcommit
is enabled.

My thinking has been to use g_new() for small QEMU-internal structures
and g_try_new() for large amounts of memory allocated in response to
untrusted inputs.  (Untrusted inputs must never be used for unbounded
allocation sizes, but even bounded sizes can still be large.)

Stefan

