Cc'ing the people who responded to the thread you quoted.
On 3/30/20 4:11 PM, Markus Armbruster wrote:
Philippe Mathieu-Daudé <address@hidden> writes:
> ---
> qga/commands-posix.c | 8 +++++++-
> 1 file changed, 7 insertions(+), 1 deletion(-)
>
> diff --git a/qga/commands-posix.c b/qga/commands-posix.c
> index 93474ff770..8f127788e6 100644
> --- a/qga/commands-posix.c
> +++ b/qga/commands-posix.c
> @@ -493,7 +493,13 @@ struct GuestFileRead *qmp_guest_file_read(int64_t handle, bool has_count,
> gfh->state = RW_STATE_NEW;
> }
>
> - buf = g_malloc0(count+1);
> + buf = g_try_malloc0(count + 1);
> + if (!buf) {
> + error_setg(errp,
> + "failed to allocate sufficient memory "
> + "to complete the requested service");
> + return NULL;
> + }
> read_count = fread(buf, 1, count, fh);
> if (ferror(fh)) {
> error_setg_errno(errp, errno, "failed to read file");
>
On 3/25/20 7:19 AM, Dietmar Maurer wrote:
> but error_setg() also calls malloc, so this does not help at all?
IIUC the problem: you can send a QMP command asking to read, say,
3GB of a file, and QEMU crashes. But that doesn't mean the heap
is exhausted; there are probably a few bytes still available, enough to
respond with an error message.
We've discussed how to handle out-of-memory conditions many times.
Here's one instance:
Subject: When it's okay to treat OOM as fatal?
Message-ID: <address@hidden>
https://lists.nongnu.org/archive/html/qemu-devel/2018-10/msg03212.html
No improvement since then; there's no guidance on when to check for OOM.
Actual code tends to check only "large" allocations (for subjective
values of "large").
I reiterate my opinion that whatever OOM handling we have is too
unreliable to be worth much, since it can only help when (1) allocations
actually fail (they generally don't[*]), and (2) the allocation that
fails is actually handled (they generally aren't), and (3) the handling
actually works (we don't test OOM, so it generally doesn't).
[*] Linux overcommits memory, which means malloc() pretty much always
succeeds, but when you try to use "too much" of the memory you
supposedly allocated, a lethal signal is coming your way. Read the
thread I quoted for examples.
So this patch follows Stefan's reasoning:
https://lists.nongnu.org/archive/html/qemu-devel/2018-10/msg03525.html
My thinking has been to use g_new() for small QEMU-internal structures
and g_try_new() for large amounts of memory allocated in response to
untrusted inputs. (Untrusted inputs must never be used for unbounded
allocation sizes but those bounded sizes can still be large.)
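Stefan's convention could be sketched as follows; the cap name QGA_READ_COUNT_MAX and the helper read_buf_alloc() are hypothetical, standing in for whatever bound and allocator the guest agent would actually use, with calloc() as a stand-in for g_try_malloc0():

```c
#include <stddef.h>
#include <stdint.h>
#include <stdlib.h>

/* Hypothetical upper bound on an untrusted read size. */
#define QGA_READ_COUNT_MAX (48u * 1024u * 1024u)

/* Sketch of the convention: clamp the untrusted size first, then use
 * a failable allocator for the large buffer. Small QEMU-internal
 * structures would keep using the aborting g_new()/g_malloc0(). */
static void *read_buf_alloc(int64_t count)
{
    if (count < 0 || count > QGA_READ_COUNT_MAX) {
        return NULL;    /* reject unbounded, untrusted sizes */
    }
    /* failable allocation, mirroring g_try_malloc0(count + 1) */
    return calloc(1, (size_t)count + 1);
}
```

With the bound in place, a malicious 3GB read request is refused before any allocation is attempted, so the failable allocator only ever sees sizes the service has agreed to handle.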
In either case (g_malloc/g_try_malloc) we have a denial of service
(https://www.openwall.com/lists/oss-security/2018/10/17/4) and the
service is restarted.
Daniel suggests such behavior should be caught by an external firewall
guard (either on the process or on the network). This seems out of
QEMU's scope and hard to fix.
So, can we improve something? Or should we leave this code as it is?