
Re: Memory leak in via_isa_realize()


From: Thomas Huth
Subject: Re: Memory leak in via_isa_realize()
Date: Tue, 22 Mar 2022 08:55:55 +0100
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:91.0) Gecko/20100101 Thunderbird/91.6.0

On 21/03/2022 13.59, Peter Maydell wrote:
> On Mon, 21 Mar 2022 at 12:11, BALATON Zoltan <balaton@eik.bme.hu> wrote:
>>
>> On Mon, 21 Mar 2022, Peter Maydell wrote:
>>> On Mon, 21 Mar 2022 at 10:31, Thomas Huth <thuth@redhat.com> wrote:
>>>> FYI, I'm seeing a memory leak in via_isa_realize() when building
>>>> QEMU with sanitizers enabled or when running QEMU through valgrind:
>>
>> Same problem happens with qemu-system-ppc64 and the pegasos2 machine.
>>
>>>> No clue how to properly fix this... is it safe to free the pointer
>>>> at the end of the function?
>>>
>>> This is because the code is still using the old function
>>> qemu_allocate_irqs(), which is almost always going to involve
>>> it leaking memory. The fix is usually to rewrite the code to not use
>>> that function at all, i.e. to manage its irq/gpio lines differently.
>>> Probably the i8259 code should have a named GPIO output line
>>> rather than wanting to be passed a qemu_irq in an init function,
>>> and the via code should have an input GPIO line which it connects
>>> up to the i8259. It looks from a quick glance like the i8259 and
>>> its callers have perhaps not been completely QOMified.
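[Editor's note: a minimal sketch of the named-GPIO pattern Peter is describing, using made-up type and handler names; this is not the actual i8259/via patch. The point is that the device declares the line itself in its instance_init, so nothing is heap-allocated with qemu_allocate_irqs() and nothing can leak:]

/* Hypothetical device, for illustration only -- not the real via/i8259 code.
 * The input line is owned by the device, so there is no heap allocation
 * left behind for valgrind or LeakSanitizer to flag. */
#include "qemu/osdep.h"
#include "qom/object.h"
#include "hw/qdev-core.h"
#include "hw/irq.h"

#define TYPE_MY_ISA_BRIDGE "my-isa-bridge"
OBJECT_DECLARE_SIMPLE_TYPE(MyISABridgeState, MY_ISA_BRIDGE)

struct MyISABridgeState {
    DeviceState parent_obj;
    qemu_irq cpu_intr;              /* output line towards the CPU */
};

/* The handler that old-style code would have passed to qemu_allocate_irqs(). */
static void my_bridge_request_irq(void *opaque, int n, int level)
{
    MyISABridgeState *s = MY_ISA_BRIDGE(opaque);

    qemu_set_irq(s->cpu_intr, level);
}

static void my_bridge_init(Object *obj)
{
    DeviceState *dev = DEVICE(obj);
    MyISABridgeState *s = MY_ISA_BRIDGE(obj);

    /* Named input GPIO instead of qemu_allocate_irqs(): */
    qdev_init_gpio_in_named(dev, my_bridge_request_irq, "isa-irq", 1);
    qdev_init_gpio_out(dev, &s->cpu_intr, 1);
}
/* (TypeInfo registration omitted for brevity.) */

[Whatever drives the line would then fetch it with qdev_get_gpio_in_named(DEVICE(bridge), "isa-irq", 0) and connect it with the usual qdev GPIO helpers, instead of being handed a pointer returned by qemu_allocate_irqs().]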

>> Everything involving ISA emulation in QEMU is not completely QOMified, and
>> this has caused some problems before, but I did not want to try to fix it,
>> both because it's too much unrelated work and because it's used by too
>> many things that could break and that I can't even test. So I'd rather
>> somebody more comfortable with this looked at ISA QOMification.

> Yeah, there's usually a reason that these things haven't been more
> thoroughly QOMified, and that reason is often because it's a pile of
> work for not very clear benefit.
>
> In this particular case, although there is a "leak", it happens exactly
> once at QEMU startup and in practice we need that memory to hang around
> until QEMU exits anyway.

Yes. I guess as a workaround (to silence Valgrind et al. here), it would
be sufficient to store the pointer in the ViaISAState structure.
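[Editor's note: a minimal sketch of that workaround, assuming a new field is added to ViaISAState; the field name and the handler name shown here are illustrative assumptions, not the actual patch:]

/* hw/isa/vt82c686.c -- sketch only; the new field name and the handler
 * passed to qemu_allocate_irqs() are assumptions. */

struct ViaISAState {
    PCIDevice dev;
    /* ... existing fields ... */
    qemu_irq *isa_irq;   /* hypothetical new field keeping the allocation reachable */
};

static void via_isa_realize(PCIDevice *d, Error **errp)
{
    ViaISAState *s = VIA_ISA(d);

    /* Previously the result went into a function-local pointer, so once
     * realize() returned, the (deliberately permanent) allocation was no
     * longer referenced anywhere and valgrind reported it as definitely
     * lost. */
    s->isa_irq = qemu_allocate_irqs(via_isa_request_i8259_irq, s, 1);

    /* ... rest of realize unchanged, now using *s->isa_irq ... */
}

[This does not free the memory, but keeping a live reference in the device state turns the "definitely lost" report into a still-reachable allocation, which valgrind and LeakSanitizer do not flag by default.]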

> The only real reason to fix this kind of leak in
> my opinion is because it clutters up the output of valgrind or clang/gcc
> address sanitizer runs and prevents us from having our CI do a
> leak-sanitizer test run that would guard against new leaks being added
> to the codebase.

Yes, that's my concern, too. It's hard to spot real problems if the output
is cluttered with a lot of these not-so-serious leaks.

> We still have a fair number of this sort of one-off
> startup leak in various arm boards/devices, for instance -- I occasionally
> have a look through and fix some of the more tractable ones.

Hmm, yes, running the device-introspect-test shows a lot of leaks for the
arm devices...

$ QTEST_QEMU_BINARY="valgrind --leak-check=full \
  --show-leak-kinds=definite ./qemu-system-aarch64" \
  tests/qtest/device-introspect-test \
 -p /aarch64/device/introspect/concrete/defaults/none

...
==377771== 112,629 (66,304 direct, 46,325 indirect) bytes in 13 blocks are definitely lost in loss record 7,800 of 7,810
==377771==    at 0x4C360A5: malloc (vg_replace_malloc.c:380)
==377771==    by 0x6364475: g_malloc (in /usr/lib64/libglib-2.0.so.0.5600.4)
==377771==    by 0xB207C3: object_new_with_type (object.c:722)
==377771==    by 0xB20864: object_new (object.c:744)
==377771==    by 0xB1C731: qmp_device_list_properties (qom-qmp-cmds.c:153)
==377771==    by 0xC900BC: qmp_marshal_device_list_properties (qapi-commands-qdev.c:59)
==377771==    by 0xCA4DBE: do_qmp_dispatch_bh (qmp-dispatch.c:110)
==377771==    by 0xCDF19D: aio_bh_call (async.c:136)
==377771==    by 0xCDF2A7: aio_bh_poll (async.c:164)
==377771==    by 0xCA9D08: aio_dispatch (aio-posix.c:381)
==377771==    by 0xCDF6DA: aio_ctx_dispatch (async.c:306)
==377771==    by 0x635E95C: g_main_context_dispatch (in /usr/lib64/libglib-2.0.so.0.5600.4)
==377771==
==377771== 530,646 (88 direct, 530,558 indirect) bytes in 1 blocks are definitely lost in loss record 7,806 of 7,810
==377771==    at 0x4C360A5: malloc (vg_replace_malloc.c:380)
==377771==    by 0x6364475: g_malloc (in /usr/lib64/libglib-2.0.so.0.5600.4)
==377771==    by 0x637C086: g_slice_alloc (in /usr/lib64/libglib-2.0.so.0.5600.4)
==377771==    by 0x634CBE1: g_hash_table_new_full (in /usr/lib64/libglib-2.0.so.0.5600.4)
==377771==    by 0xB20122: object_initialize_with_type (object.c:513)
==377771==    by 0xB2080D: object_new_with_type (object.c:729)
==377771==    by 0xB20864: object_new (object.c:744)
==377771==    by 0xB1C731: qmp_device_list_properties (qom-qmp-cmds.c:153)
==377771==    by 0x78F83C: qdev_device_help (qdev-monitor.c:283)
==377771==    by 0x791001: qmp_device_add (qdev-monitor.c:801)
==377771==    by 0x79145D: hmp_device_add (qdev-monitor.c:916)
==377771==    by 0x60EC48: handle_hmp_command (hmp.c:1100)
==377771==
==377771== 536,518 (704 direct, 535,814 indirect) bytes in 8 blocks are definitely lost in loss record 7,807 of 7,810
==377771==    at 0x4C360A5: malloc (vg_replace_malloc.c:380)
==377771==    by 0x6364475: g_malloc (in /usr/lib64/libglib-2.0.so.0.5600.4)
==377771==    by 0x637C086: g_slice_alloc (in /usr/lib64/libglib-2.0.so.0.5600.4)
==377771==    by 0x634CBE1: g_hash_table_new_full (in /usr/lib64/libglib-2.0.so.0.5600.4)
==377771==    by 0xB20122: object_initialize_with_type (object.c:513)
==377771==    by 0xB201B5: object_initialize (object.c:534)
==377771==    by 0xB202F3: object_initialize_child_with_propsv (object.c:564)
==377771==    by 0xB2028E: object_initialize_child_with_props (object.c:547)
==377771==    by 0xB203DC: object_initialize_child_internal (object.c:601)
==377771==    by 0x8349DC: bcm2835_peripherals_init (bcm2835_peripherals.c:122)
==377771==    by 0xB1FC1B: object_init_with_type (object.c:375)
==377771==    by 0xB20140: object_initialize_with_type (object.c:515)
...

Looks like we're leaking memory in the object initialization
sometimes?

 Thomas



