Description of problem:
Memory leak in gvfs-udisks2-volume-monitor:

==25578== 48 bytes in 2 blocks are definitely lost in loss record 1,757 of 2,989
==25578==    at 0x4C29F73: malloc (vg_replace_malloc.c:309)
==25578==    by 0x63EC6DD: g_malloc (gmem.c:99)
==25578==    by 0x6403CDD: g_slice_alloc (gslice.c:1025)
==25578==    by 0x63E2E95: g_list_prepend (glist.c:314)
==25578==    by 0x5EA477E: _g_get_unix_mount_points (gunixmounts.c:1027)
==25578==    by 0x5EA477E: g_unix_mount_points_get (gunixmounts.c:1600)
==25578==    by 0x40BC8B: update_fstab_volumes (gvfsudisks2volumemonitor.c:1562)
==25578==    by 0x40BC8B: update_all (gvfsudisks2volumemonitor.c:534)
==25578==    by 0x617B5DA: g_type_create_instance (gtype.c:1866)
==25578==    by 0x615EF07: g_object_constructor (gobject.c:2148)
==25578==    by 0x40D030: gvfs_udisks2_volume_monitor_constructor (gvfsudisks2volumemonitor.c:297)
==25578==    by 0x615F8B2: g_object_new_with_custom_constructor (gobject.c:1717)
==25578==    by 0x615F8B2: g_object_new_internal (gobject.c:1797)
==25578==    by 0x6160B8C: g_object_new_with_properties (gobject.c:1967)
==25578==    by 0x6161570: g_object_new (gobject.c:1639)

It grows quickly over a day:

$ cat 0170-maps_day1.log | while read line; do end=$(echo $line | awk -F- '{print $2}' | awk '{ print $1 }'); start=$(echo $line | awk -F- '{print $1}'); echo "$(((0x$end - 0x$start)))K $line"; done | sort -n | tail
2097152K 7fcdfdeb4000-7fcdfe0b4000 ---p 00009000 08:03 411718 /usr/lib64/libgudev-1.0.so.0.2.0
2097152K 7fcdfe124000-7fcdfe324000 ---p 0006e000 08:03 418682 /usr/lib64/libudisks2.so.0.0.0
2535424K 0066d000-008d8000 rw-p 00000000 00:00 0 [heap]
8388608K 7fcdf09fa000-7fcdf11fa000 rw-p 00000000 00:00 0
8388608K 7fcdf1a59000-7fcdf2259000 rw-p 00000000 00:00 0
8388608K 7fcdf225a000-7fcdf2a5a000 rw-p 00000000 00:00 0
66895872K 7fcde4034000-7fcde8000000 ---p 00000000 00:00 0
66969600K 7fcde8022000-7fcdec000000 ---p 00000000 00:00 0
66973696K 7fcdec021000-7fcdf0000000 ---p 00000000 00:00 0
106180608K 7fcdf2a5a000-7fcdf8f9d000 r--p 00000000 08:03 3120 /usr/lib/locale/locale-archive

$ cat 0190-maps_day2.log | while read line; do end=$(echo $line | awk -F- '{print $2}' | awk '{ print $1 }'); start=$(echo $line | awk -F- '{print $1}'); echo "$(((0x$end - 0x$start)))K $line"; done | sort -n | tail
2097152K 7fcdfdeb4000-7fcdfe0b4000 ---p 00009000 08:03 411718 /usr/lib64/libgudev-1.0.so.0.2.0
2097152K 7fcdfe124000-7fcdfe324000 ---p 0006e000 08:03 418682 /usr/lib64/libudisks2.so.0.0.0
2801664K 0066d000-00919000 rw-p 00000000 00:00 0 [heap]
8388608K 7fcdf09fa000-7fcdf11fa000 rw-p 00000000 00:00 0
8388608K 7fcdf1a59000-7fcdf2259000 rw-p 00000000 00:00 0
8388608K 7fcdf225a000-7fcdf2a5a000 rw-p 00000000 00:00 0
66895872K 7fcde4034000-7fcde8000000 ---p 00000000 00:00 0
66969600K 7fcde8022000-7fcdec000000 ---p 00000000 00:00 0
66973696K 7fcdec021000-7fcdf0000000 ---p 00000000 00:00 0
106180608K 7fcdf2a5a000-7fcdf8f9d000 r--p 00000000 08:03 3120 /usr/lib/locale/locale-archive

Version-Release number of selected component (if applicable):
gvfs-1.36.2-4.el7.x86_64

How reproducible:
Always

valgrind --leak-check=full --track-origins=yes --show-reachable=yes --log-file=valgrind.log /usr/libexec/gvfs-udisks2-volume-monitor --replace

Actual results:
gvfs-udisks2-volume-monitor tops memory usage on systems with multiple users logged in.

Expected results:
gvfs-udisks2-volume-monitor does not leak.
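As a rough way to compare runs of the valgrind command above, the per-record "definitely lost" byte counts in the log can be summed. This is only a sketch; the helper name sum_definitely_lost is made up here.

```shell
# Sum the byte counts from Valgrind's per-record "definitely lost" lines,
# e.g. "==25578== 48 bytes in 2 blocks are definitely lost in ...".
# After the ==pid== prefix, field 2 is the byte count, possibly with commas.
sum_definitely_lost() {
    awk '/are definitely lost/ { gsub(",", "", $2); total += $2 }
         END { print total + 0 }'
}

# Usage: sum_definitely_lost < valgrind.log
```

Running this against valgrind.log after each snapshot shows whether the definitely-lost total keeps growing between runs.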
I am missing an explanation of the awk-filtered maps outputs: does it mean that gvfs-udisks2-volume-monitor leaked 0.3 GB in one day?

This concrete leak has been fixed upstream by:
https://gitlab.gnome.org/GNOME/gvfs/-/commit/91f34aa87f6089c8d8437310854b83af3b6ba05b

However, I am not sure that exactly this leak is responsible for such big growth, but I could provide a testing build for verification. It would be good to see the whole Valgrind output for analysis. Not that long ago, some bigger leaks were fixed upstream in util-linux, udisks2, and glib, which might potentially be the culprit here. What are the versions of those components?
FYI, the udisks2-2.8.3-1.el8 release (RHEL 8.1.0) fixed a large number of memory leaks, the majority on the daemon side, though there is still a chance for some more. A full Valgrind report would be nice to have.
(I did not notice this is RHEL 7; in that case udisks2-2.8.4-1.el7 (RHEL 7.8) should contain all the aforementioned fixes.)
I think I see what is going on here. I have been checking the memory consumption differently than the reporter, and I have just realized that the awk output is probably wrong, or rather its units are. The addresses are in bytes and the awk script doesn't convert them, so why is there a "K" suffix? So the leak in #c0 is 0.3 MB (out of 2.8 MB), not 0.3 GB (as written in #c3). And in that case, yes, I can reproduce it, and the Valgrind outputs confirm the mentioned leak. If these small but recurrent leaks are an issue for some reason, we can surely fix them. Can you confirm my thoughts?
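To illustrate the unit problem, here is a corrected version of the reporter's one-liner as a minimal sketch (the helper name map_size_kib is made up): /proc/<pid>/maps addresses are hex byte offsets, so the difference must be divided by 1024 before a "K" suffix is justified.

```shell
# Print "<size-in-KiB>K <line>" for each /proc/<pid>/maps-style line on
# stdin, sorted by size. The original script printed raw bytes with a "K"
# suffix, inflating every figure by a factor of 1024.
map_size_kib() {
    while read -r line; do
        range=${line%% *}            # first field, e.g. 0066d000-008d8000
        start=$(( 0x${range%-*} ))   # region start, in bytes
        end=$(( 0x${range#*-} ))     # region end, in bytes
        printf '%dK %s\n' $(( (end - start) / 1024 )) "$line"
    done | sort -n
}

# Applied to the [heap] lines above: day 1 is 2476K (~2.4 MiB) and day 2 is
# 2736K (~2.7 MiB), i.e. roughly 260 KiB of growth per day, not 0.3 GB.
echo "0066d000-008d8000 rw-p 00000000 00:00 0 [heap]" | map_size_kib
```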
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory (gvfs bug fix and enhancement update), and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHBA-2021:3326