Bug 1684917 - udisksd is consuming a lot of memory.
Summary: udisksd is consuming a lot of memory.
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise Linux 7
Classification: Red Hat
Component: udisks2
Version: 7.5
Hardware: x86_64
OS: Linux
Priority: unspecified
Severity: medium
Target Milestone: rc
Target Release: 7.8
Assignee: Tomáš Bžatek
QA Contact: guazhang@redhat.com
URL:
Whiteboard:
Depends On: 1710507 1714276
Blocks:
 
Reported: 2019-03-03 20:03 UTC by Siddhant Rao
Modified: 2023-09-07 19:47 UTC
CC: 9 users

Fixed In Version: udisks2-2.8.4-1.el7
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2020-03-31 19:59:59 UTC
Target Upstream Version:
Embargoed:


Attachments


Links:
Red Hat Knowledge Base (Solution) 4154251, last updated 2019-05-17 12:25:38 UTC
Red Hat Product Errata RHBA-2020:1099, last updated 2020-03-31 20:00:01 UTC

Description Siddhant Rao 2019-03-03 20:03:25 UTC
Description of problem:

The udisks service was consuming a lot of memory, nearly 32 GB:

   #   USER      PID    %CPU  %MEM  VSZ-MiB  RSS-MiB  TTY    STAT  START  TIME      COMMAND 
   66  root      27247  26.6  28.2  44391    36416    ?      -     2018   71039:47  /usr/libexec/udisks2/udisksd 
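
For reference, a roughly equivalent snapshot can be taken with a stock ps invocation (the MiB columns above look like custom formatting, so this is only an approximation):

   # Show CPU/memory usage for the udisks daemon; vsz/rss are reported
   # in KiB here, unlike the MiB columns in the listing above.
   ps -o user,pid,%cpu,%mem,vsz,rss,tty,stat,start,time,comm -C udisksd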

That being said, we have since restarted the service and everything now seems to be fine. We need to know what this service is used for and why it consumes so much memory.

Version-Release number of selected component (if applicable):
udisks2-2.7.3-6.el7.x86_64

How reproducible:

Not able to reproduce on my system; it was seen once on a customer's system.

Steps to Reproduce:
1.
2.
3.

Actual results:
udisks service was holding a lot of memory

Expected results:
udisks should not have been holding so much memory on the host.

Additional info:

This service was disabled by default on a freshly installed system.
The customer found that navigating the Cockpit interface, particularly the Storage section, triggered this service to start running.
Does Cockpit have a direct relation with this service? If yes, then how? Also, why is it retaining so much memory?

Comment 2 Tal Nisan 2019-03-04 09:30:12 UTC
Hi Siddhant, I think you opened this bug against the wrong product; it was opened on RHEV.

Comment 3 Tomáš Bžatek 2019-03-04 10:42:31 UTC
(In reply to Siddhant Rao from comment #0)
> Additional info:
> 
> This service was disabled by default on a fresh installed system.
> The customer managed to find out that by navigating on the cockpit
> interfaces, On the storage section particularly, it triggered this service
> to start running.
> Does cockpit have a direct relation with this service?. If yes then how?.
> also why is it retaining so much memory.

Cockpit uses udisks for nearly all storage-related tasks. It also activates udisks modules for LVM and other things.
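
One way to see what the daemon currently exposes (including module-provided interfaces such as the LVM2 ones, once activated) is to introspect the manager object with standard busctl:

   # List the methods and properties on the UDisks2 manager object.
   busctl introspect org.freedesktop.UDisks2 /org/freedesktop/UDisks2/Manager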

Is it possible to get some logs of the service? Such big memory consumption indicates memory leaks; the daemon itself should stay reasonably low-footprint in memory and CPU.

The total CPU time spent ("71039:47") is also large enough to indicate some kind of trouble. Is this a constant load or just a spike? udisks generally responds to uevents (udev events); normally it should be quiet, at essentially zero CPU utilization. It would also be interesting to see the uevent traffic in `udevadm monitor`.

Also, `udisksctl dump` output may help us understand the customer's storage topology.
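
Both diagnostics use standard tools, along the lines of:

   # Watch kernel and udev uevents live; a constant stream here would
   # explain sustained CPU load in udisksd.
   udevadm monitor --kernel --udev

   # Dump every object udisksd exports on D-Bus, which describes the
   # storage topology as the daemon sees it.
   udisksctl dump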

Comment 13 Tomáš Bžatek 2019-04-18 12:34:32 UTC
Thanks, got it. It looks like the lvm2 udisks plugin has been activated, and it is known to have some memory leaks, with fixes upstream in recent releases.

> Dec 28 14:59:10 juc-ucsb-3-p.j.cinglevue.com udisksd[27247]: g_object_notify: object class 'UDisksLinuxBlockObject' has no property named 'block-lvm2'
> Dec 28 14:59:10 juc-ucsb-3-p.j.cinglevue.com udisksd[27247]: g_object_notify: object class 'UDisksLinuxBlockObject' has no property named 'block-lvm2'

> Jan 03 15:59:45 juc-ucsb-3-p.j.cinglevue.com udisksd[27247]: g_object_notify: object class 'UDisksLinuxBlockObject' has no property named 'block-lvm2'
> Jan 03 15:59:45 juc-ucsb-3-p.j.cinglevue.com udisksd[27247]: g_object_notify: object class 'UDisksLinuxBlockObject' has no property named 'block-lvm2'

(This is a harmless warning, but it indicates which objects are being used.)

By the way, this is a RHEL 7 bug, judging by the udisks2-2.7.3-6.el7 version. Moved appropriately.

Comment 15 guazhang@redhat.com 2019-04-19 02:26:48 UTC
Hello

Could someone share a reproducer here so that I can add automation to cover the bug?

thanks
Guazhang

Comment 17 Tomáš Bžatek 2019-04-29 09:46:17 UTC
(In reply to guazhang from comment #15)
> Hello
> 
> Could someone share some reproduces here so that I can add automation to
> cover the bug ?

Try running our upstream D-Bus test suite in a loop and watch the memory consumption rise. It's a memory leak: dead memory that just sits there.
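
A sketch of such a loop, assuming a checkout of the upstream udisks sources (the runner path below is an assumption; adjust it to your tree):

   # Run the D-Bus test suite repeatedly and record udisksd's resident
   # set after each pass; a steady climb points at the leak.
   for i in $(seq 1 100); do
       sudo python3 src/tests/dbus-tests/run_tests.py   # assumed runner path
       echo "pass $i: $(grep VmRSS /proc/$(pidof udisksd)/status)"
   done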

Comment 18 Tomáš Bžatek 2019-05-15 15:57:58 UTC
The real work on fixing memory leaks has been done in the following branches:

https://github.com/storaged-project/libblockdev/pull/439
https://github.com/storaged-project/libblockdev/pull/440
https://github.com/storaged-project/udisks/pull/659

Still a couple of minor leaks left to fix...

Comment 23 guazhang@redhat.com 2019-05-22 03:28:20 UTC
Hello

I don't know how to test this with Cockpit or GVfs apps; in fact, I don't have a test plan for it.

I will run the full udisks regression once the fix is released.

Is that OK with you?

thanks
Guazhang

Comment 29 guazhang@redhat.com 2019-09-04 04:07:53 UTC
Hello

No memory leak found while running 100 cycles of the udisks2 D-Bus tests, so moving to VERIFIED.

thanks
Guazhang

Comment 30 Tony Pearce 2019-10-30 08:00:38 UTC
Hi guys, I am the customer who complained about this. I gave reproduction steps in the Red Hat support ticket. To reproduce, do the following:

1. First, confirm service is not running on the RHV Host:
[root@juc-ucsb-2-p ~]# systemctl status udisks2
● udisks2.service - Disk Manager
   Loaded: loaded (/usr/lib/systemd/system/udisks2.service; disabled; vendor preset: disabled)
   Active: inactive (dead)
     Docs: man:udisks(8)
[root@juc-ucsb-2-p ~]#

2. browse to cockpit web management on the RHV Host and log in
3. navigate to the storage tab (cockpit > localhost > storage)
4. confirm service is now running on the host:
[root@juc-ucsb-2-p ~]# systemctl status udisks2
● udisks2.service - Disk Manager
   Loaded: loaded (/usr/lib/systemd/system/udisks2.service; disabled; vendor preset: disabled)
   Active: active (running) since Wed 2019-10-30 15:44:51 AWST; 14s ago
     Docs: man:udisks(8)
 Main PID: 2399 (udisksd)
    Tasks: 6
   CGroup: /system.slice/udisks2.service
           └─2399 /usr/libexec/udisks2/udisksd

Oct 30 15:44:53 juc-ucsb-2-p.j.company.co udisksd[2399]: g_object_notify: object class 'UDisksLinuxBlockObject' has no property named 'block-lvm2'
Oct 30 15:44:53 juc-ucsb-2-p.j.company.co udisksd[2399]: g_object_notify: object class 'UDisksLinuxBlockObject' has no property named 'physical-volume'
Oct 30 15:44:53 juc-ucsb-2-p.j.company.co udisksd[2399]: g_object_notify: object class 'UDisksLinuxBlockObject' has no property named 'block-lvm2'
Oct 30 15:44:53 juc-ucsb-2-p.j.company.co udisksd[2399]: g_object_notify: object class 'UDisksLinuxBlockObject' has no property named 'block-lvm2'
Oct 30 15:44:53 juc-ucsb-2-p.j.company.co udisksd[2399]: g_object_notify: object class 'UDisksLinuxBlockObject' has no property named 'block-lvm2'
Oct 30 15:44:53 juc-ucsb-2-p.j.company.co udisksd[2399]: g_object_notify: object class 'UDisksLinuxBlockObject' has no property named 'block-lvm2'
Oct 30 15:44:53 juc-ucsb-2-p.j.company.co udisksd[2399]: g_object_notify: object class 'UDisksLinuxBlockObject' has no property named 'block-lvm2'
Oct 30 15:44:53 juc-ucsb-2-p.j.company.co udisksd[2399]: g_object_notify: object class 'UDisksLinuxBlockObject' has no property named 'block-lvm2'
Oct 30 15:44:53 juc-ucsb-2-p.j.company.co udisksd[2399]: g_object_notify: object class 'UDisksLinuxBlockObject' has no property named 'block-lvm2'
Oct 30 15:44:53 juc-ucsb-2-p.j.company.co udisksd[2399]: g_object_notify: object class 'UDisksLinuxBlockObject' has no property named 'block-lvm2'
[root@juc-ucsb-2-p ~]#

5. log out of cockpit and confirm service still running: 
[root@juc-ucsb-2-p ~]# systemctl status udisks2
● udisks2.service - Disk Manager
   Loaded: loaded (/usr/lib/systemd/system/udisks2.service; disabled; vendor preset: disabled)
   Active: active (running) since Wed 2019-10-30 15:44:51 AWST; 3min 11s ago

6. check memory consumption of the service over time and verify that it keeps increasing. Using `top` I see the `RES` value slowly and continuously increasing for this process:
[root@juc-ucsb-2-p ~]# top -p 2399
top - 15:47:59 up 32 days,  6:13,  1 user,  load average: 3.54, 3.01, 2.76
Tasks:   1 total,   0 running,   1 sleeping,   0 stopped,   0 zombie
%Cpu(s):  7.6 us,  6.4 sy,  0.0 ni, 85.7 id,  0.2 wa,  0.0 hi,  0.0 si,  0.0 st
KiB Mem : 13179067+total, 49212844 free, 74355768 used,  8222056 buff/cache
KiB Swap: 16441340 total, 16089556 free,   351784 used. 52538380 avail Mem

  PID USER      PR  NI    VIRT    RES    SHR S  %CPU %MEM     TIME+ COMMAND
 2399 root      20   0  977248  31832   6144 S   0.0  0.0   0:03.17 udisksd
(second snapshot, about ten minutes later:)
[root@juc-ucsb-2-p ~]# top -p 2399
top - 15:57:44 up 32 days,  6:23,  1 user,  load average: 2.25, 2.58, 2.66
Tasks:   1 total,   0 running,   1 sleeping,   0 stopped,   0 zombie
%Cpu(s):  4.6 us,  4.7 sy,  0.0 ni, 90.6 id,  0.1 wa,  0.0 hi,  0.0 si,  0.0 st
KiB Mem : 13179067+total, 49202940 free, 74357848 used,  8229880 buff/cache
KiB Swap: 16441340 total, 16089556 free,   351784 used. 52536300 avail Mem

  PID USER      PR  NI    VIRT    RES    SHR S  %CPU %MEM     TIME+ COMMAND
 2399 root      20   0  977248  33020   6144 S   0.0  0.0   0:05.10 udisksd
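
For longer observation, a simple sampling loop also works (a sketch; the one-minute interval is arbitrary):

   # Log the daemon's resident set once a minute.
   while sleep 60; do
       echo "$(date +%T) $(grep VmRSS /proc/$(pidof udisksd)/status)"
   done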


Rgds.

Comment 31 Tomáš Bžatek 2019-10-30 11:03:35 UTC
Thanks Tony, this should be fixed in udisks2-2.8.4-1.el7 and libblockdev-2.18-5.el7 (you need both packages). Should the memory consumption still rise, please open a new bug report.
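
A quick way to confirm both fixed builds are in place (standard rpm/systemctl commands; the expected versions are the ones named above):

   # Both packages must be at least these versions:
   rpm -q udisks2 libblockdev   # expect udisks2-2.8.4-1.el7, libblockdev-2.18-5.el7
   # Restart the daemon so the fixed libraries are actually in use:
   systemctl restart udisks2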

Comment 35 errata-xmlrpc 2020-03-31 19:59:59 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2020:1099

