Bug 2174138 - dlm: visualize dlm_controld posix lock lockdb over time
Summary: dlm: visualize dlm_controld posix lock lockdb over time
Keywords:
Status: NEW
Alias: None
Product: Red Hat Enterprise Linux 8
Classification: Red Hat
Component: dlm
Version: 8.4
Hardware: All
OS: Linux
Priority: high
Severity: high
Target Milestone: rc
Assignee: Alexander Aring
QA Contact: cluster-qe
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2023-02-28 17:54 UTC by Alexander Aring
Modified: 2023-08-10 15:40 UTC
CC: 3 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed:
Type: Feature Request
Target Upstream Version:
Embargoed:




Links:
Red Hat Issue Tracker RHELPLAN-150128 (last updated 2023-02-28 17:55:42 UTC)
Red Hat Knowledge Base (Solution) 7003904 (last updated 2023-03-22 16:51:23 UTC)

Description Alexander Aring 2023-02-28 17:54:47 UTC
Description of problem:

Users sometimes report performance issues when using posix locks with e.g. gfs2. gfs2 redirects all posix lock requests to dlm, and dlm handles them via the corosync protocol in dlm_controld.
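
As an illustration of the request path only (not part of the proposal itself), here is a minimal Python sketch of a posix lock taken on a file that lives on a gfs2 mount; the path /mnt/gfs2/shared.dat and the byte range are arbitrary examples:

# Minimal sketch: a posix (fcntl) range lock on a gfs2 file. gfs2 forwards
# the request to dlm, which resolves it cluster-wide via dlm_controld and
# corosync. The path is only an example.
import fcntl
import os

fd = os.open("/mnt/gfs2/shared.dat", os.O_RDWR | os.O_CREAT, 0o644)
try:
    # Lock bytes 0..4095 exclusively; blocks until the range is free cluster-wide.
    fcntl.lockf(fd, fcntl.LOCK_EX, 4096, 0, os.SEEK_SET)
    # ... critical section ...
    fcntl.lockf(fd, fcntl.LOCK_UN, 4096, 0, os.SEEK_SET)
finally:
    os.close(fd)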

There are many layers involved. Those performance issues, e.g. lock acquisition taking too long, could be normal because there might be lock contention, in which case everything works as intended.

With posix locking, many processes acquiring different lock ranges on a file can be involved. dlm_controld stores a local copy of a cluster-wide posix lock database that records the current lock modes held by every process in the cluster that acquires locks.
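
To make the kind of data concrete, a hypothetical sketch of what one entry of that database could look like; the field names are illustrative and are not the actual dlm_controld structures:

# Hypothetical sketch of one entry in dlm_controld's posix lock database.
# Field names are illustrative, not the real struct used by dlm_controld.
from dataclasses import dataclass

@dataclass
class PlockEntry:
    number: int      # file identifier (e.g. inode number) the lock applies to
    start: int       # first byte of the locked range
    end: int         # last byte of the locked range
    exclusive: bool  # True for a write (exclusive) lock, False for a read lock
    nodeid: int      # cluster node that owns the lock
    pid: int         # process on that node holding the lock
    acquired: float  # timestamp when the lock was granted (for the time axis)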

To give a general overview and show whether there was contention, we can visualize posix lock modes per file in a plot diagram. The user can then see where contention comes from and which process on which cluster node held the lock at a specific time.
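
A rough sketch of what such a plot could look like, using made-up example data; the data layout and the step of extracting it from the real lockdb are assumptions, only the plotting idea itself is what this request is about:

# Rough sketch of the proposed visualization: one horizontal lane per
# (nodeid, pid), with a bar for every interval the process held a lock on
# the file. The input data here is invented; the real source would be
# dlm_controld's posix lock database sampled over time.
import matplotlib.pyplot as plt

# (nodeid, pid) -> list of (acquire_time, hold_duration, "EX"/"SH")
holds = {
    (1, 4242): [(0.0, 1.5, "EX"), (3.0, 0.5, "SH")],
    (2, 5151): [(1.6, 1.3, "EX")],
}

fig, ax = plt.subplots()
for lane, ((nodeid, pid), intervals) in enumerate(sorted(holds.items())):
    bars = [(t, d) for t, d, _ in intervals]
    colors = ["tab:red" if m == "EX" else "tab:blue" for _, _, m in intervals]
    ax.broken_barh(bars, (lane - 0.4, 0.8), facecolors=colors)

ax.set_yticks(range(len(holds)))
ax.set_yticklabels([f"node {n} pid {p}" for n, p in sorted(holds)])
ax.set_xlabel("time (s)")
ax.set_title("posix lock holders per file over time (example data)")
plt.show()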

Note:

There are many layers and communications involved, e.g. kernel<->user and corosync. Each dlm_controld instance stores its own lock database, which should be compared with the others to see how much overhead is involved there. However, I think it should be enough to show only contention/lock states in the plot, so that the user can figure out which process holds a specific lock in a specific time range. Other communications, e.g. corosync/kernel, will result in "wider" gaps between a possible contention state and the lock state.
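
As a sketch of what comparing two instances' records could look like (the event format here is assumed, not an actual dlm_controld dump format):

# Hypothetical comparison of the same grant events as recorded by two
# dlm_controld instances, to estimate the corosync/kernel propagation gap
# mentioned above. Events are assumed to be dicts with keys 'number',
# 'start', 'end', 'nodeid', 'pid' and a local 'time' the instance logged.
def propagation_gaps(events_node_a, events_node_b):
    index_b = {(e["number"], e["start"], e["end"], e["nodeid"], e["pid"]): e["time"]
               for e in events_node_b}
    gaps = []
    for e in events_node_a:
        key = (e["number"], e["start"], e["end"], e["nodeid"], e["pid"])
        if key in index_b:
            gaps.append(abs(index_b[key] - e["time"]))
    return gaps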

