Bug 2174138

Summary: dlm: visualize dlm_controld posix lock lockdb over time
Product: Red Hat Enterprise Linux 8
Reporter: Alexander Aring <aahringo>
Component: dlm
Assignee: Alexander Aring <aahringo>
Status: NEW
QA Contact: cluster-qe <cluster-qe>
Severity: high
Docs Contact:
Priority: high
Version: 8.4
CC: cluster-maint, gfs2-maint, sbradley
Target Milestone: rc
Keywords: Triaged
Target Release: ---
Hardware: All
OS: Linux
Whiteboard:
Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Story Points: ---
Clone Of:
Environment:
Last Closed:
Type: Feature Request
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
Target Upstream Version:
Embargoed:
Embargoed:

Description Alexander Aring 2023-02-28 17:54:47 UTC
Description of problem:

Users sometimes report performance issues when using posix locks with e.g. gfs2. gfs2 redirects all posix lock requests to dlm, and dlm handles them via the corosync protocol in dlm_controld.
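For context, the locks in question are ordinary fcntl byte-range locks; on a gfs2 mount the request below would be forwarded to dlm_controld. A minimal sketch (using a plain local temp file for illustration):

```python
import fcntl, os, tempfile

# Hypothetical local file for illustration; on a real cluster this would be
# a file on a gfs2 mount, where the lock request ends up in dlm_controld.
path = tempfile.mkstemp()[1]
fd = os.open(path, os.O_RDWR)

# Acquire an exclusive byte-range lock on bytes 0..99 (F_SETLKW underneath).
fcntl.lockf(fd, fcntl.LOCK_EX, 100, 0, os.SEEK_SET)

# ... critical section ...

fcntl.lockf(fd, fcntl.LOCK_UN, 100, 0, os.SEEK_SET)  # release the range
os.close(fd)
os.remove(path)
```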

There are many layers involved. Such performance issues, e.g. lock acquisition taking too long, could be normal behavior because there might be lock contention, in which case everything works as intended.

With posix locking, many processes acquiring different lock ranges on a file can be involved. Each dlm_controld keeps a local copy of a cluster-wide posix lock database that tracks the current lock modes held by every process in the cluster.
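An entry in such a database could be sketched as below; the field names and the conflict rule are illustrative only, not dlm_controld's actual structures:

```python
from dataclasses import dataclass

# Illustrative sketch of one entry in the cluster-wide posix lock database;
# the real dlm_controld data structures differ in detail.
@dataclass
class PlockEntry:
    nodeid: int      # cluster node that owns the lock
    pid: int         # process id on that node
    start: int       # first byte of the locked range
    end: int         # last byte of the locked range
    exclusive: bool  # True for a write (F_WRLCK) lock, False for a read lock

def ranges_conflict(a: PlockEntry, b: PlockEntry) -> bool:
    """Two entries conflict if their ranges overlap, they have different
    owners, and at least one of them is exclusive."""
    same_owner = (a.nodeid, a.pid) == (b.nodeid, b.pid)
    overlap = a.start <= b.end and b.start <= a.end
    return overlap and (a.exclusive or b.exclusive) and not same_owner
```

Contention on a file then means that a new request conflicts with some existing entry and has to wait.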

To give a general overview and show whether there was contention, we can visualize the posix lock modes per file over time in a plot diagram. The user can then see where contention comes from and which process on which cluster node held the lock at a specific time.
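One way to turn such a lock history into plottable data is a per-owner timeline of hold intervals. A rough sketch; the event tuple format is made up here, dlm_controld would need to log something similar:

```python
# Turn a time-ordered list of (timestamp, nodeid, pid, event) records,
# where event is "acquire" or "release", into per-owner hold intervals
# that can be fed into e.g. a Gantt-style plot of lock ownership over time.
def hold_intervals(events):
    open_since = {}   # (nodeid, pid) -> timestamp of the acquire
    intervals = []    # (nodeid, pid, start_ts, end_ts)
    for ts, nodeid, pid, event in events:
        owner = (nodeid, pid)
        if event == "acquire":
            open_since[owner] = ts
        elif event == "release" and owner in open_since:
            intervals.append((nodeid, pid, open_since.pop(owner), ts))
    return intervals

events = [
    (0.0, 1, 100, "acquire"),
    (1.5, 1, 100, "release"),
    (1.6, 2, 200, "acquire"),  # gap before this acquire hints at contention
    (3.0, 2, 200, "release"),
]
```

Each interval can then be drawn as a horizontal bar per owner; the gap between when a process requested the lock and when its interval starts would make the contention visible.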

Note:

There are many layers and communication paths involved, e.g. kernel<->user and corosync. Each dlm_controld instance stores its own lock database, which should be compared with the others to see how much overhead is involved there. However, I think it should be enough to only show contention/lock states in the plot, so that the user can figure out which process held a specific lock in a specific time range. Other communication, e.g. corosync/kernel, will result in "wider" gaps between a possible contention state and the lock state.