Bug 2174138

Summary: dlm: visualize dlm_controld posix lock lockdb over time
Product: Red Hat Enterprise Linux 8
Reporter: Alexander Aring <aahringo>
Component: dlm
Assignee: Alexander Aring <aahringo>
Status: CLOSED MIGRATED
QA Contact: cluster-qe <cluster-qe>
Severity: high
Docs Contact:
Priority: medium
Version: 8.4
CC: cluster-maint, gfs2-maint, sbradley
Target Milestone: rc
Keywords: MigratedToJIRA, Triaged
Target Release: ---
Flags: pm-rhel: mirror+
Hardware: All
OS: Linux
Whiteboard:
Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Story Points: ---
Clone Of:
Environment:
Last Closed: 2023-09-23 11:34:19 UTC
Type: Feature Request
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
Target Upstream Version:
Embargoed:

Description Alexander Aring 2023-02-28 17:54:47 UTC
Description of problem:

Sometimes users report performance issues when using posix locks with e.g. gfs2. gfs2 redirects all posix lock requests to dlm, and dlm handles them via the corosync protocol in dlm_controld.

There are many layers involved. These performance issues, e.g. lock acquisition taking too long, can be normal when there is lock contention; in that case everything works as intended.

With posix locking, many processes acquiring different lock ranges on a single file can be involved. dlm_controld stores a local copy of a cluster-wide posix lock database that tracks the current lock modes of every process in the cluster that acquires locks.
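As an illustration (not part of the original report), posix byte-range locking as issued by applications looks like the sketch below. On gfs2 these requests would be forwarded to dlm/dlm_controld; on a local filesystem the kernel handles them directly. The scratch file is an assumption for demonstration only:

```python
import fcntl
import os
import tempfile

# Scratch file to lock; on gfs2 this lock traffic would end up in
# dlm_controld's cluster-wide posix lock database.
fd, path = tempfile.mkstemp()

# Exclusive lock on byte range [0, 100).
fcntl.lockf(fd, fcntl.LOCK_EX, 100, 0)

# A non-overlapping range [100, 200) can be locked independently;
# another process locking that range would not contend with the first.
fcntl.lockf(fd, fcntl.LOCK_EX, 100, 100)

# Release both ranges and clean up.
fcntl.lockf(fd, fcntl.LOCK_UN, 200, 0)
os.close(fd)
os.unlink(path)
```

Because the ranges do not overlap, a second process taking the [100, 200) lock would see no contention, which is exactly the distinction the proposed plot should make visible.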

To give a general overview and to show whether there was contention, we can visualize the posix lock modes per file in a plot diagram. The user can then see where contention comes from and which process on which cluster node held the lock at a specific time.

Note:

There are many layers and communication paths involved, e.g. kernel<->user and corosync. Each dlm_controld instance stores its own lock database; these should be compared with each other to see how much overhead is involved there. However, I think it is enough to show only the contention/lock states in the plot, so that the user can figure out which process holds a specific lock in a specific time range. Other communication, e.g. corosync/kernel, will only result in "wider" gaps between a possible contention state and the lock state.
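A rough sketch of the post-processing such a tool could do. The event log format here is hypothetical (it is not the actual dlm_controld lockdb format): given timestamped lock/unlock events per (node, pid, file, byte range), pair them into hold intervals and flag overlapping ranges held at overlapping times as contention:

```python
# Hypothetical event tuples: (time, op, node, pid, file, start, end)
# where op is "lock" or "unlock" and [start, end) is a byte range.
events = [
    (0.0, "lock",   "node1", 100, "db.file", 0,  100),
    (1.0, "lock",   "node2", 200, "db.file", 50, 150),  # overlaps -> contention
    (2.0, "unlock", "node1", 100, "db.file", 0,  100),
    (3.0, "unlock", "node2", 200, "db.file", 50, 150),
]

def hold_intervals(events):
    """Pair lock/unlock events into (file, node, pid, start, end, t0, t1)."""
    open_locks = {}
    intervals = []
    for t, op, node, pid, f, s, e in sorted(events):
        key = (node, pid, f, s, e)
        if op == "lock":
            open_locks[key] = t
        else:
            intervals.append((f, node, pid, s, e, open_locks.pop(key), t))
    return intervals

def contention(intervals):
    """Pairs of intervals on the same file whose byte ranges and hold times overlap."""
    pairs = []
    for i, a in enumerate(intervals):
        for b in intervals[i + 1:]:
            same_file = a[0] == b[0]
            ranges_overlap = a[3] < b[4] and b[3] < a[4]
            times_overlap = a[5] < b[6] and b[5] < a[6]
            if same_file and ranges_overlap and times_overlap:
                pairs.append((a, b))
    return pairs

ivs = hold_intervals(events)
print(len(ivs), len(contention(ivs)))  # 2 intervals, 1 contended pair
```

Each interval maps directly to one horizontal bar in the proposed per-file plot (y = node/pid, x = time), with contended pairs highlighted; only the lock states are needed, per the note above.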

Comment 4 RHEL Program Management 2023-09-23 11:31:52 UTC
Issue migration from Bugzilla to Jira is in process at this time. This will be the last message in Jira copied from the Bugzilla bug.

Comment 5 RHEL Program Management 2023-09-23 11:34:19 UTC
This BZ has been automatically migrated to the issues.redhat.com Red Hat Issue Tracker. All future work related to this report will be managed there.

Due to differences in account names between systems, some fields were not replicated. Be sure to add yourself to the Jira issue's "Watchers" field to continue receiving updates, and add others to the "Need Info From" field to continue requesting information.

To find the migrated issue, look in the "Links" section for a direct link to the new issue location. The issue key will have an icon of 2 footprints next to it, and begin with "RHEL-" followed by an integer.  You can also find this issue by visiting https://issues.redhat.com/issues/?jql= and searching the "Bugzilla Bug" field for this BZ's number, e.g. a search like:

"Bugzilla Bug" = 1234567

In the event you have trouble locating or viewing this issue, you can file an issue by sending mail to rh-issues. You can also visit https://access.redhat.com/articles/7032570 for general account information.