Note: This bug is displayed in read-only format because the product is no longer active in Red Hat Bugzilla.

Bug 883842

Summary: clvmd not working properly if multiple clients talk to clvmd (part2)
Product: Red Hat Enterprise Linux 7
Reporter: Peter Rajnoha <prajnoha>
Component: lvm2
Assignee: LVM and device-mapper development team <lvm-team>
lvm2 sub component: Clustering / clvmd
QA Contact: cluster-qe <cluster-qe>
Status: CLOSED WONTFIX
Docs Contact:
Severity: unspecified
Priority: medium
CC: agk, cmarthal, coughlan, djansa, dwysocha, heinzm, jbrassow, msnitzer, nperic, prajnoha, prockai, thornber, zkabelac
Version: 7.3
Keywords: Triaged
Target Milestone: rc
Target Release: ---
Hardware: All
OS: Linux
Whiteboard:
Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Story Points: ---
Clone Of: 730289
Environment:
Last Closed: 2016-01-19 00:13:16 UTC
Type: ---
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
Target Upstream Version:
Embargoed:

Description Peter Rajnoha 2012-12-05 12:21:20 UTC
This bz will track further clvmd fixes that should go into 6.5 and which relate to the original report in bug #730289.

Comment 8 Peter Rajnoha 2015-10-16 08:51:56 UTC
Well, sorry for the lame description in this bz. It should have read:

Bug #730289 dealt with high memory consumption when a high number of clients were connected to clvmd; we fixed this by decreasing each thread's stack size.

However, problems could still occur if there are numerous clvmd clients and the system is running out of resources (mainly open file descriptors). We concluded that the tested scenario was an artificial one, not one that normally happens in real-life production environments (we have also never had bug reports from users about these issues).

We could possibly add more checks for available resources and deny processing a request when the limit is reached, but I think this is not necessary for RHEL6. Moving to RHEL7 for consideration.

Comment 10 Jonathan Earl Brassow 2016-01-19 00:13:16 UTC
(In reply to Peter Rajnoha from comment #8)
> Well, sorry for the lame description in this bz. It should have read:
> 
> Bug #730289 dealt with high memory consumption when a high number of
> clients were connected to clvmd; we fixed this by decreasing each
> thread's stack size.
> 
> However, problems could still occur if there are numerous clvmd clients
> and the system is running out of resources (mainly open file
> descriptors). We concluded that the tested scenario was an artificial
> one, not one that normally happens in real-life production environments
> (we have also never had bug reports from users about these issues).
> 
> We could possibly add more checks for available resources and deny
> processing a request when the limit is reached, but I think this is not
> necessary for RHEL6. Moving to RHEL7 for consideration.

We can open a new bug if there is some kind of priority around this issue someday.  Looks like a non-issue for now and I am closing this bug.