Bug 769053
| Summary: | 'Internal error: Maps lock < unlock messages' when running hundreds of lvm ops | | |
|---|---|---|---|
| Product: | Red Hat Enterprise Linux 5 | Reporter: | Corey Marthaler <cmarthal> |
| Component: | lvm2 | Assignee: | Alasdair Kergon <agk> |
| Status: | CLOSED ERRATA | QA Contact: | Cluster QE <mspqa-list> |
| Severity: | high | Docs Contact: | |
| Priority: | high | | |
| Version: | 5.8 | CC: | agk, dwysocha, heinzm, jbrassow, mbroz, nperic, prajnoha, prockai, thornber, zkabelac |
| Target Milestone: | rc | | |
| Target Release: | --- | | |
| Hardware: | x86_64 | | |
| OS: | Linux | | |
| Whiteboard: | | | |
| Fixed In Version: | lvm2-2.02.88-6.el5 | Doc Type: | Bug Fix |
| Doc Text: | If preallocated memory is too low, lvm2 can issue an error message like "Internal error: Maps lock < unlock". The message was changed to "Reserved memory not enough. Increase activation/reserved_memory?" to provide better information about the source of the problem to the administrator. Preallocated memory can be changed in lvm.conf (reserved_memory option). | | |
| Story Points: | --- | | |
| Clone Of: | | Environment: | |
| Last Closed: | 2012-02-21 06:06:00 UTC | Type: | --- |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Embargoed: | | | |
| Bug Depends On: | | | |
| Bug Blocks: | 784372 | | |
Description

Corey Marthaler, 2011-12-19 19:03:44 UTC
This is just a stupid error report in lvm which should not be in a stable release. I'll remove it in the RHEL5.8 build (while upstream can discuss this for another few years).

There was a similar problem upstream last year and in a RHEL6 kernel patch backport (private bug 638525). Current upstream does not have this problem as far as I'm aware, suggesting there could be a problem with a RHEL5 kernel patch backport which we should try to identify.

(In reply to comment #3)
> There was a similar problem upstream last year and in a RHEL6 kernel patch
> backport (private bug 638525). Current upstream does not have this problem as
> far as I'm aware, suggesting there could be a problem with a RHEL5 kernel patch
> backport which we should try to identify.

It seems it reappears in some scenarios in RHEL6 as well - bug #740868.

Is it possible to repeat this test with an increased size for reserved_memory, e.g. in lvm.conf:

activation {
    reserved_memory = 32768
}

Is there any difference?

I've checked the RHEL5.8 kernel for mlock/munlock issues, and it seems to behave as well as the upstream kernel, so it should not be the cause of this problem here.

If it is a problem with pre-allocated memory overflow, I would really suggest detecting this situation and displaying a more useful message to the user. "Internal error: Maps lock 33738752 < unlock 35778560" says absolutely nothing to an administrator; it is a message for a developer. It should be "Internal error: preallocated memory too low, please consider increasing reserved_memory in lvm.conf" or so.

(In reply to comment #6)
> If it is problem with pre-allocated memory overflow, I would really suggest to
> detect this situation and display some more useful message to user.
>
> "Internal error: Maps lock 33738752 < unlock 35778560" says absolutely nothing
> to administrator, it is message for developer.
>
> It should be "Internal error: preallocated memory too low, please consider
> increasing reserved_memory in lvm.conf" or so.
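A minimal sketch of the suggested lvm.conf change. The file path and stub content below are illustrative only (a real system edits /etc/lvm/lvm.conf); the activation section comment and the 8192 KB default are taken from the comments quoted in this bug.

```shell
# Illustrative stub in /tmp, not the real /etc/lvm/lvm.conf.
cat > /tmp/lvm.conf.sample <<'EOF'
activation {
    # How much memory (in KB) to reserve for use while devices suspended
    reserved_memory = 8192
}
EOF

# Bump the preallocated pool from the 8192 KB default to the 32768 KB
# value suggested in this bug.
sed -i 's/reserved_memory = 8192/reserved_memory = 32768/' /tmp/lvm.conf.sample

# Show the resulting setting.
grep 'reserved_memory' /tmp/lvm.conf.sample
```

On a live system the new value takes effect for subsequent lvm commands; no daemon restart is involved since each tool reads lvm.conf on startup.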
Well, we have here 2 cases which cannot be easily distinguished - one is our internal problem, when we allocate too much memory where we shouldn't; the other is preallocated memory that is too small. The issue with preallocated memory is probably somewhat similar to our preallocated stack issue, where we are going to eliminate this variable. However, I'm now not sure whether there is some easy formula to estimate the maximum memory needed during activation when we know all the metadata up front - but one surely must exist. Anyway, for now, users who are experimenting with large MDA sizes need to raise reserved_memory to avoid the Maps reporting.

So, it's a genuine internal error - it didn't pre-allocate sufficient memory. 500 snapshots of the same origin is not a sensible case we need to support, though.

This bug does not appear to happen when bumping up the reserved_memory to 32768. I'll let the tests run a few more iterations tonight, however, to make sure.

# How much memory (in KB) to reserve for use while devices suspended
#reserved_memory = 8192
reserved_memory = 32768

Tests continued to run all night with the reserved_memory bumped up to 32768. I was then able to reproduce this issue right away when I set it back to the default of 8192.

Ran tests with 500 mirrors as well, with reserved_memory = 32768 in lvm.conf. The mirrors were created and deleted successfully without errors.

In line with Milan's suggestion, I've changed the message upstream to:

Internal error: Reserved memory (33738752) not enough: used 35778560. Increase activation/reserved_memory?

Error message changed in lvm2-2.02.88-6.el5 according to comment #13.

Technical note added. If any revisions are required, please edit the "Technical Notes" field accordingly. All revisions will be proofread by the Engineering Content Services team.

New Contents:
If preallocated memory is too low, lvm2 can issue an error message like "Internal error: Maps lock < unlock". The message was changed to "Reserved memory not enough.
Increase activation/reserved_memory?" to provide better information about the source of the problem to the administrator. Preallocated memory can be changed in lvm.conf (reserved_memory option).

Fix verified in the latest rpms.

2.6.18-301.el5
lvm2-2.02.88-6.el5                  BUILT: Wed Jan 18 03:34:29 CST 2012
lvm2-cluster-2.02.88-6.el5          BUILT: Wed Jan 18 03:33:26 CST 2012
device-mapper-1.02.67-2.el5         BUILT: Mon Oct 17 08:31:56 CDT 2011
device-mapper-event-1.02.67-2.el5   BUILT: Mon Oct 17 08:31:56 CDT 2011
cmirror-1.1.39-14.el5               BUILT: Wed Nov 2 17:25:33 CDT 2011
kmod-cmirror-0.1.22-3.el5           BUILT: Tue Dec 22 13:39:47 CST 2009

SCENARIO - [many_snaps] Create 500 snapshots of an origin volume
Recreating VG and PVs to increase metadata size
Writing physical volume data to disk "/dev/sdb1"
Writing physical volume data to disk "/dev/sdc1"
Making origin volume
Making 500 snapshots of origin volume
1 2 3 4 5 6 7 8 9 10 11 12 13
Although the snap create passed, errors were found in it's output
Rounding up size to full physical extent 52.00 MB
Logical volume "500_13" created
Internal error: Reserved memory (21938176) not enough: used 22061056. Increase activation/reserved_memory?

Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

http://rhn.redhat.com/errata/RHBA-2012-0161.html
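For reference, the reworked message embeds both byte counts, so the shortfall can be read straight out of a log line. A small sketch (the message text is copied verbatim from the verification log above; the extraction and arithmetic are the only additions):

```shell
# Message copied verbatim from the verification log in this bug.
msg='Internal error: Reserved memory (21938176) not enough: used 22061056. Increase activation/reserved_memory?'

# Pull out the two byte counts: the reserved pool size and the actual usage.
reserved=$(echo "$msg" | sed 's/.*(\([0-9]*\)).*/\1/')
used=$(echo "$msg" | sed 's/.*used \([0-9]*\).*/\1/')

# 22061056 - 21938176 = 122880 bytes, i.e. the run overran the pool by 120 KB.
echo "shortfall_kb=$(( (used - reserved) / 1024 ))"
# prints: shortfall_kb=120
```

Since reserved_memory is given in KB in lvm.conf, the shortfall in KB is the amount by which the setting would minimally need to grow for that particular run.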