| Summary: | Memory leak in vgremove | | |
|---|---|---|---|
| Product: | Red Hat Enterprise Linux 6 | Reporter: | Nenad Peric <nperic> |
| Component: | lvm2 | Assignee: | Zdenek Kabelac <zkabelac> |
| Status: | CLOSED WONTFIX | QA Contact: | Cluster QE <mspqa-list> |
| Severity: | unspecified | Docs Contact: | |
| Priority: | high | | |
| Version: | 6.3 | CC: | agk, cmarthal, dwysocha, heinzm, jbrassow, mbroz, msnitzer, prajnoha, prockai, thornber, zkabelac |
| Target Milestone: | rc | | |
| Target Release: | --- | | |
| Hardware: | x86_64 | | |
| OS: | Linux | | |
| Whiteboard: | | | |
| Fixed In Version: | | Doc Type: | Bug Fix |
| Doc Text: | | Story Points: | --- |
| Clone Of: | | Environment: | |
| Last Closed: | 2012-04-20 12:24:52 UTC | Type: | Bug |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
Description
Nenad Peric
2012-04-03 16:58:58 UTC
Reproduced with:

    lvm2-libs-2.02.95-3.el6.x86_64
    lvm2-cluster-2.02.95-3.el6.x86_64
    lvm2-2.02.95-3.el6.x86_64
    cmirror-2.02.95-3.el6.x86_64
    device-mapper-1.02.74-3.el6.x86_64
    device-mapper-libs-1.02.74-3.el6.x86_64
    device-mapper-event-1.02.74-3.el6.x86_64
    device-mapper-event-libs-1.02.74-3.el6.x86_64

Created only 200 snapshots of the origin this time. Tried deleting the VG with `vgremove -ff snapper`. It went a bit faster, but memory consumption still increased with every removal. Here is the report from around mid-way:

    Logical volume "500_130" successfully removed
    Cpu(s): 15.5%us, 45.5%sy, 0.0%ni, 6.1%id, 21.9%wa, 0.0%hi, 11.1%si, 0.0%st
    Mem:  5861712k total, 1763340k used, 4098372k free, 142620k buffers
    Swap: 2064376k total, 0k used, 2064376k free, 231816k cached

      PID  PR  NI  VIRT  RES  SHR S %CPU %MEM   TIME+  COMMAND
    10547   2 -18 1032m 920m 3512 S 15.9 16.1 1:33.69  vgremove
     1324   0 -20     0    0    0 S  7.6  0.0 1:18.31  iscsi_q_8
     2874   2 -18  683m 151m  97m S  2.0  2.6 0:44.83  dmeventd

The problem is probably related to the assumption that using something like 200 old-style snapshots is 'well' supported by lvm2. In fact that is only theoretically usable: the table construction needed to process such a setup is very heavy, and no time has been spent optimizing this largely impractical case. Even 20 old-style snapshots of the same origin are well beyond any practical use.

Another issue is optimizing the removal of many devices at once, which is being considered for 6.4. For now every device is removed individually, which is very slow with hundreds or even thousands of devices, and extremely slow for old-style snapshots. It is also inconvenient when we want to drop e.g. a whole thin pool, which should ideally deactivate all thin volumes and remove all their entries from the metadata in one pass; for now this results in a large number of metadata writes and table updates.

So it is not a leak; it is an extreme case that will not work well anyway with the old snapshot implementation (or will be terribly slow).

Development Management has reviewed and declined this request. You may appeal this decision by reopening this request.

This bugzilla is just confirming known limitations of the tools. Both problems are already being tracked and solved elsewhere (multiple snapshots, now addressed by thin provisioning; improved tool speed when handling multiple LVs at once).
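For reference, a minimal reproduction sketch of the scenario described above, not taken from the original report: the VG name `snapper` comes from the command shown, but the origin LV name `origin`, the 100M snapshot size, and the sampling interval are assumptions.

```sh
#!/bin/bash
# Reproduction sketch (assumes VG "snapper" already exists with enough free
# extents and contains an origin LV named "origin"; names/sizes are illustrative).
VG=snapper
ORIGIN=origin

# Create 200 old-style (non-thin) snapshots of the same origin.
for i in $(seq 1 200); do
    lvcreate -s -n "snap_${i}" -L 100M "${VG}/${ORIGIN}"
done

# Remove the whole VG; the tool removes each LV one at a time.
vgremove -ff "$VG" &
pid=$!

# Sample the resident set size of vgremove while it runs to watch memory growth.
while kill -0 "$pid" 2>/dev/null; do
    grep VmRSS "/proc/$pid/status"
    sleep 5
done
wait "$pid"
```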
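The closing comment points to thin provisioning as the replacement for large numbers of old-style snapshots. A rough sketch of that approach, with illustrative names and sizes only (the pool name, 10G pool size, and 5G virtual origin size are not from this bug), might look like:

```sh
# Thin-provisioning alternative: many snapshots of one origin share a single pool.
lvcreate --type thin-pool -L 10G -n pool snapper   # thin pool in VG "snapper"
lvcreate -V 5G -T snapper/pool -n origin           # thin origin volume
for i in $(seq 1 200); do
    lvcreate -s -n "snap_${i}" snapper/origin      # thin snapshots need no COW size
done
```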