Bug 1638888 - On heavily used clusters, /etc/lvm/archive can consume all disk space preventing volumes from being deleted
Keywords:
Status: CLOSED DUPLICATE of bug 1561680
Alias: None
Product: Red Hat Gluster Storage
Classification: Red Hat Storage
Component: rhgs-server-container
Version: cns-3.9
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: high
Target Milestone: ---
Assignee: Raghavendra Talur
QA Contact: Prasanth
URL:
Whiteboard:
Depends On: 1561680
Blocks: OCS-3.11.1-devel-triage-done 1642792
 
Reported: 2018-10-12 17:35 UTC by Dan Yocum
Modified: 2021-12-10 17:54 UTC
CC List: 8 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2019-11-08 19:32:11 UTC
Embargoed:



Description Dan Yocum 2018-10-12 17:35:54 UTC
Description of problem:

On heavily used CNS clusters, with >1000 volumes, the /etc/lvm/archive dir can fill up the root (/) partition preventing further volume deletions.


Version-Release number of selected component (if applicable):

3.9


How reproducible:

When it fills up

Steps to Reproduce:
1.  Create/delete thousands of volumes from OCP

Actual results:

/ fills up!


Expected results:


/ doesn't fill up!  If a volume is deleted, the corresponding archive *.vg file should be removed.

Additional info:
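
For context, LVM's own retention of these archives is controlled by the backup section of /etc/lvm/lvm.conf. A sketch with the stock option names (the values shown are illustrative, not a recommendation for this bug):

```
backup {
    # Keep metadata archives at all (1 = on, the default)
    archive = 1
    # Where the per-VG archive files (*.vg) are written
    archive_dir = "/etc/lvm/archive"
    # Keep at least this many archives per VG...
    retain_min = 10
    # ...and everything newer than this many days
    retain_days = 30
}
```

With thousands of short-lived volumes, even modest retain_min/retain_days values can accumulate a large number of *.vg files, which is consistent with the symptom reported here.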

Comment 3 Milind Changire 2018-10-13 05:09:53 UTC
This has got nothing to do with the redhat-storage-server RPM per se.

The volume-related *.vg files could be deleted as part of a glusterfs volume-delete hook script.

Could you paste the name of one volume and its related *.vg files? This will give an idea of whether the names of the *.vg files can be deduced in some way.

Is more than one *.vg file created per volume?

Comment 4 Milind Changire 2018-10-13 05:51:20 UTC
Also, since the LVM metadata is archived for future reference by the LVM tools, it will be difficult to decide on its usability at the gluster volume level, because the logical volume will continue to exist even after the gluster volume has been deleted.

The LVM metadata archives (/etc/lvm/archive) can safely be deleted once the lvremove command succeeds for a logical volume. This could be achieved with a shell function wrapper around the lvremove command, or with an Ansible playbook task that removes the LVM metadata archives after the gluster volume/logical volume is deleted.
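
A minimal sketch of such a wrapper, assuming the stock lvm.conf defaults (per-VG archive files named <VG>_<seq>-<id>.vg under /etc/lvm/archive). The function and variable names here are illustrative, not part of any shipped tooling:

```shell
#!/bin/sh
# Where LVM keeps its metadata archives (stock lvm.conf default).
ARCHIVE_DIR="${ARCHIVE_DIR:-/etc/lvm/archive}"

# Delete the metadata archives belonging to one volume group.
# Archive files are named after the VG, e.g. vg_foo_00042-123456789.vg
prune_vg_archives() {
    vg="$1"
    rm -f -- "$ARCHIVE_DIR/${vg}_"*.vg
}

# Wrapper: remove a logical volume, then prune the VG's archives,
# but only if lvremove actually reported success.
lvremove_and_prune() {
    vg="$1"
    lv="$2"
    if lvremove -y "$vg/$lv"; then
        prune_vg_archives "$vg"
    fi
}
```

Note that archives are kept per volume group, not per logical volume, so pruning removes all of that VG's archived metadata; that is the practical granularity LVM offers.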

Comment 5 Niels de Vos 2018-10-14 14:02:27 UTC
This would be an OCS (previously CNS) issue, so I am changing the components accordingly. Bug 1561680 is fixed in a recent release. If possible, the customer should upgrade to a more recent version (then this bz can be closed as a duplicate/next-release).

