Description of problem:
On heavily used CNS clusters with >1000 volumes, the /etc/lvm/archive directory can fill up the root (/) partition, preventing further volume deletions.

Version-Release number of selected component (if applicable):
3.9

How reproducible:
Whenever the archive directory fills up the partition

Steps to Reproduce:
1. Create/delete thousands of volumes from OCP

Actual results:
/ fills up!

Expected results:
/ doesn't fill up! If a volume is deleted, the corresponding archive *.vg file should be removed.

Additional info:
This has nothing to do with the redhat-storage-server RPM per se. The volume-related *.vg files could be deleted as part of a glusterfs volume-delete hook script. Could you paste the name of one volume and its related *.vg files? That will give an idea of whether the names of the *.vg files can be deduced in some way. Is more than one *.vg file created per volume?
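One possible shape for such a hook, sketched under assumptions: glusterd runs post-delete hooks (conventionally from /var/lib/glusterd/hooks/1/delete/post/) and passes --volname=&lt;name&gt; among the arguments. The volume-name-to-*.vg-file mapping is exactly the open question above, so the cleanup pattern here is only a placeholder, and ARCHIVE_DIR / the function names are illustrative, not a shipped tool.

```shell
#!/bin/sh
# Hypothetical sketch of a glusterfs post-delete hook script.
# The volume-name -> archive-file mapping below is a placeholder until
# the reporter confirms how the *.vg file names relate to the volume.

ARCHIVE_DIR="${ARCHIVE_DIR:-/etc/lvm/archive}"

# Pull the volume name out of glusterd's hook arguments (--volname=<name>).
parse_volname() {
    for arg in "$@"; do
        case "$arg" in
            --volname=*) printf '%s\n' "${arg#--volname=}" ;;
        esac
    done
}

# Placeholder cleanup: assumes archive files can be globbed by volume name.
prune_archives_for_volume() {
    volname="$1"
    # -f: no error if the glob matched nothing
    rm -f "${ARCHIVE_DIR}/${volname}"_*.vg
}

# A real hook would end with:
#   prune_archives_for_volume "$(parse_volname "$@")"
```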
Also, since the LVM metadata is archived for future reference by the LVM tools, it will be difficult to decide on its usability at the gluster volume level, because the logical volume will continue to exist even after the gluster volume has been deleted. The LVM metadata archives (/etc/lvm/archive) can ideally be deleted once the lvremove command succeeds for a logical volume. This could be achieved with a shell function wrapper around the lvremove command, or with an ansible playbook section that removes the LVM metadata archives after the gluster volume/logical volume is deleted.
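A minimal sketch of the wrapper idea, assuming the default archive location /etc/lvm/archive and LVM's usual "&lt;vgname&gt;_&lt;seq&gt;-&lt;id&gt;.vg" archive file naming; the function names are illustrative, not an existing tool.

```shell
# Sketch: prune LVM metadata archives only after a successful lvremove.
# ARCHIVE_DIR and the archive naming scheme are assumptions.
ARCHIVE_DIR="${ARCHIVE_DIR:-/etc/lvm/archive}"

# Delete archived metadata belonging to one volume group.
prune_vg_archives() {
    vg_name="$1"
    # -f: no error if the glob matched nothing
    rm -f "${ARCHIVE_DIR}/${vg_name}"_*.vg
}

# Wrapper around lvremove: prune archives only if the removal succeeded.
lvremove_and_prune() {
    vg="$1"
    lv="$2"
    if lvremove -f "${vg}/${lv}"; then
        prune_vg_archives "$vg"
    fi
}
```

Note that pruning the whole VG's archives is only safe if no other LVs in that VG still need their archived metadata; an ansible task would face the same question.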
This would be an OCS (previously CNS) issue; changing components accordingly. Bug 1561680 is fixed in a recent release. If possible, the customer should upgrade to a more recent version (then this bz can be closed as a duplicate/fixed-in-next-release).