Red Hat Bugzilla – Bug 1262884
Data Tiering: Clear/delete database after (or as the final step of) detaching the hot tier, to avoid persistent storage wastage
Last modified: 2017-03-08 06:04:53 EST
Description of problem:
We currently save the heat patterns of files in a database stored on persistent brick storage. Suppose hundreds of thousands of cold files are ready to be promoted; all of those files get marked in the db. When a user then successfully issues a detach tier and the tier is detached, the db still remains on the cold-tier bricks. We need to clean these cold bricks to reclaim that space, otherwise a large amount of disk space is wasted (in the case of a huge number of inodes).
Version-Release number of selected component (if applicable):
[root@zod ~]# rpm -qa|grep gluster
[root@zod ~]# gluster --version
glusterfs 3.7.4 built on Sep 12 2015 01:35:35
Repository revision: git://git.gluster.com/glusterfs.git
Copyright (c) 2006-2011 Gluster Inc. <http://www.gluster.com>
GlusterFS comes with ABSOLUTELY NO WARRANTY.
You may redistribute copies of GlusterFS under the terms of the GNU General Public License.
Steps to Reproduce:
1. Create a tiered volume.
2. Create some files on the cold tier and heat them so that they are due to be promoted next.
3. Note down the db size on each brick (e.g. du -sh <brick_path>/.glusterfs/<brick_name>.db).
4. Detach the tier completely (start and commit).
5. Check the db file and its size on the cold bricks.
It can be seen that the db still exists and can occupy a lot of space when there are many files. This db is of no use on a regular (non-tiered) volume.
Suggested fix: delete the db from a volume whose tier has been detached.
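A minimal sketch of that cleanup, run against each former cold-tier brick once the detach has been committed. The db path follows the <brick_path>/.glusterfs/<brick_name>.db scheme from the reproduce steps; the helper name and the exact naming scheme are assumptions to verify against your deployment before deleting anything.

```shell
# Hypothetical cleanup helper: remove the leftover tiering db from one
# former cold-tier brick after "detach tier" has been committed.
cleanup_tier_db() {
    brick_path=$1
    brick_name=$(basename "$brick_path")
    # db naming scheme as described in this report
    db_file="$brick_path/.glusterfs/$brick_name.db"
    if [ -f "$db_file" ]; then
        du -sh "$db_file"   # show how much space the db was occupying
        rm -f "$db_file"    # reclaim it
    fi
}
```

Usage: invoke once per brick path, e.g. `cleanup_tier_db /bricks/brick1`, on every node that hosted a cold-tier brick.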
For example: a db for 1000 files took about 160 KB, so a db for 1 million files could take at least 160-300 MB, which is significant.
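The quoted figures scale linearly from the measured 160 KB per 1000 files; a quick arithmetic check of that extrapolation (linear growth is an assumption, since actual per-entry overhead may vary):

```shell
# Linear extrapolation of the reported db overhead: ~160 KB per 1000 files.
files=1000000
kb_per_1000=160
echo "$(( files / 1000 * kb_per_1000 / 1024 )) MB"   # ~156 MB for 1 million files
```

This lands at the low end of the 160-300 MB range cited above.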
We will defer this, but keep it as a future enhancement.
This bug is being closed because GlusterFS-3.7 has reached its end-of-life.
Note: This bug is being closed using a script. No verification has been performed to check if it still exists on newer releases of GlusterFS.
If this bug still exists in newer GlusterFS releases, please reopen this bug against the newer release.