Bug 1238549 - Data Tiering: Wastage of inodes as all files in cold tier get hash linked (T file) if a lookup (ls -l) is performed after attaching tier (and they remain indefinitely in that state)
Summary: Data Tiering: Wastage of Inodes as all files in cold tier get hash linked(T ...
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: Red Hat Gluster Storage
Classification: Red Hat Storage
Component: tier
Version: rhgs-3.1
Hardware: Unspecified
OS: Unspecified
Priority: high
Severity: high
Target Milestone: ---
Target Release: ---
Assignee: Dan Lambright
QA Contact: Nag Pavan Chilakam
URL:
Whiteboard: tier-attach-detach
Depends On: 1275998
Blocks: 1276742
 
Reported: 2015-07-02 06:45 UTC by Nag Pavan Chilakam
Modified: 2017-03-25 14:23 UTC
CC List: 6 users

Fixed In Version: glusterfs-3.7.5-19
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2017-03-25 14:23:09 UTC
Embargoed:


Attachments
QE log while raising the bug (26.48 KB, text/plain)
2015-07-02 06:47 UTC, Nag Pavan Chilakam

Description Nag Pavan Chilakam 2015-07-02 06:45:55 UTC
Description of problem:
=========================
Any files existing in the cold tier (i.e., created before attach-tier) get hash linked (T files are created) in the hot tier once a lookup (ls -l) is issued.
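
For context, such a link file ("T file") appears on the hot brick as a zero-byte file whose mode is 1000, i.e. only the sticky bit set. An illustrative ls -l entry (paths as used later in this report) would look like:

---------T. 2 root root 0 Jul  2 12:20 /rhs/brick7/tvol2/cf1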

Version-Release number of selected component (if applicable):
=============================================================
[root@tettnang glusterfs]# gluster --version
glusterfs 3.7.1 built on Jun 28 2015 11:01:14
Repository revision: git://git.gluster.com/glusterfs.git
Copyright (c) 2006-2011 Gluster Inc. <http://www.gluster.com>
GlusterFS comes with ABSOLUTELY NO WARRANTY.
You may redistribute copies of GlusterFS under the terms of the GNU General Public License.
[root@tettnang glusterfs]# rpm -qa|grep gluster
glusterfs-cli-3.7.1-6.el7rhgs.x86_64
glusterfs-client-xlators-3.7.1-6.el7rhgs.x86_64
glusterfs-fuse-3.7.1-6.el7rhgs.x86_64
glusterfs-geo-replication-3.7.1-6.el7rhgs.x86_64
glusterfs-libs-3.7.1-6.el7rhgs.x86_64
glusterfs-api-3.7.1-6.el7rhgs.x86_64
glusterfs-rdma-3.7.1-6.el7rhgs.x86_64
glusterfs-3.7.1-6.el7rhgs.x86_64
glusterfs-debuginfo-3.7.1-6.el7rhgs.x86_64
samba-vfs-glusterfs-4.1.17-7.el7rhgs.x86_64
glusterfs-server-3.7.1-6.el7rhgs.x86_64





Steps to Reproduce:
===================
1. Create a dist-rep volume and start it
[root@tettnang glusterfs]# gluster v create tvol2 replica 2 tettnang:/rhs/brick1/tvol2  zod:/rhs/brick1/tvol2 tettnang:/rhs/brick2/tvol2 zod:/rhs/brick2/tvol2
volume create: tvol2: success: please start the volume to access data
[root@tettnang glusterfs]# gluster v start tvol2
volume start: tvol2: success


2. Now mount the volume and add some files
[root@rhs-client1 mnt]# mkdir tvol2;mount -t glusterfs tettnang:tvol2 tvol2
[root@rhs-client1 mnt]# cd tvol2
[root@rhs-client1 tvol2]# ls -alrt
total 0
drwxr-xr-x. 4 root root 78 Jul  2 11:54 .
drwxr-xr-x. 3 root root 48 Jul  2 11:54 .trashcan
drwxr-xr-x. 6 root root 54 Jul  2 12:19 ..
[root@rhs-client1 tvol2]# touch cf1
[root@rhs-client1 tvol2]# touch cf{2..5}
[root@rhs-client1 tvol2]# ls -l
total 0
-rw-r--r--. 1 root root 0 Jul  2 12:19 cf1
-rw-r--r--. 1 root root 0 Jul  2 12:20 cf2
-rw-r--r--. 1 root root 0 Jul  2 12:20 cf3
-rw-r--r--. 1 root root 0 Jul  2 12:20 cf4
-rw-r--r--. 1 root root 0 Jul  2 12:20 cf5
[root@rhs-client1 tvol2]# touch cfnew{1..3}
[root@rhs-client1 tvol2]# vi cfnew1 
[root@rhs-client1 tvol2]# cat cfnew1
created these file cfnew{1..3} after disabling performance io options on regular vol

3. Now turn off performance parameters for the volume
[root@tettnang glusterfs]# gluster v set tvol2 performance.quick-read off
volume set: success
[root@tettnang glusterfs]# gluster v set tvol2 performance.io-cache off
volume set: success

4. Now attach a tier

[root@tettnang glusterfs]# gluster v attach-tier tvol2 tettnang:/rhs/brick7/tvol2 yarrow:/rhs/brick7/tvol2;gluster v set tvol2  features.ctr-enabled on
Attach tier is recommended only for testing purposes in this release. Do you want to continue? (y/n) y
volume attach-tier: success
volume rebalance: tvol2: success: Rebalance on tvol2 has been started successfully. Use rebalance status command to check status of the rebalance process.
ID: c099892f-edba-474c-bd20-6fd17bf24f8d

5. Now wait for about 5 minutes and check the backend bricks to see whether any files have moved to the hot tier. They are still in the cold tier, as expected, since no file action was performed.
6. Now just issue an "ls -l" on the mount point.
7. Now observe the backend bricks: all the files created before attach-tier have been hash linked to the hot tier (a verification sketch follows these steps).
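
A minimal verification sketch (brick paths as in the steps above; output omitted):

# Check the tier rebalance daemon, as suggested by the attach-tier output:
gluster volume rebalance tvol2 status

# List candidate link files on the hot brick: zero-byte regular files with
# only the sticky bit set (mode exactly 1000), skipping .glusterfs metadata:
find /rhs/brick7/tvol2 -type f -perm 1000 -size 0 ! -path '*/.glusterfs/*'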





Expected results:
===================
Simply linking all legacy files to the hot tier is an unnecessary action and a waste of resources (one inode per link file); a lookup should not create these links.



Additional info:
===============
I don't know how relevant it is, but I noticed that the files got linked to the hot brick on that particular local node itself. The hot brick on the other node did not contain even one such link. (This latter node had only a hot brick and no cold brick.)


Also, even after 5 minutes I see that the hash links remain (however, I have not performed any kind of action, like adding new files, etc.).

Comment 2 Nag Pavan Chilakam 2015-07-02 06:47:39 UTC
Created attachment 1045366 [details]
QE log while raising the bug

Comment 3 Nag Pavan Chilakam 2015-07-02 06:54:02 UTC
Sos reports @ rhsqe-repo.lab.eng.blr.redhat.com:/home/repo/sosreports/bug.1238549

Comment 4 Nag Pavan Chilakam 2015-07-02 09:09:24 UTC
Just imagine the case of an existing customer who wants to attach a hot tier to an existing volume containing, say, 1 million files. A single lookup would then waste 1 million inodes on link files. This is not ideal for deployments.
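
The per-brick inode cost can be checked directly with df -i before and after the lookup (illustrative sketch; /rhs/brick7 is assumed to be the hot brick's filesystem, as in the paths below):

df -i /rhs/brick7    # IUsed should jump by roughly one inode per legacy file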
[root@tettnang tmp]# stat /rhs/brick*/tvol2/cfnew1  (cold brick)
  File: ‘/rhs/brick2/tvol2/cfnew1’
  Size: 85        	Blocks: 8          IO Block: 4096   regular file
Device: fd15h/64789d	Inode: 872415689   Links: 2
Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
Context: system_u:object_r:unlabeled_t:s0
Access: 2015-07-02 11:57:41.681267279 +0530
Modify: 2015-07-02 11:57:41.682267278 +0530
Change: 2015-07-02 11:57:41.685267278 +0530
 Birth: -
  File: ‘/rhs/brick7/tvol2/cfnew1’ (hot brick)
  Size: 0         	Blocks: 0          IO Block: 4096   regular empty file
Device: fd20h/64800d	Inode: 1275131981  Links: 2
Access: (1000/---------T)  Uid: (    0/    root)   Gid: (    0/    root)
Context: system_u:object_r:unlabeled_t:s0
Access: 2015-07-02 11:59:38.865263568 +0530
Modify: 2015-07-02 11:59:38.865263568 +0530
Change: 2015-07-02 11:59:38.865263568 +0530
 Birth: -
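
The mode-1000 entry above is the link file; its extended attributes can confirm that it points back to the cold tier. The exact linkto xattr key depends on the translator in use, so dumping all xattrs is the safer check:

getfattr -d -m . -e hex /rhs/brick7/tvol2/cfnew1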

