Bug 1212008 - Data Tiering: Unable to access existing data in directories of EC(disperse) volume after attaching tier
Summary: Data Tiering: Unable to access existing data in directories of EC(disperse) volume after attaching tier
Keywords:
Status: CLOSED DUPLICATE of bug 1214222
Alias: None
Product: GlusterFS
Classification: Community
Component: tiering
Version: mainline
Hardware: Unspecified
OS: Unspecified
Priority: urgent
Severity: urgent
Target Milestone: ---
Assignee: Mohammed Rafi KC
QA Contact: bugs@gluster.org
URL:
Whiteboard:
Depends On:
Blocks: qe_tracker_everglades glusterfs-3.7.0 1260923
 
Reported: 2015-04-15 11:54 UTC by Nag Pavan Chilakam
Modified: 2015-10-30 17:32 UTC (History)
CC List: 1 user

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2015-05-01 06:57:28 UTC
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Embargoed:



Description Nag Pavan Chilakam 2015-04-15 11:54:34 UTC
Description of problem:
=======================
If an EC (disperse) volume has existing data that includes directories, and a tier is then attached to it, the directories become inaccessible to the user, whereas files that are not inside directories remain accessible.


Version-Release number of selected component (if applicable):
============================================================
[root@vertigo ~]# gluster --version
glusterfs 3.7dev built on Apr 13 2015 07:14:27
Repository revision: git://git.gluster.com/glusterfs.git
Copyright (c) 2006-2011 Gluster Inc. <http://www.gluster.com>
GlusterFS comes with ABSOLUTELY NO WARRANTY.
You may redistribute copies of GlusterFS under the terms of the GNU General Public License.
[root@vertigo ~]# rpm -qa|grep gluster
glusterfs-server-3.7dev-0.994.gitf522001.el6.x86_64
glusterfs-rdma-3.7dev-0.994.gitf522001.el6.x86_64
glusterfs-api-3.7dev-0.994.gitf522001.el6.x86_64
glusterfs-regression-tests-3.7dev-0.994.gitf522001.el6.x86_64
glusterfs-3.7dev-0.994.gitf522001.el6.x86_64
glusterfs-devel-3.7dev-0.994.gitf522001.el6.x86_64
glusterfs-resource-agents-3.7dev-0.994.gitf522001.el6.noarch
glusterfs-libs-3.7dev-0.994.gitf522001.el6.x86_64
glusterfs-fuse-3.7dev-0.994.gitf522001.el6.x86_64
glusterfs-geo-replication-3.7dev-0.994.gitf522001.el6.x86_64
glusterfs-cli-3.7dev-0.994.gitf522001.el6.x86_64
glusterfs-api-devel-3.7dev-0.994.gitf522001.el6.x86_64
glusterfs-extra-xlators-3.7dev-0.994.gitf522001.el6.x86_64
glusterfs-debuginfo-3.7dev-0.994.gitf522001.el6.x86_64



How reproducible:
================
Easily


Steps to Reproduce:
==================
1. Create an EC volume as below:
[root@vertigo ~]# gluster v create rhatvol redundancy 4 vertigo:/rhs/brick1/rhatvol-1 ninja:/rhs/brick1/rhatvol-2 vertigo:/rhs/brick2/rhatvol-3 ninja:/rhs/brick2/rhatvol-4 vertigo:/rhs/brick3/rhatvol-5 ninja:/rhs/brick3/rhatvol-6 vertigo:/rhs/brick4/rhatvol-7 ninja:/rhs/brick4/rhatvol-8 vertigo:/rhs/brick1/rhatvol-9 ninja:/rhs/brick1/rhatvol-10 vertigo:/rhs/brick2/rhatvol-11 ninja:/rhs/brick2/rhatvol-12 force

2. Now mount the volume and create some files, a directory, and files inside the directory.
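For example, a minimal data set along these lines (the mount point and directory name are illustrative; the exact data set used in this run is not recorded here, though the brick listings below show top-level files f1-f20):
mount -t glusterfs vertigo:rhatvol /mnt/rhatvol
for i in {1..20}; do echo "some data" > /mnt/rhatvol/f$i; done   # top-level files
mkdir /mnt/rhatvol/dir1                                          # a directory
echo "some data" > /mnt/rhatvol/dir1/f1                          # a file inside the directory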

3. Now attach a tier as below:
[root@vertigo ~]# gluster v attach-tier rhatvol replica 2 vertigo:/rhs/brick1/rhatvol-tier ninja:/rhs/brick3/rhatvol-tier force
volume add-brick: success

4. Now recheck and try to access the directory; it can be seen that the directory can't be accessed anymore.
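For example, a quick check (assuming the illustrative mount point and directory from step 2):
ls /mnt/rhatvol        # files at the top level are still listed
ls /mnt/rhatvol/dir1   # fails: the directory created in step 2 is no longer accessible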

Workaround:
==========
Detach the tier.
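The detach command (the same one captured in the Additional info transcript below):
gluster v detach-tier rhatvol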

Additional info:
=================
[root@vertigo ~]# gluster v create rhatvol redundancy 4 vertigo:/rhs/brick1/rhatvol-1 ninja:/rhs/brick1/rhatvol-2 vertigo:/rhs/brick2/rhatvol-3 ninja:/rhs/brick2/rhatvol-4 vertigo:/rhs/brick3/rhatvol-5 ninja:/rhs/brick3/rhatvol-6 vertigo:/rhs/brick4/rhatvol-7 ninja:/rhs/brick4/rhatvol-8 vertigo:/rhs/brick1/rhatvol-9 ninja:/rhs/brick1/rhatvol-10 vertigo:/rhs/brick2/rhatvol-11 ninja:/rhs/brick2/rhatvol-12 force
volume create: rhatvol: success: please start the volume to access data
[root@vertigo ~]# gluster v info rhatvol
 
Volume Name: rhatvol
Type: Disperse
Volume ID: e4594e70-9d75-47ce-b883-60d37cee989b
Status: Created
Number of Bricks: 1 x (8 + 4) = 12
Transport-type: tcp
Bricks:
Brick1: vertigo:/rhs/brick1/rhatvol-1
Brick2: ninja:/rhs/brick1/rhatvol-2
Brick3: vertigo:/rhs/brick2/rhatvol-3
Brick4: ninja:/rhs/brick2/rhatvol-4
Brick5: vertigo:/rhs/brick3/rhatvol-5
Brick6: ninja:/rhs/brick3/rhatvol-6
Brick7: vertigo:/rhs/brick4/rhatvol-7
Brick8: ninja:/rhs/brick4/rhatvol-8
Brick9: vertigo:/rhs/brick1/rhatvol-9
Brick10: ninja:/rhs/brick1/rhatvol-10
Brick11: vertigo:/rhs/brick2/rhatvol-11
Brick12: ninja:/rhs/brick2/rhatvol-12
[root@vertigo ~]# gluster v start rhat-vol
volume start: rhat-vol: failed: Volume rhat-vol does not exist
[root@vertigo ~]# gluster v start rhatvol
volume start: rhatvol: success
[root@vertigo ~]# gluster v info rhatvol
 
Volume Name: rhatvol
Type: Disperse
Volume ID: e4594e70-9d75-47ce-b883-60d37cee989b
Status: Started
Number of Bricks: 1 x (8 + 4) = 12
Transport-type: tcp
Bricks:
Brick1: vertigo:/rhs/brick1/rhatvol-1
Brick2: ninja:/rhs/brick1/rhatvol-2
Brick3: vertigo:/rhs/brick2/rhatvol-3
Brick4: ninja:/rhs/brick2/rhatvol-4
Brick5: vertigo:/rhs/brick3/rhatvol-5
Brick6: ninja:/rhs/brick3/rhatvol-6
Brick7: vertigo:/rhs/brick4/rhatvol-7
Brick8: ninja:/rhs/brick4/rhatvol-8
Brick9: vertigo:/rhs/brick1/rhatvol-9
Brick10: ninja:/rhs/brick1/rhatvol-10
Brick11: vertigo:/rhs/brick2/rhatvol-11
Brick12: ninja:/rhs/brick2/rhatvol-12
[root@vertigo ~]# ls /rhs/brick*/rhatvol*
/rhs/brick1/rhatvol-1:
f1  f10  f11  f12  f13  f14  f15  f16  f17  f18  f19  f2  f20  f3  f4  f5  f6  f7  f8  f9

/rhs/brick1/rhatvol-9:
f1  f10  f11  f12  f13  f14  f15  f16  f17  f18  f19  f2  f20  f3  f4  f5  f6  f7  f8  f9

/rhs/brick2/rhatvol-11:
f1  f10  f11  f12  f13  f14  f15  f16  f17  f18  f19  f2  f20  f3  f4  f5  f6  f7  f8  f9

/rhs/brick2/rhatvol-3:
f1  f10  f11  f12  f13  f14  f15  f16  f17  f18  f19  f2  f20  f3  f4  f5  f6  f7  f8  f9

/rhs/brick3/rhatvol-5:
f1  f10  f11  f12  f13  f14  f15  f16  f17  f18  f19  f2  f20  f3  f4  f5  f6  f7  f8  f9

/rhs/brick4/rhatvol-7:
f1  f10  f11  f12  f13  f14  f15  f16  f17  f18  f19  f2  f20  f3  f4  f5  f6  f7  f8  f9
[root@vertigo ~]# gluster v status rhatvol
Status of volume: rhatvol
Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick vertigo:/rhs/brick1/rhatvol-1         49174     0          Y       31380
Brick ninja:/rhs/brick1/rhatvol-2           49174     0          Y       4184 
Brick vertigo:/rhs/brick2/rhatvol-3         49175     0          Y       31397
Brick ninja:/rhs/brick2/rhatvol-4           49175     0          Y       4201 
Brick vertigo:/rhs/brick3/rhatvol-5         49176     0          Y       31414
Brick ninja:/rhs/brick3/rhatvol-6           49176     0          Y       4218 
Brick vertigo:/rhs/brick4/rhatvol-7         49177     0          Y       31431
Brick ninja:/rhs/brick4/rhatvol-8           49177     0          Y       4235 
Brick vertigo:/rhs/brick1/rhatvol-9         49178     0          Y       31448
Brick ninja:/rhs/brick1/rhatvol-10          49178     0          Y       4252 
Brick vertigo:/rhs/brick2/rhatvol-11        49179     0          Y       31465
Brick ninja:/rhs/brick2/rhatvol-12          49179     0          Y       4270 
NFS Server on localhost                     2049      0          Y       31483
NFS Server on ninja                         2049      0          Y       4288 
NFS Server on transformers                  2049      0          Y       61794
NFS Server on interstellar                  2049      0          Y       64027
 
Task Status of Volume rhatvol
------------------------------------------------------------------------------
There are no active volume tasks
 
[root@vertigo ~]# gluster v attach-tier rhatvol replica 2 vertigo:/rhs/brick1/rhatvol-tier ninja:/rhs/brick3/rhatvol-tier force
volume add-brick: success
[root@vertigo ~]# ls /rhs/brick*/rhatvol*
/rhs/brick1/rhatvol-1:
f1  f10  f11  f12  f13  f14  f15  f16  f17  f18  f19  f2  f20  f3  f4  f5  f6  f7  f8  f9

/rhs/brick1/rhatvol-9:
f1  f10  f11  f12  f13  f14  f15  f16  f17  f18  f19  f2  f20  f3  f4  f5  f6  f7  f8  f9

/rhs/brick1/rhatvol-tier:

/rhs/brick2/rhatvol-11:
f1  f10  f11  f12  f13  f14  f15  f16  f17  f18  f19  f2  f20  f3  f4  f5  f6  f7  f8  f9

/rhs/brick2/rhatvol-3:
f1  f10  f11  f12  f13  f14  f15  f16  f17  f18  f19  f2  f20  f3  f4  f5  f6  f7  f8  f9

/rhs/brick3/rhatvol-5:
f1  f10  f11  f12  f13  f14  f15  f16  f17  f18  f19  f2  f20  f3  f4  f5  f6  f7  f8  f9

/rhs/brick4/rhatvol-7:
f1  f10  f11  f12  f13  f14  f15  f16  f17  f18  f19  f2  f20  f3  f4  f5  f6  f7  f8  f9
[root@vertigo ~]# 
[root@vertigo ~]# 
[root@vertigo ~]# 
[root@vertigo ~]# 
[root@vertigo ~]# ls /rhs/brick*/rhatvol*
/rhs/brick1/rhatvol-1:
f1  f10  f11  f12  f13  f14  f15  f16  f17  f18  f19  f2~  f20  f3  f4  f5  f6  f7  f8  f9

/rhs/brick1/rhatvol-9:
f1  f10  f11  f12  f13  f14  f15  f16  f17  f18  f19  f2~  f20  f3  f4  f5  f6  f7  f8  f9

/rhs/brick1/rhatvol-tier:
f2

/rhs/brick2/rhatvol-11:
f1  f10  f11  f12  f13  f14  f15  f16  f17  f18  f19  f2~  f20  f3  f4  f5  f6  f7  f8  f9

/rhs/brick2/rhatvol-3:
f1  f10  f11  f12  f13  f14  f15  f16  f17  f18  f19  f2~  f20  f3  f4  f5  f6  f7  f8  f9

/rhs/brick3/rhatvol-5:
f1  f10  f11  f12  f13  f14  f15  f16  f17  f18  f19  f2~  f20  f3  f4  f5  f6  f7  f8  f9

/rhs/brick4/rhatvol-7:
f1  f10  f11  f12  f13  f14  f15  f16  f17  f18  f19  f2~  f20  f3  f4  f5  f6  f7  f8  f9
[root@vertigo ~]# gluster --version
glusterfs 3.7dev built on Apr 13 2015 07:14:27
Repository revision: git://git.gluster.com/glusterfs.git
Copyright (c) 2006-2011 Gluster Inc. <http://www.gluster.com>
GlusterFS comes with ABSOLUTELY NO WARRANTY.
You may redistribute copies of GlusterFS under the terms of the GNU General Public License.
[root@vertigo ~]# ls /rhs/brick*/rhatvol*
/rhs/brick1/rhatvol-1:
f1  f10  f11  f12  f13  f14  f15  f16  f17  f18  f19  f2~  f20  f3  f4  f5  f6  f7  f8  f9

/rhs/brick1/rhatvol-9:
f1  f10  f11  f12  f13  f14  f15  f16  f17  f18  f19  f2~  f20  f3  f4  f5  f6  f7  f8  f9

/rhs/brick1/rhatvol-tier:
f2  newfile

/rhs/brick2/rhatvol-11:
f1  f10  f11  f12  f13  f14  f15  f16  f17  f18  f19  f2~  f20  f3  f4  f5  f6  f7  f8  f9

/rhs/brick2/rhatvol-3:
f1  f10  f11  f12  f13  f14  f15  f16  f17  f18  f19  f2~  f20  f3  f4  f5  f6  f7  f8  f9

/rhs/brick3/rhatvol-5:
f1  f10  f11  f12  f13  f14  f15  f16  f17  f18  f19  f2~  f20  f3  f4  f5  f6  f7  f8  f9

/rhs/brick4/rhatvol-7:
f1  f10  f11  f12  f13  f14  f15  f16  f17  f18  f19  f2~  f20  f3  f4  f5  f6  f7  f8  f9
[root@vertigo ~]# ls
anaconda-ks.cfg  install.log         yum_debug_dump-vertigo.lab.eng.blr.redhat.com-2015-04-06_14:07:51.txt.gz
core.20845       install.log.syslog  yum_debug_dump-vertigo.lab.eng.blr.redhat.com-2015-04-06_14:28:17.txt.gz
core.25895       verify              yum_debug_dump-vertigo.lab.eng.blr.redhat.com-2015-04-15_15:29:21.txt.gz
[root@vertigo ~]# ls /rhs/brick*/rhatvol*
/rhs/brick1/rhatvol-1:
f1  f10  f11  f12  f13  f14  f15  f16  f17  f18  f19  f2~  f20  f3  f4  f5  f6  f7  f8  f9~

/rhs/brick1/rhatvol-9:
f1  f10  f11  f12  f13  f14  f15  f16  f17  f18  f19  f2~  f20  f3  f4  f5  f6  f7  f8  f9~

/rhs/brick1/rhatvol-tier:
f2  f9  newfile

/rhs/brick2/rhatvol-11:
f1  f10  f11  f12  f13  f14  f15  f16  f17  f18  f19  f2~  f20  f3  f4  f5  f6  f7  f8  f9~

/rhs/brick2/rhatvol-3:
f1  f10  f11  f12  f13  f14  f15  f16  f17  f18  f19  f2~  f20  f3  f4  f5  f6  f7  f8  f9~

/rhs/brick3/rhatvol-5:
f1  f10  f11  f12  f13  f14  f15  f16  f17  f18  f19  f2~  f20  f3  f4  f5  f6  f7  f8  f9~

/rhs/brick4/rhatvol-7:
f1  f10  f11  f12  f13  f14  f15  f16  f17  f18  f19  f2~  f20  f3  f4  f5  f6  f7  f8  f9~
[root@vertigo ~]# cat mount -t glusterfs vertigo:rhatvol rhatvol
cat: mount: No such file or directory
cat: glusterfs: No such file or directory
cat: vertigo:rhatvol: No such file or directory
cat: rhatvol: No such file or directory
[root@vertigo ~]# cat /rhs/brick*/rhatvol*/f1
[root@vertigo ~]# cat /rhs/brick*/rhatvol*/f*
adasd
sa
sa

dsa

dding data to exsiitng file which is on ec layer
[root@vertigo ~]# 
[root@vertigo ~]# gluster v detach-tier rhatvol
volume remove-brick unknown: success
[root@vertigo ~]# gluster v info rhatvol
 
Volume Name: rhatvol
Type: Distributed-Disperse
Volume ID: e4594e70-9d75-47ce-b883-60d37cee989b
Status: Started
Number of Bricks: 6 x (8 + 4) = 12
Transport-type: tcp
Bricks:
Brick1: vertigo:/rhs/brick1/rhatvol-1
Brick2: ninja:/rhs/brick1/rhatvol-2
Brick3: vertigo:/rhs/brick2/rhatvol-3
Brick4: ninja:/rhs/brick2/rhatvol-4
Brick5: vertigo:/rhs/brick3/rhatvol-5
Brick6: ninja:/rhs/brick3/rhatvol-6
Brick7: vertigo:/rhs/brick4/rhatvol-7
Brick8: ninja:/rhs/brick4/rhatvol-8
Brick9: vertigo:/rhs/brick1/rhatvol-9
Brick10: ninja:/rhs/brick1/rhatvol-10
Brick11: vertigo:/rhs/brick2/rhatvol-11
Brick12: ninja:/rhs/brick2/rhatvol-12
[root@vertigo ~]# ls /rhs/brick*/rhatvol*
/rhs/brick1/rhatvol-1:
f1  f10  f11  f12  f13  f14  f15  f16  f17  f18  f19  f2~  f20  f3  f4  f5  f6  f7  f8  f9~

/rhs/brick1/rhatvol-9:
f1  f10  f11  f12  f13  f14  f15  f16  f17  f18  f19  f2~  f20  f3  f4  f5  f6  f7  f8  f9~

/rhs/brick1/rhatvol-tier:
f2  f9  newfile

/rhs/brick2/rhatvol-11:
f1  f10  f11  f12  f13  f14  f15  f16  f17  f18  f19  f2~  f20  f3  f4  f5  f6  f7  f8  f9~

/rhs/brick2/rhatvol-3:
f1  f10  f11  f12  f13  f14  f15  f16  f17  f18  f19  f2~  f20  f3  f4  f5  f6  f7  f8  f9~

/rhs/brick3/rhatvol-5:
f1  f10  f11  f12  f13  f14  f15  f16  f17  f18  f19  f2~  f20  f3  f4  f5  f6  f7  f8  f9~

/rhs/brick4/rhatvol-7:
f1  f10  f11  f12  f13  f14  f15  f16  f17  f18  f19  f2~  f20  f3  f4  f5  f6  f7  f8  f9~
[root@vertigo ~]# getfattr -d -e hex -m ./rhs/brick*/rhatvol*
Usage: getfattr [-hRLP] [-n name|-d] [-e en] [-m pattern] path...
Try `getfattr --help' for more information.
[root@vertigo ~]# getfattr -d -e hex -m . /rhs/brick*/rhatvol*
getfattr: Removing leading '/' from absolute path names
# file: rhs/brick1/rhatvol-1
security.selinux=0x756e636f6e66696e65645f753a6f626a6563745f723a66696c655f743a733000
trusted.ec.version=0x0000000000000029
trusted.gfid=0x00000000000000000000000000000001
trusted.glusterfs.dht=0x000000010000000000000000ffffffff
trusted.glusterfs.volume-id=0xe4594e709d7547ceb88360d37cee989b
trusted.tier-gfid=0x000000010000000000000000e3839643

# file: rhs/brick1/rhatvol-9
security.selinux=0x756e636f6e66696e65645f753a6f626a6563745f723a66696c655f743a733000
trusted.ec.version=0x0000000000000029
trusted.gfid=0x00000000000000000000000000000001
trusted.glusterfs.dht=0x000000010000000000000000ffffffff
trusted.glusterfs.volume-id=0xe4594e709d7547ceb88360d37cee989b
trusted.tier-gfid=0x000000010000000000000000e3839643

# file: rhs/brick1/rhatvol-tier
security.selinux=0x756e636f6e66696e65645f753a6f626a6563745f723a66696c655f743a733000
trusted.gfid=0x00000000000000000000000000000001
trusted.glusterfs.dht=0x000000010000000000000000ffffffff
trusted.glusterfs.volume-id=0xe4594e709d7547ceb88360d37cee989b
trusted.tier-gfid=0x0000000100000000e3839644ffffffff

# file: rhs/brick2/rhatvol-11
security.selinux=0x756e636f6e66696e65645f753a6f626a6563745f723a66696c655f743a733000
trusted.ec.version=0x0000000000000029
trusted.gfid=0x00000000000000000000000000000001
trusted.glusterfs.dht=0x000000010000000000000000ffffffff
trusted.glusterfs.volume-id=0xe4594e709d7547ceb88360d37cee989b
trusted.tier-gfid=0x000000010000000000000000e3839643

# file: rhs/brick2/rhatvol-3
security.selinux=0x756e636f6e66696e65645f753a6f626a6563745f723a66696c655f743a733000
trusted.ec.version=0x0000000000000029
trusted.gfid=0x00000000000000000000000000000001
trusted.glusterfs.dht=0x000000010000000000000000ffffffff
trusted.glusterfs.volume-id=0xe4594e709d7547ceb88360d37cee989b
trusted.tier-gfid=0x000000010000000000000000e3839643

# file: rhs/brick3/rhatvol-5
security.selinux=0x756e636f6e66696e65645f753a6f626a6563745f723a66696c655f743a733000
trusted.ec.version=0x0000000000000029
trusted.gfid=0x00000000000000000000000000000001
trusted.glusterfs.dht=0x000000010000000000000000ffffffff
trusted.glusterfs.volume-id=0xe4594e709d7547ceb88360d37cee989b
trusted.tier-gfid=0x000000010000000000000000e3839643

# file: rhs/brick4/rhatvol-7
security.selinux=0x756e636f6e66696e65645f753a6f626a6563745f723a66696c655f743a733000
trusted.ec.version=0x0000000000000029
trusted.gfid=0x00000000000000000000000000000001
trusted.glusterfs.dht=0x000000010000000000000000ffffffff
trusted.glusterfs.volume-id=0xe4594e709d7547ceb88360d37cee989b
trusted.tier-gfid=0x000000010000000000000000e3839643

[root@vertigo ~]# gluster --version
glusterfs 3.7dev built on Apr 13 2015 07:14:27
Repository revision: git://git.gluster.com/glusterfs.git
Copyright (c) 2006-2011 Gluster Inc. <http://www.gluster.com>
GlusterFS comes with ABSOLUTELY NO WARRANTY.
You may redistribute copies of GlusterFS under the terms of the GNU General Public License.
[root@vertigo ~]# rpm -qa|grep gluster
glusterfs-server-3.7dev-0.994.gitf522001.el6.x86_64
glusterfs-rdma-3.7dev-0.994.gitf522001.el6.x86_64
glusterfs-api-3.7dev-0.994.gitf522001.el6.x86_64
glusterfs-regression-tests-3.7dev-0.994.gitf522001.el6.x86_64
glusterfs-3.7dev-0.994.gitf522001.el6.x86_64
glusterfs-devel-3.7dev-0.994.gitf522001.el6.x86_64
glusterfs-resource-agents-3.7dev-0.994.gitf522001.el6.noarch
glusterfs-libs-3.7dev-0.994.gitf522001.el6.x86_64
glusterfs-fuse-3.7dev-0.994.gitf522001.el6.x86_64
glusterfs-geo-replication-3.7dev-0.994.gitf522001.el6.x86_64
glusterfs-cli-3.7dev-0.994.gitf522001.el6.x86_64
glusterfs-api-devel-3.7dev-0.994.gitf522001.el6.x86_64
glusterfs-extra-xlators-3.7dev-0.994.gitf522001.el6.x86_64
glusterfs-debuginfo-3.7dev-0.994.gitf522001.el6.x86_64
[root@vertigo ~]#

Comment 1 Nag Pavan Chilakam 2015-04-15 11:56:40 UTC
Priority is set to high due to the data not being accessible.
It is not being set to urgent, however, as EC is a new feature arriving along with tiering, so the assumption is that customers would be attaching a tier to a newly created EC volume (older EC volumes would not have existed so far).

Comment 2 Nag Pavan Chilakam 2015-04-16 08:30:10 UTC
Moving the priority to urgent, as there may be customers who use EC without tiering first and adopt tiering later, since most tiering use cases will require an SSD investment.

Comment 3 Joseph Elwin Fernandes 2015-05-01 06:57:28 UTC
This bug is the same as bug 1214222, where the directories disappear when a tier is attached.

*** This bug has been marked as a duplicate of bug 1214222 ***

