Bug 1214222 - Directories are missing on the mount point after attaching tier to distribute replicate volume.
Summary: Directories are missing on the mount point after attaching tier to distribute replicate volume.
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: GlusterFS
Classification: Community
Component: tiering
Version: mainline
Hardware: x86_64
OS: Linux
Priority: unspecified
Severity: urgent
Target Milestone: ---
Assignee: Mohammed Rafi KC
QA Contact: bugs@gluster.org
URL:
Whiteboard: TIERING
Duplicates: 1212008 1221032
Depends On:
Blocks: qe_tracker_everglades 1214666 1219848 1224075 1224077 1229259 1260923
TreeView+ depends on / blocked
 
Reported: 2015-04-22 09:33 UTC by Triveni Rao
Modified: 2016-06-16 12:54 UTC (History)
6 users

Fixed In Version: glusterfs-3.8rc2
Doc Type: Bug Fix
Doc Text:
Clone Of:
Clones: 1219848 1224075 1224077
Environment:
Last Closed: 2016-06-16 12:54:19 UTC
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:



Description Triveni Rao 2015-04-22 09:33:04 UTC
Description of problem:
Directories are missing on the mount point after attaching tier to distribute replicate volume.

Version-Release number of selected component (if applicable):

[root@rhsqa14-vm1 ~]# rpm -qa | grep gluster
glusterfs-3.7dev-0.994.gitf522001.el6.x86_64
glusterfs-devel-3.7dev-0.994.gitf522001.el6.x86_64
glusterfs-geo-replication-3.7dev-0.994.gitf522001.el6.x86_64
glusterfs-resource-agents-3.7dev-0.952.gita7f1d08.el6.noarch
glusterfs-debuginfo-3.7dev-0.952.gita7f1d08.el6.x86_64
glusterfs-libs-3.7dev-0.994.gitf522001.el6.x86_64
glusterfs-api-3.7dev-0.994.gitf522001.el6.x86_64
glusterfs-fuse-3.7dev-0.994.gitf522001.el6.x86_64
glusterfs-extra-xlators-3.7dev-0.994.gitf522001.el6.x86_64
glusterfs-regression-tests-3.7dev-0.994.gitf522001.el6.x86_64
glusterfs-rdma-3.7dev-0.994.gitf522001.el6.x86_64
glusterfs-cli-3.7dev-0.994.gitf522001.el6.x86_64
glusterfs-server-3.7dev-0.994.gitf522001.el6.x86_64
glusterfs-api-devel-3.7dev-0.994.gitf522001.el6.x86_64
[root@rhsqa14-vm1 ~]# 

[root@rhsqa14-vm1 ~]# glusterfs --version
glusterfs 3.7dev built on Apr 13 2015 07:14:26
Repository revision: git://git.gluster.com/glusterfs.git
Copyright (c) 2006-2013 Red Hat, Inc. <http://www.redhat.com/>
GlusterFS comes with ABSOLUTELY NO WARRANTY.
It is licensed to you under your choice of the GNU Lesser
General Public License, version 3 or any later version (LGPLv3
or later), or the GNU General Public License, version 2 (GPLv2),
in all cases as published by the Free Software Foundation.
[root@rhsqa14-vm1 ~]# 


How reproducible:

easy

Steps to Reproduce:
1. Create a distributed-replicate (distrep) volume.
2. FUSE-mount the volume and create a few directories with files.
3. Run ls -la on the mount point and record the output.
4. Attach a tier to the volume and run ls -la on the mount point again.
5. The directories are missing.
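Assuming a two-node setup like the one in this report, the steps above can be sketched as a dry-run script. The `run` helper only prints each command, so nothing here touches a real cluster; the hostnames, brick paths, and volume name are placeholders, not the reporter's exact values:

```shell
#!/bin/sh
# Dry-run sketch of the reproduction steps. All names are placeholders;
# swap the "run" helper for direct execution on a real test cluster.
VOL=testrep
H1=host1.example.com
H2=host2.example.com
run() { echo "+ $*"; }

# 1. Create and start a 2x2 distributed-replicate volume.
run gluster volume create "$VOL" replica 2 \
    "$H1:/rhs/brick1/$VOL" "$H2:/rhs/brick1/$VOL" \
    "$H1:/rhs/brick2/$VOL" "$H2:/rhs/brick2/$VOL"
run gluster volume start "$VOL"

# 2. FUSE-mount it and create a few directories with files.
run mount -t glusterfs "$H1:/$VOL" /mnt
run mkdir -p /mnt/dir1
run touch /mnt/dir1/file1

# 3. Record the listing before the tier is attached.
run ls -la /mnt

# 4. Attach a hot tier.
run gluster volume attach-tier "$VOL" replica 2 \
    "$H1:/rhs/brick3/$VOL" "$H2:/rhs/brick3/$VOL"

# 5. List again: on affected builds the created directories are gone.
run ls -la /mnt
```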

Actual results:

 
Volume Name: testing
Type: Distributed-Replicate
Volume ID: 42ac4aff-461e-4001-b1c0-f4d42e04452f
Status: Started
Number of Bricks: 2 x 2 = 4
Transport-type: tcp
Bricks:
Brick1: 10.70.46.233:/rhs/brick1/T4
Brick2: 10.70.46.236:/rhs/brick1/T4
Brick3: 10.70.46.233:/rhs/brick2/T4
Brick4: 10.70.46.236:/rhs/brick2/T4


Filesystem            Size  Used Avail Use% Mounted on
/dev/mapper/vg_rhsqa14vm5-lv_root
                       18G  3.2G   14G  20% /
tmpfs                 3.8G     0  3.8G   0% /dev/shm
/dev/vda1             477M   33M  419M   8% /boot
10.70.46.233:/testing
                      100G  244M  100G   1% /mnt
10.70.46.233:/mix     199G  330M  199G   1% /mnt1
10.70.46.233:/everglades
                       20G  5.2M   20G   1% /mnt2
10.70.46.233:/Tim      20G  3.3M   20G   1% /mnt3
[root@rhsqa14-vm5 mnt2]# 
[root@rhsqa14-vm5 mnt2]# 
[root@rhsqa14-vm5 mnt2]# 
[root@rhsqa14-vm5 mnt2]# cd /mnt
[root@rhsqa14-vm5 mnt]# ls -la
total 4
drwxr-xr-x.  5 root root  110 Apr 17 02:54 .
dr-xr-xr-x. 28 root root 4096 Apr 22 01:50 ..
drwxr-xr-x.  6 root root  140 Apr 16 06:14 linux-4.0
drwxr-xr-x.  3 root root   48 Apr 16 02:52 .trashcan
[root@rhsqa14-vm5 mnt]# 


[root@rhsqa14-vm1 ~]# gluster v attach-tier testing replica 2 10.70.46.233:/rhs/brick3/mko 10.70.46.236:/rhs/brick3/mko
volume add-brick: success
[root@rhsqa14-vm1 ~]# gluster v info testing
 
Volume Name: testing
Type: Tier
Volume ID: 42ac4aff-461e-4001-b1c0-f4d42e04452f
Status: Started
Number of Bricks: 3 x 2 = 6
Transport-type: tcp
Bricks:
Brick1: 10.70.46.236:/rhs/brick3/mko
Brick2: 10.70.46.233:/rhs/brick3/mko
Brick3: 10.70.46.233:/rhs/brick1/T4
Brick4: 10.70.46.236:/rhs/brick1/T4
Brick5: 10.70.46.233:/rhs/brick2/T4
Brick6: 10.70.46.236:/rhs/brick2/T4
[root@rhsqa14-vm1 ~]# 


[root@rhsqa14-vm5 mnt]# 
[root@rhsqa14-vm5 mnt]# ls -la
total 4
drwxr-xr-x.  5 root root  149 Apr 22  2015 .
dr-xr-xr-x. 28 root root 4096 Apr 22 01:50 ..
drwxr-xr-x.  3 root root   48 Apr 16 02:52 .trashcan
[root@rhsqa14-vm5 mnt]# 

Expected results:
Irrespective of the tiers, all data must remain visible to the user.

Additional info:

Comment 1 Dan Lambright 2015-04-22 20:06:20 UTC
The problem here is that you did not start the migration daemon:

gluster v rebalance t tier start

This performs the "fix layout" step, which creates all directories on all bricks.

You should not have to worry about that. It should be done automatically when you attach a tier.

We will write a fix for that.
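The workaround described in this comment amounts to one extra command after every attach-tier. A dry-run sketch (the `run` helper only prints the command; the volume name is a placeholder):

```shell
#!/bin/sh
# Manual workaround: trigger fix-layout by starting the tier migration
# daemon after attach-tier. VOLNAME is a placeholder volume name.
VOLNAME=testing
run() { echo "+ $*"; }

# Starts the migration daemon; its fix-layout pass re-creates every
# directory on the newly attached (hot) tier bricks.
run gluster volume rebalance "$VOLNAME" tier start

# The directories should reappear on the FUSE mount once fix-layout
# finishes.
run ls -la /mnt
```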

Comment 2 Anand Avati 2015-04-24 10:43:07 UTC
REVIEW: http://review.gluster.org/10363 (tiering: Send both attach-tier and tier-start together) posted (#1) for review on master by mohammed rafi  kc (rkavunga@redhat.com)

Comment 3 Anand Avati 2015-04-24 10:44:54 UTC
REVIEW: http://review.gluster.org/10363 (tiering: Send both attach-tier and tier-start together) posted (#2) for review on master by mohammed rafi  kc (rkavunga@redhat.com)

Comment 4 Anand Avati 2015-04-28 06:44:17 UTC
REVIEW: http://review.gluster.org/10363 (tiering: Send both attach-tier and tier-start together) posted (#3) for review on master by mohammed rafi  kc (rkavunga@redhat.com)

Comment 5 Joseph Elwin Fernandes 2015-05-01 06:57:28 UTC
*** Bug 1212008 has been marked as a duplicate of this bug. ***

Comment 6 Anand Avati 2015-05-04 11:49:28 UTC
REVIEW: http://review.gluster.org/10363 (tiering: Send both attach-tier and tier-start together) posted (#4) for review on master by mohammed rafi  kc (rkavunga@redhat.com)

Comment 7 Anand Avati 2015-05-04 15:30:28 UTC
REVIEW: http://review.gluster.org/10363 (tiering: Send both attach-tier and tier-start together) posted (#5) for review on master by mohammed rafi  kc (rkavunga@redhat.com)

Comment 8 Anand Avati 2015-05-05 05:17:28 UTC
REVIEW: http://review.gluster.org/10363 (tiering: Send both attach-tier and tier-start together) posted (#6) for review on master by mohammed rafi  kc (rkavunga@redhat.com)

Comment 9 Anoop 2015-05-13 12:40:36 UTC
Reproduced this on the BETA2 build too; hence moving it to ASSIGNED.

Comment 10 Mohammed Rafi KC 2015-05-14 06:21:18 UTC
*** Bug 1221032 has been marked as a duplicate of this bug. ***

Comment 11 Mohammed Rafi KC 2015-05-14 06:25:23 UTC
I couldn't reproduce this using glusterfs-3.7-beta2. Can you paste the output of attach-tier in your setup?

Comment 12 Triveni Rao 2015-05-15 10:44:46 UTC

I could reproduce the same problem with new downstream build.

[root@rhsqa14-vm1 ~]# rpm -qa | grep gluster
glusterfs-3.7.0-2.el6rhs.x86_64
glusterfs-cli-3.7.0-2.el6rhs.x86_64
glusterfs-libs-3.7.0-2.el6rhs.x86_64
glusterfs-client-xlators-3.7.0-2.el6rhs.x86_64
glusterfs-api-3.7.0-2.el6rhs.x86_64
glusterfs-server-3.7.0-2.el6rhs.x86_64
glusterfs-fuse-3.7.0-2.el6rhs.x86_64
[root@rhsqa14-vm1 ~]# 
[root@rhsqa14-vm1 ~]# glusterfs --version
glusterfs 3.7.0 built on May 15 2015 01:31:10
Repository revision: git://git.gluster.com/glusterfs.git
Copyright (c) 2006-2013 Red Hat, Inc. <http://www.redhat.com/>
GlusterFS comes with ABSOLUTELY NO WARRANTY.
It is licensed to you under your choice of the GNU Lesser
General Public License, version 3 or any later version (LGPLv3
or later), or the GNU General Public License, version 2 (GPLv2),
in all cases as published by the Free Software Foundation.
[root@rhsqa14-vm1 ~]# 




[root@rhsqa14-vm1 ~]# gluster v create vol2 replica 2  10.70.46.233:/rhs/brick1/v2 10.70.46.236:/rhs/brick1/v2 10.70.46.233:/rhs/brick2/v2  10.70.46.236:/rhs/brick2/v2
volume create: vol2: success: please start the volume to access data
You have new mail in /var/spool/mail/root
[root@rhsqa14-vm1 ~]# gluster v start vol2
volume start: vol2: success
[root@rhsqa14-vm1 ~]# gluster v info vol2
 
Volume Name: vol2
Type: Distributed-Replicate
Volume ID: 46c79842-2d5d-4f0a-9776-10504fbc93e4
Status: Started
Number of Bricks: 2 x 2 = 4
Transport-type: tcp
Bricks:
Brick1: 10.70.46.233:/rhs/brick1/v2
Brick2: 10.70.46.236:/rhs/brick1/v2
Brick3: 10.70.46.233:/rhs/brick2/v2
Brick4: 10.70.46.236:/rhs/brick2/v2
Options Reconfigured:
performance.readdir-ahead: on
[root@rhsqa14-vm1 ~]# 


[root@rhsqa14-vm1 ~]# gluster v attach-tier vol2 replica 2 10.70.46.233:/rhs/brick3/v2 10.70.46.236:/rhs/brick3/v2 10.70.46.233:/rhs/brick5/v2 10.70.46.236:/rhs/brick5/v2
Attach tier is recommended only for testing purposes in this release. Do you want to continue? (y/n) y
volume attach-tier: success
volume rebalance: vol2: success: Rebalance on vol2 has been started successfully. Use rebalance status command to check status of the rebalance process.
ID: 72408f67-06c1-4b2a-b4e3-01ffcb0d8b17

You have new mail in /var/spool/mail/root
[root@rhsqa14-vm1 ~]# 




[root@rhsqa14-vm5 ~]# mount -t glusterfs 10.70.46.233:vol2 /mnt2
[root@rhsqa14-vm5 ~]# cd /vol2
-bash: cd: /vol2: No such file or directory
[root@rhsqa14-vm5 ~]# cd /mnt2
[root@rhsqa14-vm5 mnt2]# ls -la
total 4
drwxr-xr-x.  4 root root   78 May 15 06:30 .
dr-xr-xr-x. 30 root root 4096 May 15 04:16 ..
drwxr-xr-x.  3 root root   48 May 15 06:30 .trashcan
[root@rhsqa14-vm5 mnt2]# 
[root@rhsqa14-vm5 mnt2]# mkdir triveni
[root@rhsqa14-vm5 mnt2]# ls -la
total 4
drwxr-xr-x.  5 root root  106 May 15  2015 .
dr-xr-xr-x. 30 root root 4096 May 15 04:16 ..
drwxr-xr-x.  3 root root   48 May 15 06:30 .trashcan
drwxr-xr-x.  2 root root   12 May 15  2015 triveni
[root@rhsqa14-vm5 mnt2]# cp -r /root/linux-4.0 .
^C
[root@rhsqa14-vm5 mnt2]# ls -la
total 4
drwxr-xr-x.  6 root root  138 May 15  2015 .
dr-xr-xr-x. 30 root root 4096 May 15 04:16 ..
drwxr-xr-x.  6 root root  140 May 15  2015 linux-4.0
drwxr-xr-x.  3 root root   48 May 15 06:30 .trashcan
drwxr-xr-x.  2 root root   12 May 15  2015 triveni
[root@rhsqa14-vm5 mnt2]# cd linux-4.0/
[root@rhsqa14-vm5 linux-4.0]# ls -la
total 35
drwxr-xr-x.  6 root root   140 May 15  2015 .
drwxr-xr-x.  6 root root   138 May 15  2015 ..
drwxr-xr-x.  4 root root    78 May 15  2015 arch
-rw-r--r--.  1 root root 18693 May 15  2015 COPYING
-rw-r--r--.  1 root root   252 May 15  2015 Kconfig
drwxr-xr-x.  9 root root   350 May 15  2015 security
drwxr-xr-x. 22 root root   557 May 15  2015 sound
drwxr-xr-x. 19 root root 16384 May 15  2015 tools
[root@rhsqa14-vm5 linux-4.0]# cd ..
[root@rhsqa14-vm5 mnt2]# 
[root@rhsqa14-vm5 mnt2]# 
[root@rhsqa14-vm5 mnt2]# ls -la
total 4
drwxr-xr-x.  4 root root  216 May 15  2015 .
dr-xr-xr-x. 30 root root 4096 May 15 04:16 ..
drwxr-xr-x.  3 root root   96 May 15  2015 .trashcan
[root@rhsqa14-vm5 mnt2]# touch f1
[root@rhsqa14-vm5 mnt2]#

[root@rhsqa14-vm5 mnt2]# touch f2
[root@rhsqa14-vm5 mnt2]# ls -la
total 4
drwxr-xr-x.  4 root root  234 May 15  2015 .
dr-xr-xr-x. 30 root root 4096 May 15 04:16 ..
-rw-r--r--.  1 root root    0 May 15 06:36 f1
-rw-r--r--.  1 root root    0 May 15 06:36 f2
drwxr-xr-x.  3 root root   96 May 15  2015 .trashcan
[root@rhsqa14-vm5 mnt2]# 


[root@rhsqa14-vm1 ~]# gluster v info vol2
 
Volume Name: vol2
Type: Tier
Volume ID: 46c79842-2d5d-4f0a-9776-10504fbc93e4
Status: Started
Number of Bricks: 8
Transport-type: tcp
Hot Tier :
Hot Tier Type : Distributed-Replicate
Number of Bricks: 2 x 2 = 4
Brick1: 10.70.46.236:/rhs/brick5/v2
Brick2: 10.70.46.233:/rhs/brick5/v2
Brick3: 10.70.46.236:/rhs/brick3/v2
Brick4: 10.70.46.233:/rhs/brick3/v2
Cold Bricks:
Cold Tier Type : Distributed-Replicate
Number of Bricks: 2 x 2 = 4
Brick5: 10.70.46.233:/rhs/brick1/v2
Brick6: 10.70.46.236:/rhs/brick1/v2
Brick7: 10.70.46.233:/rhs/brick2/v2
Brick8: 10.70.46.236:/rhs/brick2/v2
Options Reconfigured:
features.uss: enable
features.inode-quota: on
features.quota: on
cluster.min-free-disk: 10
performance.readdir-ahead: on

Comment 13 Dan Lambright 2015-06-15 20:14:59 UTC
It is possible to attach a tier while a volume is offline. If you then bring the volume online and do not run fix-layout manually (gluster v rebalance <volname> tier start), the directories will exist only on the cold tier. Self-heal can recover only some cases. I will write a fix for this and use this bug to track it.
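The recovery path this comment describes can be sketched as another dry-run (the `run` helper only prints; the volume name is a placeholder, and the "tier status" sub-command is assumed to mirror the ordinary "rebalance ... status" form):

```shell
#!/bin/sh
# Dry-run sketch: after bringing a volume online that had a tier
# attached while offline, run fix-layout manually and poll it.
# VOLNAME is a placeholder.
VOLNAME=vol2
run() { echo "+ $*"; }

run gluster volume start "$VOLNAME"
# Manual fix-layout so directories get created on the hot tier bricks.
run gluster volume rebalance "$VOLNAME" tier start
# Poll until the fix-layout/migration pass reports completion
# (sub-command assumed by analogy with plain rebalance).
run gluster volume rebalance "$VOLNAME" tier status
```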

Comment 14 Anand Avati 2015-06-15 20:46:29 UTC
REVIEW: http://review.gluster.org/11239 (cluster/tier: search for directories in cold subvolume) posted (#1) for review on master by Dan Lambright (dlambrig@redhat.com)

Comment 15 Anand Avati 2015-06-15 20:57:14 UTC
REVIEW: http://review.gluster.org/11239 (cluster/tier: search for directories in cold subvolume) posted (#2) for review on master by Dan Lambright (dlambrig@redhat.com)

Comment 16 Anand Avati 2015-06-23 13:38:18 UTC
REVIEW: http://review.gluster.org/11368 (cluster/tier: WIP - handle I/O during fix-layout) posted (#1) for review on master by Dan Lambright (dlambrig@redhat.com)

Comment 17 Anand Avati 2015-06-23 22:36:59 UTC
REVIEW: http://review.gluster.org/11368 (cluster/tier: WIP - handle I/O during fix-layout) posted (#2) for review on master by Dan Lambright (dlambrig@redhat.com)

Comment 18 Anand Avati 2015-07-06 12:15:11 UTC
REVIEW: http://review.gluster.org/11368 (cluster/tier: WIP - handle I/O during fix-layout) posted (#3) for review on master by Dan Lambright (dlambrig@redhat.com)

Comment 19 Anand Avati 2015-07-08 14:40:21 UTC
REVIEW: http://review.gluster.org/11368 (cluster/tier: WIP - handle I/O during fix-layout) posted (#4) for review on master by Dan Lambright (dlambrig@redhat.com)

Comment 20 Anand Avati 2015-07-12 19:53:11 UTC
REVIEW: http://review.gluster.org/11368 (cluster/tier: WIP - handle I/O during fix-layout) posted (#5) for review on master by Dan Lambright (dlambrig@redhat.com)

Comment 21 Anand Avati 2015-07-13 19:52:19 UTC
REVIEW: http://review.gluster.org/11368 (cluster/tier: Handle I/O during fix-layout on attach-tier) posted (#6) for review on master by Dan Lambright (dlambrig@redhat.com)

Comment 22 Anand Avati 2015-07-13 22:45:17 UTC
REVIEW: http://review.gluster.org/11368 (cluster/tier: Handle I/O during fix-layout on attach-tier) posted (#7) for review on master by Dan Lambright (dlambrig@redhat.com)

Comment 23 Anand Avati 2015-07-14 23:22:42 UTC
REVIEW: http://review.gluster.org/11368 (cluster/tier: Handle I/O during fix-layout on attach-tier) posted (#8) for review on master by Dan Lambright (dlambrig@redhat.com)

Comment 24 Anand Avati 2015-07-20 03:01:37 UTC
REVIEW: http://review.gluster.org/11368 (cluster/tier: WIP Handle I/O during fix-layout on attach-tier) posted (#9) for review on master by Dan Lambright (dlambrig@redhat.com)

Comment 25 Anand Avati 2015-07-21 19:29:05 UTC
REVIEW: http://review.gluster.org/11368 (cluster/tier: WIP Handle I/O during fix-layout on attach-tier) posted (#10) for review on master by Dan Lambright (dlambrig@redhat.com)

Comment 26 Anand Avati 2015-07-27 02:07:53 UTC
REVIEW: http://review.gluster.org/11368 (cluster/tier: WIP Handle I/O during fix-layout on attach-tier) posted (#11) for review on master by Dan Lambright (dlambrig@redhat.com)

Comment 27 Anand Avati 2015-07-28 04:14:35 UTC
REVIEW: http://review.gluster.org/11368 (cluster/tier: Handle I/O during fix-layout on attach-tier) posted (#12) for review on master by Dan Lambright (dlambrig@redhat.com)

Comment 28 Anand Avati 2015-07-29 01:06:33 UTC
REVIEW: http://review.gluster.org/11368 (cluster/tier: WIP Handle I/O during fix-layout on attach-tier) posted (#13) for review on master by Dan Lambright (dlambrig@redhat.com)

Comment 29 Anand Avati 2015-07-29 03:37:58 UTC
REVIEW: http://review.gluster.org/11368 (cluster/tier: Handle I/O during fix-layout on attach-tier) posted (#14) for review on master by Dan Lambright (dlambrig@redhat.com)

Comment 30 Anand Avati 2015-07-29 03:38:22 UTC
REVIEW: http://review.gluster.org/11782 (cluster/tier: Do not start tiering until fix-layout completed.) posted (#1) for review on master by Dan Lambright (dlambrig@redhat.com)

Comment 31 Anand Avati 2015-07-29 03:59:49 UTC
REVIEW: http://review.gluster.org/11368 (cluster/tier: WIP Handle I/O during fix-layout on attach-tier) posted (#15) for review on master by Dan Lambright (dlambrig@redhat.com)

Comment 32 Anand Avati 2015-08-03 23:18:12 UTC
REVIEW: http://review.gluster.org/11782 (cluster/tier: Do not start tiering until fix-layout completed.) posted (#2) for review on master by Dan Lambright (dlambrig@redhat.com)

Comment 33 Nagaprasad Sathyanarayana 2015-10-25 15:17:54 UTC
The fix for this BZ is already present in a GlusterFS release. A clone of this BZ has been fixed and closed in that release; hence this mainline BZ is being closed as well.

Comment 34 Niels de Vos 2016-06-16 12:54:19 UTC
This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-3.8.0, please open a new bug report.

glusterfs-3.8.0 has been announced on the Gluster mailinglists [1], packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailinglist [2] and the update infrastructure for your distribution.

[1] http://blog.gluster.org/2016/06/glusterfs-3-8-released/
[2] http://thread.gmane.org/gmane.comp.file-systems.gluster.user

