Bug 1214222
Summary: Directories are missing on the mount point after attaching a tier to a distribute-replicate volume.

| Field | Value | Field | Value |
|---|---|---|---|
| Product | [Community] GlusterFS | Reporter | Triveni Rao <trao> |
| Component | tiering | Assignee | Mohammed Rafi KC <rkavunga> |
| Status | CLOSED CURRENTRELEASE | QA Contact | bugs <bugs> |
| Severity | urgent | Docs Contact | |
| Priority | unspecified | | |
| Version | mainline | CC | annair, bugs, dlambrig, josferna, nchilaka, sashinde |
| Target Milestone | --- | Keywords | Reopened, TestBlocker, Triaged |
| Target Release | --- | | |
| Hardware | x86_64 | | |
| OS | Linux | | |
| Whiteboard | TIERING | | |
| Fixed In Version | glusterfs-3.8rc2 | Doc Type | Bug Fix |
| Doc Text | | Story Points | --- |
| Clone Of | | | |
| | 1219848, 1224075, 1224077 (view as bug list) | Environment | |
| Last Closed | 2016-06-16 12:54:19 UTC | Type | Bug |
| Regression | --- | Mount Type | --- |
| Documentation | --- | CRM | |
| Verified Versions | | Category | --- |
| oVirt Team | --- | RHEL 7.3 requirements from Atomic Host | |
| Cloudforms Team | --- | Target Upstream Version | |
| Embargoed | | | |
| Bug Depends On | | | |
| Bug Blocks | 1186580, 1214666, 1219848, 1224075, 1224077, 1229259, 1260923 | | |
Description (Triveni Rao, 2015-04-22 09:33:04 UTC)
The problem here is that you did not start the migration daemon:

```
gluster v rebalance t tier start
```

This performs the "fix layout" that creates all directories on all bricks. You should not have to worry about that; it should be done automatically when you attach a tier. We will write a fix for that.
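Concretely, the manual workaround described above amounts to starting the tier daemon right after the attach. A minimal sketch, assuming the vol2 volume and bricks used in the reproduction below:

```
# Attach the hot tier (replica 2 across two nodes).
gluster volume attach-tier vol2 replica 2 \
    10.70.46.233:/rhs/brick3/v2 10.70.46.236:/rhs/brick3/v2 \
    10.70.46.233:/rhs/brick5/v2 10.70.46.236:/rhs/brick5/v2

# Until attach-tier does this automatically, start the migration daemon
# by hand; this runs the fix-layout that creates the directories on the
# new (hot) bricks.
gluster volume rebalance vol2 tier start

# Check the progress of the rebalance/fix-layout.
gluster volume rebalance vol2 status
```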
REVIEW: http://review.gluster.org/10363 (tiering: Send both attach-tier and tier-start together) posted (revisions #1 through #3) for review on master by mohammed rafi kc (rkavunga)

*** Bug 1212008 has been marked as a duplicate of this bug. ***

REVIEW: http://review.gluster.org/10363 (tiering: Send both attach-tier and tier-start together) posted (revisions #4 through #6) for review on master by mohammed rafi kc (rkavunga)

Reproduced this on the BETA2 build too, hence moving it to ASSIGNED.

*** Bug 1221032 has been marked as a duplicate of this bug. ***

I couldn't reproduce this using glusterfs-3.7-beta2. Can you paste the output of attach-tier in your setup?

I could reproduce the same problem with the new downstream build:

```
[root@rhsqa14-vm1 ~]# rpm -qa | grep gluster
glusterfs-3.7.0-2.el6rhs.x86_64
glusterfs-cli-3.7.0-2.el6rhs.x86_64
glusterfs-libs-3.7.0-2.el6rhs.x86_64
glusterfs-client-xlators-3.7.0-2.el6rhs.x86_64
glusterfs-api-3.7.0-2.el6rhs.x86_64
glusterfs-server-3.7.0-2.el6rhs.x86_64
glusterfs-fuse-3.7.0-2.el6rhs.x86_64
[root@rhsqa14-vm1 ~]# glusterfs --version
glusterfs 3.7.0 built on May 15 2015 01:31:10
Repository revision: git://git.gluster.com/glusterfs.git
Copyright (c) 2006-2013 Red Hat, Inc. <http://www.redhat.com/>
GlusterFS comes with ABSOLUTELY NO WARRANTY.
It is licensed to you under your choice of the GNU Lesser General Public
License, version 3 or any later version (LGPLv3 or later), or the GNU
General Public License, version 2 (GPLv2), in all cases as published by
the Free Software Foundation.
[root@rhsqa14-vm1 ~]# gluster v create vol2 replica 2 10.70.46.233:/rhs/brick1/v2 10.70.46.236:/rhs/brick1/v2 10.70.46.233:/rhs/brick2/v2 10.70.46.236:/rhs/brick2/v2
volume create: vol2: success: please start the volume to access data
You have new mail in /var/spool/mail/root
[root@rhsqa14-vm1 ~]# gluster v start vol2
volume start: vol2: success
[root@rhsqa14-vm1 ~]# gluster v info vol2
Volume Name: vol2
Type: Distributed-Replicate
Volume ID: 46c79842-2d5d-4f0a-9776-10504fbc93e4
Status: Started
Number of Bricks: 2 x 2 = 4
Transport-type: tcp
Bricks:
Brick1: 10.70.46.233:/rhs/brick1/v2
Brick2: 10.70.46.236:/rhs/brick1/v2
Brick3: 10.70.46.233:/rhs/brick2/v2
Brick4: 10.70.46.236:/rhs/brick2/v2
Options Reconfigured:
performance.readdir-ahead: on
[root@rhsqa14-vm1 ~]# gluster v attach-tier vol2 replica 2 10.70.46.233:/rhs/brick3/v2 10.70.46.236:/rhs/brick3/v2 10.70.46.233:/rhs/brick5/v2 10.70.46.236:/rhs/brick5/v2
Attach tier is recommended only for testing purposes in this release. Do you want to continue? (y/n) y
volume attach-tier: success
volume rebalance: vol2: success: Rebalance on vol2 has been started successfully. Use rebalance status command to check status of the rebalance process.
ID: 72408f67-06c1-4b2a-b4e3-01ffcb0d8b17
You have new mail in /var/spool/mail/root
```

On the client, directories created on the mount point later disappear from listings:

```
[root@rhsqa14-vm5 ~]# mount -t glusterfs 10.70.46.233:vol2 /mnt2
[root@rhsqa14-vm5 ~]# cd /vol2
-bash: cd: /vol2: No such file or directory
[root@rhsqa14-vm5 ~]# cd /mnt2
[root@rhsqa14-vm5 mnt2]# ls -la
total 4
drwxr-xr-x.  4 root root   78 May 15 06:30 .
dr-xr-xr-x. 30 root root 4096 May 15 04:16 ..
drwxr-xr-x.  3 root root   48 May 15 06:30 .trashcan
[root@rhsqa14-vm5 mnt2]# mkdir triveni
[root@rhsqa14-vm5 mnt2]# ls -la
total 4
drwxr-xr-x.  5 root root  106 May 15 2015 .
dr-xr-xr-x. 30 root root 4096 May 15 04:16 ..
drwxr-xr-x.  3 root root   48 May 15 06:30 .trashcan
drwxr-xr-x.  2 root root   12 May 15 2015 triveni
[root@rhsqa14-vm5 mnt2]# cp -r /root/linux-4.0 .
^C
[root@rhsqa14-vm5 mnt2]# ls -la
total 4
drwxr-xr-x.  6 root root  138 May 15 2015 .
dr-xr-xr-x. 30 root root 4096 May 15 04:16 ..
drwxr-xr-x.  6 root root  140 May 15 2015 linux-4.0
drwxr-xr-x.  3 root root   48 May 15 06:30 .trashcan
drwxr-xr-x.  2 root root   12 May 15 2015 triveni
[root@rhsqa14-vm5 mnt2]# cd linux-4.0/
[root@rhsqa14-vm5 linux-4.0]# ls -la
total 35
drwxr-xr-x.  6 root root   140 May 15 2015 .
drwxr-xr-x.  6 root root   138 May 15 2015 ..
drwxr-xr-x.  4 root root    78 May 15 2015 arch
-rw-r--r--.  1 root root 18693 May 15 2015 COPYING
-rw-r--r--.  1 root root   252 May 15 2015 Kconfig
drwxr-xr-x.  9 root root   350 May 15 2015 security
drwxr-xr-x. 22 root root   557 May 15 2015 sound
drwxr-xr-x. 19 root root 16384 May 15 2015 tools
[root@rhsqa14-vm5 linux-4.0]# cd ..
[root@rhsqa14-vm5 mnt2]# ls -la
total 4
drwxr-xr-x.  4 root root  216 May 15 2015 .
dr-xr-xr-x. 30 root root 4096 May 15 04:16 ..
drwxr-xr-x.  3 root root   96 May 15 2015 .trashcan
[root@rhsqa14-vm5 mnt2]# touch f1
[root@rhsqa14-vm5 mnt2]# touch f2
[root@rhsqa14-vm5 mnt2]# ls -la
total 4
drwxr-xr-x.  4 root root  234 May 15 2015 .
dr-xr-xr-x. 30 root root 4096 May 15 04:16 ..
-rw-r--r--.  1 root root    0 May 15 06:36 f1
-rw-r--r--.  1 root root    0 May 15 06:36 f2
drwxr-xr-x.  3 root root   96 May 15 2015 .trashcan
```

Volume info after the attach:

```
[root@rhsqa14-vm1 ~]# gluster v info vol2
Volume Name: vol2
Type: Tier
Volume ID: 46c79842-2d5d-4f0a-9776-10504fbc93e4
Status: Started
Number of Bricks: 8
Transport-type: tcp
Hot Tier :
Hot Tier Type : Distributed-Replicate
Number of Bricks: 2 x 2 = 4
Brick1: 10.70.46.236:/rhs/brick5/v2
Brick2: 10.70.46.233:/rhs/brick5/v2
Brick3: 10.70.46.236:/rhs/brick3/v2
Brick4: 10.70.46.233:/rhs/brick3/v2
Cold Bricks:
Cold Tier Type : Distributed-Replicate
Number of Bricks: 2 x 2 = 4
Brick5: 10.70.46.233:/rhs/brick1/v2
Brick6: 10.70.46.236:/rhs/brick1/v2
Brick7: 10.70.46.233:/rhs/brick2/v2
Brick8: 10.70.46.236:/rhs/brick2/v2
Options Reconfigured:
features.uss: enable
features.inode-quota: on
features.quota: on
cluster.min-free-disk: 10
performance.readdir-ahead: on
```

It is possible to attach a tier while a volume is offline. If you bring the volume online and do not run fix-layout manually (gluster v rebalance <volname> tier start), the directories will only exist on the cold tier. Self-heal can only be done in some cases. I will write a fix for this and use this bug.
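The comment above implies the missing directories still exist on the cold bricks but not on the hot ones. A quick way to confirm this, assuming shell access to the brick hosts shown in the volume info above (the ssh invocations are illustrative; the brick paths are the ones from the transcript):

```
# Cold brick: directories created on the mount should be present here.
ssh 10.70.46.233 ls -la /rhs/brick1/v2

# Hot brick: without fix-layout, the same directories are missing here,
# which is why they vanish from the mount point.
ssh 10.70.46.233 ls -la /rhs/brick3/v2
```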
REVIEW: http://review.gluster.org/11239 (cluster/tier: search for directories in cold subvolume) posted (revisions #1 and #2) for review on master by Dan Lambright (dlambrig)

REVIEW: http://review.gluster.org/11368 (cluster/tier: Handle I/O during fix-layout on attach-tier; earlier revisions titled "WIP - handle I/O during fix-layout") posted (revisions #1 through #15) for review on master by Dan Lambright (dlambrig)

REVIEW: http://review.gluster.org/11782 (cluster/tier: Do not start tiering until fix-layout completed.) posted (revisions #1 and #2) for review on master by Dan Lambright (dlambrig)

The fix for this BZ is already present in a GlusterFS release. A clone of this BZ was fixed in a GlusterFS release and closed, hence this mainline BZ is being closed as well.

This bug is being closed because a release has been made available that should address the reported issue. If the problem is still not fixed with glusterfs-3.8.0, please open a new bug report.
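With the fix in place (attach-tier and tier-start sent together, and directory lookups falling back to the cold subvolume), pre-existing directories should remain visible immediately after the attach. A minimal re-test sketch, assuming the same vol2 setup as in the reproduction above:

```
# On a build containing the fix (glusterfs-3.8rc2 or later).
glusterfs --version

# Create a directory on the mounted volume before attaching the tier.
mount -t glusterfs 10.70.46.233:vol2 /mnt2
mkdir /mnt2/triveni

# Attach the hot tier; tier-start (fix-layout) is now issued together with
# attach-tier, so no manual "gluster volume rebalance vol2 tier start".
gluster volume attach-tier vol2 replica 2 \
    10.70.46.233:/rhs/brick3/v2 10.70.46.236:/rhs/brick3/v2 \
    10.70.46.233:/rhs/brick5/v2 10.70.46.236:/rhs/brick5/v2

# The directory should still be listed on the mount point.
ls -la /mnt2
```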
glusterfs-3.8.0 has been announced on the Gluster mailing lists [1]; packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and on the update infrastructure for your distribution.

[1] http://blog.gluster.org/2016/06/glusterfs-3-8-released/
[2] http://thread.gmane.org/gmane.comp.file-systems.gluster.user