Bug 1224077
| Summary: | Directories are missing on the mount point after attaching tier to distribute replicate volume. | | |
|---|---|---|---|
| Product: | [Red Hat Storage] Red Hat Gluster Storage | Reporter: | Triveni Rao <trao> |
| Component: | tier | Assignee: | Bug Updates Notification Mailing List <rhs-bugs> |
| Status: | CLOSED ERRATA | QA Contact: | Nag Pavan Chilakam <nchilaka> |
| Severity: | urgent | Docs Contact: | |
| Priority: | high | | |
| Version: | rhgs-3.1 | CC: | annair, bugs, dlambrig, josferna, nchilaka, rhs-bugs, rkavunga, sashinde, storage-qa-internal |
| Target Milestone: | --- | Keywords: | TestBlocker, Triaged |
| Target Release: | RHGS 3.1.0 | | |
| Hardware: | x86_64 | | |
| OS: | Linux | | |
| Whiteboard: | TIERING | | |
| Fixed In Version: | | Doc Type: | Bug Fix |
| Doc Text: | | Story Points: | --- |
| Clone Of: | 1214222 | Environment: | |
| Last Closed: | 2015-07-29 04:45:55 UTC | Type: | Bug |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Embargoed: | | | |
| Bug Depends On: | 1214222 | | |
| Bug Blocks: | 1186580, 1202842, 1214666, 1219848, 1224075, 1229259 | | |
Description
Triveni Rao
2015-05-22 07:43:37 UTC
*** Bug 1224075 has been marked as a duplicate of this bug. ***

This bug is verified and no issues were found.
[root@rhsqa14-vm1 ~]# gluster v create venus replica 2 10.70.47.165:/rhs/brick1/m0 10.70.47.163:/rhs/brick1/m0 10.70.47.165:/rhs/brick2/m0 10.70.47.163:/rhs/brick2/m0 force
volume create: venus: success: please start the volume to access data
[root@rhsqa14-vm1 ~]# gluster v start venus
volume start: venus: success
[root@rhsqa14-vm1 ~]# gluster v info
Volume Name: venus
Type: Distributed-Replicate
Volume ID: ad3a7752-93f3-4a61-8b3c-b40bc5d9af4a
Status: Started
Number of Bricks: 2 x 2 = 4
Transport-type: tcp
Bricks:
Brick1: 10.70.47.165:/rhs/brick1/m0
Brick2: 10.70.47.163:/rhs/brick1/m0
Brick3: 10.70.47.165:/rhs/brick2/m0
Brick4: 10.70.47.163:/rhs/brick2/m0
Options Reconfigured:
performance.readdir-ahead: on
[root@rhsqa14-vm1 ~]#
[root@rhsqa14-vm1 ~]# gluster v status
Status of volume: venus
Gluster process TCP Port RDMA Port Online Pid
------------------------------------------------------------------------------
Brick 10.70.47.165:/rhs/brick1/m0 49152 0 Y 3547
Brick 10.70.47.163:/rhs/brick1/m0 49152 0 Y 3097
Brick 10.70.47.165:/rhs/brick2/m0 49153 0 Y 3565
Brick 10.70.47.163:/rhs/brick2/m0 49153 0 Y 3115
NFS Server on localhost 2049 0 Y 3588
Self-heal Daemon on localhost N/A N/A Y 3593
NFS Server on 10.70.47.163 2049 0 Y 3138
Self-heal Daemon on 10.70.47.163 N/A N/A Y 3145
Task Status of Volume venus
------------------------------------------------------------------------------
There are no active volume tasks
[root@rhsqa14-vm1 ~]#
[root@rhsqa14-vm1 ~]# gluster v attach-tier venus replica 2 10.70.47.165:/rhs/brick3/m0 10.70.47.163:/rhs/brick3/m0
Attach tier is recommended only for testing purposes in this release. Do you want to continue? (y/n) y
volume attach-tier: success
volume rebalance: venus: success: Rebalance on venus has been started successfully. Use rebalance status command to check status of the rebalance process.
ID: 1bf4b512-7246-403d-b50e-f395e4051555
[root@rhsqa14-vm1 ~]# gluster v rebalance venus status
Node Rebalanced-files size scanned failures skipped status run time in secs
--------- ----------- ----------- ----------- ----------- ----------- ------------ --------------
localhost 0 0Bytes 0 0 0 in progress 18.00
10.70.47.163 0 0Bytes 0 0 0 in progress 19.00
volume rebalance: venus: success:
[root@rhsqa14-vm1 ~]#
[root@rhsqa14-vm5 mnt]# cd triveni/
[root@rhsqa14-vm5 triveni]# touch 1
[root@rhsqa14-vm5 triveni]# touch 2
[root@rhsqa14-vm5 triveni]# touch 4
[root@rhsqa14-vm5 triveni]# ls -la
total 0
drwxr-xr-x. 2 root root 36 Jun 11 13:12 .
drwxr-xr-x. 5 root root 106 Jun 11 13:12 ..
-rw-r--r--. 1 root root 0 Jun 11 13:12 1
-rw-r--r--. 1 root root 0 Jun 11 13:12 2
-rw-r--r--. 1 root root 0 Jun 11 13:12 4
[root@rhsqa14-vm5 triveni]# cd ..
[root@rhsqa14-vm5 mnt]# ls
triveni
[root@rhsqa14-vm5 mnt]#
[root@rhsqa14-vm5 mnt]# ls -la
total 4
drwxr-xr-x. 5 root root 159 Jun 11 13:13 .
dr-xr-xr-x. 30 root root 4096 Jun 11 11:15 ..
drwxr-xr-x. 3 root root 72 Jun 11 13:13 .trashcan
drwxr-xr-x. 2 root root 42 Jun 11 13:13 triveni
[root@rhsqa14-vm5 mnt]# l s-la triveni/^C
[root@rhsqa14-vm5 mnt]# ls -la triveni/
total 0
drwxr-xr-x. 2 root root 42 Jun 11 13:13 .
drwxr-xr-x. 5 root root 159 Jun 11 13:13 ..
-rw-r--r--. 1 root root 0 Jun 11 13:12 1
-rw-r--r--. 1 root root 0 Jun 11 13:12 2
-rw-r--r--. 1 root root 0 Jun 11 13:12 4
[root@rhsqa14-vm5 mnt]#
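
Since the bug is about directories disappearing from the mount point after attach-tier, the usual cross-check is to compare the mount-point listing above with the directory layout on the individual bricks. A minimal sketch of that check, run on the server nodes (brick paths are reused from the volume definition above; this exact check is not part of the original log):

# on 10.70.47.165: the directory should exist on both the cold-tier and hot-tier bricks
ls -la /rhs/brick1/m0/triveni /rhs/brick2/m0/triveni   # cold tier (distributed-replicate)
ls -la /rhs/brick3/m0/triveni                          # hot tier (added by attach-tier)

# repeat on 10.70.47.163 for its bricks
ls -la /rhs/brick1/m0/triveni /rhs/brick2/m0/triveni /rhs/brick3/m0/triveni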
Ran the below test cases:

1) Created a dist-rep volume and started it. Mounted the volume over NFS and FUSE on two different machines. Started I/O: downloading the kernel source and creating hundreds of directories. Attached a pure distribute tier.

Observations:
- The directories were getting created on all bricks of both the hot and cold tier, as expected.
- Changed one directory's permissions; the change was reflected on the bricks, as expected.
- Tested on EC volumes too. The directories took some time to get reflected on all cold- and hot-tier bricks.

Some other observations:
=================
1) As soon as the tier was attached, the AFR self-heal daemon stopped showing up in "vol status". Bug 1231144 - Data Tiering: Self heal daemon stops showing up in "vol status" once attach tier is done.

2) While creating folders continuously, I hit the following errors:

[root@rhs-client40 distrep2]# for i in {100..10000}; do mkdir nag.$i; sleep 1; done
mkdir: cannot create directory ‘nag.171’: No such file or directory
mkdir: cannot create directory ‘nag.172’: No such file or directory
mkdir: cannot create directory ‘nag.173’: No such file or directory
mkdir: cannot create directory ‘nag.174’: Structure needs cleaning
mkdir: cannot create directory ‘nag.175’: Structure needs cleaning
mkdir: cannot create directory ‘nag.176’: Structure needs cleaning
^C

This was a glusterfs (FUSE) mount, but the error was not reproducible. Will raise a bug if I see it again (the directories existed on both the mount point and the brick paths).

Build details where it was verified:

[root@rhsqa14-vm4 glusterfs]# gluster --version
glusterfs 3.7.1 built on Jun 12 2015 00:21:18
Repository revision: git://git.gluster.com/glusterfs.git
Copyright (c) 2006-2011 Gluster Inc. <http://www.gluster.com>
GlusterFS comes with ABSOLUTELY NO WARRANTY.
You may redistribute copies of GlusterFS under the terms of the GNU General Public License.

[root@rhsqa14-vm4 glusterfs]# rpm -qa | grep gluster
glusterfs-libs-3.7.1-2.el6rhs.x86_64
glusterfs-cli-3.7.1-2.el6rhs.x86_64
glusterfs-rdma-3.7.1-2.el6rhs.x86_64
glusterfs-3.7.1-2.el6rhs.x86_64
glusterfs-api-3.7.1-2.el6rhs.x86_64
glusterfs-fuse-3.7.1-2.el6rhs.x86_64
glusterfs-server-3.7.1-2.el6rhs.x86_64
glusterfs-geo-replication-3.7.1-2.el6rhs.x86_64
glusterfs-debuginfo-3.7.1-2.el6rhs.x86_64
glusterfs-client-xlators-3.7.1-2.el6rhs.x86_64

[root@rhsqa14-vm4 glusterfs]# cat /etc/redhat-release
Red Hat Enterprise Linux Server release 6.7 Beta (Santiago)

[root@rhsqa14-vm4 glusterfs]# sestatus
SELinux status:                 enabled
SELinuxfs mount:                /selinux
Current mode:                   enforcing
Mode from config file:          enforcing
Policy version:                 24
Policy from config file:        targeted

Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://rhn.redhat.com/errata/RHSA-2015-1495.html
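
For reference, a condensed sketch of the verification flow described in the comment above (host names, volume name, and brick paths here are placeholders, and the commands approximate the listed steps rather than reproducing the tester's session):

# create and start a 2x2 distributed-replicate volume (placeholder hosts/bricks)
gluster volume create distrep2 replica 2 server1:/rhs/brick1/b0 server2:/rhs/brick1/b0 \
        server1:/rhs/brick2/b0 server2:/rhs/brick2/b0 force
gluster volume start distrep2

# mount it over FUSE and NFS on two different clients and start I/O
mount -t glusterfs server1:/distrep2 /mnt/distrep2          # client 1 (FUSE)
mount -t nfs -o vers=3 server1:/distrep2 /mnt/distrep2      # client 2 (gluster NFS, v3)
for i in {100..10000}; do mkdir /mnt/distrep2/nag.$i; sleep 1; done &

# attach a pure distribute (non-replicated) hot tier while the I/O is running
gluster volume attach-tier distrep2 server1:/rhs/hot/b0 server2:/rhs/hot/b0

# verify the directories show up on the mount point and on every hot/cold brick
ls /mnt/distrep2 | wc -l
ls /rhs/brick1/b0 /rhs/brick2/b0 /rhs/hot/b0 | wc -l        # on each server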