Bug 1224077 - Directories are missing on the mount point after attaching tier to distribute replicate volume.
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Gluster Storage
Classification: Red Hat
Component: tier
Version: rhgs-3.1
Hardware: x86_64
OS: Linux
Priority: high
Severity: urgent
Target Milestone: ---
Target Release: RHGS 3.1.0
Assignee: Bug Updates Notification Mailing List
QA Contact: Nag Pavan Chilakam
URL:
Whiteboard: TIERING
Duplicates: 1224075
Depends On: 1214222
Blocks: qe_tracker_everglades 1202842 1214666 1219848 1224075 1229259
 
Reported: 2015-05-22 07:43 UTC by Triveni Rao
Modified: 2016-09-17 15:42 UTC (History)
CC: 9 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Clone Of: 1214222
Environment:
Last Closed: 2015-07-29 04:45:55 UTC
Target Upstream Version:




Links
System ID Private Priority Status Summary Last Updated
Red Hat Product Errata RHSA-2015:1495 0 normal SHIPPED_LIVE Important: Red Hat Gluster Storage 3.1 update 2015-07-29 08:26:26 UTC

Description Triveni Rao 2015-05-22 07:43:37 UTC
+++ This bug was initially created as a clone of Bug #1214222 +++

Description of problem:
Directories are missing on the mount point after attaching tier to distribute replicate volume.

Version-Release number of selected component (if applicable):

[root@rhsqa14-vm1 ~]# rpm -qa | grep gluster
glusterfs-3.7dev-0.994.gitf522001.el6.x86_64
glusterfs-devel-3.7dev-0.994.gitf522001.el6.x86_64
glusterfs-geo-replication-3.7dev-0.994.gitf522001.el6.x86_64
glusterfs-resource-agents-3.7dev-0.952.gita7f1d08.el6.noarch
glusterfs-debuginfo-3.7dev-0.952.gita7f1d08.el6.x86_64
glusterfs-libs-3.7dev-0.994.gitf522001.el6.x86_64
glusterfs-api-3.7dev-0.994.gitf522001.el6.x86_64
glusterfs-fuse-3.7dev-0.994.gitf522001.el6.x86_64
glusterfs-extra-xlators-3.7dev-0.994.gitf522001.el6.x86_64
glusterfs-regression-tests-3.7dev-0.994.gitf522001.el6.x86_64
glusterfs-rdma-3.7dev-0.994.gitf522001.el6.x86_64
glusterfs-cli-3.7dev-0.994.gitf522001.el6.x86_64
glusterfs-server-3.7dev-0.994.gitf522001.el6.x86_64
glusterfs-api-devel-3.7dev-0.994.gitf522001.el6.x86_64
[root@rhsqa14-vm1 ~]# 

[root@rhsqa14-vm1 ~]# glusterfs --version
glusterfs 3.7dev built on Apr 13 2015 07:14:26
Repository revision: git://git.gluster.com/glusterfs.git
Copyright (c) 2006-2013 Red Hat, Inc. <http://www.redhat.com/>
GlusterFS comes with ABSOLUTELY NO WARRANTY.
It is licensed to you under your choice of the GNU Lesser
General Public License, version 3 or any later version (LGPLv3
or later), or the GNU General Public License, version 2 (GPLv2),
in all cases as published by the Free Software Foundation.
[root@rhsqa14-vm1 ~]# 


How reproducible:

easy

Steps to Reproduce:
1. Create a distributed-replicate volume.
2. FUSE-mount the volume and create a few directories with files.
3. Run ls -la on the mount and keep a record of the output.
4. Attach a tier to the volume and execute ls -la on the mount point.
5. The directories will be missing.
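The before/after comparison in steps 3-5 can be scripted as a small sketch (the helper names are mine; on a real cluster the directory argument would be the volume's FUSE mount, e.g. /mnt):

```shell
# record_listing: snapshot a directory's entries, one per line, sorted.
record_listing() {
    ls -1A "$1" | sort > "$2"
}

# missing_entries: print entries present in the "before" snapshot but
# absent from the "after" snapshot (prints nothing if no directory vanished).
missing_entries() {
    comm -23 "$1" "$2"
}
```

Usage sketch: `record_listing /mnt before.txt`, attach the tier, then `record_listing /mnt after.txt`; `missing_entries before.txt after.txt` shows exactly what disappeared.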

Actual results:

 
Volume Name: testing
Type: Distributed-Replicate
Volume ID: 42ac4aff-461e-4001-b1c0-f4d42e04452f
Status: Started
Number of Bricks: 2 x 2 = 4
Transport-type: tcp
Bricks:
Brick1: 10.70.46.233:/rhs/brick1/T4
Brick2: 10.70.46.236:/rhs/brick1/T4
Brick3: 10.70.46.233:/rhs/brick2/T4
Brick4: 10.70.46.236:/rhs/brick2/T4


Filesystem            Size  Used Avail Use% Mounted on
/dev/mapper/vg_rhsqa14vm5-lv_root
                       18G  3.2G   14G  20% /
tmpfs                 3.8G     0  3.8G   0% /dev/shm
/dev/vda1             477M   33M  419M   8% /boot
10.70.46.233:/testing
                      100G  244M  100G   1% /mnt
10.70.46.233:/mix     199G  330M  199G   1% /mnt1
10.70.46.233:/everglades
                       20G  5.2M   20G   1% /mnt2
10.70.46.233:/Tim      20G  3.3M   20G   1% /mnt3
[root@rhsqa14-vm5 mnt2]# 
[root@rhsqa14-vm5 mnt2]# 
[root@rhsqa14-vm5 mnt2]# 
[root@rhsqa14-vm5 mnt2]# cd /mnt
[root@rhsqa14-vm5 mnt]# ls -la
total 4
drwxr-xr-x.  5 root root  110 Apr 17 02:54 .
dr-xr-xr-x. 28 root root 4096 Apr 22 01:50 ..
drwxr-xr-x.  6 root root  140 Apr 16 06:14 linux-4.0
drwxr-xr-x.  3 root root   48 Apr 16 02:52 .trashcan
[root@rhsqa14-vm5 mnt]# 


[root@rhsqa14-vm1 ~]# gluster v attach-tier testing replica 2 10.70.46.233:/rhs/brick3/mko 10.70.46.236:/rhs/brick3/mko
volume add-brick: success
[root@rhsqa14-vm1 ~]# gluster v info testing
 
Volume Name: testing
Type: Tier
Volume ID: 42ac4aff-461e-4001-b1c0-f4d42e04452f
Status: Started
Number of Bricks: 3 x 2 = 6
Transport-type: tcp
Bricks:
Brick1: 10.70.46.236:/rhs/brick3/mko
Brick2: 10.70.46.233:/rhs/brick3/mko
Brick3: 10.70.46.233:/rhs/brick1/T4
Brick4: 10.70.46.236:/rhs/brick1/T4
Brick5: 10.70.46.233:/rhs/brick2/T4
Brick6: 10.70.46.236:/rhs/brick2/T4
[root@rhsqa14-vm1 ~]# 


[root@rhsqa14-vm5 mnt]# 
[root@rhsqa14-vm5 mnt]# ls -la
total 4
drwxr-xr-x.  5 root root  149 Apr 22  2015 .
dr-xr-xr-x. 28 root root 4096 Apr 22 01:50 ..
drwxr-xr-x.  3 root root   48 Apr 16 02:52 .trashcan
[root@rhsqa14-vm5 mnt]# 

Expected results:
Irrespective of the tiering, the data must be presented to the user.

Additional info:

--- Additional comment from Dan Lambright on 2015-04-22 16:06:20 EDT ---

The problem here is that you did not start the migration daemon.

gluster v rebalance t tier start

This performs the "fix layout" step that creates all directories on all bricks.

You should not have to worry about that. It should be done automatically when you attach a tier.

We will write a fix for that.
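Until that fix lands, the manual workaround above can be wrapped in a tiny helper (a sketch; the volume name "testing" is taken from this report, and the emitted command is only meaningful on a node with the gluster CLI):

```shell
# Sketch of the manual workaround: after attach-tier, start the tier
# rebalance explicitly so "fix layout" creates the directories on all bricks.
tier_start_cmd() {
    # $1 = volume name; prints the command to run on a gluster node
    echo "gluster v rebalance $1 tier start"
}

tier_start_cmd testing
```

On the cluster in this report you would run the printed command directly, e.g. `eval "$(tier_start_cmd testing)"`.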

--- Additional comment from Anand Avati on 2015-04-24 06:43:07 EDT ---

REVIEW: http://review.gluster.org/10363 (tiering: Send both attach-tier and tier-start together) posted (#1) for review on master by mohammed rafi  kc (rkavunga@redhat.com)

--- Additional comment from Anand Avati on 2015-04-24 06:44:54 EDT ---

REVIEW: http://review.gluster.org/10363 (tiering: Send both attach-tier and tier-start together) posted (#2) for review on master by mohammed rafi  kc (rkavunga@redhat.com)

--- Additional comment from Anand Avati on 2015-04-28 02:44:17 EDT ---

REVIEW: http://review.gluster.org/10363 (tiering: Send both attach-tier and tier-start together) posted (#3) for review on master by mohammed rafi  kc (rkavunga@redhat.com)

--- Additional comment from Joseph Elwin Fernandes on 2015-05-01 02:57:28 EDT ---



--- Additional comment from Anand Avati on 2015-05-04 07:49:28 EDT ---

REVIEW: http://review.gluster.org/10363 (tiering: Send both attach-tier and tier-start together) posted (#4) for review on master by mohammed rafi  kc (rkavunga@redhat.com)

--- Additional comment from Anand Avati on 2015-05-04 11:30:28 EDT ---

REVIEW: http://review.gluster.org/10363 (tiering: Send both attach-tier and tier-start together) posted (#5) for review on master by mohammed rafi  kc (rkavunga@redhat.com)

--- Additional comment from Anand Avati on 2015-05-05 01:17:28 EDT ---

REVIEW: http://review.gluster.org/10363 (tiering: Send both attach-tier and tier-start together) posted (#6) for review on master by mohammed rafi  kc (rkavunga@redhat.com)

--- Additional comment from Anoop on 2015-05-13 08:40:36 EDT ---

Reproduced this on the BETA2 build too, hence moving it to ASSIGNED.

--- Additional comment from Mohammed Rafi KC on 2015-05-14 02:21:18 EDT ---



--- Additional comment from Mohammed Rafi KC on 2015-05-14 02:25:23 EDT ---

I couldn't reproduce this using glusterfs-3.7-beta2. Can you paste the output of attach-tier in your setup?

--- Additional comment from Triveni Rao on 2015-05-15 06:44:46 EDT ---



I could reproduce the same problem with the new downstream build.

[root@rhsqa14-vm1 ~]# rpm -qa | grep gluster
glusterfs-3.7.0-2.el6rhs.x86_64
glusterfs-cli-3.7.0-2.el6rhs.x86_64
glusterfs-libs-3.7.0-2.el6rhs.x86_64
glusterfs-client-xlators-3.7.0-2.el6rhs.x86_64
glusterfs-api-3.7.0-2.el6rhs.x86_64
glusterfs-server-3.7.0-2.el6rhs.x86_64
glusterfs-fuse-3.7.0-2.el6rhs.x86_64
[root@rhsqa14-vm1 ~]# 
[root@rhsqa14-vm1 ~]# glusterfs --version
glusterfs 3.7.0 built on May 15 2015 01:31:10
Repository revision: git://git.gluster.com/glusterfs.git
Copyright (c) 2006-2013 Red Hat, Inc. <http://www.redhat.com/>
GlusterFS comes with ABSOLUTELY NO WARRANTY.
It is licensed to you under your choice of the GNU Lesser
General Public License, version 3 or any later version (LGPLv3
or later), or the GNU General Public License, version 2 (GPLv2),
in all cases as published by the Free Software Foundation.
[root@rhsqa14-vm1 ~]# 




[root@rhsqa14-vm1 ~]# gluster v create vol2 replica 2  10.70.46.233:/rhs/brick1/v2 10.70.46.236:/rhs/brick1/v2 10.70.46.233:/rhs/brick2/v2  10.70.46.236:/rhs/brick2/v2
volume create: vol2: success: please start the volume to access data
You have new mail in /var/spool/mail/root
[root@rhsqa14-vm1 ~]# gluster v start vol2
volume start: vol2: success
[root@rhsqa14-vm1 ~]# gluster v info vol2
 
Volume Name: vol2
Type: Distributed-Replicate
Volume ID: 46c79842-2d5d-4f0a-9776-10504fbc93e4
Status: Started
Number of Bricks: 2 x 2 = 4
Transport-type: tcp
Bricks:
Brick1: 10.70.46.233:/rhs/brick1/v2
Brick2: 10.70.46.236:/rhs/brick1/v2
Brick3: 10.70.46.233:/rhs/brick2/v2
Brick4: 10.70.46.236:/rhs/brick2/v2
Options Reconfigured:
performance.readdir-ahead: on
[root@rhsqa14-vm1 ~]# 


[root@rhsqa14-vm1 ~]# gluster v attach-tier vol2 replica 2 10.70.46.233:/rhs/brick3/v2 10.70.46.236:/rhs/brick3/v2 10.70.46.233:/rhs/brick5/v2 10.70.46.236:/rhs/brick5/v2
Attach tier is recommended only for testing purposes in this release. Do you want to continue? (y/n) y
volume attach-tier: success
volume rebalance: vol2: success: Rebalance on vol2 has been started successfully. Use rebalance status command to check status of the rebalance process.
ID: 72408f67-06c1-4b2a-b4e3-01ffcb0d8b17

You have new mail in /var/spool/mail/root
[root@rhsqa14-vm1 ~]# 




[root@rhsqa14-vm5 ~]# mount -t glusterfs 10.70.46.233:vol2 /mnt2
[root@rhsqa14-vm5 ~]# cd /vol2
-bash: cd: /vol2: No such file or directory
[root@rhsqa14-vm5 ~]# cd /mnt2
[root@rhsqa14-vm5 mnt2]# ls -la
total 4
drwxr-xr-x.  4 root root   78 May 15 06:30 .
dr-xr-xr-x. 30 root root 4096 May 15 04:16 ..
drwxr-xr-x.  3 root root   48 May 15 06:30 .trashcan
[root@rhsqa14-vm5 mnt2]# 
[root@rhsqa14-vm5 mnt2]# mkdir triveni
[root@rhsqa14-vm5 mnt2]# ls -la
total 4
drwxr-xr-x.  5 root root  106 May 15  2015 .
dr-xr-xr-x. 30 root root 4096 May 15 04:16 ..
drwxr-xr-x.  3 root root   48 May 15 06:30 .trashcan
drwxr-xr-x.  2 root root   12 May 15  2015 triveni
[root@rhsqa14-vm5 mnt2]# cp -r /root/linux-4.0 .
^C
[root@rhsqa14-vm5 mnt2]# ls -la
total 4
drwxr-xr-x.  6 root root  138 May 15  2015 .
dr-xr-xr-x. 30 root root 4096 May 15 04:16 ..
drwxr-xr-x.  6 root root  140 May 15  2015 linux-4.0
drwxr-xr-x.  3 root root   48 May 15 06:30 .trashcan
drwxr-xr-x.  2 root root   12 May 15  2015 triveni
[root@rhsqa14-vm5 mnt2]# cd linux-4.0/
[root@rhsqa14-vm5 linux-4.0]# ls -la
total 35
drwxr-xr-x.  6 root root   140 May 15  2015 .
drwxr-xr-x.  6 root root   138 May 15  2015 ..
drwxr-xr-x.  4 root root    78 May 15  2015 arch
-rw-r--r--.  1 root root 18693 May 15  2015 COPYING
-rw-r--r--.  1 root root   252 May 15  2015 Kconfig
drwxr-xr-x.  9 root root   350 May 15  2015 security
drwxr-xr-x. 22 root root   557 May 15  2015 sound
drwxr-xr-x. 19 root root 16384 May 15  2015 tools
[root@rhsqa14-vm5 linux-4.0]# cd ..
[root@rhsqa14-vm5 mnt2]# 
[root@rhsqa14-vm5 mnt2]# 
[root@rhsqa14-vm5 mnt2]# ls -la
total 4
drwxr-xr-x.  4 root root  216 May 15  2015 .
dr-xr-xr-x. 30 root root 4096 May 15 04:16 ..
drwxr-xr-x.  3 root root   96 May 15  2015 .trashcan
[root@rhsqa14-vm5 mnt2]# touch f1
[root@rhsqa14-vm5 mnt2]#

[root@rhsqa14-vm5 mnt2]# touch f2
[root@rhsqa14-vm5 mnt2]# ls -la
total 4
drwxr-xr-x.  4 root root  234 May 15  2015 .
dr-xr-xr-x. 30 root root 4096 May 15 04:16 ..
-rw-r--r--.  1 root root    0 May 15 06:36 f1
-rw-r--r--.  1 root root    0 May 15 06:36 f2
drwxr-xr-x.  3 root root   96 May 15  2015 .trashcan
[root@rhsqa14-vm5 mnt2]# 


[root@rhsqa14-vm1 ~]# gluster v info vol2
 
Volume Name: vol2
Type: Tier
Volume ID: 46c79842-2d5d-4f0a-9776-10504fbc93e4
Status: Started
Number of Bricks: 8
Transport-type: tcp
Hot Tier :
Hot Tier Type : Distributed-Replicate
Number of Bricks: 2 x 2 = 4
Brick1: 10.70.46.236:/rhs/brick5/v2
Brick2: 10.70.46.233:/rhs/brick5/v2
Brick3: 10.70.46.236:/rhs/brick3/v2
Brick4: 10.70.46.233:/rhs/brick3/v2
Cold Bricks:
Cold Tier Type : Distributed-Replicate
Number of Bricks: 2 x 2 = 4
Brick5: 10.70.46.233:/rhs/brick1/v2
Brick6: 10.70.46.236:/rhs/brick1/v2
Brick7: 10.70.46.233:/rhs/brick2/v2
Brick8: 10.70.46.236:/rhs/brick2/v2
Options Reconfigured:
features.uss: enable
features.inode-quota: on
features.quota: on
cluster.min-free-disk: 10
performance.readdir-ahead: on

Comment 2 Triveni Rao 2015-05-22 07:48:18 UTC
*** Bug 1224075 has been marked as a duplicate of this bug. ***

Comment 4 Triveni Rao 2015-06-11 17:21:35 UTC
This bug is verified; no issues found.
[root@rhsqa14-vm1 ~]# gluster v create venus replica 2 10.70.47.165:/rhs/brick1/m0 10.70.47.163:/rhs/brick1/m0 10.70.47.165:/rhs/brick2/m0 10.70.47.163:/rhs/brick2/m0 force
volume create: venus: success: please start the volume to access data
[root@rhsqa14-vm1 ~]# gluster v start venus
volume start: venus: success
[root@rhsqa14-vm1 ~]# gluster v info

Volume Name: venus
Type: Distributed-Replicate
Volume ID: ad3a7752-93f3-4a61-8b3c-b40bc5d9af4a
Status: Started
Number of Bricks: 2 x 2 = 4
Transport-type: tcp   
Bricks:
Brick1: 10.70.47.165:/rhs/brick1/m0
Brick2: 10.70.47.163:/rhs/brick1/m0
Brick3: 10.70.47.165:/rhs/brick2/m0
Brick4: 10.70.47.163:/rhs/brick2/m0
Options Reconfigured: 
performance.readdir-ahead: on
[root@rhsqa14-vm1 ~]# 


[root@rhsqa14-vm1 ~]# gluster v   status
Status of volume: venus
Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick 10.70.47.165:/rhs/brick1/m0           49152     0          Y       3547
Brick 10.70.47.163:/rhs/brick1/m0           49152     0          Y       3097
Brick 10.70.47.165:/rhs/brick2/m0           49153     0          Y       3565
Brick 10.70.47.163:/rhs/brick2/m0           49153     0          Y       3115
NFS Server on localhost                     2049      0          Y       3588
Self-heal Daemon on localhost               N/A       N/A        Y       3593
NFS Server on 10.70.47.163                  2049      0          Y       3138
Self-heal Daemon on 10.70.47.163            N/A       N/A        Y       3145

Task Status of Volume venus
------------------------------------------------------------------------------
There are no active volume tasks

[root@rhsqa14-vm1 ~]# 
[root@rhsqa14-vm1 ~]# gluster v attach-tier venus  replica 2 10.70.47.165:/rhs/brick3/m0 10.70.47.163:/rhs/brick3/m0
Attach tier is recommended only for testing purposes in this release. Do you want to continue? (y/n) y
volume attach-tier: success
volume rebalance: venus: success: Rebalance on venus has been started successfully. Use rebalance status command to check status of the rebalance process.
ID: 1bf4b512-7246-403d-b50e-f395e4051555


[root@rhsqa14-vm1 ~]# gluster v rebalance venus status
                                    Node Rebalanced-files          size       scanned      failures       skipped               status   run time in secs
                               ---------      -----------   -----------   -----------   -----------   -----------         ------------     --------------
                               localhost                0        0Bytes             0             0             0          in progress              18.00
                            10.70.47.163                0        0Bytes             0             0             0          in progress              19.00
volume rebalance: venus: success:
[root@rhsqa14-vm1 ~]#
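The status output above can be polled until no node still reports "in progress"; a minimal check, assuming the CLI table format shown in this comment:

```shell
# rebalance_done: reads `gluster v rebalance <vol> status` output on stdin
# and succeeds only when no node is still "in progress".
rebalance_done() {
    ! grep -q "in progress"
}

# On a cluster, e.g.:
#   until gluster v rebalance venus status | rebalance_done; do sleep 5; done
```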



[root@rhsqa14-vm5 mnt]# cd triveni/
[root@rhsqa14-vm5 triveni]# touch 1
[root@rhsqa14-vm5 triveni]# touch 2
[root@rhsqa14-vm5 triveni]# touch 4
[root@rhsqa14-vm5 triveni]# ls -la
total 0
drwxr-xr-x. 2 root root  36 Jun 11 13:12 .
drwxr-xr-x. 5 root root 106 Jun 11 13:12 ..
-rw-r--r--. 1 root root   0 Jun 11 13:12 1
-rw-r--r--. 1 root root   0 Jun 11 13:12 2
-rw-r--r--. 1 root root   0 Jun 11 13:12 4
[root@rhsqa14-vm5 triveni]# cd ..
[root@rhsqa14-vm5 mnt]# ls
triveni
[root@rhsqa14-vm5 mnt]#
[root@rhsqa14-vm5 mnt]# ls -la
total 4
drwxr-xr-x.  5 root root  159 Jun 11 13:13 .
dr-xr-xr-x. 30 root root 4096 Jun 11 11:15 ..
drwxr-xr-x.  3 root root   72 Jun 11 13:13 .trashcan
drwxr-xr-x.  2 root root   42 Jun 11 13:13 triveni
[root@rhsqa14-vm5 mnt]# l s-la triveni/^C
[root@rhsqa14-vm5 mnt]# ls -la triveni/
total 0
drwxr-xr-x. 2 root root  42 Jun 11 13:13 .
drwxr-xr-x. 5 root root 159 Jun 11 13:13 ..
-rw-r--r--. 1 root root   0 Jun 11 13:12 1
-rw-r--r--. 1 root root   0 Jun 11 13:12 2
-rw-r--r--. 1 root root   0 Jun 11 13:12 4
[root@rhsqa14-vm5 mnt]#

Comment 5 Nag Pavan Chilakam 2015-06-12 10:00:45 UTC
Ran the below test cases:
1)
Created a dist-rep volume and started it.
Mounted the volume via NFS and FUSE on two different machines.
Started I/O: downloading the kernel source and creating hundreds of directories.
Attached a pure distribute tier.
Observations:
Saw that the directories were getting created on all bricks of both the hot and cold tiers, as expected.
Changed one directory's permissions; the change was reflected on the bricks, as expected.


Tested on EC volumes too. The directories took some time to get reflected on all cold and hot tier bricks.



Some observations:
=================
1) Saw that as soon as I attached the tier, the AFR self-heal daemon was no longer showing up in vol status.
Bug 1231144 - Data Tiering; Self heal deamon stops showing up in "vol status" once attach tier is done 

2) Saw that while creating folders continuously, I hit the following errors:
[root@rhs-client40 distrep2]# for i in {100..10000}; do mkdir nag.$i;sleep 1;done
mkdir: cannot create directory ‘nag.171’: No such file or directory
mkdir: cannot create directory ‘nag.172’: No such file or directory
mkdir: cannot create directory ‘nag.173’: No such file or directory
mkdir: cannot create directory ‘nag.174’: Structure needs cleaning
mkdir: cannot create directory ‘nag.175’: Structure needs cleaning
mkdir: cannot create directory ‘nag.176’: Structure needs cleaning
^[^[^C


This was a glusterfs mount.


But it was not reproducible. I will raise a bug if I see it again (the directories existed both on the mount and on the brick paths).
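A check like the one described above (directory present on the mount and on every brick) can be sketched as a small helper; the brick paths in the usage comment are examples, and on a real node they would come from `gluster v info`:

```shell
# dir_on_all_bricks: succeed only if $1 exists as a directory under every
# brick root passed as the remaining arguments.
dir_on_all_bricks() {
    name=$1; shift
    for brick in "$@"; do
        [ -d "$brick/$name" ] || return 1
    done
}

# Example on a gluster node (paths are illustrative):
#   dir_on_all_bricks nag.174 /rhs/brick1/v2 /rhs/brick2/v2 && echo present
```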

Comment 6 Nag Pavan Chilakam 2015-06-12 11:25:02 UTC
Build details where it was verified:
[root@rhsqa14-vm4 glusterfs]# gluster --version
glusterfs 3.7.1 built on Jun 12 2015 00:21:18
Repository revision: git://git.gluster.com/glusterfs.git
Copyright (c) 2006-2011 Gluster Inc. <http://www.gluster.com>
GlusterFS comes with ABSOLUTELY NO WARRANTY.
You may redistribute copies of GlusterFS under the terms of the GNU General Public License.
[root@rhsqa14-vm4 glusterfs]# rpm -qa|grep gluster
glusterfs-libs-3.7.1-2.el6rhs.x86_64
glusterfs-cli-3.7.1-2.el6rhs.x86_64
glusterfs-rdma-3.7.1-2.el6rhs.x86_64
glusterfs-3.7.1-2.el6rhs.x86_64
glusterfs-api-3.7.1-2.el6rhs.x86_64
glusterfs-fuse-3.7.1-2.el6rhs.x86_64
glusterfs-server-3.7.1-2.el6rhs.x86_64
glusterfs-geo-replication-3.7.1-2.el6rhs.x86_64
glusterfs-debuginfo-3.7.1-2.el6rhs.x86_64
glusterfs-client-xlators-3.7.1-2.el6rhs.x86_64
[root@rhsqa14-vm4 glusterfs]# 
[root@rhsqa14-vm4 glusterfs]# 
[root@rhsqa14-vm4 glusterfs]# cat /etc/redhat-release 
Red Hat Enterprise Linux Server release 6.7 Beta (Santiago)
[root@rhsqa14-vm4 glusterfs]# sestatus
SELinux status:                 enabled
SELinuxfs mount:                /selinux
Current mode:                   enforcing
Mode from config file:          enforcing
Policy version:                 24
Policy from config file:        targeted
[root@rhsqa14-vm4 glusterfs]#

Comment 9 errata-xmlrpc 2015-07-29 04:45:55 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://rhn.redhat.com/errata/RHSA-2015-1495.html

