Bug 1231223 - Snapshot: When cluster.enable-shared-storage is enabled, shared storage should get mounted after node reboot
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Gluster Storage
Classification: Red Hat Storage
Component: snapshot
Version: rhgs-3.1
Hardware: x86_64
OS: Linux
Priority: high
Severity: urgent
Target Milestone: ---
Target Release: RHGS 3.1.0
Assignee: Avra Sengupta
QA Contact: Anil Shah
URL:
Whiteboard:
Depends On:
Blocks: 1202842 1231876 1232889
 
Reported: 2015-06-12 12:42 UTC by Anil Shah
Modified: 2016-09-17 13:02 UTC
CC: 8 users

Fixed In Version: glusterfs-3.7.1-5
Doc Type: Bug Fix
Doc Text:
Clone Of:
Clones: 1231876
Environment:
Last Closed: 2015-07-29 05:02:53 UTC
Embargoed:




Links
System ID Private Priority Status Summary Last Updated
Red Hat Product Errata RHSA-2015:1495 0 normal SHIPPED_LIVE Important: Red Hat Gluster Storage 3.1 update 2015-07-29 08:26:26 UTC

Description Anil Shah 2015-06-12 12:42:17 UTC
Description of problem:

When cluster.enable-shared-storage is enabled, it creates the shared storage volume and mounts it at /var/run/gluster/shared_storage. However, the mount entry is not added to the /etc/fstab file.

The entry should be added so that when a storage node is rebooted, the user does not have to mount the shared storage volume again.
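
For illustration, a minimal sketch of the kind of /etc/fstab entry that would make the mount persistent (the exact line written by the fix may differ):

localhost:/gluster_shared_storage /var/run/gluster/shared_storage glusterfs defaults 0 0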

Version-Release number of selected component (if applicable):


[root@darkknightrises ~]# rpm -qa  | grep glusterfs
glusterfs-client-xlators-3.7.1-2.el6rhs.x86_64
glusterfs-server-3.7.1-2.el6rhs.x86_64
glusterfs-3.7.1-2.el6rhs.x86_64
glusterfs-api-3.7.1-2.el6rhs.x86_64
glusterfs-cli-3.7.1-2.el6rhs.x86_64
glusterfs-geo-replication-3.7.1-2.el6rhs.x86_64
glusterfs-libs-3.7.1-2.el6rhs.x86_64
glusterfs-fuse-3.7.1-2.el6rhs.x86_64
glusterfs-debuginfo-3.7.1-2.el6rhs.x86_64


How reproducible:

100%

Steps to Reproduce:
1. Create a 2x2 distributed-replicate volume.
2. Enable cluster.enable-shared-storage.
3. Reboot a storage node and check whether the shared storage volume is mounted again (see the sketch after this list).
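
A rough shell sketch of the steps above, reusing the bricks from the volume info in Additional info (a sketch, not the exact commands run):

[root@darkknightrises ~]# gluster volume create snapvol replica 2 10.70.33.214:/rhs/brick1/b1 10.70.33.219:/rhs/brick1/b2 10.70.33.225:/rhs/brick1/b3 10.70.44.13:/rhs/brick1/b4
[root@darkknightrises ~]# gluster volume start snapvol
[root@darkknightrises ~]# gluster volume set all cluster.enable-shared-storage enable
[root@darkknightrises ~]# reboot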

Actual results:

The shared storage volume is created and mounted at /var/run/gluster/shared_storage, but no corresponding entry is added to /etc/fstab, so the mount is lost after a node reboot.

Expected results:

A mount entry for the shared storage volume should be added to /etc/fstab so that it is mounted automatically after a node reboot.
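
One way to check this expectation on a storage node (a sketch; the exact fstab line may differ):

[root@darkknightrises ~]# grep shared_storage /etc/fstab
localhost:/gluster_shared_storage /var/run/gluster/shared_storage glusterfs defaults 0 0
[root@darkknightrises ~]# mount | grep shared_storage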

Additional info:

[root@darkknightrises ~]# gluster v info snapvol
 
Volume Name: snapvol
Type: Distributed-Replicate
Volume ID: 961b2621-cd36-4412-9620-8f74bb70a5db
Status: Stopped
Number of Bricks: 2 x 2 = 4
Transport-type: tcp
Bricks:
Brick1: 10.70.33.214:/rhs/brick1/b1
Brick2: 10.70.33.219:/rhs/brick1/b2
Brick3: 10.70.33.225:/rhs/brick1/b3
Brick4: 10.70.44.13:/rhs/brick1/b4
Options Reconfigured:
performance.readdir-ahead: on
features.barrier: disable
cluster.enable-shared-storage: enable
snap-max-hard-limit: 200
auto-delete: enable
snap-max-soft-limit: 70


[root@darkknightrises ~]# gluster v info
 
Volume Name: gluster_shared_storage
Type: Replicate
Volume ID: 9168a3da-3026-4ce0-a21d-a9f5c135cf79
Status: Started
Number of Bricks: 1 x 3 = 3
Transport-type: tcp
Bricks:
Brick1: 10.70.44.13:/var/run/gluster/ss_brick
Brick2: 10.70.33.225:/var/run/gluster/ss_brick
Brick3: 10.70.33.214:/var/run/gluster/ss_brick
Options Reconfigured:
performance.readdir-ahead: on
cluster.enable-shared-storage: enable
snap-max-hard-limit: 200
auto-delete: enable
snap-max-soft-limit: 70

Comment 4 Avra Sengupta 2015-06-19 06:40:00 UTC
Mainline - http://review.gluster.org/#/c/11272/ - Pending merge, NetBSD regression yet to run
3.7 - http://review.gluster.org/#/c/11295/ - Pending merge, NetBSD regression yet to run
Downstream - https://code.engineering.redhat.com/gerrit/51101

Comment 5 Anil Shah 2015-07-01 06:01:26 UTC
Bug verified on build glusterfs-3.7.1-6.el6rhs.

Comment 6 errata-xmlrpc 2015-07-29 05:02:53 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://rhn.redhat.com/errata/RHSA-2015-1495.html

