Bug 1018168 - Mount option "backupvolfile-server" fails to apply
Summary: Mount option "backupvolfile-server" fails to apply
Keywords:
Status: CLOSED NOTABUG
Alias: None
Product: Red Hat Gluster Storage
Classification: Red Hat Storage
Component: glusterfs
Version: 2.1
Hardware: Unspecified
OS: Unspecified
Target Milestone: ---
Assignee: Bug Updates Notification Mailing List
QA Contact: Sudhir D
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2013-10-11 11:11 UTC by shilpa
Modified: 2013-12-02 09:27 UTC
CC List: 2 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2013-12-02 09:27:49 UTC
Embargoed:


Attachments

Description shilpa 2013-10-11 11:11:17 UTC
Description of problem:
When I run the mount command with the "-o backupvolfile-server" option, the option does not appear to take effect: it is not visible in the "ps aux" output for the glusterfs process.


Version-Release number of selected component (if applicable):
glusterfs-3.4.0.34rhs-1.el6_4.x86_64
Client RHEL 6.4 

How reproducible:
Always

Steps to Reproduce:
1. Create a distributed-replicate volume and apply all the necessary volume options.
2. Mount the volume on a RHEL 6.4 client with the command:
# mount -t glusterfs -o backupvolfile-server=10.70.37.77 10.70.37.168:vol /mnt/gluster
3. Check the output of "ps aux | grep glusterfs" to see whether the option is applied.
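For reference, the same option can also be supplied persistently via /etc/fstab rather than on the mount command line. A sketch using the server addresses, volume name, and mount point from this report:

```
# /etc/fstab entry (sketch): backupvolfile-server passed as a mount option
10.70.37.168:vol  /mnt/gluster  glusterfs  defaults,backupvolfile-server=10.70.37.77  0 0
```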

Actual results:
# ps aux | grep glusterfs
root     13189  2.0  0.4 336824 68028 ?        Ssl  15:30   0:00 /usr/sbin/glusterfs --volfile-id=cinder-vol --volfile-server=10.70.37.168 /mnt/gluster

Initially tested this on an RHS-RHOS setup, but it is also reproducible on a client without an RHOS setup.

Expected results:

The backupvolfile-server option should take effect after the mount command and be visible in the glusterfs process arguments.

Additional info:

Volume Name: cinder-vol
Type: Distributed-Replicate
Volume ID: c2934f95-ab17-4bb3-be63-9cdcf2d5f31b
Status: Started
Number of Bricks: 6 x 2 = 12
Transport-type: tcp
Bricks:
Brick1: 10.70.37.168:/rhs/brick2/c1
Brick2: 10.70.37.214:/rhs/brick2/c2
Brick3: 10.70.37.77:/rhs/brick2/c3
Brick4: 10.70.37.164:/rhs/brick2/c4
Brick5: 10.70.37.168:/rhs/brick2/c5
Brick6: 10.70.37.214:/rhs/brick2/c6
Brick7: 10.70.37.77:/rhs/brick2/c7
Brick8: 10.70.37.164:/rhs/brick2/c8
Brick9: 10.70.37.168:/rhs/brick2/c9
Brick10: 10.70.37.214:/rhs/brick2/c10
Brick11: 10.70.37.77:/rhs/brick2/c11
Brick12: 10.70.37.164:/rhs/brick2/c12
Options Reconfigured:
storage.owner-uid: 165
storage.owner-gid: 165
network.remote-dio: enable
cluster.eager-lock: enable
performance.stat-prefetch: off
performance.io-cache: off
performance.read-ahead: off
performance.quick-read: off


# gluster peer status
Number of Peers: 3

Hostname: 10.70.37.164
Uuid: baf00f31-e0cb-4dbb-9918-c2aa2e699fc6
State: Peer in Cluster (Connected)

Hostname: 10.70.37.77
Uuid: 9a9e638f-fdb9-4e5b-8cc7-5e8c9b48e542
State: Peer in Cluster (Connected)

Hostname: 10.70.37.214
Uuid: 36b99cf3-83fa-47cb-b383-e4c5b6094d78
State: Peer in Cluster (Connected)




Will be attaching the logs shortly.

Comment 2 shilpa 2013-10-11 12:23:59 UTC
sosreports attached in: http://rhsqe-repo.lab.eng.blr.redhat.com/sosreports/1018168/

Comment 3 Amar Tumballi 2013-12-02 09:27:49 UTC
Reviewed the steps used to open this bug:

> 3.Check the output of "ps aux | grep glusterfs" to see if the option is applied

This is not a valid step as of now in RHS 2.1 (or RHS 2.0), because the backupvolfile-server option is handled inside the mount.glusterfs script and is not passed on to the glusterfs process, so it will not appear in "ps aux" output.

https://github.com/gluster/glusterfs/blob/release-3.4/xlators/mount/fuse/utils/mount.glusterfs.in (line 224 onwards).


In the future (when we rebase to glusterfs version 3.5.0 or higher), this will be a valid step.

https://github.com/gluster/glusterfs/blob/release-3.5/xlators/mount/fuse/utils/mount.glusterfs.in
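The mechanism described above can be sketched as follows. This is an illustrative shell fragment, not the actual mount.glusterfs script: try_mount and the simulated failure of the primary server are stand-ins. In the real script the wrapper invokes /usr/sbin/glusterfs against the primary volfile server and only retries with the backup server if that attempt fails, which is why the option itself never shows up in ps.

```shell
#!/bin/sh
# Sketch of the fallback pattern used by the mount.glusterfs wrapper in
# glusterfs 3.4. Server addresses are taken from this report.
PRIMARY_SERVER="10.70.37.168"
BACKUP_SERVER="10.70.37.77"

# Stand-in for the real mount attempt, which would run something like:
#   /usr/sbin/glusterfs --volfile-server=$1 --volfile-id=vol /mnt/gluster
# Here we simulate the primary being unreachable: only the backup succeeds.
try_mount() {
    server=$1
    [ "$server" = "$BACKUP_SERVER" ]
}

if try_mount "$PRIMARY_SERVER"; then
    echo "mounted via $PRIMARY_SERVER"
elif try_mount "$BACKUP_SERVER"; then
    echo "mounted via $BACKUP_SERVER"
else
    echo "mount failed" >&2
    exit 1
fi
```

Because the fallback decision is made entirely in the wrapper script, the glusterfs process that ends up running only ever sees a single --volfile-server argument.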


Hope this is good enough information to close the bug.

