Bug 1221154 - On setting the ssl option, Volume start fails
Summary: On setting the ssl option, Volume start fails
Keywords:
Status: CLOSED WONTFIX
Alias: None
Product: Red Hat Gluster Storage
Classification: Red Hat Storage
Component: core
Version: rhgs-3.0
Hardware: x86_64
OS: Linux
Priority: unspecified
Severity: high
Target Milestone: ---
Target Release: ---
Assignee: Bug Updates Notification Mailing List
QA Contact: Sudhir D
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2015-05-13 11:47 UTC by Apeksha
Modified: 2023-09-14 02:59 UTC
CC: 7 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2015-08-12 09:45:28 UTC
Embargoed:


Attachments:

Description Apeksha 2015-05-13 11:47:32 UTC
Description of problem:

On setting the following options for the volume,
gluster volume set MYVOLUME client.ssl on
gluster volume set MYVOLUME server.ssl on

Gluster volume start fails.

Version-Release number of selected component (if applicable):
Red Hat Storage Server 3.0 Update 4
glusterfs-3.6.0.53-1.el6rhs.x86_64

How reproducible:
Always

Steps to Reproduce:
1. Create a replica volume.
2. Set the following volume options:
  gluster volume set MYVOLUME client.ssl on
  gluster volume set MYVOLUME server.ssl on
3. Start the volume; it fails:

Running 'gluster volume start testvol --mode=script'
volume start: testvol: failed: Commit failed on localhost. Please check the log file for more details.

Actual results: Volume start fails
Expected results: Volume start should be successful
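
For reference, a minimal end-to-end sketch of the reproduction. The volume and host names are illustrative, and the certificate paths are the stock GlusterFS I/O-encryption locations; the certificates have to be in place on all brick hosts (and clients) before the ssl options are enabled:

  # One-time TLS setup on each node (stock paths /etc/ssl/glusterfs.{key,pem,ca})
  openssl genrsa -out /etc/ssl/glusterfs.key 2048
  openssl req -new -x509 -key /etc/ssl/glusterfs.key -subj "/CN=$(hostname)" -out /etc/ssl/glusterfs.pem
  # With self-signed certs, glusterfs.ca should be the concatenation of every node's glusterfs.pem
  cat /etc/ssl/glusterfs.pem >> /etc/ssl/glusterfs.ca

  # Reproduction (host/volume names are placeholders)
  gluster volume create testvol replica 2 host1:/bricks/testvol_brick0 host2:/bricks/testvol_brick1
  gluster volume set testvol client.ssl on
  gluster volume set testvol server.ssl on
  gluster volume start testvol    # fails: "Commit failed on localhost"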


Additional info:
/var/log/glusterfs/etc-glusterfs-glusterd.vol.log:

[2015-05-13 10:20:45.396031] E [rpc-transport.c:508:rpc_transport_unref] (-->/usr/lib64/glusterfs/3.6.0.53/xlator/mgmt/glusterd.so(glusterd_nodesvc_disconnect+0x4c) [0x7f3ee3bda62c] (-->/usr/lib64/glusterfs/3.6.0.53/xlator/mgmt/glusterd.so(glusterd_rpc_clnt_unref+0x35) [0x7f3ee3bda505] (-->/usr/lib64/libgfrpc.so.0(rpc_clnt_unref+0x63) [0x7f3eeeabf4e3]))) 0-rpc_transport: invalid argument: this
[2015-05-13 10:20:51.938094] I [run.c:190:runner_log] (-->/lib64/libpthread.so.0(+0x79d1) [0x7f3eee47d9d1] (-->/usr/lib64/glusterfs/3.6.0.53/xlator/mgmt/glusterd.so(+0xd3fa5) [0x7f3ee3c4bfa5] (-->/usr/lib64/glusterfs/3.6.0.53/xlator/mgmt/glusterd.so(glusterd_hooks_run_hooks+0x5a6) [0x7f3ee3c4bd26]))) 0-management: Ran script: /var/lib/glusterd/hooks/1/set/post/S30samba-set.sh --volname=testvol -o stat-prefetch=off
[2015-05-13 10:20:51.953594] I [run.c:190:runner_log] (-->/lib64/libpthread.so.0(+0x79d1) [0x7f3eee47d9d1] (-->/usr/lib64/glusterfs/3.6.0.53/xlator/mgmt/glusterd.so(+0xd3fa5) [0x7f3ee3c4bfa5] (-->/usr/lib64/glusterfs/3.6.0.53/xlator/mgmt/glusterd.so(glusterd_hooks_run_hooks+0x5a6) [0x7f3ee3c4bd26]))) 0-management: Ran script: /var/lib/glusterd/hooks/1/set/post/S31ganesha-set.sh --volname=testvol -o stat-prefetch=off
[2015-05-13 10:20:52.573171] I [run.c:190:runner_log] (-->/lib64/libpthread.so.0(+0x79d1) [0x7f3eee47d9d1] (-->/usr/lib64/glusterfs/3.6.0.53/xlator/mgmt/glusterd.so(+0xd3fa5) [0x7f3ee3c4bfa5] (-->/usr/lib64/glusterfs/3.6.0.53/xlator/mgmt/glusterd.so(glusterd_hooks_run_hooks+0x5a6) [0x7f3ee3c4bd26]))) 0-management: Ran script: /var/lib/glusterd/hooks/1/set/post/S30samba-set.sh --volname=testvol -o server.allow-insecure=on
[2015-05-13 10:20:52.591209] I [run.c:190:runner_log] (-->/lib64/libpthread.so.0(+0x79d1) [0x7f3eee47d9d1] (-->/usr/lib64/glusterfs/3.6.0.53/xlator/mgmt/glusterd.so(+0xd3fa5) [0x7f3ee3c4bfa5] (-->/usr/lib64/glusterfs/3.6.0.53/xlator/mgmt/glusterd.so(glusterd_hooks_run_hooks+0x5a6) [0x7f3ee3c4bd26]))) 0-management: Ran script: /var/lib/glusterd/hooks/1/set/post/S31ganesha-set.sh --volname=testvol -o server.allow-insecure=on
[2015-05-13 10:20:53.243921] I [run.c:190:runner_log] (-->/lib64/libpthread.so.0(+0x79d1) [0x7f3eee47d9d1] (-->/usr/lib64/glusterfs/3.6.0.53/xlator/mgmt/glusterd.so(+0xd3fa5) [0x7f3ee3c4bfa5] (-->/usr/lib64/glusterfs/3.6.0.53/xlator/mgmt/glusterd.so(glusterd_hooks_run_hooks+0x5a6) [0x7f3ee3c4bd26]))) 0-management: Ran script: /var/lib/glusterd/hooks/1/set/post/S30samba-set.sh --volname=testvol -o client.ssl=on
[2015-05-13 10:20:53.261809] I [run.c:190:runner_log] (-->/lib64/libpthread.so.0(+0x79d1) [0x7f3eee47d9d1] (-->/usr/lib64/glusterfs/3.6.0.53/xlator/mgmt/glusterd.so(+0xd3fa5) [0x7f3ee3c4bfa5] (-->/usr/lib64/glusterfs/3.6.0.53/xlator/mgmt/glusterd.so(glusterd_hooks_run_hooks+0x5a6) [0x7f3ee3c4bd26]))) 0-management: Ran script: /var/lib/glusterd/hooks/1/set/post/S31ganesha-set.sh --volname=testvol -o client.ssl=on
[2015-05-13 10:20:53.919523] I [run.c:190:runner_log] (-->/lib64/libpthread.so.0(+0x79d1) [0x7f3eee47d9d1] (-->/usr/lib64/glusterfs/3.6.0.53/xlator/mgmt/glusterd.so(+0xd3fa5) [0x7f3ee3c4bfa5] (-->/usr/lib64/glusterfs/3.6.0.53/xlator/mgmt/glusterd.so(glusterd_hooks_run_hooks+0x5a6) [0x7f3ee3c4bd26]))) 0-management: Ran script: /var/lib/glusterd/hooks/1/set/post/S30samba-set.sh --volname=testvol -o server.ssl=on
[2015-05-13 10:20:53.939287] I [run.c:190:runner_log] (-->/lib64/libpthread.so.0(+0x79d1) [0x7f3eee47d9d1] (-->/usr/lib64/glusterfs/3.6.0.53/xlator/mgmt/glusterd.so(+0xd3fa5) [0x7f3ee3c4bfa5] (-->/usr/lib64/glusterfs/3.6.0.53/xlator/mgmt/glusterd.so(glusterd_hooks_run_hooks+0x5a6) [0x7f3ee3c4bd26]))) 0-management: Ran script: /var/lib/glusterd/hooks/1/set/post/S31ganesha-set.sh --volname=testvol -o server.ssl=on
[2015-05-13 10:20:54.800161] I [glusterd-pmap.c:271:pmap_registry_remove] 0-pmap: removing brick (null) on port 49152
[2015-05-13 10:20:54.802674] E [glusterd-utils.c:7034:glusterd_brick_start] 0-management: Unable to start brick rhsauto063.lab.eng.blr.redhat.com:/bricks/testvol_brick0
[2015-05-13 10:20:54.802729] E [glusterd-syncop.c:1371:gd_commit_op_phase] 0-management: Commit of operation 'Volume Start' failed on localhost
[2015-05-13 10:20:55.387327] I [glusterd-handler.c:1382:__glusterd_handle_cli_get_volume] 0-glusterd: Received get vol req
[2015-05-13 10:20:55.389988] I [glusterd-handler.c:1382:__glusterd_handle_cli_get_volume] 0-glusterd: Received get vol req

 

Attached the /var/log/glusterfs/etc-glusterfs-glusterd.vol.log
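
In case it helps triage: glusterd only reports "Unable to start brick" here; the actual SSL failure reason normally lands in the per-brick log, which (assuming the default log directory) is named after the brick path with slashes turned into dashes, e.g.:

  # Assumed default brick log location for the brick /bricks/testvol_brick0
  grep -i ssl /var/log/glusterfs/bricks/bricks-testvol_brick0.log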

Comment 1 Apeksha 2015-05-13 12:12:35 UTC
Sosreports are available in :
http://rhsqe-repo.lab.eng.blr.redhat.com/sosreports/1221154/

Comment 2 Atin Mukherjee 2015-05-13 12:25:29 UTC
A couple of fixes were missed in the backport to 3.7, which is why volume start was failing. Both of these patches have now been backported; you will need to wait for the next build to retest.

I also see that you have raised this bug against RHGS. I believe that was by mistake and you are testing upstream bits. Could you change it to upstream glusterfs and set the component to glusterd?

Comment 3 Kaushal 2015-05-13 13:50:12 UTC
This is most likely due to the same root cause as bugs 1160900 and 1212684. This has been fixed on upstream master and in 3.7.0, and was recently fixed in 3.6.3. The fix needs to be backported to RHS-3.0 for the next z-stream release.

Comment 4 Atin Mukherjee 2015-05-13 13:54:56 UTC
Please ignore comment 2; I do see the version mentioned as glusterfs-3.6.0.53-1.el6rhs.x86_64.

Comment 5 Kaushal 2015-05-13 14:00:56 UTC
The change which fixes this bug is https://review.gluster.org/9059

Comment 6 Sayan Saha 2015-05-13 16:32:48 UTC
We need to give QE a new set of 3.0.4+ RPMs with this fix integrated so that they can test it out before we hand it to a customer. A customer wants to go to production with these fixes before 3.1 is GA. This is special handling.

Comment 7 Kaushal 2015-05-14 06:41:18 UTC
I've backported the change to rhs-3.0 at https://code.engineering.redhat.com/gerrit/48185
This should fix the particular issue.

You should also know that rhs-3.0 doesn't support network encryption for management connections, unlike upstream glusterfs-3.6. So the customer cannot expect complete network encryption, only encryption for the data connections.
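
For reference, on upstream glusterfs-3.6 management encryption is opt-in and is enabled by creating a marker file before glusterd starts. A sketch of the upstream procedure only; not applicable to rhs-3.0:

  # Upstream glusterfs only -- enables TLS on glusterd management connections
  touch /var/lib/glusterd/secure-access
  # restart glusterd on all nodes afterwards
  service glusterd restart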

Comment 9 Kaushal 2015-08-12 09:45:28 UTC
This bug was opened against RHS-3.0, which didn't support network encryption. It was used to track backports of fixes from upstream. This was done at the request of a customer, but as the customer has now been convinced to use RHGS-3.1, this bug is no longer valid.

Hence, I'm closing this bug as WONTFIX.

Comment 11 Red Hat Bugzilla 2023-09-14 02:59:09 UTC
The needinfo request[s] on this closed bug have been removed as they have been unresolved for 1000 days

