Bug 1016881

Summary: re-install of server RPM causes KVM-libgfapi guests to not come up
Product: Red Hat Gluster Storage
Component: glusterd
Version: 2.1
Status: CLOSED EOL
Severity: high
Priority: high
Reporter: Ben England <bengland>
Assignee: Bug Updates Notification Mailing List <rhs-bugs>
QA Contact: storage-qa-internal <storage-qa-internal>
CC: dshetty, perfbz, sasundar, vagarwal, vbellur
Hardware: Unspecified
OS: Unspecified
Doc Type: Bug Fix
Type: Bug
Last Closed: 2015-12-03 17:11:05 UTC

Description Ben England 2013-10-08 20:53:43 UTC
Description of problem:

Re-installing the RHS 2.1 glusterfs-server RPM causes KVM guests backed by libgfapi to fail at startup. The reason: the re-install does not preserve the previous /etc/glusterfs/glusterd.vol, and that file must contain the line "option rpc-auth-allow-insecure on" for libgfapi to work with KVM. No log message appears to indicate what went wrong, so many sysadmins will find this very frustrating.

In my opinion it's a bug that you have to edit /etc/glusterfs/glusterd.vol at all; it should work out of the box, unedited.
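For reference, the required edit looks like the following in /etc/glusterfs/glusterd.vol (the surrounding lines are typical of the stock file and may differ slightly by release; only the rpc-auth-allow-insecure line is the addition):

```
volume management
    type mgmt/glusterd
    option working-directory /var/lib/glusterd
    option rpc-auth-allow-insecure on
end-volume
```

Note this is the glusterd-level setting; the per-volume counterpart is set with `gluster volume set <VOLNAME> server.allow-insecure on`, and libgfapi setups generally need both.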

Barry Marson just ran into this using libvirt KVM guests, but it should also occur with RHEV and OpenStack.


Version-Release number of selected component (if applicable):

RHS 2.1 GOLD (GA) = glusterfs-server 3.4.0.33rhs

How reproducible:

every time

Steps to Reproduce:
-- shut down all KVM guests backed by gluster volume
-- shut down libvirtd service
-- ensure all gluster volumes are unmounted on all clients
-- stop gluster volume
-- rpm -ev glusterfs-server
-- rpm -iv glusterfs-server 
-- start volume
-- start libvirtd service
-- start KVM guests


Actual results:

# virsh create vm.xml
... peer disconnected ...

Expected results:

Guest VMs all start up again

Additional info:

Barry Marson can supply any missing details.

Comment 2 Ben England 2013-10-09 18:48:21 UTC
From a discussion with Avati (let me know if I got it wrong): glusterd currently treats all RPC requests with equal authentication, hence the reluctance to make the glusterd.vol option "rpc-auth-allow-insecure on" the default. After changes to glusterd's RPC implementation, this edit should hopefully no longer be needed, because it will become the default behavior (i.e., the client port number won't be used for authentication). This wasn't a problem with the FUSE mount process, because that process could use "secure" ports (i.e., below 1024?).

So at a minimum this needs to be documented as part of the RHS installation procedure, and we should know in which release this problem goes away. Anyone using KVM/libgfapi might encounter it, so SA training should cover when to set this parameter.
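Until the default changes, admins can guard against the re-install silently dropping the setting. A minimal sketch (a hypothetical helper, not part of any RHS tooling) that idempotently re-adds the option to glusterd.vol-style content after an RPM re-install:

```python
import re

def ensure_insecure_rpc(text: str) -> str:
    """Return glusterd.vol content with 'option rpc-auth-allow-insecure on'
    present in the management volume block, inserting it if missing."""
    if re.search(r"^\s*option\s+rpc-auth-allow-insecure\s+on\s*$", text, re.M):
        return text  # already configured; leave the file untouched
    # Insert the option just before the closing 'end-volume' line.
    return re.sub(r"^(end-volume)",
                  "    option rpc-auth-allow-insecure on\n\\1",
                  text, count=1, flags=re.M)
```

Running this over /etc/glusterfs/glusterd.vol (and then restarting glusterd) after a package re-install would restore the edit whether or not the RPM preserved it.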

Comment 3 Vivek Agarwal 2015-12-03 17:11:05 UTC
Thank you for submitting this issue for consideration in Red Hat Gluster Storage. The release you requested us to review is now End of Life. Please see https://access.redhat.com/support/policy/updates/rhs/

If you can reproduce this bug against a currently maintained version of Red Hat Gluster Storage, please feel free to file a new report against the current release.